
08:45-10:30 Session 2: Plenary session

Plenary Session

Location: Gran Cancún 1
Opening Ceremony
Climate Change: Science, Policy and Solutions
SPEAKER: Mario Molina

ABSTRACT. Climate change is the most serious environmental challenge facing society in the 21st century. The average temperature of the Earth's surface has so far increased by about one degree Celsius since the Industrial Revolution, and the intensity of extreme weather events such as heat waves and floods is also increasing, most likely as a consequence of this temperature change. The consensus among climate change experts is that it is very likely that human activities, mainly the burning of fossil fuels, are causing the observed changes in the Earth's climate in recent decades. The basic science is very clear, although there are scientific uncertainties, because the Earth's climate is a complex system. The risk of causing dangerous changes to the climate system increases rapidly if the average temperature rises more than two or three degrees Celsius; society faces an enormous challenge to reduce greenhouse gas emissions effectively and avoid such dangerous interference with the climate system. To achieve this goal, it is important to consider not only the science, but also the economic, social and policy issues connected with climate change.

Cities as Open-Ended Complex Adaptive Systems

ABSTRACT. There are many examples of complex systems sharing both common and different properties, from organisms to ecosystems, or from firms to cities. In this talk, I will emphasize particular open-ended complex systems, such as ecosystems or cities, and the especially interesting demands they pose on modeling and theory. I will show how many properties of cities can be understood in terms of modern network models of coupled social and spatial processes, such as scaling or agglomeration effects. However, I will also demonstrate that such models are not sufficient to understand the evolution and growth of cities over time, nor aspects of their heterogeneity and inequalities.

To address these issues, I will show that one requires a new type of statistical mechanics that bridges statistical physics and evolutionary theory. Such a theory has universal properties that allow us to derive the general statistics of cities, even in the presence of fast and open-ended growth.

10:30-11:00 Session: Coffee Break

Coffee break & poster session

Location: Cozumel A
11:00-13:00 Session 3A: Information and Communication Technologies

Parallel session

Location: Xcaret 1
Characterization of the adopters of the Bitcoin

ABSTRACT. Bitcoin is a cryptocurrency which has become very popular in recent years. Bitcoin transactions are recorded in a public ledger, the bitcoin blockchain, and can be collected since its creation. The bitcoin blockchain works without a central authority and is based on a peer-to-peer system. Transactions take place between users directly, not necessarily with a transaction fee. More precisely, the bitcoins exchanged during a transaction are sent from one address to another, each address belonging to a user. One of the challenges in studying such a system and understanding the behaviour of users in this new transaction system is that any user can generate multiple addresses. Here we propose a preliminary analysis to get a sense of the users of bitcoin despite the high degree of anonymity inherent to the system. We do not attempt to identify single users but to characterize the spatial distribution of users at the country level. Even though it is not possible to link an IP address to the authors of a transaction, one can obtain the IP of the first user who relays the transaction. The current literature on the bitcoin blockchain suggests that this IP has some chance of being the IP of the source of the transaction. As a first attempt we thus assume that these IP addresses allow us to estimate where the users come from. To test this assumption, we compare the countries of provenance of the IP addresses to the geolocalization of users downloading wallets, the software used to access the bitcoin technology. We observe a good agreement between the two. This confirms that the IP addresses that relay transactions can be used, to some extent, as a good proxy to identify the provenance of users. However, due in particular to the increasing broadcasting of transactions by services instead of single users, the IP addresses are only accessible for a given time interval.
For this reason, in order to characterize the evolution of adoption in different countries, we use Google Trends data to quantify the collective attention paid to bitcoin, which appears to be a good proxy when compared to the unique IP addresses. Looking at the evolution of bitcoin searches through Google Trends for different countries, we can get a hint of the early and new adopters. To complete the study we also build the network of users, where a link is given by a transaction, and look at the bilateral exchanges among countries using the IP addresses mentioned earlier as well as some heuristics developed in the literature to assign a country to each transaction. We finally compare this to a null model to extract the preferred relationships between countries for transactions. To summarize, we propose a way to measure the collective interest in bitcoin at the country level and to understand how transactions are distributed among countries.

Community-based anomalous event detection in temporal networks
SPEAKER: Pablo Moriano

ABSTRACT. Communication networks exhibit community structure based on demographic, geographic, and topical homophily. Members of each community tend to have common interests and to share most content primarily within their community, exhibiting behavioral characteristics of complex contagion. By contrast, previous studies have shown that viral content tends to behave like simple contagion, easily crossing community boundaries. We hypothesize that content about an important event tends to be viral because it is relevant to a large fraction of people in the network, and that the increase in viral content due to the event will trigger more communication across existing communities. We confirm our hypothesis and demonstrate that it can be used to detect anomalous events from temporal communication network structure, namely by monitoring and comparing the communication volume within and across communities. We use two examples, the collapse of Enron and the Boston Marathon bombing, to show that the communication volume across communities indeed increases while information about the events is spreading across the communication network.
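The monitoring step described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: given a fixed community partition, we track the volume of cross-community messages per time window and flag windows whose volume exceeds the mean by some number of standard deviations.

```python
# Sketch: flag time windows with anomalously high cross-community traffic.
from collections import defaultdict
import statistics

def cross_community_volume(events, community):
    """events: iterable of (window, sender, receiver); community: node -> community id."""
    volume = defaultdict(int)
    for window, u, v in events:
        if community[u] != community[v]:
            volume[window] += 1
    return volume

def anomalous_windows(volume, z=2.0):
    """Return windows whose cross-community volume is a z-score outlier."""
    counts = list(volume.values())
    mu = statistics.mean(counts)
    sigma = statistics.pstdev(counts)
    return [w for w, c in sorted(volume.items())
            if sigma > 0 and (c - mu) / sigma > z]
```

In a study like the one above, the windows would be days or weeks of the Enron corpus and the partition would come from a community-detection run on a reference period; here any fixed partition works.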

Telecommunications as a Socio-Technical System: The Transport of Information from a Complexity Perspective

ABSTRACT. This contribution attempts to describe, by means of the principles of complexity, the telecommunications ecosystem of a particular context. To accomplish our purpose, we consider telecommunications, the transport of information, as a socio-technical system. The dynamics of this system are highly dependent on the interactions and articulations with the environment, i.e., on its inner subsystems and boundary conditions. This opens the possibility of addressing the problems of the telecommunications sector from an interdisciplinary approach, providing operational solutions that take into account the participation of different areas and agents to arrive at alternative readings of reality beyond the current disciplinary views and tools. Our challenge is how to instill complexity thinking into the telecommunications empirical domain. In other words, "acting" on complexity to look for pragmatic socio-technical interventions and implementations exposed to overwhelming technological change. We suggest that a socio-technical system, such as telecommunications, should not be regarded only as a mechanism or infrastructure, but as an entity with feedback processes, emergence and interdependence, which are distinctive features of dynamic complex systems, as well as human interactions. Likewise, this perspective can lead to further findings toward the construction of frameworks involving the roles and information flows of the most relevant agents at the various levels of operation and management of this ecosystem. We propose that the telecommunications ecosystem can be fruitfully analyzed in three dimensions: Information Theory, Network Theory and Convergence. In that regard, our analysis covers the structure, content, and nature and scope of the ecosystem interactions.

Mapping the Higher Education System

ABSTRACT. It is widely accepted that there exists a perception gap between the skills acquired by individuals during higher education and the demands of the labor market. This has a profound impact on individuals' careers, by constraining their future earnings and labor mobility, but it also presents a significant constraint on a country's economic growth. In recent years, the impact of degree programs and/or higher education institutions on graduates' future payoffs has been broadly studied. In contrast, the quantification of the perception that different agents, such as policy makers, higher education candidates, and employers, have of the relationship among available skills has received little attention. An approach to quantifying the perception gap of skills between candidates and employers would benefit and facilitate the effectiveness of education and labor public policies. The International Standard Classification of Education (ISCED) scheme, proposed and maintained by UNESCO, is the standard adopted by many countries and academic studies as the basic structure for producing comparable statistical data between different educational systems. The ISCED represents the de facto structure as perceived by policy makers, and it is constructed through a rationale of comparing the contents of the different degree programs. However, this classification presents some problems. Take for instance Architecture: although classified as part of the Engineering field, it is in fact much closer to Arts and Design when applicants' preference patterns are taken into account. In this work, we use data on 380,375 candidates' applications to degree programs between 2008 and 2015 from the Portuguese Public Higher Education System. Each application corresponds to a list of up to six preferences of degree programs. We propose a complex networks approach to explore the relationships between different degree programs based on the perceptions of the candidates.
We identify proximity between degree programs by finding which pairs of programs exhibit a statistically significant pattern of co-occurrence in the candidates' preference lists. The resulting structure has 649 nodes and 3,067 edges, and it shows a strong modular character, with 10 communities exhibiting a modularity of Q=0.70. Our data-driven approach shows significant differences with respect to the ISCED scheme. Although this structure does not represent skills, it represents the choice space of candidates. Indeed, degree programs are essentially a package of skills and are an important aspect of the applicants' decision making when applying to higher education. In that context, we assess the impact of our structure by studying the spatial patterns of gender dominance, applicants' scores and expected unemployment levels by program, all showing a strong positive assortment, i.e. spatial correlations. These findings lead us to two conclusions. First, the choices of candidates are strongly conditioned by the candidates' properties (gender and scores), and second, the choice of degree program strongly constrains candidates' entry into the labor market and can hinder their future. Ongoing work aims to quantify how employers perceive the relationship between the degree programs; to quantify the mismatches between the perceptions of policy makers, candidates, and employers; and finally the economic impact of such disparities.
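The co-occurrence step can be sketched as follows. This is our reconstruction of the general idea, not the authors' pipeline: link two degree programs when they appear together in applicants' preference lists significantly more often than an independence null model would expect, using a simple binomial z-score.

```python
# Sketch: keep program pairs whose co-occurrence beats an independence null.
from itertools import combinations
from math import sqrt

def significant_pairs(pref_lists, z_threshold=2.0):
    n = len(pref_lists)
    occur, cooccur = {}, {}
    for prefs in pref_lists:
        unique = sorted(set(prefs))
        for p in unique:
            occur[p] = occur.get(p, 0) + 1
        for a, b in combinations(unique, 2):
            cooccur[(a, b)] = cooccur.get((a, b), 0) + 1
    edges = []
    for (a, b), k in cooccur.items():
        p_ab = (occur[a] / n) * (occur[b] / n)   # joint probability under independence
        expected = n * p_ab
        sigma = sqrt(n * p_ab * (1 - p_ab))      # binomial standard deviation
        if sigma > 0 and (k - expected) / sigma > z_threshold:
            edges.append((a, b))
    return edges
```

The surviving edges form the monopartite network whose communities are then compared against the ISCED classification.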

Information spreading on Twitter network
SPEAKER: Sijuan Ma

ABSTRACT. Online social networks are becoming major platforms for people to exchange opinions and information. As social media marketing becomes more and more popular, the spreading dynamics are fully tractable. This enables us to study the fundamental mechanisms driving spreading, and to use such knowledge for better understanding and prediction. Based on the analysis of empirical user-generated data from the Twitter network, we find that the spreading of messages follows a mechanism that differs from the spread of disease. We introduce a model extended from the FSIR model for the spreading behavior. In this mechanism, users with more friends have a lower probability of retweeting a specific message due to competition for limited attention. By analyzing multiple popular tweet messages and their retweet process, we find that their characteristic properties are consistent with the proposed FSIR model, including the degree distribution of retweeting users. Besides, we introduce the variable attractiveness to describe how attractive the messages are for users; this determines the overall willingness to retweet. Intuitively, attractive messages have higher probabilities of being retweeted. We find that the attractiveness decreases with time. Based on the FSIR model and the retrieved attractiveness of specific tweets, our future work is to classify and predict the final retweet population.
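A toy simulation in the spirit of the mechanism described above (our illustrative reading of an FSIR-style model, not the authors' code): each exposed user retweets with probability proportional to the message's attractiveness divided by the user's number of friends, capturing competition for limited attention, while attractiveness decays over time.

```python
# Sketch: attention-limited retweet cascade with decaying attractiveness.
import random

def simulate_retweets(friends, neighbors, seed_user,
                      attractiveness=2.0, decay=0.8, rng=None):
    """friends: node -> friend count; neighbors: node -> followers of that node."""
    rng = rng or random.Random(42)
    retweeted = {seed_user}
    frontier = [seed_user]
    while frontier:
        new = []
        for u in frontier:
            for v in neighbors[u]:
                if v not in retweeted:
                    # more friends -> more competing content -> lower probability
                    p = min(1.0, attractiveness / friends[v])
                    if rng.random() < p:
                        retweeted.add(v)
                        new.append(v)
        frontier = new
        attractiveness *= decay  # collective attention fades over time
    return retweeted
```

Averaging the final cascade size over many seeded runs gives the kind of retweet-population estimate the abstract targets.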

Mapping Social Dynamics on Online Social Media
SPEAKER: Fabiana Zollo

ABSTRACT. Information, rumors, and debates may shape and impact public opinion. In recent years several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people, i.e., echo chambers, where they reinforce and polarize their opinions. In this way the potential benefits coming from exposure to different points of view may be reduced dramatically and individuals' views may become ever more extreme. Such a context fosters the spread of misinformation, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors suggests that social media do have the power to misinform, manipulate, or control public opinion. Current approaches such as debunking efforts or algorithmic solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively and may even increase group polarization. Indeed, confirmation bias has been shown to play a pivotal role in information cascades. Nevertheless, mitigation strategies have to be adopted. To better understand the dynamics behind information spreading, we perform a tight, quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook. Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets.
Indeed, there is a natural tendency of users to confine their activity to a limited set of pages. According to our findings, news consumption on Facebook is dominated by selective exposure. Similar patterns emerge around the Brexit debate (the British referendum to leave the European Union), where our analysis underlines the spontaneous emergence of two well-separated and polarized communities around Brexit pages. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.

11:00-13:00 Session 3B: Cognition and Linguistics - Linguistics

Parallel session

Location: Xcaret 2
Coevolution of synchronization and cooperation in costly networked interactions

ABSTRACT. The synchronization of coupled oscillating systems is a phenomenon that has received considerable attention from the scientific community, given the wide range of domains in which it appears. More specifically, the pattern of interaction among the oscillators has been proven to play a crucial role in promoting the conditions for the emergence of a synchronized state. Such an interaction pattern may be encoded as a graph, and several studies investigating the emergence of synchronization have been performed on groups of oscillators on complex networks.

Despite the amount of work done so far, all approaches have been based on the hypothesis that the variation of the state of an oscillator, which is a fundamental requirement to attain synchronization, is costless. Yet, it seems reasonable to assume that when an oscillator alters its state, this frequency variation involves an adjustment cost that, in turn, impacts the dynamics. The introduction of such an adjustment cost leads to the formulation of a dichotomous scenario. In this framework, an oscillator may decide to pay the cost necessary to alter its state and make it more similar to that of the others or, alternatively, keep it unaltered, hoping that the others adjust their states to its own. The former behavior can be considered an act of cooperation and the latter one of defection; together they constitute the basic action profiles of a Prisoner's Dilemma game. Hence, the emergence of synchronization may be seen as the outcome of an evolutionary game in which the oscillators can strategically decide which behavior to adopt according to the payoff they received in the previous synchronization stage.

Complex networks play a key role in the emergence of cooperation and, in particular, the presence of hubs in scale-free networks fosters this phenomenon even more. Thus, it is worth studying the underlying mechanisms responsible for the onset of synchronization in systems where single oscillators are placed on the nodes of a network and can decide whether to cooperate, by synchronizing their state with that of their neighbors, or not. This leads to a coevolutionary approach in which the synchronization dynamics and the evolution of cooperation influence each other. Coevolutionary approaches represent the natural extension of current models toward a better description of complex systems. More specifically, we consider a system of Kuramoto oscillators that decide which strategy, cooperation or defection, to adopt upon evaluating their payoff. An oscillator incurs a cost that is tuned by a model parameter alpha and is proportional to the absolute value of the difference between its previous and current frequency. On the other hand, the positive payoff, i.e. the benefit, is given by the synchronization achieved within the oscillator's neighborhood. The emergence of both cooperation and synchronization is studied for three different topologies, namely Erdős-Rényi random graphs, random geometric graphs and Barabási-Albert scale-free networks.

The results of this study are published in Physical Review Letters.
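One coevolutionary update of the kind described above can be sketched as follows. Everything here is our own simplified discretization, not the published model: phases follow a forward-Euler Kuramoto step, defectors keep their natural frequency, cooperators pay an adjustment cost tuned by alpha, and strategies are copied from the best-scoring neighbor.

```python
# Sketch: one step of coupled Kuramoto dynamics and strategy imitation.
import math

def step(theta, omega, strategy, neighbors, K=1.0, alpha=0.5, dt=0.1):
    """One discrete-time update: phases move, payoffs accrue, strategies imitate."""
    new_theta, payoff = {}, {}
    for i in theta:
        coupling = K * sum(math.sin(theta[j] - theta[i]) for j in neighbors[i])
        # cooperators adjust toward their neighbors; defectors keep their own pace
        drift = omega[i] + coupling if strategy[i] == 'C' else omega[i]
        new_theta[i] = theta[i] + dt * drift
        # benefit: local phase coherence; cost: frequency adjustment (alpha-tuned)
        sync = (sum(math.cos(theta[j] - theta[i]) for j in neighbors[i])
                / max(len(neighbors[i]), 1))
        cost = alpha * abs(coupling) if strategy[i] == 'C' else 0.0
        payoff[i] = sync - cost
    # imitation: copy the strategy of the best-scoring neighbor if it did better
    new_strategy = dict(strategy)
    for i in theta:
        if neighbors[i]:
            best = max(neighbors[i], key=payoff.get)
            if payoff[best] > payoff[i]:
                new_strategy[i] = strategy[best]
    return new_theta, new_strategy
```

Iterating this step on an Erdős-Rényi, random geometric or Barabási-Albert graph lets one watch synchronization and cooperation coevolve.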

The morphospace of language networks

ABSTRACT. Language can be described as a network of interacting objects with different qualitative properties and complexity. These networks include semantic, syntactic, or phonological levels and have been found to provide a new picture of language complexity and its evolution. A general approach considers language from an information theory perspective that incorporates a speaker, a hearer, and a noisy channel. A key common element is a matrix (often binary) encoding which words name each one of the objects that exist [1]. This tool allows us to measure the cost of communication across the channel for hearers and speakers, and is at the core of a rich literature on least-effort language exploring the optimality of communication codes. Computational results suggest that human languages might lie at the critical point of a phase transition, simultaneously coping with several optimality constraints [1, 2, 3, 4]. The critical nature of this phase transition has not been confirmed before, and these results remain speculative as they lack empirical support from real languages.

The matrix at the heart of this theoretical framework naturally introduces networks mediating the communicating agents, but no systematic analysis of the underlying landscape of possible language graphs has been developed. Here [5] we present a detailed analysis of network properties on this generic model of communication codes. A rather complex and heterogeneous morphospace of language networks is revealed. This morphospace can be analyzed from the perspective of Pareto (or multiobjective) optimization, attending to the different needs of speakers and hearers. Pareto optimality has been linked to statistical mechanics [6], which allows us to prove, for the first time and analytically, that the system does include a critical point separating a first-order phase transition (also known as a hybrid phase transition). Additionally, we use curated data of English words to locate and evaluate a real language, for the first time, within this morphospace of communication codes. Our findings indicate a surprisingly simple structure unless referential particles are introduced in the vocabulary. This also opens up the exploration of the heterogeneity of the language morphospace.

[1] Ferrer i Cancho R and Solé R, 2003. Least effort and the origins of scaling in human language. Proc. Natl. Acad. Sci., 100(3), pp.788-791. [2] Prokopenko M, Ay N, Obst O, and Polani D, 2010. Phase transitions in least-effort communications. J. Stat. Mech., 11, P11025. [3] Salge C, Ay N, Polani D, and Prokopenko M, 2013. Zipf's Law: Balancing Signal Usage Cost and Communication Efficiency. SFI working paper: 13-10-033. [4] Solé R, Seoane LF, 2014. Ambiguity in Language Networks. Linguist. Rev., 32(1): 5-35. [5] Seoane LF and Solé R, 2017. The morphospace of language networks. In preparation. [6] Seoane LF, 2016. Multiobjective Optimization in Models of Synthetic and Natural Living Systems. Universitat Pompeu Fabra, Barcelona, Spain.
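The word-object matrix of [1] lends itself to a small worked example. The sketch below is our own toy reading, with objects assumed equally likely and synonymous words splitting the naming probability evenly: the speaker's effort is the entropy of word use and the hearer's effort is the expected ambiguity of a heard word.

```python
# Sketch: speaker and hearer efforts from a binary word-object matrix A,
# where A[i][j] = 1 if word i names object j.
import math

def efforts(A):
    n_words, n_objects = len(A), len(A[0])
    # p(word i): probability of using word i, objects uniform, synonyms split evenly
    p_word = [0.0] * n_words
    for j in range(n_objects):
        namers = [i for i in range(n_words) if A[i][j]]
        for i in namers:
            p_word[i] += 1.0 / (n_objects * len(namers))
    # speaker effort: entropy of the word-use distribution
    h_speaker = -sum(p * math.log2(p) for p in p_word if p > 0)
    # hearer effort: expected log-ambiguity of the word actually heard
    h_hearer = sum(p_word[i] * math.log2(sum(A[i]))
                   for i in range(n_words) if p_word[i] > 0 and sum(A[i]) > 0)
    return h_speaker, h_hearer
```

A one-to-one code minimizes the hearer's effort at the speaker's expense, while a single word naming everything does the opposite; the least-effort literature studies the trade-off between these extremes.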

Optional realisation of the French negative particle (ne) on Twitter: Can big data reveal new sociolinguistic patterns?

ABSTRACT. From the outset, real-world data has been essential to the empirical and theoretical development of the field of sociolinguistics (Labov, 1975). It is thus not surprising that this field recently joined the movement of computational social sciences (Lazer et al., 2009) that results from the ability to collect and model vast digital datasets concerning the behaviour of individuals in collective contexts. The emerging field of computational sociolinguistics (Nguyen et al., 2016) works on data obtained from sensors (proximity sensors, wearable recorders) or from digital communication, which permits automatic, ongoing and unsupervised recording through the collection of traces on the web, in social media, or via portable devices.

In this work we exploit these advancements and use massive datasets to reveal sociolinguistic patterns that would otherwise remain unseen. More precisely, we analyse a dataset comprising 15% of all Twitter communications in France, recorded between June 2014 and January 2017, along with meta-information from the profiles of users. We focus on the usage of the sociolinguistic variable "ne", the first particle of standard negation in French. This variable appears with optional usage of the first morpheme of the negation (Je fume pas vs. Je ne fume pas, "I do not smoke") and was chosen for three reasons: "ne" is a well-documented sociolinguistic marker of spoken French (Armstrong and Smith, 2002); inclusion and omission of "ne" are visible in written tweets; and finally "ne" is always included in standard writing, enabling us to assess the adherence of each user to written norms. We complete our dataset by identifying the home locations of 400K Twitter users to match their locations with spatially enabled socio-economic data collected during the 2016 census in France. This dataset, provided by INSEE, includes the age structure of the population as well as the annual income of people located in square grids of 200m by 200m throughout the French territory.

Using these combined datasets, we extract tweets that include a negative construction and present results in three directions. First, we report the overall rate of "ne" realisation and its regional variation in France (approx. 16% in the North and 28% in the South). Second, we show that correct usage of negation varies continuously over the course of days and weeks, increasing in the morning and decreasing during the night. Finally, we show evidence of a positive correlation between annual income and the rate of usage of "ne", i.e. the level of correctness in the way users express themselves. Our analysis focuses on the sociolinguistic implications of the results, including close examination of the risk of bias. As a final argument we contend that thick data should be combined with big data in order to explain such unforeseen patterns of sociolinguistic variables (Wang, 2013).
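The extraction step could look roughly like the heuristic below. This is our own simplified sketch, not the authors' pipeline: detect the second negation morpheme ("pas", "jamais", ...) and then check whether "ne"/"n'" precedes it.

```python
# Sketch: classify a French tweet as realising or omitting "ne".
import re

NEG2 = r"(pas|jamais|rien|plus|personne)"

def ne_realised(tweet):
    """True/False for tweets containing a negation; None if none is detected."""
    text = tweet.lower()
    if not re.search(r"\b" + NEG2 + r"\b", text):
        return None  # no negative construction found
    # is the optional first morpheme ("ne" or elided "n'") present before it?
    return bool(re.search(r"\b(ne|n')\s*\w*\s*" + NEG2, text))
```

A production pipeline would need to handle "plus" as a non-negative adverb, elisions, and tokenization quirks; the point here is only the shape of the classification.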

Kinship structures create persistent channels for language transmission
SPEAKER: Cheryl Abundo

ABSTRACT. Languages are transmitted through channels created by kinship systems. Persistence of these kinship channels leaves traces in the genetic structure of a population. In the traditional societies of Sumba and Timor, movements among communities are driven by kinship practices which in turn shape both linguistic and genetic variations. Finely resolved co-phylogenies of languages and genes reveal persistent movements between stable speech communities facilitated by kinship channels, which play a critical role in stabilizing deep associations between languages and genes at small scales. This association, routinely found from local to global scales, can therefore be understood through microscopic cultural processes that define pathways of joint inheritance, such as sustained adherence to kinship practices.

The fall of the empire: The Americanization of English

ABSTRACT. With roots dating as far back as Cabot's explorations in the 15th century and the 1584 establishment of the ill-fated Roanoke colony in the New World, the British empire was one of the largest empires in human history. At its zenith, it extended from North America to Asia, Africa and Australia, earning the moniker "the empire on which the sun never sets". As history has shown countless times, the rise and fall of empires is a constant, driven by a tangle of internal and external forces. In the case of the British empire, there is the curious coincidence that its preeminence faded as the United States of America, one of its first colonies, took over its role in the global arena. As the empire spread, so did its language, and thanks to both its global extension and the rise of the US as a global actor, English currently has an undisputed role as the global lingua franca, serving as the default language of science, commerce and diplomacy. Given such an extended presence, it is only natural that English would absorb words, expressions and other features of local indigenous languages.

The transfer of political, economic and cultural power from Great Britain to the United States progressed gradually over the course of more than half a century, with World War II being the final stepping stone in the establishment of American supremacy. The cultural rise of the United States also implied the export of its specific form of English, resulting in a change in how English is written and spoken around the world. In fact, the "Americanization" of (global) English is one of the main processes of language variation in contemporary English. As an example, if we focus on spelling, some of the original differences between British and American English orthography are somewhat blurred and, for instance, the tendency for verbs and nouns to end in -ize and -ization in America is now common on both sides of the Atlantic. Likewise, a tendency for postcolonial varieties of English in SE Asia to prefer American spelling over the British one has also been pointed out. Electronic communication has indeed been considered to play a role in linguistic uniformity. This paper contributes to the study of the Americanization of English, using a corpus of 213,086,831 geolocated tweets to study the spread of American English spelling and vocabulary throughout the globe, including regions where English is used as a first, second and foreign language.

We study both the spatial and temporal variations of vocabulary and spelling of English using a large corpus of geolocated tweets and the Google Books datasets corresponding to books published in the US and the UK. We find that American English is the dominant form of written English outside the UK and that its influence is felt even within the UK borders. Finally, we analyze how this trend has evolved over time and the impact that some cultural events have had in shaping it.
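The spelling-based comparison can be illustrated with a toy scorer. The three variant pairs below are our own illustrative examples, not the study's lexicon: a text is placed on an American-vs-British axis by the share of American spellings among all variant occurrences.

```python
# Sketch: score a text on an American (1.0) vs British (0.0) spelling axis.
VARIANTS = [("color", "colour"), ("center", "centre"), ("organize", "organise")]

def americanization_score(text):
    """Fraction of American variants among all variant tokens; None if none occur."""
    words = text.lower().split()
    us = sum(words.count(a) for a, b in VARIANTS)
    uk = sum(words.count(b) for a, b in VARIANTS)
    total = us + uk
    return None if total == 0 else us / total
```

Aggregating such scores by the geolocation of each tweet yields the kind of spatial map of Americanization the abstract describes.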

11:00-13:00 Session 3C: Economics and Finance - Macroeconomics and Economic Policy I

Parallel session

Location: Cozumel 1
Following the Product Progression Network to Escape from the Poverty Trap

ABSTRACT. Is there a common path of development for different countries, or must each one follow its own way? In order to produce cars, must one first learn how to produce wheels? How can countries escape from the poverty trap? Let us represent countries as walkers in a network made of goods, defined such that if a country steps on one product, it will export it. Initial configurations and paths can be very different: while Germany has already explored much of the available space, underdeveloped countries have a long road ahead. Which are the best paths in the product network? To answer these questions, we build a network of products using the UN-Comtrade data on international trade flows. A possible approach is to connect two products if many countries produce both of them. Since we want to study countries' dynamics, we also want our links to indicate whether one product is necessary to produce the other. So our network is directed: a country usually goes from one product to another, but not vice versa, indicating a natural progression. We introduce an algorithm that, starting from the empirical bipartite country-product network, is able to extract this kind of information. In particular, we project the bipartite network onto a filtered monopartite one in which a suitable normalization takes into account the nested structure of the system. We study the temporal evolution of countries, finding that they follow the direction of the links during industrialization, and spotting which products help countries start to export new products. These results suggest paths in the product progression network that are easier to achieve, and so can guide countries' policies in the industrialization process and the exit from the poverty trap. In the standard view of the industrialization process, countries have to overcome a barrier to escape from the poverty trap, a monetary threshold defined in terms of average wage or physical capital.
When such a threshold is reached, a self-feeding process quickly brings the country from one state of equilibrium (the poverty trap) to another, catching up with the fully developed countries. We use a non-monetary measure of the economic complexity of a country, called Fitness, and we show that complex economies start the industrialization process at a lower threshold. Conversely, if the Fitness is low, sustainable growth can be reached only once a higher, standard monetary threshold is crossed. As a consequence, we can introduce the concept of a two-dimensional poverty trap: a country will start the industrialization process if it is not complex but rich, if it is poor but very complex (exploiting this new dimension and laterally escaping from the poverty trap), or through a linear combination of the two. Finally, we show that following the recommendations given by the product progression network is correlated with a systematic increase of Fitness, showing that such a strategy can lower the barrier to exit from the poverty trap.
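The projection idea sketched above can be illustrated with a toy example. The snippet below is a minimal sketch, assuming a simple conditional-probability projection oriented from more ubiquitous ("easier") products to less ubiquitous ones; the matrix and the normalization are illustrative stand-ins and not the authors' actual algorithm.

```python
import numpy as np

# Toy binary country-product export matrix (rows: countries, cols: products).
M = np.array([
    [1, 1, 1, 1],   # diversified country exports everything
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # least diversified country exports only product 0
])

ubiquity = M.sum(axis=0)   # number of countries exporting each product

# Co-occurrence: how many countries export both p and q.
co = M.T @ M

# Conditional probability that a country exporting p also exports q,
# normalised by the ubiquity of p (a crude correction for nestedness).
P = co / ubiquity[:, None]

# Keep a directed link p -> q only when p is more ubiquitous than q,
# i.e. the "easier" product points towards the "harder" one.
W = np.where(ubiquity[:, None] > ubiquity[None, :], P, 0.0)
np.fill_diagonal(W, 0.0)
print(W)
```

In this toy matrix, for instance, three of the four countries exporting product 0 also export product 1, so the directed weight from product 0 to product 1 is 0.75.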

Macroeconomic Crises, Macroeconomic Analysis and Complex System Modeling

ABSTRACT. The celebrated notion "More is Different" certainly applies in social and economic contexts. Economic activity consists of a highly intricate maze of countless interactions between agents with cognitive abilities comparable to those of the researcher, agents who are involved for practical purposes in processing information about their environment, just as the economist does in an analytical context. Such a system immediately raises questions about its features of self-organization and, reciprocally, about its potential for generating economy-wide coordination failures. Macroeconomics is intrinsically concerned with this tension. Given its subject matter, macro analysis must necessarily rely on drastically schematized representations of individual behaviors and patterns of interaction. If an all-encompassing theory is out of reach, one should expect a coexistence of models with variable features and presumed ranges of validity. The effort to adapt the analytical approach to the problem at hand contrasts with both dogmatism and loose eclecticism. In order to be relevant, macroeconomics must search for suitable approximations, especially for the study of large-scale, socially costly disturbances like deep recessions, debt crises or high inflations. In recent decades, standard macroeconomics has mainly been based on constructs that assume general equilibrium (conditioned by "frictions"), with a presumption that agents decide optimally given constraints and forecast their future opportunities with rational expectations (typically viewed as correspondence between the perceived and actual distributions of the variables of interest). The massive volume of work done within that general framework cannot be ignored. But the approach presents serious shortcomings and limitations, of pertinence, especially for the study of critical phenomena, and also of logic.
The presentation will briefly discuss some of these issues, especially consistency problems in the usual rational expectations models, which have concrete implications for the representation of the dynamics of the anticipations and beliefs of agents when dealing with processes associated with macro crises. Complex system analysis, especially in the form of multi-agent models (ABMs), seems to offer a natural avenue for progress in macroeconomics. Indeed, quite interesting advances have been made in that direction, especially regarding the propagation of impulses in credit and input-output networks. However, the diffusion of macro ABMs as an everyday tool has been relatively slow. The presentation will address specific topics concerning the evolution and prospects of macro theory, with reference to the study of large economic fluctuations and financial disturbances. The discussion will include:
• Concrete illustrations, using long-standing elementary questions, of the centrality of coordination and information-processing issues in macroeconomic settings, and of the pertinence of paying special attention to the range of applicability of arguments and models.
• Evidence indicating analytically relevant elements of big macroeconomic fluctuations and disturbances: changing trends, frustrated expectations, buildups and sudden transitions.
• Reflections on the potential of complex-system modeling in macroeconomics, and comments on some open issues regarding the representation of individual behaviors and systemic mechanisms.

Collateral Unchained: Rehypothecation networks, complexity and systemic effects.

ABSTRACT. This paper investigates how the structure of rehypothecation networks affects the dynamics of endogenous total liquidity and the emergence of systemic risk within the financial system. Rehypothecation consists in the right to reuse the collateral of a transaction many times over. Rehypothecation increases the liquidity of market players, as those players can use the collateral received to honor another obligation. At the same time, rehypothecation lowers parties' actual coverage against counterparty risk, because the same collateral secures more than one transaction, and it can therefore be a source of systemic risk. To study these issues, we build a model where banks are linked by chains of repo contracts and use or re-use a fixed amount of initial collateral. In the model each bank sets the amount of collateral to hoard using a Value-at-Risk (VaR) criterion, and the fraction of collateral hoarded is a function of the fraction of collateral hoarded by the bank's neighbors. In this framework, we show, first, that the additional amount of collateral endogenously created in the system is positively related to the density of the network, revealing an important effect of market integration and diversification processes on collateral and liquidity creation. Second, the emergence of long chains, and especially of cyclic structures, is key to creating a level of collateral that may far exceed the initial level of proprietary collateral of the banks in the network. Furthermore, we study the amount of liquidity hoarding and of total collateral losses following uncertainty shocks hitting a small fraction of banks in the system under different network structures. We show that core-periphery networks allow, on the one hand, the creation of endogenous collateral; on the other hand, they are more exposed to larger cascades of liquidity hoarding and to larger losses in collateral.
This indicates that rehypothecation networks involving an unequal distribution of collateral in the system are also characterized by a trade-off between liquidity and systemic risk.
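The collateral-multiplication mechanism described above can be illustrated with a back-of-the-envelope sketch. The hoarding fraction h and the chain length below are hypothetical parameters, not the paper's VaR-based calibration.

```python
# Total collateral along a chain of n banks, each hoarding fraction h
# of what it receives and re-pledging the rest (illustrative parameters).

def chain_collateral(c0, h, n):
    """Collateral securing transactions across a chain of n repo contracts."""
    total, flowing = 0.0, c0
    for _ in range(n):
        total += flowing          # each contract is secured by what flows in
        flowing *= (1.0 - h)      # re-pledge the non-hoarded fraction
    return total

c0, h = 100.0, 0.1
print(chain_collateral(c0, h, 5))    # finite chain
# In the limit of an arbitrarily long chain (or a cycle that keeps
# re-circulating collateral) the geometric series sums to c0 / h:
print(c0 / h)
```

With these numbers, 100 units of proprietary collateral already secure about 410 units of transactions over a five-bank chain, and up to 1000 in the limiting case, which is the sense in which long chains and cycles can make total collateral far exceed initial collateral.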

A Macroeconomic Computational Agent Based Model and its Social Accounting Matrix representation

ABSTRACT. This paper explores the complementarities between two techniques for analyzing an economy represented as a social complex system. On the one hand, a Macroeconomic Agent Based Model is built and simulated. On the other hand, a Social Accounting Matrix (SAM) of the artificial economy is computed in each of the simulated periods. The Macroeconomic Agent Based Model allows a constructive representation of an economy, taking into account multiple nonlinearities and the heterogeneity related to the adaptive behavior and endowments of the agents in the economy. The Social Accounting Matrix is a framework both for models of how an economy works and for the data used to monitor its working. SAMs are widely employed in many countries as the final product of the national accounts and as the basic data representation for analyzing and anticipating the effects of public policy. Therefore, exploring how this system of information captures and represents an (artificial) economy provides useful insight for future improvements in its representation. In the paper, some of the multipliers of the SAM are used to anticipate the evolution of the simulated economy, and preliminary conclusions are reported about how to improve the predictive capacity of these multipliers.
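The SAM multipliers mentioned above can be illustrated with the standard accounting-multiplier construction. The flows below are made-up numbers, and the calculation is the textbook formula rather than the paper's specific model.

```python
import numpy as np

# Toy 3-account SAM flows (rows receive, columns pay); hypothetical numbers.
sam = np.array([
    [ 0.0, 30.0, 20.0],
    [25.0,  0.0, 15.0],
    [25.0, 10.0,  0.0],
])
column_totals = np.array([100.0, 80.0, 70.0])

# Average expenditure propensities: share of each column total paid to each row.
A = sam / column_totals

# Accounting multiplier matrix: effect of one unit of exogenous injection
# into each account on all account incomes.
M = np.linalg.inv(np.eye(3) - A)

injection = np.array([10.0, 0.0, 0.0])   # exogenous demand shock to account 0
print(M @ injection)
```

The first entry of the result exceeds the initial injection of 10, reflecting the induced income that circulates back through the endogenous accounts.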

A client-based view on the transmission of monetary policy

ABSTRACT. “Secular stagnation” currently constitutes one of the main debates in economics: low economic growth, low inflation and low interest rates have become commonplace in many developed economies. One potential driving factor of secular stagnation is that monetary policy is, in the presence of ageing, no longer an effective tool to counter deficient demand for investment goods in the economy. The question then becomes how monetary policy, transmitted as usual through the banking sector, ultimately impacts the behaviour of individual savers and investors.

We exploit a proprietary data set covering millions of clients over 10 years with all their personal characteristics, income, portfolio composition, loans and mortgages. In addition, it contains all their financial transactions, including transfers and withdrawals, giving a unique insight into their economic behaviour.

Our work focuses on saving and investment (S&I) decisions given the heterogeneity of individuals with regard to age, expected life expectancy, civil status, income level, income stability, wealth, indebtedness, spending patterns, etcetera. This question requires methods outside the realm of traditional economics. We use several tools from statistical learning as well as more traditional econometric models such as local projection modelling.
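Local projection modelling, mentioned above, can be sketched as a horizon-by-horizon OLS regression of the outcome on the shock, in the spirit of Jordà-style local projections. The data below are simulated with a known impulse response; none of this reflects the proprietary client data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly data: a policy-rate shock with a decaying effect on savings.
T = 500
shock = rng.normal(size=T)
y = np.convolve(shock, [0.0, 0.8, 0.5, 0.2], mode="full")[:T] \
    + 0.1 * rng.normal(size=T)

def local_projection(y, shock, horizon):
    """OLS slope of y_{t+h} on shock_t: the impulse response at horizon h."""
    yh, x = y[horizon:], shock[:len(y) - horizon]
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, yh, rcond=None)[0]
    return beta[1]

irf = [local_projection(y, shock, h) for h in range(5)]
print(np.round(irf, 2))
```

The estimated response should be close to the true kernel (0 on impact, then 0.8, 0.5, 0.2, and back to 0), each horizon estimated by a separate regression rather than one structural model.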

With this approach we are able to gain insight into past behaviour, uncovering patterns in groups of individuals that are not obvious from aggregate observables. One example is the substitution of bonds for savings accounts during periods of decreasing interest rates, which results from combining the distinct behaviours of separate groups, not from extrapolating a single behaviour (as with representative agents).

More importantly, we are able to predict future S&I decisions for each individual under given future scenarios of macroeconomic conditions such as interest rates. Special care has been taken to avoid overfitting. The behaviour arises solely from the experience of other agents, without solving any structural economic model as in traditional economics. Our approach has been employed with remarkable out-of-sample success (1 year ex post) on millions of real-world agents. Preliminary results show changing (non-linear) behaviour with regard to changing characteristics.

Our approach would be very valuable for understanding the effects of managing macroeconomic conditions such as monetary policy, as well as policies that directly impact the characteristics of individuals or their economic environment.

The complexity in multi-regional economic systems
SPEAKER: Zeus Guevara

ABSTRACT. The complexity of economic systems can be understood as the level of interdependence between the elements that compose these systems (Lopes et al. 2012). This property is important to the analysis of economies as it is related to their dynamic response to exogenous shocks. Small, less sophisticated or closed economies are thought to be less complex than large, more sophisticated or open (globalized) economies, respectively. Moreover, an economy consists of several regions, which are becoming more and more connected. The more interdependence connections there are between regions, the more complexity the economic system is expected to have. This raises the question of how complexity evolves in a multi-regional economic system. In this respect, the objective of this study is to understand the trend of complexity in multi-regional economic systems with respect to the number of regions that compose them. To do so, we use the input-output methodological framework in combination with the CAI indicators, developed by Ferreira do Amaral et al. (2007), as a measurement of complexity. On the one hand, the input-output method offers a simple representation of structural interdependence between elements of the economy, described by inter-sectoral economic flows. On the other hand, the CAI indicators consider a network effect, based on the number of direct and indirect connections between elements, and a dependency effect, based on the influence of connections on the behavior of each element. It is worth mentioning that Lopes et al. (2012) compare the CAI indicators to other complexity indicators and conclude that the former are the most suitable for studying complexity in input-output analysis. Our case study consists of the estimation of CAI indicators from the WIOD multi-country economic data for 2014 (Timmer et al. 2012), which cover 43 countries and the rest of the world. First, indicators for each country are calculated separately.
Then, to build the trend of complexity, indicators are calculated for multi-country systems, from a 2-country system to the 43-country system (adding one country at each iteration). It is hypothesized that the complexity of multi-regional systems grows geometrically with the number of regions that compose them. The results of this study could provide insights into the understanding of complexity in multi-regional systems, of the effect of increased inter-regional connections on individual regions, and of the growing complexity of a globalized world.
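The input-output side of this workflow can be illustrated with a Leontief-inverse computation on a toy multi-region coefficient matrix. The interdependence index below is a crude illustration and is not the CAI formula of Ferreira do Amaral et al. (2007).

```python
import numpy as np

# Toy two-region, two-sector technical-coefficient matrix (hypothetical values).
A = np.array([
    [0.10, 0.05, 0.02, 0.00],
    [0.04, 0.12, 0.00, 0.03],
    [0.03, 0.00, 0.08, 0.06],
    [0.00, 0.02, 0.05, 0.11],
])

L = np.linalg.inv(np.eye(4) - A)   # Leontief inverse: direct + indirect needs

# A crude interdependence index: average of the off-diagonal entries of L,
# i.e. how much each sector draws on the others directly and indirectly.
off_diag = L - np.diag(np.diag(L))
index = off_diag.sum() / (L.shape[0] * (L.shape[0] - 1))
print(round(index, 4))
```

Recomputing such an index while stacking more regions into the coefficient matrix is the kind of iteration the study performs with the WIOD data.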

References
Lopes, J.C., Dias, J. and Ferreira do Amaral, J., 2012. Assessing economic complexity as interindustry connectedness in nine OECD countries. International Review of Applied Economics, 26(6), pp.811-827.
Ferreira do Amaral, J., Dias, J. and Lopes, J.C., 2007. Complexity as interdependence in input–output systems. Environment and Planning A, 39(7), pp.1770-1782.
Timmer, M., Erumban, A.A., Gouma, R., Los, B., Temurshoev, U., de Vries, G.J., Arto, I.A., Genty, V.A.A., Neuwahl, F., Francois, J. and Pindyuk, O., 2012. The world input-output database (WIOD): contents, sources and methods. Institute for International and Development Economics.

11:00-13:00 Session 3D: Infrastructure, Planning and Environment - Urban Flows and Transport Systems I

Parallel session

Location: Xcaret 3
Formulating a Resilience Index for Metro Systems

ABSTRACT. In this invited talk, we present our latest work* on the formulation of a resilience index for mass rapid transit systems. To the authors' knowledge, there is no globally accepted indicator that can comprehensively and holistically quantify the resilience of metro rail networks. Most existing research in the field has focused either on adapting network centrality measures or on imposing various assumptions about the dynamics within the MRT. In our work, we propose several indicators that capture key attributes of transit system resilience, including vulnerability and redundancy. These attributes then guide the development of resilience indicators, which in turn compose the composite resilience index presented herewith. The proposed framework is able to account for non-homogeneous rail networks, incorporating lines with different capacities and stations with different demand characteristics. We apply our framework to multiple configurations of the Singapore rapid transit network and show that it not only effectively captures overall improvements in resilience but also highlights areas within the network that require additional attention.

* This work is currently under review for presentation at TRB Annual Meeting 2018.
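As a purely illustrative sketch of how vulnerability-type indicators can enter a composite index, the toy example below measures the connectivity loss caused by single-station failures on a small made-up network; it is not the index proposed in the talk.

```python
from collections import deque

# Toy metro network as an adjacency dict (hypothetical line layout).
metro = {
    "A": {"B"}, "B": {"A", "C", "E"}, "C": {"B", "D"},
    "D": {"C", "E"}, "E": {"B", "D"},
}

def connected_pairs(graph):
    """Count ordered station pairs that can still reach each other."""
    total = 0
    for src in graph:
        seen, queue = {src}, deque([src])
        while queue:
            for nxt in graph[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        total += len(seen) - 1
    return total

def vulnerability(graph, station):
    """Fractional loss of reachable pairs when one station fails."""
    sub = {n: nbrs - {station} for n, nbrs in graph.items() if n != station}
    return 1.0 - connected_pairs(sub) / connected_pairs(graph)

# A simple composite: resilience = 1 - worst-case single-station vulnerability.
worst = max(vulnerability(metro, s) for s in metro)
print(round(1.0 - worst, 3))
```

In this toy layout the interchange-like station B is the critical node: removing it isolates station A, so it dominates the worst-case term of the composite.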

Shared journey application summary (Puma Ride)
SPEAKER: Marco A Rosas

ABSTRACT. Given the conditions of growth and development, mobility plays a preponderant role as a factor of change in communities and as a strategy to generate conditions of development and urban prosperity. In this context, the city, in terms of mobility, is defined by two contrasting features: one is the importance of collective public transport, walking and cycling; the other is the growing increase in motorization and congestion, with a high percentage of public resources invested in urban and inter-urban infrastructure for private transport. The objective of this project is to develop a tool that enables interaction for sharing trips, in the first instance within the university community, with the intention of being not merely an application for sharing vehicles but, more broadly, one for sharing trips by public transport, bicycle or on foot. The hope is that this interaction stimulates coexistence in the university community within an environment of awareness of the need to improve mobility, both within Ciudad Universitaria and in the rest of Mexico City. The application will be developed on open platforms that avoid dependence on commercial software licenses: • Django, Leaflet, Python, Android, QGIS, OpenStreetMap. In the interaction of the application with the user, two main algorithms are used. The first generates routes according to the following criteria: mode of transport (car, public transport, pedestrian and bicycle), travel distance, travel time, and number of transport mode changes (transfers). The second is the pairing algorithm, whose proposed criteria are as follows: closest existing alternative routes, distance to the origin of the alternative route, distance to the alternative route's destination, and pairing timeout.
The main platform is a GIS (geographic information systems) platform in which thematic layers are built into a dynamic transport model from which dynamic and multimodal routes can be obtained. UNAM's own mobility infrastructure, such as PUMAbus routes, cycle paths and PUMAbici parking, is integrated into the OpenStreetMap topology. The modes of transport included in the model and in the joint transport topology are: car, public transport (metro, metrobus, electric transport and buses), pedestrian, and bicycle. The possible combinations for multimodal routes are: car-bicycle, public transport-pedestrian, pedestrian-bicycle, and bicycle-public transport. The layers that compose the transport model and allow the implementation of routing algorithms are the following: • roads (streets, pedestrian walkways, pedestrian bridges), cycle paths (including those in Ciudad Universitaria), metro lines, Metrobús lines, metro stations, Metrobús stations, electric transport, collective transport lines, collective transport stops, PUMAbus routes, PUMAbus stops, PUMAbici modules, and points of interest (location of schools, institutes and infrastructure). Another important part of the application is the chat module, through which users can interact to agree on places and times for shared trips. This module also allows users to add, remove or block other users, as well as to manage each user's request history.
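The pairing algorithm described above could be sketched, in a much simplified form, as a greedy match over origin distance, destination distance and a departure-time window. The coordinates, thresholds and names below are hypothetical.

```python
import math

# Hypothetical trip requests: (user, origin, destination, departure minute).
trips = [
    ("ana",   (19.332, -99.187), (19.426, -99.167), 480),
    ("luis",  (19.330, -99.190), (19.425, -99.170), 490),
    ("maria", (19.355, -99.280), (19.420, -99.160), 485),
]

def km(p, q):
    """Rough planar distance in km (adequate at city scale)."""
    dy = (p[0] - q[0]) * 111.0
    dx = (p[1] - q[1]) * 111.0 * math.cos(math.radians(p[0]))
    return math.hypot(dx, dy)

def pair_trips(trips, max_km=1.0, max_wait=15):
    """Greedy pairing: close origins, close destinations, compatible times."""
    pairs = []
    for i, (u1, o1, d1, t1) in enumerate(trips):
        for u2, o2, d2, t2 in trips[i + 1:]:
            if (km(o1, o2) <= max_km and km(d1, d2) <= max_km
                    and abs(t1 - t2) <= max_wait):
                pairs.append((u1, u2))
    return pairs

print(pair_trips(trips))
```

Here the first two requests match (origins and destinations within 1 km, departures 10 minutes apart), while the third lies several kilometres away and stays unpaired.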

Modelling delay dynamics on railway networks

ABSTRACT. Railways are a key infrastructure for any modern country, so much so that their state of development has even been used as a significant indicator of a country's economic advancement. Moreover, their importance has been growing in the last decades, both because of growing railway traffic and because of government investments aimed at exploiting railways to reduce CO2 emissions and hence global warming. To the present day, many extreme events (i.e. major disruptions and large delays compromising the correct functioning of the system) occur on a daily basis. So far, however, these phenomena have been approached from a transportation engineering point of view, while a general theoretical understanding is still lacking. A better comprehension of these critical situations from a theoretical point of view could undoubtedly be useful for improving traffic handling policies. In this paper, we focus on the study of railway dynamics, addressing in particular the inefficiencies caused by train delays and their spreading dynamics throughout the network. Research on delays and their dynamics is not new. The diffusion of delays among different flights in the Air Transport Network has been studied and modeled in the US and in the European Union (Fleurquin et al. 2013; Campanelli et al. 2014), while in the railway transport system a Rail Traffic Controller model has been used to assess the different factors contributing to the delay of a specific train, and a model aiming at predicting positive delay in urban trains has been developed (Higgins 1998).

Here, we propose a novel approach by considering delay spreading as a proxy of contagion phenomena in a network of interacting individuals, which in our case are trains diffusing over a railway network. This interpretation allows the application of conceptual schemes and methodologies that have already proven fruitful in the study of epidemic spreading. We applied these tools to scheduled and real train timetables for Italy and Germany, which have some of the densest railroad networks in Europe. We gathered these datasets through data-mining procedures relying on official public APIs that constantly monitor the current situation of trains across the whole network. We analysed the static network of stations and then dealt with the dynamics of real trains in order to unravel the mechanism of delay transmission from station to station and from train to train, and its effects on the macro scale. We measured the delay distributions and the cluster sizes of delayed trains, to give an explicit and quantitative account of the effects of delays on overall system performance. Finally, we conceived a simple model to simulate the delay spreading dynamics on real schedules and provided evidence for its accuracy and stability in terms of its forecasting features. By determining the criticalities in the network, we could identify the major delay spreaders whose behavior crucially degrades the global performance of railway transportation networks.
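The contagion analogy described above can be sketched as a simple SI-style spreading process over a "runs-after" relation between trains. The network, the transmission probability beta and the model itself are illustrative stand-ins, not the paper's fitted model.

```python
import random

random.seed(42)

# A delayed train can pass part of its delay to each train that runs right
# after it with probability beta (SI-style contagion sketch).
follows = {  # hypothetical "train B runs right after train A" relations
    "A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": [],
}

def spread(seed_train, beta, runs=2000):
    """Average number of delayed trains when seed_train starts delayed."""
    total = 0
    for _ in range(runs):
        delayed, frontier = {seed_train}, [seed_train]
        while frontier:
            current = frontier.pop()
            for nxt in follows[current]:
                if nxt not in delayed and random.random() < beta:
                    delayed.add(nxt)
                    frontier.append(nxt)
        total += len(delayed)
    return total / runs

print(round(spread("A", 0.5), 2))
```

Ranking trains by the average cluster size they seed is one crude way to flag potential "delay spreaders" in such a toy setting.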

A Measurement of the Scale of Activity in Places with a High Proportion of Non-Work Destination Trips

ABSTRACT. This paper identifies places that concentrate a large proportion of non-work trips in Singapore, and then characterizes and classifies them according to their physical and social attributes. A more precise characterization of non-work destinations is needed to improve activity-based transportation models and the specification of travelers’ choice sets.

Singapore is a transit-oriented, highly dense urban area. Over the last few years, the city has been shifting towards a polycentric structure. Today, most places are accessible by transit; jobs cluster around various centers; mixed commercial and residential areas are common; and residential economic and ethnic segregation has been kept low. This spatial configuration impacts people's choices about non-work and non-residential destinations. For instance, people from a range of lifestyles find it convenient to congregate around popular non-work destinations, even if these are not always near their home and work locations.

Methodologically, the problem lies in the fact that non-work leisure destinations exhibit different temporal and spatial scales. As a result, the types of activities that take place, and the kinds of visitors they attract, tend to cluster at different spatio-temporal scales. For instance, a peak of activity might occur from 7 to 10 pm in one place, but from 4 to 6:30 pm in another. Likewise, the peak of activity could encompass a one-block shopping mall or a 2-kilometer-long commercial and retail strip. Additionally, human activity over a non-work destination is not distributed uniformly across such spatio-temporal scales: nested sub-clusters are identifiable only at a finer scale of observation. The composition of sub-clusters inside a non-work destination adds another dimension to questions about social mixing and shared activities. The methodology has six parts. First, a spatial-temporal pattern-detection algorithm identifies about 120 patches in the city using cellphone data from SpotRank; the algorithm draws geographic and temporal boundaries for each cluster. Second, a transportation survey validates place selection and boundaries, ensuring that clusters contain a major portion of the leisure destinations in the sample. Third, geographical data scraped from the Google Places API characterizes commercial activity within clusters (e.g. shopping malls, mix of establishments, among others). Fifth, analysis of the scraped data, combined with a synthetic population and basemaps, identifies and characterizes sub-patches within each cluster. Finally, a k-means algorithm is used to classify the clusters, in order to develop a typology of non-work destinations that accounts for variations in spatial-temporal scales, sub-cluster structure, and activity mix.
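The final classification step can be illustrated with a plain k-means implementation on two made-up features per place (peak hour and spatial extent); the data and features are hypothetical, and the study's actual feature set is far richer.

```python
import random

random.seed(1)

# Each place is described by two illustrative features:
# (peak hour of activity, spatial extent in km).
places = [(19, 0.3), (20, 0.4), (21, 0.2),      # evening, compact venues
          (17, 2.0), (16, 1.8), (18, 2.2)]      # afternoon, long strips

def kmeans(points, k, iters=50):
    """Plain k-means to group destinations into a small typology."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(places, k=2)
print(sorted(len(g) for g in groups))
```

On this toy input the algorithm recovers the two planted types (compact evening venues versus extended afternoon strips); in the study, the clusters are of course classified on many more attributes.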

Our study shows that there is rich variation in the spatial and temporal scales of non-work destinations, as well as in their internal distribution and mix of activities. This variation offers clues about the lifestyles, income groups, and activities that these places seek to attract. The evidence also suggests that transportation surveys need an even finer level of geographical detail to appropriately characterize the activity patterns that occur at heterogeneous, popular non-work destinations in highly dense and mixed areas.

A Multilayer perspective for the analysis of urban transportation systems
SPEAKER: Alberto Aleta

ABSTRACT. Public urban mobility systems are composed of several transportation modes connected together. Most studies in urban mobility and planning ignore the multi-layer nature of transportation systems, considering only aggregated versions of this complex scenario. It is easy to find very recent studies that rely on complex notation to incorporate multiple modes or time, or that simply aggregate the whole network, losing information about transfer times. The few previous studies of urban transportation systems as multiplex networks focus on addressing their multimodal nature, considering each layer as a transportation mode. Although useful for several purposes, this representation totally neglects transfer and waiting times, or requires complex notation to incorporate them.

In this work we present a model for representing the transportation system of an entire city as a multiplex network in which each line is a layer. We then show how these layers can be grouped according to the transportation mode they belong to, into a sort of superlayer, to recover the results of previous studies. Using these two different perspectives, one in which each line is a layer and one in which lines of the same transportation mode are grouped together, we study the interconnected structure of 9 different cities in Europe, ranging from small towns to mega-cities like London and Berlin. In particular, we show that by slightly modifying the definition of interdependency of a multiplex network we can prove that metro networks naturally speed up the system, even without taking into account their greater average velocity or carrying capacity.

Finally, for the city of Zaragoza in Spain, we add some publicly available data (in particular, data about service schedules and waiting times) to our model. This allows us to create a simple yet realistic model of urban mobility, able to reproduce real-world facts and to test network improvements.
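The line-as-layer representation with explicit transfer times can be sketched as a weighted graph over (line, stop) nodes, where inter-layer edges carry waiting times. The network and weights below are invented for illustration.

```python
import heapq

# Each node is (line, stop); riding edges stay within a layer, transfer edges
# connect layers at shared stops (weights in minutes; hypothetical schedule).
edges = {
    ("bus1", "X"): [(("bus1", "Y"), 4), (("metro", "X"), 6)],   # 6 = wait
    ("bus1", "Y"): [(("bus1", "X"), 4)],
    ("metro", "X"): [(("metro", "Z"), 3), (("bus1", "X"), 2)],
    ("metro", "Z"): [(("metro", "X"), 3)],
}

def shortest(start, goal):
    """Dijkstra over the multiplex graph, transfer times included."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in edges.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return float("inf")

print(shortest(("bus1", "Y"), ("metro", "Z")))   # ride + transfer + ride
```

Because transfers are ordinary weighted edges, waiting times enter travel-time computations without any special multilayer notation, which is the point of the line-as-layer construction.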


A Multilayer perspective for the analysis of urban transportation systems. Alberto Aleta, Sandro Meloni and Yamir Moreno. Scientific Reports 7, 44359 (2017). doi:10.1038/srep44359

Modeling the Co-evolution of Urban Form and Transportation Networks

ABSTRACT. Urban settlements and transportation networks are widely admitted to be co-evolving in thematic and empirical studies of territorial systems. However, modeling approaches to such dynamical interactions between networks and territories are less developed. We propose to study this issue at an intermediate scale, focusing on morphological and functional properties of the territorial system in a stylized way. We introduce a stochastic dynamical model of urban morphogenesis that couples the evolution of population density within grid cells with a growing road network. With an overall fixed growth rate, new population aggregates preferentially according to a potential whose parameters control the dependence on various explicative variables, namely local density, distance to the network, centrality measures within the network and generalized accessibility. A continuous diffusion of population complements the aggregation, to translate repulsion processes generally due to congestion. Because of the different time scales of evolution of the urban landscape and the network, the network grows at fixed time steps, with rules that can be switched in a multi-modeling fashion. A fixed rule ensures connectivity of newly populated patches to the existing network. Two different heuristics are then compared: one based on gravity-potential breakdown, in which links are created if a generalized interaction potential through a new candidate link exceeds a given multiple of the potential within the existing network; and a second one implementing biological network growth, more precisely a slime mould model. The two are complementary, since the gravity model is more typical of planned top-down network evolution, whereas the biological model translates bottom-up processes of network growth.
The model is calibrated at the first order (indicators of urban form and network measures) and at the second order (correlations) against the Eurostat population grid coupled with the street network from OpenStreetMap, through the following workflow: indicators (Moran index, mean distance, hierarchy and entropy for morphology; mean path length, centralities and performance for the network) are computed on real areas of width 50 km for all of Europe (which corresponds to the typical scale of the processes the model includes); the parameter space of the model is explored using grid computing (with the OpenMOLE model exploration software), starting from simple synthetic initial configurations (a few connected punctual settlements) and computing indicators on the final simulated configurations; among the candidate parameters for given areas that are contiguous in space and in indicator space, and on which correlations can be computed, the one with the closest correlation matrix computed over repetitions is chosen. We obtain a full coverage of real configurations by simulation results in a principal component plane for the indicators, and for most of them a close correlation structure is found. Both network heuristics are necessary for the full coverage. The model is thus able to reproduce existing urban forms and networks, but also their interaction in the sense of correlations. We furthermore study dynamical lagged correlations between normalized returns of population and network patch explicative variables, exhibiting a large diversity of spatio-temporal causality regimes, in which the network can drive urban growth, or vice versa, or more complex circular causalities can arise, suggesting that the model effectively grasps the dynamical richness of the interactions.
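The preferential-aggregation-plus-diffusion step of the morphogenesis model can be sketched as follows; the potential function (a simple power of density), the diffusion coefficient and the grid are illustrative simplifications of the model's multi-variable potential.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of population densities; new population aggregates preferentially
# to cells with high potential (here density**alpha, a simplified stand-in
# for the model's potential combining density, network distance, etc.).
grid = rng.random((20, 20))
alpha = 1.5
growth_per_step = 100.0

def grow(grid, steps):
    g = grid.copy()
    for _ in range(steps):
        potential = g ** alpha
        probs = (potential / potential.sum()).ravel()
        cells = rng.choice(g.size, size=100, p=probs)  # 100 unit "settlers"
        np.add.at(g.ravel(), cells, growth_per_step / 100.0)
        # diffusion step (periodic boundaries), translating congestion repulsion
        g += 0.05 * (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                     + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return g

out = grow(grid, steps=50)
print(round(out.sum() - grid.sum(), 1))   # total added population
```

The aggregation term concentrates population hierarchically while the diffusion term smooths it, and the competition between the two already produces polycentric-looking density fields even before any network coupling.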

Developing an Effective Model for Traffic Dynamics with Realistic Driving Behaviours

ABSTRACT. It is of great interest, both theoretically and for practical applications, to understand via simple models the emergent behaviours of complex systems containing a large number of interacting components. Examples like crowd dynamics, highway traffic systems, and, in this particular case, urban traffic flows have attracted physicists, engineers and behavioural scientists for decades. In contrast to traditional many-body physical systems, traffic systems lack almost any symmetry at the microscopic level: even individual components differ from one another, with intrinsic stochasticity.

There remains a lot of controversy about how to model traffic flow properly, especially when the traffic is congested. Most, if not all, of the commercially available software packages use rather simplified interactions between vehicles, and the resulting simulations are only realistic when the traffic flow is moderate or small (so that the interactions are not so important, since the vehicles are far away from each other). To this end, we have collected and analysed more than eight months of data on driving behaviours on the expressways in Singapore. We have shown that the data capture some of the salient features of traffic flow in the flow-density plot, including the transitions from free flow to synchronized flow to wide moving jam. Further analysis of the data allowed us to understand the specific roles played by the stochasticity among vehicles and drivers, and the extent to which we need to add stochastic terms to our traffic models from a practical point of view. In particular, we found that the collective sensitivity of drivers to their own velocity and to the relative velocity is generally nonlinear and strongly dependent on the headways, or gaps, between moving vehicles. We use these observations to construct realistic effective models for traffic flows using renormalisation techniques, and describe how such insights can be used to understand which detailed driving behaviours need to be modeled for the numerical simulations to be useful for practical implications.
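A minimal example of the kind of effective model discussed above is an optimal-velocity car-following model with a nonlinear, headway-dependent desired speed and a stochastic term standing in for driver heterogeneity. All parameter values below are illustrative, not the fitted Singapore values.

```python
import math
import random

random.seed(0)

# Optimal-velocity car-following on a ring road: each driver relaxes towards
# a headway-dependent desired speed, with small noise for heterogeneity.
N, road = 20, 400.0
pos = [road * i / N for i in range(N)]
vel = [10.0] * N
dt, sensitivity, noise = 0.1, 1.2, 0.05

def optimal_velocity(headway):
    """Desired speed as a smooth, saturating function of the gap ahead."""
    return 15.0 * (math.tanh(0.1 * headway - 1.0) + math.tanh(1.0)) / 2.0

for _ in range(2000):
    head = [(pos[(i + 1) % N] - pos[i]) % road for i in range(N)]
    acc = [sensitivity * (optimal_velocity(head[i]) - vel[i])
           + random.gauss(0.0, noise) for i in range(N)]
    vel = [max(0.0, v + a * dt) for v, a in zip(vel, acc)]
    pos = [(x + v * dt) % road for x, v in zip(pos, vel)]

print(round(sum(vel) / N, 2))   # mean speed after relaxation
```

With these parameters the uniform flow is linearly stable and the fleet relaxes to the optimal velocity at the 20 m average headway; making the sensitivity weaker (or the gap dependence steeper) is the standard route by which such models develop stop-and-go waves.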

11:00-13:00 Session 3E: Biological and (Bio)Medical Complexity - Cellular dynamics and neurobiology I

Parallel session

Location: Tulum 1&2
Regulatory Biochemical Signaling Networks Related to Fertilization.

ABSTRACT. Fertilization is one of the fundamental processes of living systems. In this talk we address external marine fertilization and comment on some recent mammal studies. Our work on how sea urchin sperm swim towards the egg is based on experiments showing that the flagellar [Ca2+], triggered by the binding of chemicals from the oocyte surroundings, modifies sperm navigation and in some cases produces chemotaxis - transport guided by chemical gradients. For a better understanding of this process, we have constructed a family of logical regulatory networks for the [Ca2+] signaling pathway [1,2,3,4,5]. These discrete models reproduce previously observed electrophysiological behaviours and have provided predictions, some of which we have confirmed within our research group with new experiments, as shown in a couple of videos. We have gained insight into the operation of drugs that modify the temporal behaviour of the calcium fluctuations and hence control sperm navigation, in some cases producing disorientation [2,3]. With our theoretical studies we have been able to predict that CatSper is the dominant calcium channel [5]. This channel, related to male contraception in mammals, is a matter of intense research. Overall, our results may be relevant to fertility issues. We also present preliminary work on mammals. Our systems biology approach raises issues such as the stability, redundancy and degeneracy of the network dynamics. The finding that the discrete network dynamics operates in a critical regime, where robustness and evolvability coexist, is a matter for reflection and may be of evolutionary interest. Taking into account the attractor landscape of the dynamics and criticality considerations, we have implemented a network reduction method. It is encouraging that the outcome of this process is a network that coincides with an alternative step-by-step constructive continuous model.

[1] Espinal, J., Aldana, M., Guerrero, A., Wood, C.D., Darszon, A., and Martínez-Mekler, G. (2011). Discrete dynamics model for the speract-activated Ca2+ signaling network relevant to sperm motility. PLoS ONE 6(8): e22619.
[2] Guerrero, A., Espinal, J., Wood, C.D., Rendón, J.M., Carneiro, J., Martínez-Mekler, G., Darszon, A. (2013). Niflumic acid disrupts marine spermatozoan chemotaxis without impairing the spatiotemporal detection of chemoattractant gradients. Journal of Cell Science 126(6): 1477.
[3] Espinal, J., Darszon, A., Wood, C., Guerrero, A., Martínez-Mekler, G. (2014). In silico determination of the effect of multi-target drugs on sea urchin spermatozoa motility. PLoS ONE 9(8): e104451.
[4] Wood, C., Guerrero, A., Priego, D., Martínez-Mekler, G., Carneiro, J., Darszon, A. (2015). Sea Urchin Sperm Chemotaxis. In Cosson, J.J. (ed.), Flagellar Mechanics and Sperm Guidance, Bentham Books, pp. 132-207.
[5] Espinal, J., Darszon, A., Beltrán, C., Martínez-Mekler, G. Network model predicts that CatSper is the main Ca2+ channel in the regulation of sea urchin sperm motility. Submitted to Scientific Reports.
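To make the logical-network approach concrete, here is a deliberately tiny Boolean sketch with three invented nodes (not the published speract-activated network) showing how clamping one node, as a drug might, reshapes the attractor landscape from a sustained oscillation to a fixed point:

```python
# Toy negative circuit: receptor -> Ca2+ -> pump -| receptor.
# Node names, rules, and the "drug" action are hypothetical illustrations.
def update(state, clamp=None):
    r, c, p = state
    new = (int(p == 0), r, c)            # receptor' = not pump; Ca' = receptor; pump' = Ca
    if clamp is not None:
        new = (new[0], new[1], clamp)    # a "drug" fixes the pump node
    return new

def attractor(state, clamp=None):
    """Iterate the synchronous dynamics until a state repeats; return the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = update(state, clamp)
    return tuple(seen[seen.index(state):])

untreated = attractor((0, 0, 0))            # sustained [Ca2+] oscillation (6-cycle)
treated = attractor((0, 0, 0), clamp=0)     # pump blocked: Ca2+ locked high (fixed point)
```

In this caricature the intact negative circuit yields a cyclic attractor, while blocking the pump collapses the dynamics onto a single state, loosely mirroring how the abstract's drugs alter calcium fluctuations and disorient navigation.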

The architecture of an empirical genotype-phenotype map
SPEAKER: Leto Peel

ABSTRACT. Characterizing the architecture of an empirical genotype-phenotype (GP) map is an outstanding challenge in the life sciences, with implications for our understanding of evolution, development, and disease. For most biological systems, our knowledge of the corresponding GP map remains too poor to tackle this challenge. Recent advances in high-throughput technologies are bringing the study of empirical GP maps to the fore. Here, we use data from protein binding microarrays to study an empirical GP map of transcription factor (TF) binding preferences. In this map, genotypes are DNA sequences and phenotypes are the TFs that bind these sequences. We study this GP map using genotype networks, in which nodes represent genotypes with the same phenotype, and edges connect nodes if their genotypes differ by a single small mutation: either a point mutation or an indel that shifts the entire, contiguous binding site by a single base.

We describe the structure and arrangement of genotype networks within the space of all possible binding sites for 527 TFs from three eukaryotic species spanning three kingdoms of life: animals, plants, and fungi. Specifically, we examine 190 TFs from Mus musculus, 218 TFs from Arabidopsis thaliana, and 119 TFs from Neurospora crassa. These TFs collectively represent 45 unique DNA binding domains, which can be thought of as distinct biophysical mechanisms by which TFs interact with DNA.

We show that these genotype networks have a short characteristic path length relative to their diameter and high clustering coefficients, indicating that genotype networks of TF binding sites tend to fall within the family of “small-world” networks. The majority of the genotype networks are assortative by degree, which may result in phenotypic entrapment, where the probability of leaving a genotype network decreases over time. We also find that these networks can be partitioned in multiple meaningful ways, and that they ubiquitously overlap and interface with one another. We thus provide a high-resolution depiction of the architecture of an empirical GP map and discuss our findings in the context of regulatory evolution.
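A minimal sketch of the genotype-network construction described above, using a toy phenotype in place of measured TF binding preferences (the "binds any 5-mer containing AT" rule is invented; the real map comes from protein binding microarrays, and the paper's edges also include the indel move, omitted here):

```python
from itertools import product
from collections import deque

ALPHABET = "ACGT"

def genotype_network(length, phenotype):
    """Nodes: all sequences with the given phenotype; edges: single point mutations."""
    nodes = {s for s in ("".join(p) for p in product(ALPHABET, repeat=length))
             if phenotype(s)}
    edges = {n: [] for n in nodes}
    for n in nodes:
        for i, base in enumerate(n):
            for b in ALPHABET:
                if b != base and n[:i] + b + n[i + 1:] in nodes:
                    edges[n].append(n[:i] + b + n[i + 1:])
    return edges

def mean_path_length(edges, source):
    """BFS distances from one genotype to every reachable genotype."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in edges[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()) / (len(dist) - 1)

# Hypothetical phenotype: a TF "binds" any 5-mer containing the motif AT.
net = genotype_network(5, lambda s: "AT" in s)
L = mean_path_length(net, "ATATA")
```

On real data one would additionally compute clustering coefficients and degree assortativity over `edges` to test the small-world and entrapment claims.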

Dynamics without kinetics: recovering perturbation patterns of biological networks from topology

ABSTRACT. Modern biology has been greatly impacted by the advent of high-throughput technologies, which allow researchers to gather an unprecedented wealth of quantitative data on the makeup of living systems. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology as a suitable framework for describing disease phenotypes and drug targeting. Yet the gap to a predictive dynamical modeling framework seems daunting, due in part to limited knowledge of the kinetic parameters underlying these interactions and the difficulty of foreseeing their systematic measurement any time soon. Here, we tackle this challenge by showing that kinetics-agnostic biochemical network spreading models are sufficient to precisely recover the strength and sign of the biochemical perturbation patterns observed in 87 diverse biological models for which the underlying kinetics is known. Surprisingly, a simple distance-based model achieves on average 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and highlight key network properties that foster a high recovery rate of the ground-truth perturbation patterns. Finally, we validate our approach on the chemotactic pathway in bacteria by showing that the network model of perturbation spreading predicts with ~80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady refinement of biochemical interaction networks opens avenues for precise modeling of perturbation spread, with direct implications for medicine and drug development.
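One plausible reading of a "simple distance-based model" is sketched here on a hypothetical four-node signed pathway: the predicted perturbation strength decays with network distance from the perturbed node, and the sign multiplies along the path. The network, the decay factor `alpha`, and the tie-breaking by BFS order are all assumptions for illustration, not the paper's exact scheme.

```python
from collections import deque

# Hypothetical signed pathway: activation (+1) and inhibition (-1) edges.
network = {
    "A": [("B", +1), ("C", -1)],
    "B": [("D", +1)],
    "C": [("D", -1)],
    "D": [],
}

def perturbation_pattern(source, alpha=0.5):
    """Topology-only prediction: strength decays as alpha**distance from the
    perturbed node; the sign multiplies along the first shortest path found."""
    pattern = {source: 1.0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, sign in network[u]:
            if v not in pattern:
                pattern[v] = pattern[u] * sign * alpha
                queue.append(v)
    return pattern

pattern = perturbation_pattern("A")
```

Comparing such topology-only predictions against simulations with full kinetics is the kind of benchmark the abstract reports across its 87 models.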

Simulations of Phase Transitions in Gene-Gene Interaction Networks

ABSTRACT. A key aim of biomedical research is to systematically catalogue all genes and their interactions within a living cell. The commonly adopted approach to studying such complex interacting systems, reductionism, has dominated biological research and has given in-depth insights into individual cellular components and their functions, such as the formative work of [1]. A theoretical framework for the global linear response of such interaction networks under low local stress has been established as well [2]. An efficient approach to calculating large-scale response features from data, without explicitly unraveling the detailed underlying network, has been developed based on a coupling of linear response theory and statistical thermodynamics [3].

However, it is increasingly clear that formulations in which a single biological function is attributed to an individual component are not accurate, since most biological characteristics arise from strong interactions of biological entities in complex networks, leading to unexpected biological responses to local modes of action. Therefore, a thorough understanding, together with efficient tools for the analysis and simulation of the behavior of large-scale networks with these generic biological topologies, is key to further progress in a wide variety of applications in computational biology.

Here we apply a new approach, at the intersection of network biology and statistical mechanics, to study the effect of local perturbations on the phase transition profiles of an Ising-model network, with sensitivity analysis of real-life data. The effect of stress is modeled as external noise that can lead to an irreversible phase transition. The evolution of the Ising states of the network shows critical points of phase transitions in analogy to thermodynamics. We present results for perturbations at the interaction level, for different initial conditions of the Ising states, and for perturbations at the topological level, i.e., the influence of a node or link on the network phase transition.

[1] J. Gao et al. Universal resilience patterns in complex networks. Nature 530(7590) (2016), pp. 307-312.
[2] T.S. Gardner et al. Inferring genetic networks and identifying compound mode of action via expression profiling. Science 301(5629) (2003), pp. 102-105.
[3] S. Schneckener et al. An elastic network model to identify characteristic stress response genes. Computational Biology and Chemistry 34(3) (2010), pp. 193-202.
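A bare-bones version of the simulation the abstract describes: Metropolis dynamics of an Ising model on an interaction network, with an external field standing in for stress. The ring topology, temperatures and field strength are illustrative choices, not the authors' gene-gene network or parameters.

```python
import math
import random

def metropolis_ising(neighbors, beta, field=0.0, steps=20000, seed=7):
    """Metropolis dynamics of an Ising model on an arbitrary interaction network;
    `field` plays the role of external stress in the thermodynamic analogy."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    spin = {n: rng.choice((-1, 1)) for n in nodes}
    for _ in range(steps):
        n = rng.choice(nodes)
        dE = 2 * spin[n] * (sum(spin[m] for m in neighbors[n]) + field)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spin[n] = -spin[n]
    return sum(spin.values()) / len(spin)   # magnetization, the order parameter

ring = {i: [(i - 1) % 40, (i + 1) % 40] for i in range(40)}   # toy 40-gene network
m_disordered = metropolis_ising(ring, beta=0.1)               # weak coupling
m_stressed = metropolis_ising(ring, beta=2.0, field=2.0)      # strong coupling + stress
```

Sweeping `beta` or `field` while recording the magnetization traces out the phase-transition profiles discussed in the abstract; topological perturbations correspond to editing the `neighbors` dictionary.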

Topological analysis of multicellular complexity in the plant hypocotyl
SPEAKER: George Bassel

ABSTRACT. Multicellularity arose as a result of adaptive advantages conferred by complex cellular assemblies. The arrangement of cells within organs endows higher-order functionality through a structure-function relationship, though the organizational properties of these multicellular configurations remain poorly understood. We investigated the topological properties of multicellular organ construction by digitally capturing global cellular interactions in the plant embryonic stem (hypocotyl) and analyzing these using network science. Cellular organization in the hypocotyl is tightly controlled both within, and across, diverse wild-type genetic backgrounds and species, indicating the presence of developmental canalization at the level of global multicellular configurations within this organ. By analyzing cellular patterns in genetic patterning mutants, the quantitative contribution of gene activity towards the construction of these multicellular topological features was determined. This quantitative analysis of global cellular patterning reveals how multicellular communities are structured in plant hypocotyls, and the principles of optimized and possible cellular complexity in organ design.

The role of multiple feedback circuits in the regulation of the size and number of attractors in Boolean networks

ABSTRACT. The study of gene regulation has demonstrated that genes and their products form intricate networks of regulatory interactions. Previous work has reported the presence of statistically over-represented regulatory structures, or motifs, within gene regulatory networks. Such over-representation suggests that motifs have important functions in gene regulatory networks. In particular, it has been demonstrated that positive and negative feedback circuits are necessary for the appearance of multiple and cyclic attractors, respectively. Attractors represent important, biologically meaningful properties: in gene regulatory networks, attractors correspond to the gene expression patterns observed in different cell types. Although previous studies have analyzed the importance of feedback circuits for attractors, it is still not clear how multiple interconnected feedback circuits regulate the number and size of the attractors. Consequently, studying how the number and size of the attractors are determined in networks containing multiple feedback circuits is fundamental for a more general understanding of gene regulatory networks.

In this work, we study the effect of multiple feedback circuits on the size and number of attractors. To do this, we build pathway-like networks: networks that contain a linear backbone of regulatory interactions plus some additional interactions that create feedback circuits. We use a Boolean formalism and computational tools to analyze the size and number of attractors of millions of pathway-like networks with 3 and 5 nodes. We then characterize each feedback circuit by its sign and functional cardinality (i.e., the network states where a feedback circuit has an effect on the network dynamics), which is an extension of previous definitions of circuit functionality (Thomas and Kaufman, Chaos, 2001). Next, we define the coupling between multiple feedback circuits as the set of signs and functional cardinalities of all feedback circuits contained in a network. We find that the particular way in which feedback circuits couple largely determines both the number and size of the attractors. Importantly, we observe that networks with different structures that share the same feedback circuit coupling in general produce the same number and size of attractors, while networks with the same structure but different feedback circuit coupling can produce a different number and size of attractors. These findings suggest that inference of gene regulatory interactions by traditional methods, such as epistasis analysis, is limited, as such methods can distinguish neither between different networks that produce the same result, nor between networks with the same structure that produce different results. Our study hence shows that the characterization of feedback circuits in a regulatory network, combined with the network dynamics and structure, allows for a better characterization of regulatory structures.
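The classical distinction the abstract builds on, positive circuits enabling multiple attractors and negative circuits enabling cyclic ones, can be checked by brute force on toy three-node circuits. The update rules below are illustrative single circuits, not the paper's pathway-like networks:

```python
from itertools import product

def attractors(n, update):
    """All attractors of an n-node synchronous Boolean network, as canonical cycles."""
    found = set()
    for bits in product((0, 1), repeat=n):
        state, seen = bits, []
        while state not in seen:
            seen.append(state)
            state = update(state)
        cycle = tuple(seen[seen.index(state):])
        i = cycle.index(min(cycle))
        found.add(cycle[i:] + cycle[:i])   # rotate to a canonical starting state
    return found

# x -> y -> z -> x: a positive circuit (all activations) vs a negative one (z -| x).
positive = attractors(3, lambda s: (s[2], s[0], s[1]))
negative = attractors(3, lambda s: (1 - s[2], s[0], s[1]))
```

The positive circuit yields two fixed points (multistationarity) alongside two cycles, while the negative circuit has no fixed point at all, only cyclic attractors, in line with the theorem cited in the abstract.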

Conserved homological cores of structural and functional brain networks across age

ABSTRACT. We analyse the homological structure of the structural and functional connectivity of the human brain [1]. Persistent homology [2,3], a technique in topological data analysis, captures multiscale higher-order patterns (in the form of simplicial complexes, panel (a)) in structural and functional connectomes [4,5]. In this contribution, we show evidence that the brain’s algebraic topological structure is in large part conserved over time at the expense of its geometry. To do this, we analyse the persistent homology properties of brain structural and functional networks and their evolution in a cohort of 50 healthy participants across the age range 18-80. We find that structural connectomes display mild global topological changes with age (panel (b)): against a general reduction in overall connectivity, the brain regions of structural disconnection tend to reinforce their boundaries, intuitively akin to a sponge drying over time. On the contrary, functional connectomes do not show any large-scale homological change with age (panel (c)). We find that this overall topological conservation is obtained by a reorganization of the localization of homological features (geometry): the connectomes’ homological scaffolds [7], surrogate networks that summarise the connectomes’ homological information, show a backbone of links that are shared across age groups, with distinct weight modulations related to ageing (panels (d) and (e)). While the observed changes in the structural connectomes are well reproduced by a simple data-driven model, pointing to a natural decay based on independent fiber cell death, the same model erroneously predicts strong global changes in the functional topology, suggesting the existence of an adaptive meso-scale mechanism that modulates brain functional connectivity to preserve the healthy functional topology despite the underlying structural changes.

[1] Bullmore, E., & Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186-198.
[2] Dabaghian, Y., Mémoli, F., Frank, L., & Carlsson, G. (2012). A topological paradigm for hippocampal spatial map formation using persistent homology. PLoS Computational Biology, 8(8), e1002581.
[3] Petri, G., Scolamiero, M., Donato, I., & Vaccarino, F. (2013). Topological strata of weighted complex networks. PLoS ONE, 8(6), e66506.
[4] Lord, L. D., Expert, P., Fernandes, H. M., Petri, G., Van Hartevelt, T. J., Vaccarino, F., ... & Kringelbach, M. L. (2016). Insights into brain architectures from the homological scaffolds of functional connectivity networks. Frontiers in Systems Neuroscience, 10.
[5] Petri, G., Expert, P., Turkheimer, F., Carhart-Harris, R., Nutt, D., Hellyer, P. J., & Vaccarino, F. (2014). Homological scaffolds of brain functional networks. Journal of the Royal Society Interface, 11(101), 20140873.
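Full persistent homology of weighted connectomes is usually computed with a TDA library (e.g. GUDHI or Ripser). As a self-contained taste of the idea, the sketch below computes only the dimension-0 part, the persistence of connected components, by sweeping edges from strongest to weakest with union-find; the toy "connectome" is invented.

```python
def h0_persistence(n, weighted_edges):
    """Deaths of connected components as edges enter from the strongest weight
    downwards (a descending filtration), tracked with union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    deaths = []
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            deaths.append(w)                # a component merges (dies) at weight w
    return deaths

# Toy connectome: two tight modules (nodes 0-2 and 3-5) bridged by one weak link.
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.85, 3, 4), (0.7, 4, 5), (0.2, 2, 3)]
deaths = h0_persistence(6, edges)
```

The last, long-delayed merge at weight 0.2 flags the weak bridge between modules; higher-dimensional features (cycles, cavities) require the full simplicial machinery used in the talk.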

11:00-13:00 Session 3F: Socio-Ecological Systems (SES) - Modeling Socio-Ecological Couplings

Parallel session

Location: Cozumel 3
Linking Human Health to Planetary Boundaries for Chemical Pollution

ABSTRACT. In their seminal work on a safe operating space for humanity, Rockström et al. (2009) introduce the concept of planetary boundaries (PB). These scientifically based limits on human perturbation of nine essential earth system processes – climate; biodiversity; nitrogen and phosphorus cycles; stratospheric ozone; ocean acidification; global freshwater use; land use; chemical pollution; and atmospheric aerosol loading – are proposed to ensure the sustainability of a hospitable world for future human populations. Key to defining this safe operating space is identifying appropriate control variables and system boundaries. Examples of control variables are atmospheric CO2 concentration for climate change, and the area of forested land as a percentage of original forest cover for land-use change, with the corresponding PBs set at 350 ppm and 75%, respectively. Defining measurable control variables and boundaries, even in the face of uncertainty, allows for prioritization and the development of concrete policy proposals. For chemical pollution, however, control variables and boundaries remain undefined.

Other groups have responded to the PB framework from the perspective of environmental chemistry. Diamond et al. (2015) submit that defining a PB for chemical pollution requires describing and quantifying the impact of an anthropogenic stressor on all ecosystems as a function of a measurable control variable related to a measurable response variable. Yet challenges persist, due to the high number and poorly identified mixtures of chemicals to which all organisms are currently exposed, our lack of knowledge regarding the toxic effects of complex mixtures, especially in the long term, and the difficulty of controlling exposure given the multitude of fate and transformation processes that occur between emission and exposure. Persson and colleagues (2013) suggest that a hazard-based approach, which takes into account the inherent properties of a chemical and is therefore independent of exposure, can facilitate action in the face of uncertainty. However, this fails to address complex mixtures, and the authors acknowledge the difficulty of foreseeing potential unwanted earth system effects about which we are ignorant.

Here I propose a focus on two human health endpoints as directly observable and measurable effect variables for this particularly wicked PB, through which we might identify useful control variables: human fertility rates and the incidence of childhood disease. Recent work has revealed a dramatic decline in male fertility and an increasing incidence of childhood disease, both linked to environmental pollution. Based on these response variables, I explore evidence that such a boundary for chemical pollution exists, and how control variables to limit the impact of environmental pollution could be defined. Drawing on concepts from resilience theory and the study of tipping points in ecological and social systems, can we identify thresholds for chemical burdens in populations? Must these thresholds be defined independently for each chemical considered, or are there ways to define broader (and perhaps less uncertain) classes of chemical pollution? Finally, how can these thresholds help us motivate the most impactful and attainable policy or behavioral interventions to reduce exposures?

Household Energy Use and Behavior Change Tracking Framework: From Data to Simulation
SPEAKER: Leila Niamir

ABSTRACT. The climate is changing and its adverse impacts are felt worldwide. Greenhouse gas emissions from human activities are driving climate change and continue to rise. Consumers are one of the main drivers of the energy transition. The distributed nature of renewables, the increasingly competitive costs of renewable technologies, and new developments in smart grids and smart homes make it possible for energy consumers to become active players in this market (EU, 2017). However, quantifying the aggregated impacts of behavioral changes is a challenging task. Behavioral shifts among households are often modelled in a very rudimentary way, assuming a representative consumer (group), rational optimization choices and instantly equilibrating markets. The growing body of empirical literature in the social sciences points to complex behavioral processes among households who consider changes in their energy consumption and related investments. There are a number of barriers and drivers that could trigger households to make a decision and change their behavior, for example regarding their energy use. In particular, a large body of empirical studies in psychology and behavioral economics shows that consumer choices and actions often deviate from these assumptions of rationality, and that there are persistent biases in human decision making that lead to different behavior. We conducted an extensive review of relevant theories in psychology (environmental psychology specifically) to identify the theoretical basis for these barriers and drivers, as well as existing empirical evidence. In this paper we aim to quantify and assess the impacts that behavioral changes of households may have on cumulative energy use and emission reduction. Towards this end, we ran a comprehensive survey among households and combined it with agent-based modeling techniques.
Our survey, carried out in the Netherlands and Spain in 2016, is rooted in psychological theories that allow us to elicit behavioral and cognitive factors in households’ decision making in addition to traditional economic factors. The survey is designed to elicit the factors and stages of a behavioral process with respect to the three types of energy-related actions households typically take: (1) investing in energy-saving equipment, (2) energy conservation through a change in energy consumption habits, and (3) switching to another energy source. To reach any of these decisions, a household is assumed to follow three main steps: knowledge activation, motivation, and consideration. At each step, several psychological, economic, socio-demographic, social, and structural and physical drivers and barriers are considered and estimated. In parallel, we develop an agent-based model, BENCH, grounded in Norm Activation Theory and the survey data. BENCH was designed to integrate behavioral aspects into a standard economic decision of an individual regarding household energy use, and to study the cumulative impacts of these behavioral changes at a regional level as well as the dynamics of these changes over time and space. BENCH is parameterized with the survey run in the Netherlands (N=1000 households). We run the empirical BENCH model for the period 2016-2050 under different behavioral assumptions and two shared socioeconomic pathway (SSP) scenarios (business as usual and high technology cost).
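The three-step decision process can be sketched as a toy agent-based model. Only the step sequence (knowledge activation, motivation, consideration) follows the abstract; the thresholds, weights, 5% saving and agent count are invented placeholders, not BENCH's survey-estimated parameters.

```python
import random

class Household:
    """Minimal BENCH-style agent (hypothetical parameterization)."""
    def __init__(self, rng, awareness, norm):
        self.rng = rng
        self.awareness = awareness    # knowledge of one's own energy use
        self.norm = norm              # personal/social norm strength
        self.energy_use = 100.0       # arbitrary units per period

    def step(self):
        if self.awareness < 0.3:                    # knowledge not activated
            return
        motivation = 0.5 * self.norm + 0.5 * self.awareness
        if motivation < 0.5:                        # insufficient motivation
            return
        if self.rng.random() < motivation:          # consideration -> action
            self.energy_use *= 0.95                 # conserve 5% this period

rng = random.Random(42)
agents = [Household(rng, rng.random(), rng.random()) for _ in range(500)]
for _ in range(10):                                 # ten simulated periods
    for agent in agents:
        agent.step()
total = sum(a.energy_use for a in agents)
```

Aggregating `energy_use` across agents and periods is the kind of regional cumulative-impact quantity the abstract targets; in the real model the thresholds would come from the survey, not constants.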

Understanding Lags, Thresholds and Cross-Scale Dynamics in Social Ecological Systems: Cascading Impacts of Climate and Land Use Adaptation on Missisquoi Bay

ABSTRACT. While a growing amount of modeling and experimental research in sustainability science and environmental sciences has identified the importance of understanding cross-scale dynamics, many issues about modeling cascades of lags, inertia and thresholds (phase transitions) in coupled natural and human systems remain unresolved. Situated in the Social Ecological Systems (SES) theoretical and empirical framework, this paper addresses the following question: how do lags, inertia and thresholds (phase transitions) affect the evolution of state variables in SES that interact across multiple scales of space and time? We investigate this question by modeling the cascading impacts of global climate change and land use land cover change (LULCC) on the coupled riverine and lake system of the Missisquoi watershed and its bay over the 2000-2100 timeframe. The Missisquoi River at Swanton drains a 2,200 km2 watershed within the Vermont and Québec portions of the Lake Champlain basin. A monitored stream flow record (1990-present) is available for the watershed outlet, and a long-term monitoring record of water quality in the bay is available since 1992. We developed a novel Integrated Assessment Model (IAM) framework to explore cross-scale interactions in a coupled natural-human system and to uncover quantitative thresholds (phase transitions) of the interacting climate and land-use system which drive water quality in Missisquoi Bay over the 2000-2100 timeframe. To construct the IAM, first, statistical downscaling, bias correction, and topographic downscaling of 4 GCMs for three representative concentration pathways (RCPs) were used to generate an ensemble of 12 future climate simulations of daily temperature and precipitation at 30 arc-second (approximately 0.8 km x 0.8 km) resolution for the study site.
In parallel, a LULCC agent-based model (ABM) operating at the landowner parcel scale was developed to generate four extreme 30 m x 30 m land-use change scenarios for the Missisquoi watershed, representing outcomes of different land-use adaptation policies: business as usual, pro-forest, pro-agriculture and pro-urbanization. The 12 downscaled climate change scenarios and 4 LULCC forecast scenarios (48 scenarios in total) were used to drive a distributed hydrological model (RHESSys) to generate daily forecasts of hydrological discharge and nutrient flows from the Missisquoi River into Missisquoi Bay of Lake Champlain. The bay is simulated by a 3D coupled system of biogeochemical and hydrodynamic lake models. The model interactions in the IAM are transformed into an abstract computational workflow using the Pegasus Workflow Management System. We find that the best-case land-use adaptation scenario, with the maximum amount of forest conservation in the watershed, may not be able to avoid a phase transition of Missisquoi Bay from its current eutrophic state to a hyper-eutrophic state under the worst-case global climate change scenario. We also find that under the worst-case greenhouse gas emissions scenario (RCP8.5), the likelihood of algal blooms in the shallow bay will slowly expand to early summer and late fall months irrespective of the land-use adaptation scenario. This study highlights the sustainability challenges that arise due to lags, inertia and phase transitions in smaller-scale SES from cascading effects in larger-scale SES.
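The scenario ensemble described above (12 downscaled climate runs crossed with 4 land-use scenarios) reduces to a simple product grid; the GCM names are placeholders, and the specific RCPs used in the study are assumed here for illustration.

```python
from itertools import product

gcms = ["GCM-A", "GCM-B", "GCM-C", "GCM-D"]      # placeholder model names
rcps = ["RCP2.6", "RCP4.5", "RCP8.5"]            # assumed pathway selection
landuse = ["business-as-usual", "pro-forest", "pro-agriculture", "pro-urbanization"]

climate_runs = list(product(gcms, rcps))         # 4 GCMs x 3 RCPs = 12 climate runs
scenarios = [{"gcm": g, "rcp": r, "landuse": l}
             for (g, r), l in product(climate_runs, landuse)]   # 12 x 4 = 48
```

In the actual pipeline each of these 48 combinations becomes one branch of the Pegasus workflow driving RHESSys and the lake models.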

Sustaining Economic Exploitation of Complex Ecosystems in Computational Models of Coupled Human-Natural Networks
SPEAKER: Neo Martinez

ABSTRACT. Sustaining socio-ecosystems requires balancing the ecological and economic mechanisms that structure and drive the dynamics of these complex systems. This is particularly true of fisheries, whose global collapse attests to a profound imbalance among these mechanisms. Such lack of sustainability is at least partly due to insufficient understanding of the complexity and nonlinear dynamics of these socio-ecological systems. We have developed a multilayer network approach to mechanistically model the consumer-resource and supply-demand interactions among many ecological species and economic activities within fisheries. This approach integrates “allometric trophic network” models of ecological interactions and “resource economic” models of yield, profit and loss. Both models have successfully predicted the dynamics of particular systems. For example, resource economic models greatly aid the successful management of ecologically simple “donor-controlled” systems (e.g., forests and crab fisheries) whose target species critically depend on supply rates of resources (e.g., rain or detritus) that are largely determined by factors other than the target species themselves. In contrast, these models have failed, often spectacularly, in more complex ecosystems such as fisheries that exploit species higher in the food chain which greatly affect the supply of their prey below them.

Our socio-ecological models of the exploitation of fish higher in food chains express a spectrum of dynamics, from low-profit regimes of collapsed fisheries with few fish to high-profit regimes of thriving fisheries with many fish. Compared to classic socio-ecological models of “maximum sustainable yield” from vigorous exploitation of moderately abundant fish, our findings show higher profit at lower exploitation rates of more abundant fish. More dramatically, target fish go extinct at the vigorous exploitation rates that the classic models suggest yield maximum profit. The parameters, such as the cost of fishing effort, that determine a system’s dynamic regime are relatively easy for policies to influence in order to sustain productive fishery dynamics. Such policies can subsequently leverage other, less controllable market mechanisms (e.g., supply and demand) to achieve sustainability. Additionally, we observe highly counterintuitive dynamics known as the “hydra effect”, where increasing exploitation increases, rather than decreases, the fish stock. Key to the significance of these findings is that they occur within empirically realistic parameter spaces, including growth, feeding and fishing rates as well as the structure and dynamics of ecological networks and the role of humans within them. More broadly, our findings increase our understanding of critically important socio-ecosystems while informing management strategies that help sustain and optimize fisheries and other complex ecological networks exploited for economic gain.
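The tension between profit-maximizing and yield-maximizing effort can be seen even in a textbook Gordon-Schaefer sketch (logistic stock with costly harvest), which is far simpler than the talk's multilayer network model; the parameter values are arbitrary.

```python
def equilibrium_profit(effort, r=1.0, K=1.0, q=1.0, price=10.0, cost=2.0):
    """Logistic stock harvested at rate q*E*x: the steady state is
    x* = K(1 - qE/r), yield = q*E*x*, profit = price*yield - cost*E."""
    stock = max(0.0, K * (1.0 - q * effort / r))
    return price * q * effort * stock - cost * effort, stock

efforts = [i / 100.0 for i in range(101)]
profits = [equilibrium_profit(e)[0] for e in efforts]
best = max(range(len(efforts)), key=lambda i: profits[i])

profit_at_msy = equilibrium_profit(0.5)[0]                   # E = r/(2q) maximizes yield
profit_collapse, stock_collapse = equilibrium_profit(1.0)    # stock driven to zero
```

With these numbers the profit-maximizing effort (0.4) lies below the maximum-sustainable-yield effort (0.5), echoing the abstract's "higher profit at lower exploitation rates of more abundant fish", and effort at or beyond E = r/q collapses the stock while costs keep accruing.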

Utilization of Computer Simulation and Serious Games to Inform Livestock Biosecurity Policy and Governance

ABSTRACT. This work presents novel approaches to integrating experimental gaming data into agent-based models of spatially explicit social ecological systems. The integration of behavioral science and computer science into the strategic, tactical and operational decision making of public, private and nonprofit managers has attracted increased attention in recent years. The use of experimental, “serious” games and computer simulation models to describe, evaluate and inform changes in complex governance systems remains in its nascent stages, despite the long-standing use of simulations to generate scenarios in business and military settings, and more recent interest in applying behavioral science approaches to “nudge” citizen utilization of government services. Drawing on an example of the computer simulation and gamification of portions of the United States’ pork industry, the authors ask and answer: how are simulations and games being used to identify vulnerabilities in the nation’s pork supply, and what incentives can be pursued to bring greater biosecurity? How are stakeholders being partnered with in the design and use of simulation and gaming results to inform policy making and governance design?

This paper provides an overview of the empirically calibrated, spatially explicit agent-based model of the pork industry within three regions of the United States. We provide details of a serious game developed to study how information about biosecurity threats, and about the adoption of biosecurity best management practices by members of a network, informs the propensity to adopt best management practices, as well as details of a second game designed to study the use of messaging and incentives to ensure compliance with biosecurity measures at the farm level. Details of the integration of game results into the ABM are provided. This paper also explores how cutting-edge applications of computer simulation and gamification have been processed in a series of stakeholder workshops with service providers, industry regulators, and street-level producers. Tuning, strategic and operational implications, and new scenario and treatment protocols designed to link model and gaming outputs to policy and governance initiatives for livestock production chains are shared.

Participatory Modeling of the Systemic Impacts of Climate Change on Grain Production in Nigeria

ABSTRACT. The impacts of climate change on the agricultural sector in Nigeria are expected to be severe, but so far there is a dearth of systemic analysis of how these impacts would develop over time, or how they would interact with other drivers impacting Nigerian agriculture. Moreover, there is a gap between the experience of farmers and policy-makers on the ground, who are witnessing multiple aspects of climate change impacts in a highly localized context, and scientific reports, which tend to be highly technical and general. Participatory modeling efforts could contribute to adaptation efforts by identifying policy mechanisms that serve as system ‘levers’ to effect change given the considerable uncertainty associated with both the socio-economic and ecological aspects of climate change. They could also bring together scientific information with locally contextual information on the direct and indirect impacts of climate change on complex agricultural systems. This study provides a systemic analysis of the impact of climate change on agricultural production in Nigeria using a participatory research method. We convened a workshop of key stakeholders with diverse and in-depth knowledge of Nigerian agriculture in Ibadan, Nigeria, in June 2016. Using a causal loop diagramming (CLD) technique, we grouped these stakeholders by region and led them through an exercise in which they drew diagrams depicting impacts of climate change on Nigerian agricultural development. CLD is a method used in system dynamics modeling, and it is effective for identifying causal relationships between variables as well as feedback mechanisms. As expected, there were interesting differences across the six geopolitical zones of Nigeria, reflecting their agro-ecological differences.
However, all groups identified both direct and indirect impacts of climate change on agricultural productivity, including heat, drought, flooding, pest and disease outbreaks, and climate-induced migration leading to conflict between pastoralists and farmers. Initial quantitative modeling results based on the CLDs demonstrate an impact of climate change on maize production in Nigeria that is consistent with other regional forecasts. We argue that this type of model, because it was constructed in a manner that is relevant and accessible to decision-makers, is highly useful for policy-making under complexity and uncertainty.

11:00-13:00 Session 3G: Complexity in Physics and Chemistry - Nonequilibrium thermodynamics

Parallel session

Location: Xcaret 4
Novel thermodynamic properties of computational processes
SPEAKER: David Wolpert

ABSTRACT. Recent breakthroughs in nonequilibrium statistical physics provide powerful tools for analyzing far-from-equilibrium processes. These tools have done much to clarify the relationship between thermodynamics and information processing. In particular, they have been used to formalize and then extend Landauer’s pioneering work on the thermodynamics of the simplest kind of computation, bit erasure.

However, our understanding of the thermodynamics of computation in the full sense, extending beyond simple bit erasure, is in its infancy, and a great number of foundational open questions remain. Here we present our preliminary investigations of three such questions, using the recently developed tools of nonequilibrium statistical physics.

First, for a physical process to be thermodynamically reversible it must be in thermal equilibrium at all but a countable number of instants. We show that this means that there are some computations that cannot be implemented in a thermodynamically reversible process, unless some additional “auxiliary” states are available to the process, in addition to the ones specifying the input/output map of the computation. For example, we show that no physical process over only N states can permute those states without dissipating work – however if the process has access to just a single extra, buffer state, then there is no such difficulty. We also investigate the broader issue of how many such additional auxiliary states are required to implement a given computation thermodynamically reversibly, both for deterministic and stochastic computations. We then use this analysis to motivate a physically-inspired measure of the complexity of a computation.

Second, we consider implementing computations with logic circuits, i.e., by networks of gates, each with a limited fan-in. In general, many different circuits can be used to implement the same computation. We show that the precise circuit used to implement a given computation affects the minimal work required to run the computation, and also affects the amount of work that is dissipated by running the computation. We also relate these minimal work requirements to information-theoretic measures of the complexity of the computation’s input and output distributions.

Finally, we analyze the thermodynamic properties of any physical process that implements a Turing machine. The shortest input program p to a given Turing machine that causes the machine to compute a desired output string v – the Kolmogorov complexity of v – can be arbitrarily large. However we show that the smallest amount of thermodynamic work required to run the Turing machine on some input program that computes v has a finite upper bound, independent of the output. On the other hand, the average over all input programs (not just optimal ones) of the amount of thermodynamic work used by the Turing machine is infinite.
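For the bit-erasure baseline that this line of work extends, Landauer's bound can be evaluated directly (a one-line numerical illustration; the room-temperature value is an assumption):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K

def landauer_work(bits=1.0, T=300.0):
    """Minimal average work (in joules) required to erase `bits` of
    information in contact with a heat bath at temperature T."""
    return bits * k_B * T * math.log(2)

# At room temperature, erasing one bit costs at least ~2.9e-21 J.
print(landauer_work())
```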

Entropy production for coarse-grained dynamics

ABSTRACT. Countless works in the literature have investigated how coarse-graining influences our prediction of the physical properties of a system. In systems out of equilibrium, the role played by the entropy production under coarse-graining is still unclear. For systems described by a master equation, the entropy production can be estimated using Schnakenberg's formula, while some years ago Seifert derived an analogous formula for dynamics described by a Fokker-Planck equation. In this work, we aim at connecting the two formulations: starting from a master-equation system, we calculate how Schnakenberg's entropy production is influenced by coarse-graining. We show that this value reduces to Seifert's formula for some simple choices of the dynamics but, surprisingly enough, we demonstrate that, in general, microscopic fluxes circulating in the system can give a macroscopic contribution to the entropy production. In consequence, neglecting information leads to an underestimation of the entropy production, and only a lower bound can be provided when the dynamics is coarse-grained.
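Schnakenberg's formula can be made concrete for a small master-equation system; the sketch below (with assumed rates for a three-state ring that breaks detailed balance) relaxes the master equation to its stationary state and evaluates the entropy production:

```python
import itertools
import math

# Transition rates k[i][j] from state i to state j: a three-state ring with
# asymmetric rates (values assumed), so detailed balance is broken.
k = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
n = len(k)

def stationary(k, steps=100000, dt=1e-3):
    """Relax the master equation dp_i/dt = sum_j (p_j k_ji - p_i k_ij)."""
    p = [0.5, 0.3, 0.2]  # arbitrary initial distribution
    for _ in range(steps):
        dp = [sum(p[j] * k[j][i] - p[i] * k[i][j] for j in range(n))
              for i in range(n)]
        p = [p[i] + dt * dp[i] for i in range(n)]
    return p

p = stationary(k)

# Schnakenberg entropy production: (1/2) * sum over ordered pairs i != j of
# J_ij * ln(p_i k_ij / (p_j k_ji)), with probability flux J_ij = p_i k_ij - p_j k_ji.
sigma = 0.5 * sum((p[i] * k[i][j] - p[j] * k[j][i])
                  * math.log((p[i] * k[i][j]) / (p[j] * k[j][i]))
                  for i, j in itertools.permutations(range(n), 2))
print(p, sigma)  # sigma > 0: the ring carries a steady circulating flux
```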

Asymptotic equivalence between generalized entropies' phase space volume scaling and anomalous diffusion scaling of the corresponding Fokker-Planck equation

ABSTRACT. A way to characterize non-ergodic and non-Markovian stochastic processes is through generalized entropy functionals corresponding to their stationary distributions. These generalized entropies can be classified based on their asymptotic phase space volume scaling, which provides a classification of the processes themselves according to their stationary behavior. On the other hand, these processes can also be classified according to their anomalous diffusion scaling describing their non-stationary behavior. Here we show that if the dynamics is governed by a nonlinear Fokker-Planck equation consistent with the generalized entropy describing the stationary behavior of the process, the anomalous diffusion scaling exponent of the process and the entropy's phase space scaling exponent bijectively determine each other asymptotically at large times/volumes. This implies that these basic measures characterizing the stationary and the non-stationary behavior of the process provide the same information in the asymptotic regime.

Leveraging Environmental Correlations: The Thermodynamics of Requisite Variety
SPEAKER: Alec Boyd

ABSTRACT. Key to biological success, the requisite variety that confronts an adaptive organism is the set of detectable, accessible, and controllable states in its environment. We analyze its role in the thermodynamic functioning of information ratchets, a form of autonomous Maxwellian demon capable of exploiting fluctuations in an external information reservoir to harvest useful work from a thermal bath. This establishes a quantitative paradigm for understanding how adaptive agents leverage structured thermal environments for their own thermodynamic benefit. General ratchets behave as memoryful communication channels, interacting with their environment sequentially and storing results to an output. The bulk of thermal ratchets analyzed to date, however, assume simple memoryless environments that generate input signals without temporal correlations. Employing computational mechanics and a new information-processing Second Law of Thermodynamics (IPSL), we remove these restrictions, analyzing general finite-state ratchets interacting with statistically complex structured environments that generate correlated input signals. On the one hand, we demonstrate that a ratchet need not have memory to exploit a temporally uncorrelated environment. On the other, and more appropriate to biological adaptation, we show that a ratchet must have memory to most effectively leverage complex structure in its environment. The lesson is that to optimally harvest work a ratchet's memory must reflect the input generator's memory.

Infinite-Time and -Size Limit of the Large Deviation Function Estimator

ABSTRACT. Population dynamics provides a numerical tool for studying rare trajectories of stochastic systems, by simulating a large number of copies of the system which are subjected to a selection rule that favors the rare trajectories of interest. Such algorithms are plagued by finite-simulation-time and finite-population-size effects that can make their use delicate. We present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories in order to extract the infinite-time and infinite-size limit.
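A minimal version of such a population-dynamics estimator can be sketched for a toy observable, the number of 1s in a fair Bernoulli sequence, whose scaled cumulant generating function λ(s) = ln((1 + e^s)/2) is known exactly (the clone count, horizon and bias value below are arbitrary assumptions):

```python
import math
import random

random.seed(42)

def cloning_scgf(s, n_clones=2000, t_steps=300):
    """Population-dynamics estimate of the scaled cumulant generating
    function lambda(s) for the number of 1s in a fair coin sequence: each
    clone carries a bias weight exp(s*b) per step, and lambda(s) is the
    time-averaged log of the population-mean weight.  (Because increments
    are i.i.d. here, resampling clones in proportion to weight has no
    effect and is omitted; real applications must resample.)"""
    log_avg = 0.0
    for _ in range(t_steps):
        weights = [math.exp(s * (1 if random.random() < 0.5 else 0))
                   for _ in range(n_clones)]
        log_avg += math.log(sum(weights) / n_clones)
    return log_avg / t_steps

s = 1.0
estimate = cloning_scgf(s)
exact = math.log((1.0 + math.exp(s)) / 2.0)
print(estimate, exact)
```

Both finite n_clones and finite t_steps bias the estimate, which is exactly the kind of effect whose scaling the approach above proposes to exploit.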

Non-equilibrium stationary states of a dissipative kicked linear chain of spins

ABSTRACT. We consider a linear chain of spin-1/2 particles in contact with a dissipative environment, to whose qubits periodic delta-kicks are applied in two different configurations: kicks applied to a single qubit, and simultaneous kicks applied to two qubits of the chain. In both cases the system reaches a non-equilibrium stationary condition in the long-time limit. We study the transient to the quasi-stationary states and their properties as a function of the kick parameters in the single-kicked-qubit case, and report the emergence of stationary entanglement between the kicked qubits when simultaneous kicks are applied. For this study we have derived an approximate master equation, which allows us to analyze the effects of a finite-temperature and a zero-temperature environment.

11:00-13:00 Session 3H: Foundations of Complex Systems - Information

Parallel session

Location: Cozumel 2
The Roots of Synergy
SPEAKER: Klaus Jaffe

ABSTRACT. Synergy emerges from synchronized reciprocal positive feedback loops between a network of diverse actors. For this process to proceed, compatible information from different sources must synchronously coordinate the actions of the actors, resulting in a nonlinear increase in the useful work or potential energy the system can manage; in contrast, noise is produced when incompatible information is mixed. The synergy produced by the coordination of different agents achieves non-linear gains in energy and/or information that are greater than the sum of the parts. The final product of new synergies is an increase in the individual autonomy of an organism, which achieves increased emancipation from the environment through gains in productivity, efficiency, flexibility, self-regulation and self-control of behavior via a synchronized division of ever more specialized labor. Synergetics is the interdisciplinary science explaining the formation and self-organization of patterns and structures in partially open systems far from thermodynamic equilibrium. Understanding the mechanisms that produce synergy helps to increase success rates in everyday life, in business, in science, in economics and in other, yet-to-be-named areas. A mechanism discovered by biological evolution that favors the achievement of synergy, in addition to the division of labor, is assortation: the combination of similar or compatible agents or information to reduce the chances of noisy mismatches. Empirical evidence from many domains shows that assortative information matching increases the probability of achieving synergy. This mechanism is so fundamental and unique that it has emerged as a product of the ongoing biological evolution of sexual reproduction among living organisms. The roots of synergy are the features that promote an increase in the information content, or negentropy, of the system, and in its power to produce useful work.

Local Information Decomposition Using the Specificity and Ambiguity Lattices
SPEAKER: Conor Finn

ABSTRACT. The recently proposed Partial Information Decomposition (PID) of Williams and Beer provides a general framework for decomposing the information provided by a set of source variables about a target variable. For example, the information provided by a pair of sources decomposes into the following Partial Information (PI) terms: the unique information provided by each source; the shared or redundant information provided by either of the sources; and the complementary or synergistic information only attainable through simultaneous knowledge of both sources. In general, PID induces a lattice over sets of sources, providing a structured decomposition of multivariate information. Although PID fixes the structure of the decomposition, it does not uniquely specify the value of each PI term: for this one must separately define a measure of one of the PI terms in the decomposition.

To date, no satisfactory definition exists for the general case of an arbitrary number of variables. All currently proposed approaches suffer from at least one of the following crippling problems: (a) the resulting measure of shared information does not capture the “same information” about specific target realisations (events) but rather only the “same amount of information” about the target variable; (b) the measure is only consistent for the bivariate case (two source variables); or (c) the measure does not provide local or pointwise measures of the PI terms for specific realisations of the target and source variables.

We propose a new approach to multivariate information decomposition based upon PID, but which is distinct in several ways. In our approach the local information is the primary citizen—we seek to decompose local information into local PI terms. We also reveal the two orthogonal types of information that individual local source realisations can provide about a local target realisation, i.e. being either positive or negative information (or indeed both at the same time). This perspective enables us to rigorously define when sources carry the “same information” as opposed to merely the “same amount of information”. The framework of PID can then be applied separately to each of these two types of information, providing a lattice over each—namely the specificity and ambiguity lattices. Just as in PID, this fixes the structure of the information decomposition (on each lattice) but does not uniquely define the value of the PI terms.

To achieve uniqueness, we define a local measure of redundancy (on both lattices), and justify how it captures the “same information” by respecting both the locality and the type of information. This local redundancy measure is then used to uniquely define all local PI terms on both the specificity and ambiguity lattices. Crucially, this information decomposition can be applied to arbitrarily large sets of sources, addressing a major stumbling block in this domain. We apply the decomposition to a variety of classic examples, which provides new insights, and demonstrate the unique ability to provide a local decomposition. Finally, interpreting these results sheds light on why defining a redundancy measure for PID has proven to be so difficult—one lattice is not enough.
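The distinction between "no information in either source alone" and "full information in both together", which any such decomposition must respect, already shows up in standard Shannon quantities; the following sketch (a generic helper, not the specificity/ambiguity measures of this talk) computes them for the classic XOR example:

```python
import math
from collections import defaultdict

def mutual_info(joint, sources, target):
    """Mutual information I(sources ; target) in bits, computed from a
    joint distribution given as {outcome tuple: probability}.  A generic
    Shannon helper for illustration only."""
    pxy = defaultdict(float)
    px = defaultdict(float)
    py = defaultdict(float)
    for outcome, prob in joint.items():
        xs = tuple(outcome[i] for i in sources)
        y = outcome[target]
        pxy[(xs, y)] += prob
        px[xs] += prob
        py[y] += prob
    return sum(prob * math.log2(prob / (px[xs] * py[y]))
               for (xs, y), prob in pxy.items() if prob > 0)

# Y = X1 XOR X2 with uniform independent inputs: neither source alone
# carries any information about the target, yet together they determine
# it completely, i.e. the joint information is purely synergistic.
xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
i1 = mutual_info(xor, (0,), 2)     # I(X1;Y)
i2 = mutual_info(xor, (1,), 2)     # I(X2;Y)
i12 = mutual_info(xor, (0, 1), 2)  # I(X1,X2;Y)
print(i1, i2, i12)
```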

Multivariate Dependence Beyond Shannon Information
SPEAKER: Ryan James

ABSTRACT. Accurately determining dependency structure is critical to discovering a system's causal organization. We recently showed that the transfer entropy fails in a key aspect of this---measuring information flow---due to its conflation of dyadic and polyadic relationships. We extend this observation to demonstrate that this is true of all such Shannon information measures when used to analyze multivariate dependencies. This has broad implications, particularly when employing information to express the organization and mechanisms embedded in complex systems, including the burgeoning efforts to combine complex network theory with information theory. Here, we do not suggest that any aspect of information theory is wrong. Rather, the vast majority of its informational measures are simply inadequate for determining the meaningful dependency structure within joint probability distributions. Therefore, such information measures are inadequate for discovering intrinsic causal relations. We close by discussing more nuanced methods of determining the dependency structure within a system of random variables.

Idiosyncratic correlations and non-Gaussian distributions in network data

ABSTRACT. During the last decades, complex social, economic and biological systems have been studied using agent-based models (ABM). ABM are a powerful tool to discover analytic truths at the macroscopic level when simple rules at the agent-agent interaction level are assumed. Despite great achievements in discovering analytic truths in complex systems, and despite a large number of large datasets, statistical models aimed at discovering factual truths in complex systems have not reached the rigor of econometric models. In this paper, we introduce a network model, the power law random graph model (PRGM), formulated at the agent-agent interaction level via the concept of q-conditional independence, where q can be interpreted as idiosyncratic correlations or as an interaction term between well-defined social mechanisms. We show that exponential random graph models (ERGM) are the subclass of PRGM with q = 1. Motivated by the derivation of ERGM via the Boltzmann-Shannon entropy by Park and Newman, we present a second formulation of the PRGM via Tsallis entropy. Next, we construct a subclass of PRGM, called q-Markov graph models, defined by simple dependency assumptions, that violates the Gaussian approximation of the network statistics. The violation of the Gaussian approximation is caused by competitive social mechanisms, and it enriches PRGM with distributions ranging from bimodal to skewed and flat. Our findings raise the question: what warrants the Gaussian approximations used to justify factual evidence in complex systems? Finally, with the help of the subclass of q-Bernoulli random graph models, and using two network datasets of friendships between students in classrooms in Switzerland and the US, we show how the idiosyncratic correlation q helps to address the problem of models placing too much probability mass around a few types of networks. Although the problem of placing too much probability mass is well documented in poor-fitting network models, we show that this problem is inherited from the exponential decay of rare events in ERGM, and that it occurs in poor-fitting as well as overfitting models.
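The Tsallis entropy underlying the second PRGM formulation generalizes the Boltzmann-Shannon entropy with a single parameter q; a minimal sketch (toy distribution and q values assumed):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1), which recovers
    the Boltzmann-Shannon entropy (in nats) in the limit q -> 1."""
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
shannon = tsallis_entropy(p, 1.0)
# Near q = 1 the generalized entropy approaches the Shannon value.
print(shannon, tsallis_entropy(p, 1.0001), tsallis_entropy(p, 2.0))
```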

Entropy-based approach to the analysis of bipartite networks
SPEAKER: Fabio Saracco

ABSTRACT. Bipartite networks are ubiquitous in many different disciplines: they appear in the study of the world trade web, in social networks, in on-line retailer recommendation systems, in sociology and in many other fields. Nevertheless, despite their widespread presence in research, there is a general lack of proper tools for their analysis. The aim of this talk is to provide a general overview of entropy-based approaches to such analysis. Recently, the Bipartite Configuration Model (BiCM), an entropy-based null model, provided the proper tool to reveal the non-trivial information in bipartite systems. By constraining the entropy of bipartite graphs, it is possible to discount the information of the degree sequence; thus the BiCM is, at the same time, general (no hypothesis is tailored to the network analysed), unbiased (since it is entropy-based) and analytical. Comparing the presence of bipartite cliques with their expectation provides major insights about the network structure: when applied to the network of trade, the BiCM reveals the presence of drastic structural changes a few years before the system's contagion by the world financial crisis, taking the whole system to a more random configuration. The BiCM also provides the most natural benchmark for avoiding information losses whenever projecting a bipartite network onto the layer of interest. The application of this method to a social dataset of users and movies reveals film clusters based on audience composition (“family movies”, “underground films”, “cult movies”, and so on); in the same way, its application to the network of trade uncovers non-trivial communities of countries, based on their technological level, and clusters of wares, according to the development of their exporters. In our presentation we exhaustively review applications of entropy-based approaches to the analysis of bipartite networks.
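The core operation described above, comparing observed co-occurrences against a degree-constrained expectation, can be sketched as follows; the factorized link probability p_ia ≈ r_i c_a / L is a simplified Chung-Lu-style stand-in for the full maximum-entropy BiCM (toy biadjacency matrix assumed):

```python
# Toy null-model check of co-occurrences in a bipartite graph: compare the
# observed number of common neighbours of two row-nodes with its expectation
# under the factorized link probability p_ia = r_i * c_a / L (a simplified
# Chung-Lu-style stand-in for the BiCM, which fixes degrees on average).
B = [[1, 1, 0, 1],   # biadjacency matrix: rows x columns (toy data)
     [1, 1, 1, 0],
     [0, 0, 1, 1]]
rows, cols = len(B), len(B[0])
r = [sum(B[i]) for i in range(rows)]                           # row degrees
c = [sum(B[i][a] for i in range(rows)) for a in range(cols)]   # column degrees
L = sum(r)                                                     # total links

def observed_cooc(i, j):
    """Observed co-occurrences (common column-neighbours) of rows i and j."""
    return sum(B[i][a] * B[j][a] for a in range(cols))

def expected_cooc(i, j):
    """Expected co-occurrences under independent factorized link probabilities."""
    return sum((r[i] * c[a] / L) * (r[j] * c[a] / L) for a in range(cols))

for i in range(rows):
    for j in range(i + 1, rows):
        print(i, j, observed_cooc(i, j), round(expected_cooc(i, j), 3))
```

Pairs whose observed co-occurrences significantly exceed the null expectation are the ones retained in the validated projection onto the layer of interest.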

Beyond the limits of modularity: the pseudo-modularity bipartite community detection
SPEAKER: Fabio Saracco

ABSTRACT. Bipartite networks are ubiquitous in many disciplines [1], but despite their prevalence the set of statistical tools for their analysis is rather poor. To the best of our knowledge, just two sound community detection algorithms exist for bipartite systems [2, 3], both based on the extension of modularity [4] to bipartite networks. The approaches and the outputs of the two differ substantially. In [2] the author applies the standard modularity by brute force to bipartite networks, thus obtaining communities composed of nodes belonging to both layers. On the other hand, the approach of [3] is to extend the null model implemented in standard modularity [5] to bipartite networks and then discount this information to reveal non-random co-occurrences of links in the network projected onto one of the two layers. In the present paper we extend the concept of modularity, substituting the (bipartite) null model with the recently proposed extension of exponential random graphs to bipartite systems [6]. Our proposal captures the main philosophy of claiming the existence of a community once the number of intra-community links exceeds the expectation, but also overcomes some limits of the previous proposals. As in [3], our method returns communities of nodes belonging to the same layer, by comparing the observed co-occurrences of links with the expectation from a bipartite null model. Differently from [2, 3], our method does not suffer from an evident resolution limit [7]. We explicitly prove that our definition satisfies almost all the properties of standard modularity [8] and we discuss the missing ones; following an approach similar to the standard Louvain algorithm [9], we test our algorithm on the Southern Women network [10], finding results that agree with the original social analysis.
We also tackle larger datasets, such as the bipartite version of the World Trade Web and the MovieLens dataset, finding countries partitioned according to their technological development, exported-ware clusters based on their exporters, and film communities based on their audience; the results are compared with those from the other methods [2, 3] and discussed in depth.

[1] J.-L. Guillaume and M. Latapy, Information Processing Letters 90, 215 (2004). [2] M. J. Barber, Physical Review E 76 (2007). [3] R. Guimerà, M. Sales-Pardo, and L. A. N. Amaral, Physical Review E 76 (2007). [4] M. Newman, PNAS 103, 8577 (2006). [5] F. Chung and L. Lu, Annals of Combinatorics 6, 125 (2002). [6] F. Saracco, R. Di Clemente, A. Gabrielli, and T. Squartini, Scientific Reports 5, 10595 (2015). [7] S. Fortunato and M. Barthélemy, PNAS 104, 36 (2007). [8] U. Brandes, D. Delling, M. Gaertler, R. Görke, M. Hoefer, Z. Nikoloski, and D. Wagner, IEEE Transactions on Knowledge and Data Engineering 20, 172 (2008). [9] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, Journal of Statistical Mechanics: Theory and Experiment 10008, 6 (2008). [10] A. Davis, B. B. Gardner, and M. R. Gardner, Deep South: A Social Anthropological Study of Caste and Class (1941), p. 557.

Bénard Cells as a Model for Entropy Production, Entropy Decrease and Action Minimization in Self-Organization

ABSTRACT. In self-organization, complex systems increase the entropy in their surroundings and decrease their internal entropy, but the mechanisms, the reasons and the physical laws leading to these processes are still a matter of debate. Energy gradients across complex systems lead to changes in the structure of those systems, decreasing their internal entropy to ensure the most efficient energy transport and therefore maximum entropy production in the surroundings. This approach stems from fundamental variational principles in physics, such as the principle of least action, and is coupled to the total energy flowing through a system, which leads to an increase in action efficiency. In the simplest such physical system, Bénard cells, we compare energy transport through a fluid cell with random motion of its molecules and a cell which can form convection cells. We examine the signs of the change of entropy and the action needed for the motion inside those systems. The system in which convective motion occurs reduces the time for energy transmission compared to random motion. In more complex systems, convection cells form a network of transport channels obeying the equations of motion in this geometry, which leads to decreased average action per event in the system. Such transport networks are an essential feature of complex systems in biology, ecology, economics and society.

11:00-13:00 Session 3I: Socio-Ecological Systems (SES) - Social behavior in high resolution

Parallel session

Location: Cozumel 5
High-resolution social networks: state of the art and perspectives
SPEAKER: Ciro Cattuto

ABSTRACT. Digital technologies provide the opportunity to quantify specific human behaviors with unprecedented levels of detail and scale. Personal electronic devices and wearable sensors, in particular, can be used to measure the structure and dynamics of human close-range interactions in a variety of settings relevant for research in complex systems. This talk will review the experience of the SocioPatterns collaboration, an international effort aimed at measuring and studying high-resolution social networks using wearable sensors. We will discuss technology requirements and measurement experiences in diverse environments such as schools, hospitals, households and low-resource rural settings in developing countries. We will discuss salient features of empirical data, reflect on challenges such as generalization and data incompleteness, and review modeling approaches based on ideas from network science, epidemiology and computer science.

Robust Tracking and Behavioral Modeling of Pedestrian Movement in Ordinary Video Recordings
SPEAKER: Hiroki Sayama

ABSTRACT. Collective behaviors of animals, such as bird flocks, fish schools, and pedestrian crowds, have been the subject of active research in complex systems science and behavioral ecology in particular. Previous literature on this topic mostly focused on studying individual behaviors in isolation, or in response to other individuals in the vicinity through simple kinetic interaction rules. A commonly adopted assumption is that the same set of behavioral rules applies to all individuals homogeneously, while not much attention was paid to modeling and analyzing heterogeneous behavioral states and their interactions in the collective. To address this theoretical/methodological gap, we previously proposed a generalizable computational method to detect heterogeneous discrete behavioral states and their interactions among individuals from externally observable spatio-temporal trajectories of those individuals [1,2]. Our method assumes that individuals are acting as finite state machines, and constructs a stochastic model of their state transitions that depend on both the internal state of the individual and the external environmental context (including presence and states of other individuals nearby). However, the method was tested only with a small-sized population of termites in a well-controlled experimental setting in a laboratory, while its applicability to more complex, noisy, dynamically changing collectives “in the wild” remained unclear. In this study, we have developed a robust object tracking system that can track the movements of pedestrians from an ordinary low-resolution video recording taken from an elevated location, and have applied our behavioral modeling method to the trajectories of pedestrians obtained with this tracking system. As the input data, we recorded pedestrians walking in a university campus during a lunch break (recording was conducted with the Binghamton University IRB approval). 
To enhance the robustness of pedestrian tracking, our system used a hybrid approach that combined image processing (for motion detection and perspective transformation; implemented with OpenCV) and real-time, online agent-based simulation (for motion prediction in a noisy, dynamic environment; implemented with Python). From the resulting trajectories, a total of 17 distinct behavioral states were identified, based on the speed and the direction of movement. The trajectories labeled with these behavioral states were then fed into the proposed behavioral modeling method. Figure 1 shows the overview of the whole process. The resulting behavioral transition model was given as a 17×17×18 tensor (Fig. 1D). The majority of transitions were detected in the main-diagonal (= maintenance or minor change of direction) and sub-diagonal (= change of speed) parts of the tensor. A few notable interactions were detected between different states (e.g., having neighbors moving in other directions slows down fast-moving individuals, etc.), but state interactions were generally much less among pedestrians than among termites in previously reported results. This work demonstrated that the developed tracking system and the modeling method can handle noisy real-world collective behaviors. Future directions of research include further improvement of robustness and accuracy of the object tracking system and detection of behavioral anomalies using models generated by this method. This material is based upon work supported by the US National Science Foundation under Grant No. 1319152.
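One plausible reading of the "17 distinct behavioral states based on speed and direction" is 8 compass directions times 2 speed bands plus a stationary state; the sketch below (thresholds and binning are assumptions, not the paper's exact scheme) discretizes per-frame displacement vectors accordingly:

```python
import math

def behavioral_state(dx, dy, still_max=0.1, slow_max=1.5):
    """Map a per-frame displacement (dx, dy) to one of 17 discrete states:
    state 0 is stationary; states 1-8 are slow motion in 8 compass
    directions; states 9-16 are fast motion in the same directions.
    Thresholds are illustrative assumptions."""
    speed = math.hypot(dx, dy)
    if speed < still_max:
        return 0
    # 45-degree sectors centred on the 8 compass directions (0 = east).
    sector = int(((math.degrees(math.atan2(dy, dx)) + 360.0 + 22.5) % 360.0) // 45)
    band = 0 if speed < slow_max else 1
    return 1 + sector + 8 * band

# Label a short toy trajectory of (x, y) positions.
trajectory = [(0.0, 0.0), (1.0, 0.0), (2.5, 0.0), (2.5, 0.05), (2.5, 1.0)]
states = [behavioral_state(x2 - x1, y2 - y1)
          for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
print(states)  # slow east, fast east, stationary, slow north
```

Sequences of such state labels, one per tracked individual per frame, are the kind of input from which the stochastic state-transition model (here, the 17×17×18 transition tensor) is estimated.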

Global Patterns of Human Communication

ABSTRACT. Social media are transforming global communication and coordination. The data derived from social media can reveal patterns of human behavior at all levels and scales of society. Using geolocated Twitter data, we have quantified collective behaviors across multiple scales, ranging from the commutes of individuals, to the daily pulse of 50 major urban areas, to global patterns of human coordination. Human activity and mobility patterns manifest the synchrony required for contingency of actions between individuals. Urban areas show regular cycles of contraction and expansion that resemble heartbeats, linked primarily to social rather than natural cycles. Business hours and circadian rhythms influence daily cycles of work, recreation, and sleep. Different urban areas have characteristic signatures of daily collective activities. The differences are consistent with a new emergent global synchrony that couples behavior in distant regions across the world, including a globally synchronized peak of activity that involves the exchange of ideas and information across Europe, Africa, Asia and Australasia. We propose a dynamical model that explains the emergence of global synchrony in the context of increasing global communication and reproduces the observed behavior. The collective patterns we observe show how social interactions lead to an interdependence of behavior manifested in the synchronization of communication. The creation and maintenance of temporally sensitive social relationships results in the emergence of complex larger-scale behavior of the social system.

Alfredo J. Morales, Vaibhav Vavilala, Rosa M. Benito, Yaneer Bar-Yam, Global patterns of synchronization in human communications, Journal of the Royal Society Interface (March 1, 2017), doi: 10.1098/rsif.2016.1048.

Measuring cultural value of Ecosystem Services using geotagged photos

ABSTRACT. There is increasing interest in the ecosystem-based approach to land management, which calls for operational, cost-effective methods for assessing ecosystem services (ES) at different spatial scales. When focusing on intangible ES, such as cultural ecosystem services (CES), it is particularly challenging to assess both the capacity of ecosystems to provide them and the extent of their use by people. While central to the human dimension, the cultural value of ecosystems such as forests, rivers or green urban areas is not straightforward to measure. Interactions between people and natural spaces, through leisure or tourism activities, form a complex socio-ecological network changing over time and space in response to economic, technological, social, political, spatial and cultural drivers. Unthinkable until recently, the increasing availability of large databases generated by the use of geolocated information and communication technology (ICT) devices allows us to gain a better understanding of complex socio-ecological interactions. Within this context, we propose to identify emergent patterns in the spatial distribution of CES based on the presence of visitors inferred from geolocated Flickr photos collected in 16 sites in Europe. These spatial patterns will be used to assess and understand preferences and the factors that determine CES provision from local to broader scales. In particular, explanatory variables related to landscape settings, but also to the Flickr users' timelines, will be extracted to investigate how CES beneficiaries interact with their environment and natural settings according to their complexity and the users' mobility behaviours in space and time. This will allow us to gain better insight into CES and the complex interrelations perceived across time and space by people.

How networks shape fake narratives
SPEAKER: Giacomo Livan

ABSTRACT. The rise of “fake news” has been a defining feature of the past year of political events. While the spreading of misinformation has been a recurring phenomenon in human societies, the proliferation of social media has accelerated it by increasing the ability of individuals to transmit and consume information. This work aims to provide a sound theoretical framework for this phenomenon by incorporating inputs from the behavioral literature into a network model. In particular, we seek to examine how the psychological phenomenon of motivated reasoning contributes to the aggregation and propagation of misinformation through social networks. Motivated reasoning refers to the tendency of individuals to radicalize and strengthen pre-established convictions when presented with contradicting information, which ultimately leads them to selectively recruit, process, and recall information so as to cohere with such convictions. Our model relies on an artificial society of agents connected by a social network who communicate and exchange the scattered information available to them about a binary world event (e.g., whether climate change is real or not) for which a ground truth exists. The crux of the model is that while the majority of agents are rational, i.e. update their opinions based on new information according to Bayes' rule, a small minority of agents are motivated reasoners (MRs). Such agents behave rationally until their conviction about a possible outcome of the issue at stake exceeds a threshold (i.e. until they “make up their minds”). When this happens, MRs begin to replace incongruent information (i.e. information contradicting their beliefs) with congruent signals, which they later communicate to their neighbors. We provide a fully analytic solution of the model's dynamics on a regular network, which we validate with extensive numerical simulations, and we numerically investigate more complicated network topologies.
We show that the network's dynamics neatly separates into two regimes: after an initial rational phase characterized by the seamless transmission of unfiltered information, at a critical time the MRs' convictions begin to consolidate and give rise to a post-rational phase in which MRs actively propagate distorted information. Within this framework we are able to predict under what conditions a society will either reach consensus or stabilize around a polarized state. We show that the long-run outcome of the information diffusion process is entirely determined by the fraction of MRs that are for or against the ground truth when the transition time is hit. Furthermore, we are able to characterize analytically the distribution of agents' beliefs at any point in time. This allows us to fully track the evolution of antagonistic communities, a result which provides a quantitative description of the widely debated “echo chamber” phenomenon.
Our model offers no more than a caricature of the complexity of real-world information sharing. Yet, our relatively simple picture of motivated reasoning allows us to examine quantitatively how social networks can effectively aggregate individual biases and distort the collective interpretation of facts and news.
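A minimal agent-based sketch of the model's core mechanism can be written as follows. All parameter values, the ring topology, and the specific log-odds update are illustrative assumptions, not the paper's analytic setup: rational agents accumulate the log-likelihood ratio of each signal, while MRs that have crossed the conviction threshold forward only signals matching their conviction's sign.

```python
import random, math

def simulate(n=100, k_mr=10, steps=200, p_true=0.7, threshold=3.0, seed=1):
    """Ring of n agents exchanging binary signals about a ground truth (+1).

    Rational agents perform a Bayesian log-odds update per signal; motivated
    reasoners (MRs) behave rationally until |belief| exceeds `threshold`,
    after which they forward only signals congruent with their conviction.
    Returns the fraction of agents believing the truth.
    """
    rng = random.Random(seed)
    llr = math.log(p_true / (1 - p_true))       # log-likelihood ratio of one signal
    belief = [0.0] * n                          # log-odds in favor of the truth
    is_mr = set(rng.sample(range(n), k_mr))
    for _ in range(steps):
        i = rng.randrange(n)
        s = 1 if rng.random() < p_true else -1  # private signal about the truth
        # an MR that has "made up its mind" distorts the signal it passes on
        if i in is_mr and abs(belief[i]) > threshold:
            s = 1 if belief[i] > 0 else -1
        for j in (i, (i - 1) % n, (i + 1) % n): # share with ring neighbours
            belief[j] += s * llr
    return sum(b > 0 for b in belief) / n
```

With informative signals (`p_true=0.7`) and few MRs, the majority typically converges to the ground truth; raising `k_mr` or lowering `threshold` makes polarized outcomes more likely.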

Complex networks in archaeology: Community structure of copper supply networks
SPEAKER: Jelena Grujic

ABSTRACT. Complex network analyses of many physical, biological and social phenomena show remarkable structural regularities, yet their application to studying past human interaction remains underdeveloped. Here, we present an innovative method for identifying community structures in the archaeological record that allows for independent evaluation of the copper-using societies in the Balkans, from c. 6200 to c. 3200 BC. We achieve this by exploring the modularity of networked systems of these societies across an estimated 3000 years. We employ chemical data on copper-based objects from 79 archaeological sites as the independent variable for detecting the most densely interconnected sets of nodes with a modularity maximization method.

We designed two distinct networks: one with artefacts as nodes, and the other with archaeological sites as nodes. Our Artefacts and Sites networks were defined exclusively on data (selected trace elements for 410 copper artefacts) isolated from any geographical, cultural or chronological information, in order to secure an independent estimate of economic and social ties amongst copper-using societies in the Balkans within the observed time span. Our network was built in two discrete steps: 1) we grouped the data into ten distinctive chemical clusters (Network 1: Artefacts); 2) we placed a connector between sites that contain pairs of artefacts from the same cluster and analysed the modularity of the final network (Network 2: Sites). In both steps we used the Louvain algorithm to obtain community structures (modules), and bootstrapping to test the significance of the acquired results. Our results reveal three dominant modular structures across the entire period, which exhibit strong spatial and temporal significance (Figure 1). The three community structures exhibit high correlation with the known spatial and chronological dynamics of various cultural phenomena – archaeological cultures in the Balkans between the 7th and the 4th millennium BC. The earliest known copper-based artefacts are included in the Module 0 assemblage, identified as copper minerals from the Early Neolithic Starčevo culture horizons at Lepenski Vir, Vlasac and Kolubara-Jaričište, dated from c. 6200 to c. 5500 BC. These fall within the same module as copper minerals and beads from the early Vinča culture occupation at the sites of Pločnik (Period 2, 5500–5000 BC), but also Gomolava and Medvednjak (Period 3, 5000–4600 BC).

We select two key arguments to highlight the novelty of our model for studying community structures in the record of the human past. One is that it is based on a variable independent of any archaeological and spatiotemporal information, yet provides archaeologically and spatiotemporally significant results. The second is that a study of the community structure of networked systems in the past produces coherent models of human interaction and cooperation that can be evaluated independently of established archaeological systematics. Despite the imperfect social signal extracted from the archaeological record, our method provides important new insights into the evolution of the world's earliest copper supply network and establishes a widely applicable model for exploring technological, economic and social phenomena in the human past, anywhere.
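The two-step construction can be illustrated with a toy sketch. The rounding-based grouping below is a hypothetical stand-in for the paper's chemical clustering of trace-element profiles, and the final modularity step is omitted: the weighted edge dictionary returned by `site_network` is what a community detection algorithm such as Louvain would consume.

```python
from collections import defaultdict
from itertools import combinations

def chemical_clusters(artefacts, ndigits=1):
    """Step 1 (Artefacts network): group artefacts whose trace-element
    profiles round to the same signature. `artefacts` maps an artefact id
    to (site, profile). A stand-in for the paper's ten chemical clusters."""
    clusters = defaultdict(list)
    for aid, (site, profile) in artefacts.items():
        key = tuple(round(x, ndigits) for x in profile)
        clusters[key].append((aid, site))
    return list(clusters.values())

def site_network(clusters):
    """Step 2 (Sites network): connect two sites whenever they hold
    artefacts from the same chemical cluster; weights count shared clusters."""
    edges = defaultdict(int)
    for members in clusters:
        sites = {site for _, site in members}
        for a, b in combinations(sorted(sites), 2):
            edges[(a, b)] += 1
    return dict(edges)
```

On this site graph, Louvain modularity maximization (available, e.g., in networkx as `networkx.community.louvain_communities`) would then yield the site modules discussed above.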

Evolution of communities on Twitter during the 2017 French presidential election
SPEAKER: Noé Gaumont

ABSTRACT. Twitter has acquired a central place as a means of communication for political parties in recent years. For example, Donald Trump had 16.1 million followers on Twitter, while on a smaller scale François Hollande had 1.9 million followers as of 17 March 2017. This allows them to reach a very large audience. Not only is Twitter a large-scale broadcasting platform, it also allows people to react easily with replies, mentions, quotes and retweets. Thus, the impact of each tweet can be measured by analysing all the reactions it generated.

In this study, we focus on the French presidential election and the evolution of the supporters of each candidate during the campaign. Through the Twitter streaming API, we have been collecting all the activity surrounding a list of 3,500 accounts since August 2016. These accounts correspond to deputies, mayors, senators and all the candidates in the French presidential election. This collection includes all the tweets from these 3,500 accounts but also the reactions (replies, quotes and retweets) generated by these tweets; because of Twitter API limitations, only a fraction of the reactions are available. By 1 March 2017, we had already collected more than 21 million tweets from more than 1.2 million unique users. Parts of the data are available: \url{}.

Most studies on Twitter use follower/followee information to deduce political support. This is a powerful approach that has led to meaningful insights into the structure of the Twitter network, such as the existence of community structure. However, follow information is binary and not very dynamic. We use the notion of retweet instead. A single retweet may not be a sign of agreement, but multiple retweets in a short time period are evidence of a strong relation between two persons. Like previous studies, we find evidence of community structure inside the weighted graph of retweets, because people close to a candidate hardly ever retweet people from other communities. As retweets are highly dynamic, we are able to obtain a more fine-grained description of the structure by analyzing the temporal network as a series of graphs on overlapping time windows. By applying a community detection algorithm to each graph, we follow how the communities grow, split, merge or decline over time, see Figure~\ref{fig:alluvialcouleurlienetnoeud}. The novelty of our approach is being able to relate these evolutions to external events. This reflects how strongly the Twitter medium reacts to real-life events (official announcements, debates, presidential primaries). For example, Figure~\ref{fig:alluvialcouleurlienetnoeud} focuses on the presidential primary of the right, involving mainly Fillon, Juppé and Sarkozy. After the first round, won by Fillon and Juppé, Sarkozy lost many supporters in favor of Fillon. After the second round, won by Fillon, the same process occurred and Juppé lost many supporters. Another interesting evolution is the fusion of the communities of Sarkozy and Fillon, which could be explained by Sarkozy's choice to support Fillon between the first and second rounds of the primary.
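The sliding-window construction can be sketched as follows. The window width, step, and the Jaccard matching of communities across windows are illustrative choices, not the authors' exact procedure; community detection inside each window graph is left to any standard algorithm.

```python
from collections import defaultdict

def window_graphs(retweets, width, step):
    """Split (time, retweeter, author) events into overlapping windows and
    aggregate each window into a weighted retweet graph:
    {(retweeter, author): count}. Returns a list of (window_start, graph)."""
    if not retweets:
        return []
    t0 = min(t for t, _, _ in retweets)
    t1 = max(t for t, _, _ in retweets)
    graphs = []
    start = t0
    while start <= t1:
        g = defaultdict(int)
        for t, u, v in retweets:
            if start <= t < start + width:
                g[(u, v)] += 1
        graphs.append((start, dict(g)))
        start += step
    return graphs

def jaccard(a, b):
    """Node-set overlap used to match a community across consecutive windows,
    which is how growth, splits and merges can be traced over time."""
    return len(a & b) / len(a | b) if a | b else 0.0
```

Running a community detection algorithm on each window graph and linking communities with high `jaccard` overlap yields the alluvial picture of growing, splitting and merging communities described above.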

13:00-14:30 Session : Lunch

Buffet lunch & poster session

Location: Gran Cancún 2
14:30-16:00 Session 4: Plenary session
Location: Gran Cancún 1
A Complex Systems approach to study Human Mobility

ABSTRACT. I present a complex systems approach applied to large data sets, characterizing how humans interact with the built environment in order to plan for better usage of urban resources. First, I present a modeling framework, TimeGeo, that generates individual trajectories at high spatial-temporal resolution, with interpretable mechanisms and parameters capturing heterogeneous individual travel choices at the urban scale. Then I assign these trips to the streets. I demonstrate that the percentage of time lost in congestion is a function of the ratio of vehicular travel demand to road infrastructure capacity, and can be studied in the framework of non-equilibrium phase transitions.

Prebiotic evolution and the emergence of life: Did it all happen in a warm little pond?

ABSTRACT. Analysis of carbon-rich meteorites and laboratory simulations of the primitive Earth suggest that prior to the emergence of the first living systems the prebiotic environment was indeed rich in a large suite of organic compounds of biochemical significance: many organic and inorganic catalysts; purines and pyrimidines, i.e., the potential for template-dependent polymerization reactions; and membrane-forming compounds. The remarkable coincidence between the monomeric constituents of living organisms and those synthesized in Miller-type experiments appears too striking to be fortuitous and strongly supports the possibility that life emerged from such a mixture. There is little doubt that self-organization phenomena played a role in the emergence of life from such a primitive soup, as shown, for instance, by the remarkable spontaneous assembly of amphiphiles into micelles and bilayer membranes, as well as the dynamical self-assembly properties of nucleic acids. Biological evolution, however, requires an intracellular genetic apparatus able to store, express and, upon reproduction, transmit to its progeny information capable of undergoing evolutionary change. Current biology indicates that the biosphere could not have evolved in the absence of a genetic replicating mechanism ensuring the stability and diversification of its basic components. How did such replicating genetic polymers appear?

Price of Complexity and Complexity of Price in Financial Networks

ABSTRACT. Financial institutions form multiplex networks by engaging in contracts with each other and by holding exposures to common assets. As a result, probabilities of default and prices of assets are interdependent. While some level of financial complexity is useful, it comes at the cost of several unintended consequences, including financial instability, inequality and allocation of capital at odds with the goal of sustainability. What can we learn from network science to make the financial system more resilient to shocks and bubbles, and to make it better serve society by channeling funds towards environmentally and socially sustainable investments?

16:00-16:30 Session : Coffee Break

Coffee break & poster session

Location: Cozumel A
16:30-18:30 Session 5A: Foundations of Complex Systems - Dynamics

Parallel session

Location: Cozumel 1
Synchronization in populations of moving oscillators

ABSTRACT. Here we will show results obtained in our group concerning synchronization of populations of moving oscillators. On the one hand, populations of identical Kuramoto oscillators that move randomly on a plane, without considering excluded-volume effects, enabled us to obtain analytical results for the time needed to synchronize [1]; later on, we have extended this framework to locally interacting self-propelled particles, for which synchronization generically proceeds through coarsening verifying the dynamic scaling hypothesis, with the same scaling laws as the 2D XY model following a quench [2]. Our results shed light on the generic nature of synchronization in time-dependent networks, providing an efficient way to understand more specific situations involving interacting mobile agents. Alternatively, we have also investigated synchronization in populations of integrate-and-fire oscillators, showing that under restrictive conditions of connectivity, the time needed for the population to synchronize is not a monotonic function of velocity [3]. [1] Naoya Fujiwara, Jürgen Kurths, and Albert Díaz-Guilera. Synchronization in networks of mobile oscillators. Phys. Rev. E 83, 025101(R) (2011). [2] D. Levis, I. Pagonabarraga, A. Díaz-Guilera. Synchronization in dynamical networks of locally coupled self-propelled oscillators. Phys. Rev. X (in press). [3] L. Prignano, O. Sagarra, and A. Díaz-Guilera. Tuning Synchronization of Integrate-and-Fire Oscillators through Mobility. Phys. Rev. Lett. 110, 114101 (2013).

Attractor switching patterns of nanoelectromechanical oscillators
SPEAKER: Martin Rohden

ABSTRACT. Networks of nanoelectromechanical oscillators have been realized experimentally and are therefore a complex system which can be studied both experimentally and analytically [1]. However, the architecture of their attractors and basins are not well understood. Here, we study a system consisting of eight nonlinear amplitude-phase nanoelectromechanical oscillators arranged in a ring topology with reactive nearest-neighbor coupling which is simple and connects directly to experimental realizations. The system possesses multiple stable fixed points and we determine escape times from different fixed points under the influence of Gaussian white noise applied to each of the oscillator’s amplitudes. Furthermore, we study switching patterns between different stable states by applying Gaussian white noise. We compare our findings with analytical results based on the Freidlin-Wentzell potential and determine which states are most stable for which parameter setting. This work serves as a step towards a controlled switching mechanism.

[1] J. Emenheiser, A. Chapman, M. Posfai, J. Crutchfield, M. Mesbahi and R. D’Souza, Chaos 26, 094816 (2016).

Physical Aging, Emerging Long-Period Orbits and Self-similar Temporal Patterns in Deterministic Classical Oscillators

ABSTRACT. Physical aging is understood as the breaking of time translation invariance in the measurement of autocorrelation functions and long intrinsic time scales. In previous work [1] we had shown physical aging of repulsively coupled classical oscillators under the action of noise. Noise led to the migration of oscillator phases through a rich attractor space. To explore the role of stochastic fluctuations in physical aging, we here [2] replace noise by a quenched disorder in the natural frequencies. Again we identify physical aging, now in a deterministic rather than stochastic system of repulsively coupled Kuramoto oscillators, where the attractor space is explored quite differently. Tracing back the origin of aging, we identify the long transients that it takes the deterministic trajectories to find their stationary orbits in the rich attractor space. The stationary orbits show a variety of different periods, which can be orders of magnitude longer than the periods of individual oscillators. Most interestingly, among the long-period orbits we find self-similar temporal sequences of temporary patterns of phase-locked motion on time scales, which differ by orders of magnitude. So the self-similarity refers to patterns in time rather than static fractals in space. The ratio of time scales is determined by the ratio of widths of the distributions about the common natural frequency, as long as the width is not too large. The effects are particularly pronounced if we perturb about a situation in which a self-organized Watanabe-Strogatz phenomenon is known to happen, going along with a continuum of attractors and a conserved quantity. We expect similar phenomena in coupled FitzHugh-Nagumo elements with a certain disorder in the model parameters and antagonistic couplings as guarantee for a rich attractor space.

References: [1] F. Ionita and H. Meyer-Ortmanns, Aging of classical oscillators, Phys. Rev. Lett.112, 094101 (2014). [2] D. Labavic and H. Meyer-Ortmanns, Emerging long orbits and self-similar temporal sequences in classical oscillators, arXiv: 1701.0688, submitted in an extended version for publication (2017).

The effective structure of complex networks: Canalization in the dynamics of complex networks drives dynamics, criticality and control
SPEAKER: Luis M. Rocha

ABSTRACT. Network Science has provided predictive models of many complex systems, from molecular biology to social interactions. Most of this success is achieved by reducing multivariate dynamics to a graph of static interactions. Such a network-structure approach has provided many insights about the organization of complex systems. However, there is also a need to understand how to control them; for example, to revert a diseased cell to a healthy state in systems biology models of biochemical regulation.

Based on recent work [1,2] we show that the control of complex networks crucially depends on redundancy that exists at the level of variable dynamics. To understand the effect of such redundancy, we study automata networks, both systems biology models and large random ensembles of Boolean networks (BN). In these discrete dynamical systems, redundancy is conceptualized as canalization: when a subset of inputs is sufficient to determine the output of an automaton. We discuss two types of canalization: effective connectivity and input symmetry [2].
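The idea behind effective connectivity can be illustrated with a brute-force check of which inputs a Boolean automaton actually depends on. This is a simplified proxy for the canalization measures of [1,2], not the authors' toolbox: here an input counts as effective if flipping it changes the output for at least one configuration.

```python
from itertools import product

def effective_inputs(f, k):
    """Return the set of inputs a k-input Boolean function actually uses.

    An input is effective if flipping it changes the output for at least one
    input configuration; canalization shows up as ineffective or partially
    redundant inputs, so |effective_inputs| <= k (the structural in-degree).
    """
    effective = set()
    for bits in product((0, 1), repeat=k):
        y = f(bits)
        for i in range(k):
            flipped = bits[:i] + (1 - bits[i],) + bits[i + 1:]
            if f(flipped) != y:      # output depends on input i somewhere
                effective.add(i)
    return effective
```

A function that ignores one of its structural inputs has lower effective connectivity than its in-degree suggests, which is precisely why structure-only controllability predictions can miss the mark.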

First, we show that effective connectivity strongly influences the controllability of multivariate dynamics. Indeed, predictions made by structure-only methods can both undershoot and overshoot the number of driver variables, and misidentify which sets of variables actually control the BN. Specifically, we discuss the effect of effective connectivity on several structure-only controllability theories: structural controllability, minimum dominating sets, and feedback vertex sets [1,3].

To understand how control and information effectively propagate in such complex systems, we uncover the effective graph that results after computation of effective connectivity. To study the effect of input symmetry, we further develop our dynamics canalization map, a parsimonious dynamical system representation of the original BN obtained after removal of all redundancy [2]. Mapping canalization in BN via these representations allows us to understand how control pathways operate, aiding the discovery of dynamical modularity [4] and robustness present in such systems [2].

We also demonstrate that effective connectivity is a tuning parameter of BN dynamics [5], leading to a new theory for criticality, which significantly outperforms the existing theory in predicting the dynamical regime of BN (chaos or order). Input symmetry is also shown to affect criticality, especially in networks with large in-degree. Moreover, we argue that the two forms of canalization characterize qualitatively distinct phenomena, since Boolean functions cover the space of both measures and prediction performance of criticality is optimized for models which parameterize the two forms separately [6].

Finally, we will showcase a new Python toolbox that allows the computation of all canalization measures, as well as the effective graph and the dynamics canalization map. We will demonstrate it by computing the canalization of a battery of 50+ systems biology automata networks.

[1] A. Gates and L.M. Rocha. [2016]. Scientific Reports 6, 24456. [2] M. Marques-Pita and L.M.Rocha [2013]. PLOS One, 8(3): e55946. [3] A. Gates, R.B. Correia and L.M. Rocha. [2017]. In Preparation. [4] A. Kolchinsky, A. Gates and L.M. Rocha. [2015] Phys. Rev. E. 92, 060801(R). [5] M. Marques-Pita, S. Manicka and L.M.Rocha. [2017]. In Preparation. [6] S. Manicka and L.M.Rocha. [2017]. In Preparation.

All at once: a global representation of δt-connected time-respecting paths in temporal networks

ABSTRACT. Temporal networks code the interaction dynamics of large numbers of entities, which leads to the emergence of complex structures observed in physics, biology, or social systems. Such time-varying structures largely determine the speed and final outcome of ongoing dynamical processes, including contagion, synchronisation dynamics, or evolutionary games. It has been shown in particular that the timings of contacts between nodes have a major role in this sense (a) because any diffusion process has to follow causal, time-respecting paths spanned by sequences of contacts; and (b) because the speed of spreading is heavily influenced by temporal inhomogeneities, as well as timing correlations between the contacts of nodes. Despite the recognised importance of time-varying interactions, our understanding of temporal networks is still limited, as neither a computationally effective representation nor methodologies to capture temporal and structural correlations simultaneously have been provided.

Correlations that frequently result in short waiting times between the contacts of individual nodes are of special importance to a subclass of spreading processes. These processes are characterised by a waiting-time constraint, where the spreading quantity has to be transmitted within some given time $\delta t$. Examples of such spreading processes include variants of common disease spreading models such as the Susceptible-Infectious-Recovered and Susceptible-Infectious-Susceptible models, social contagion processes, ad-hoc routing protocols for mobile agents, or passenger routing in transport networks. The waiting-time constraint of a dynamical process can be incorporated into time-respecting paths, which consist of successive contact events that share at least one common node and are separated by no more than $\delta t$ units of time. The connectedness of such paths is key to understanding the possible outcomes of a dynamical process: e.g., for very low values of $\delta t$, network-wide connectivity is unlikely to exist and spreading processes may not percolate the temporal network, while for large $\delta t$ the temporal structure may be connected, allowing for the emergence of global phenomena. However, the detection and analysis of time-respecting paths is computationally expensive, especially in large temporal networks, where they really matter.

Here we introduce a new representation of temporal networks by mapping them to static weighted directed acyclic graphs, called event graphs. We show that event graphs can be related to directed percolation, with the characteristic quantities showing the expected scaling behaviour. Event graphs provide a powerful tool that encapsulates the complete set of time-respecting paths at once, even for very large temporal networks; they can easily be used to study unlimited- and limited-waiting-time processes on networks; and they can capture the complete set of potentially affected nodes for spreading initiated from a given node and time, without requiring the expensive computation of average outcomes of stochastic simulations. They make it easy to compute centrality scores for events, links, and nodes and to identify the complete set of $\delta t$-connected temporal components in a computationally economical way. We illustrate these benefits of event graphs by performing extensive simulation studies and analysing large-scale data sets with them. This representation opens new directions for studying system-level higher-order correlations in temporal networks.
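The mapping itself is compact and can be sketched directly from the definition given above: two events become adjacent in a static DAG when they share a node and are separated by at most $\delta t$. This is a naive quadratic sketch for illustration; an efficient implementation would exploit per-node event orderings.

```python
def event_graph(events, dt_max):
    """Map a list of contact events (time, u, v) to a static DAG.

    One node per event; a directed edge i -> j whenever events i and j share
    a node and 0 < t_j - t_i <= dt_max, so directed paths in the DAG are
    exactly the dt-connected time-respecting paths of the temporal network.
    Returns (time-sorted events, edge list over their indices)."""
    ev = sorted(events)                       # sort events by time
    edges = []
    for i, (t1, u1, v1) in enumerate(ev):
        for j in range(i + 1, len(ev)):
            t2, u2, v2 = ev[j]
            if t2 - t1 > dt_max:
                break                         # all later events are too distant
            if t2 > t1 and {u1, v1} & {u2, v2}:
                edges.append((i, j))
    return ev, edges
```

Reachability queries on this static DAG then answer which events (and hence nodes) a spreading process started at a given event could possibly reach, without stochastic simulation.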

Multiagent Coordination Dynamics: The Human Firefly Experiment
SPEAKER: Mengsen Zhang

ABSTRACT. Living systems often contain oscillatory activities on multiple spatiotemporal scales. Studying the coordination among diverse oscillatory processes is essential for understanding the organization of complex structures and their behavior. Theoretical modeling has tended to focus on systems at either end of the spectrum: large-scale networks for their propensity to globally synchronize (with less attention given toward segregation into local structures) or systems with very few coupled oscillators (N≤3) that often exhibit a rich mix of integrative and segregative tendencies, but lack the necessary number of components to uncover organization at multiple spatiotemporal scales. In real-life human interactions, much happens in between. We have developed an experimental paradigm, dubbed the “human fireflies” to study the formation and evolution of coordinative structures in small groups (N=8 people). We aim to (1) examine the effect of symmetry breaking on collective dynamics; (2) characterize the forms of coordination that emerge at a micro level; and (3) provide benchmark experimental observations against which the relevance of theoretical models may be tested. Participants (8 people ×15 groups) were seated around an octagonal table and performed a rhythmic tapping task. Each participant was equipped with a touchpad that broadcast his/her tapping behavior to others, and an array of eight LEDs that displayed tapping behavior of self and others as brief flashes. Before each 50s period of interaction, participants were paced with a metronome (10s), and instructed to continue tapping at its frequency throughout the interaction while being attentive to others’ behavior. We manipulated the spatiotemporal symmetry of this eight-component system through metronome assignments, effectively splitting people into two groups of four such that frequencies were identical within groups, but different between groups. 
Symmetry breaking in the system was parametrically controlled by the frequency difference between the two groups (δf = 0, 0.3 or 0.6Hz). We examined under which conditions the groups remained segregated as two coordinative structures behaving at distinct frequencies, or integrated into a single superstructure in which the initial division into two groups was lost. Inferring from the relation between within- and between-group phase coordination, we identified the critical frequency difference that borders the regimes of integration and segregation. Close examination of the dynamics revealed that integration did not take the canonical form of stable phase relations (i.e. phase-locked synchronization). Rather a form of metastable relations emerged (i.e. recurrent dwells approaching stable phase relations, interleaved with escapes from said relations). In multiagent systems, such metastable coordination dynamics enables a component to switch between multiple coordinative structures in a recurrent fashion. Our results supply future theoretical studies of the dynamics of multiagent coordination with an empirical reference: plausible models ought to exhibit metastable coordination dynamics and contain a critical level of symmetry breaking that demarcates the boundary between segregation and integration. Such models will be useful to explore the consequences of different forms of coordination in terms of, inter alia, the complexity and stability of large-scale social networks, and more generally to aid discovery of laws of human behavior and their interaction.

Time-augmented bond percolation for statistical inference in complex networks
SPEAKER: Dijana Tolic

ABSTRACT. We propose a novel method for solving source and model inference problems on arbitrary graphs and for a broader class of contagion network processes, such as time-homogeneous compartmental contagion models with non-recurrent states. Different source detection estimators vary in their assumptions on the network structure or on the spreading process models. Furthermore, the problem of source inference on arbitrary graphs is computationally hard (#P-complete), which is why most state-of-the-art methods, such as belief propagation and message passing, consider only network structures that are locally tree-like.

Given a contact network and a snapshot of a dynamical process at a certain time, we propose a mapping of spreading dynamics to weighted networks, where edge weights represent interaction time delays. This mapping is constructed in such a way that the time respecting paths (shortest paths) in the weighted network preserve the causality of spreading.

We overcome the limitations of current methods such as message passing, mean-field-like approximations and kinetic Monte Carlo methods, and establish the connection of our mapping with bond percolation theory. Our method is relevant for a broader class of inference problems, such as localizing the total set of source nodes which generate a dynamical process on a complex network. Multiple source inference is an even more challenging problem of sizable practical importance. We show that, under certain assumptions, the proposed methodology is able to locate multiple sources using only single-source solutions.
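The mapping described above can be illustrated with a minimal sketch (a toy construction in the spirit of the abstract, not the authors' code): each directed edge is assigned a random transmission delay, the bond is closed (infinite weight) when recovery would pre-empt transmission, and shortest-path times over these weights then equal the infection arrival times of one stochastic realization, so causality of spreading is preserved. The exponential transmission and recovery rates are illustrative assumptions.

```python
import heapq
import math
import random

def sample_delay_graph(edges, beta, gamma, rng):
    """One stochastic realization of an SIR-like process mapped to edge
    weights: each directed edge transmits after an Exp(beta) delay, but
    only if that delay beats the Exp(gamma) recovery of the transmitter
    (otherwise the bond is closed: weight = inf)."""
    w = {}
    for (u, v) in edges:
        for e in ((u, v), (v, u)):
            t = rng.expovariate(beta)
            r = rng.expovariate(gamma)
            w[e] = t if t < r else math.inf
    return w

def arrival_times(weights, source):
    """Dijkstra over the sampled delays: shortest time-respecting path
    lengths from `source` equal infection arrival times in this
    realization. Unreached nodes are absent from the result."""
    adj = {}
    for (u, v), t in weights.items():
        adj.setdefault(u, []).append((v, t))
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, t in adj.get(u, []):
            if d + t < dist.get(v, math.inf):
                dist[v] = d + t
                heapq.heappush(pq, (d + t, v))
    return dist
```

Averaging such realizations over many sampled weight configurations gives Monte Carlo estimates of arrival-time statistics, which is the ingredient a percolation-based source estimator ranks candidate sources by.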

Recovery time after localized perturbations in complex dynamical networks

ABSTRACT. Maintaining the synchronous motion of dynamical systems interacting on complex networks is often critical to their functionality. However, real-world networked dynamical systems operating synchronously are prone to random perturbations driving the system to arbitrary states within the corresponding basin of attraction, thereby leading to periods of desynchronized dynamics with a priori unknown durations. Thus, it is highly relevant to have an estimate of the duration of such transient phases before the system returns to synchrony, following a random perturbation to the dynamical state of any particular node of the network. We address this issue here by proposing the framework of single-node recovery time (SNRT), which provides an estimate of the relative time scales underlying the transient dynamics of the nodes of a network during its restoration to synchrony. We utilize this in differentiating the particularly slow nodes of the network from the relatively fast nodes, thus identifying the critical nodes which, when perturbed, lead to significantly enlarged recovery times of the system before resuming synchronized operation. Further, we reveal explicit relationships between the SNRT values of a network and its global relaxation time when starting all the nodes from random initial conditions. Earlier work on relaxation time generally focused on investigating its dependence on macroscopic topological properties of the respective network. However, we employ the proposed concept for deducing microscopic relationships between topological network characteristics at the node level and the associated SNRT values. The framework of SNRT is further extended to a measure of resilience of the different nodes of a networked dynamical system.
We demonstrate the potential of SNRT in networks of Rössler oscillators on paradigmatic topologies and a model of the power grid of the United Kingdom with second-order Kuramoto-type nodal dynamics, illustrating the potential practical applicability of the proposed concept.
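A simplified illustration of the SNRT idea (using linear diffusive consensus dynamics rather than the Rössler or second-order Kuramoto systems of the talk; tolerance, step size and perturbation amplitude are illustrative assumptions): perturb one node of an otherwise synchronized network and time how long the system takes to return within a tolerance of the synchronous state.

```python
def single_node_recovery_time(adj, k, eps=1e-3, dx0=1.0, dt=0.01, tmax=200.0):
    """Recovery time after perturbing node k in linear diffusive dynamics
    x_i' = sum_j a_ij (x_j - x_i). The synchronous state is x_i = const,
    so we track the largest deviation from the network mean and return
    the first time it drops below eps."""
    n = len(adj)
    x = [0.0] * n
    x[k] = dx0  # localized perturbation
    t = 0.0
    while t < tmax:
        m = sum(x) / n
        if max(abs(xi - m) for xi in x) < eps:
            return t
        dx = [sum(adj[i][j] * (x[j] - x[i]) for j in range(n)) for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        t += dt
    return tmax
```

Computing this quantity for every node ranks the slow nodes against the fast ones; on a path graph, for example, perturbing a leaf yields a longer recovery time than perturbing the central node, since the leaf perturbation overlaps strongly with the slowest (Fiedler) relaxation mode.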

16:30-18:30 Session 5B: Economics and Finance - Banking, financial markets, risk & regulation I

Parallel session

Location: Xcaret 1
Signatures of dynamical regimes in the temporal Bitcoin transaction networks

ABSTRACT. Half a decade after Bitcoin became the first widely used cryptocurrency, blockchains are receiving considerable interest from both industry and the research community. We study the dynamical and structural properties of the temporal Bitcoin network, which can be reconstructed from the historical Bitcoin transactions between users. Applying the widely used method of Non-negative Matrix Factorization (NMF), we obtain a low-rank approximation of the temporal Bitcoin network in an unsupervised clustering framework. By introducing a Shannon entropy measure, we are able to quantify the change of dynamical regimes and characterize the situations when the cluster assignment probability distribution is concentrated or dispersed. Each of the basis vectors, or networks, has a clear physical interpretation due to the non-negativity constraint introduced in the structural decomposition. A regime is formally defined as a cluster or group of similar BTC transaction snapshots. The total number of regimes is set equal to the number of clusters, which is estimated from the spectrum of the matrix that describes the BTC transaction dynamics. With the proposed information-theoretic and spectral measures, we quantify points in time where the system demonstrates regime-switching dynamics. Finally, we compare and analyze the Bitcoin exchange prices against the USD, CNY and EUR with the changes of dynamical regimes.
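The pipeline sketched in the abstract, a low-rank NMF of stacked network snapshots followed by an entropy measure on the cluster loadings, might look as follows. This is a toy multiplicative-update implementation for illustration, not the authors' code; matrix shapes, the iteration count and the rank are assumptions.

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Basic multiplicative-update NMF (Lee-Seung style): V ~ W @ H,
    with V (snapshots x edge features, non-negative), W (snapshots x k
    cluster loadings) and H (k basis networks)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def assignment_entropy(w_row):
    """Shannon entropy (bits) of one snapshot's normalized cluster
    loadings: near 0 when a single regime dominates, large when the
    assignment is dispersed across regimes."""
    p = w_row / w_row.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Tracking `assignment_entropy` of each snapshot's row of W over time gives a scalar signal whose jumps flag candidate regime switches, which can then be compared against exchange-rate series.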

Reshaping Financial Network: Central Clearing Counterparties and Systemic Risk

ABSTRACT. Over-the-counter (OTC) derivatives constitute a complex network of risk transfers. Due to their opacity and lack of regulation, these markets played a significant role in the global financial crisis of 2007-2009 (Haldane, 2009). As a response, the G20 leaders committed to make derivatives safer by increasing their transparency and mandating clearing for certain classes of OTC derivatives through central clearing counterparties (CCPs). By interposing themselves between existing nodes in the network, CCPs are expected to reduce systemic risk by acting as a buffer between different nodes, decreasing exposures via multilateral netting, and ensuring loss mutualisation. However, the introduction of CCPs drastically modifies the underlying network structure, creating a more star-like architecture. This paper investigates whether and under which conditions mandatory central clearing improves financial stability. In particular, we show that when a shock is sufficiently large, a highly diversified and insufficiently capitalized CCP connects all its members rather than insulating them. We analyse how central clearing redistributes wealth among OTC market participants. We show how the participation of each member in a CCP creates externalities for all other members on three different levels: risk, netting benefits, and capital costs. First, we show that central clearing transfers losses from riskier to safer members. The size of the exposure between members cannot be fully controlled and managed by any single member, since it is determined not by bilateral transactions between them but by the positions and credit quality of all the other CCP members. Second, we analyse the redistribution effects arising from multilateral netting. Bilateral netting is beneficial for those agents whose credit quality is high relative to the average quality of their counterparties; however, one party gains exactly the amount the other loses.
In a CCP, the netting effect comes along with the counterparty effect, i.e. the change in the average quality of the CCP. The combination of these two effects makes it possible for both counterparties to increase expected payoffs by transferring part of the negative counterparty effect to other members. We show that members with relatively high quality tend to make their positions towards the CCP flatter, while riskier members keep their positions more directional. This makes the CCP more dependent on the payments from risky members and might increase systemic risk. Third, we show that since the default fund requirements associated with an additional unit cleared by a big member are shared among all members, the small members pay relatively more per unit of cleared notional. These negative externalities can be avoided by setting individual requirements in accordance with each member's contribution to the total capital requirement. Last, we provide an empirical EU-wide multi-layer network of CCPs and their members, focusing on common memberships. In general, our work points towards the need to further understand the impact of policies on the network structure and, in particular, the redistributional effects of mandatory central clearing, how they change banks' strategies and influence systemic risk.

Liquidity crises in the limit order book: a tale of two time scales

ABSTRACT. We present an empirical analysis of the microstructure of financial markets and, in particular, of the static and dynamic properties of liquidity. We find that on relatively large time scales (15 min) large price fluctuations are connected with the failure of the subtle mechanism of compensation between the flows of market and limit orders: in other words, the missed revelation of the latent order book breaks the dynamical equilibrium between the flows, triggering the large price jumps. This behavior naturally leads to a dynamical definition of liquidity. On smaller time scales (30 s), instead, the static depletion of the limit order book is an indicator of an intrinsic fragility of the system, which leads to a strongly nonlinear enhancement of the response, in terms of price impact, to incoming orders, even if their volume is small. In order to quantify this phenomenon, we introduce a static measure of the liquidity imbalance present in the book and we show that this quantity is correlated to both the sign and the magnitude of the next price movement. These empirical findings prove that large price fluctuations are due to different mechanisms that act at different time scales. In both cases, the volumes of the incoming orders play a minor role with respect to the fragility of the system and, in particular, to the possible typologies of liquidity crises we discuss. In conclusion, the effective liquidity should be defined in relation to the time interval one wants to consider.
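One simple static imbalance proxy of the kind discussed, an illustrative formulation rather than necessarily the authors' exact measure, compares bid- and ask-side volumes over the best few levels of the book:

```python
def liquidity_imbalance(bids, asks, levels=5):
    """Static liquidity imbalance over the best `levels` price levels of
    a limit order book: (Vb - Va) / (Vb + Va), bounded in [-1, 1].
    `bids` and `asks` are (price, volume) pairs sorted best-first.
    Positive values signal buy-side pressure, i.e. a depleted ask side,
    hinting at an upward next price move; negative values the opposite."""
    vb = sum(v for _, v in bids[:levels])
    va = sum(v for _, v in asks[:levels])
    return (vb - va) / (vb + va) if vb + va else 0.0
```

In the spirit of the abstract, the sign of this quantity can be correlated with the sign of the next price movement, and its magnitude with the size of the move, to detect the statically fragile (depleted) states of the book.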

Statistically validated network of portfolio overlaps and systemic risk
SPEAKER: Giulio Cimini

ABSTRACT. Common asset holding by financial institutions, namely portfolio overlap, is nowadays regarded as an important channel for financial contagion with the potential to trigger fire sales and thus severe losses at the systemic level. In this paper we propose a method to assess the statistical significance of the overlap between pairs of heterogeneously diversified portfolios, which then allows us to build a validated network of financial institutions where links indicate potential contagion channels due to realized portfolio overlaps. The method is implemented on a historical database of institutional holdings ranging from 1999 to the end of 2013, but can be in general applied to any bipartite network where the presence of similar sets of neighbors is of interest. We find that the proportion of validated network links (i.e., of statistically significant overlaps) increased steadily before the 2007-2008 global financial crisis and reached a maximum when the crisis occurred. We argue that the nature of this measure implies that systemic risk from fire sales liquidation was maximal at that time. After a sharp drop in 2008, systemic risk resumed its growth in 2009, with a notable acceleration in 2013, reaching levels not seen since 2007. We finally show that market trends tend to be amplified in the portfolios identified by the algorithm, such that it is possible to have an informative signal about financial institutions that are about to suffer (enjoy) the most significant losses (gains).
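A standard way to assess the significance of an overlap between two portfolios, in the spirit of the method described, is to compute the tail probability of observing at least the realized overlap by chance. The sketch below uses the simplest hypergeometric null (uniform random portfolios); the paper's null model additionally accounts for the heterogeneous diversification of institutions, so this is an illustration, not the paper's exact test.

```python
from math import comb

def overlap_pvalue(N, n1, n2, k):
    """P(overlap >= k) under a null model in which two portfolios of n1
    and n2 assets are drawn uniformly at random from a universe of N
    assets (hypergeometric right tail). A small p-value marks the
    overlap as statistically significant, i.e. a validated link."""
    total = comb(N, n2)
    tail = sum(comb(n1, x) * comb(N - n1, n2 - x)
               for x in range(k, min(n1, n2) + 1))
    return tail / total
```

Applying the test to every pair of institutions and keeping the links whose p-value survives a multiple-testing correction (e.g. Bonferroni or FDR) yields the validated network of potential fire-sale contagion channels.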

Quantification of systemic risk from overlapping portfolios in Mexico

ABSTRACT. Financial markets are exposed to systemic risk, the risk that a substantial fraction of the system ceases to function, and collapses. Systemic risk can propagate through different mechanisms and channels of contagion. One important form of financial contagion arises from indirect interconnections between financial institutions mediated by financial markets. This indirect interconnection occurs when financial institutions invest in common assets and is referred to as overlapping portfolios. In this work we quantify systemic risk from overlapping portfolios. Having complete information of security holdings of major Mexican financial intermediaries and the ability to uniquely identify securities in their portfolios allows us to represent the Mexican financial system as a bipartite network of securities and financial institutions. This makes it possible to quantify systemic risk arising from overlapping portfolios. We show that focusing only on direct exposures underestimates total systemic risk levels by up to 50%. By representing the financial system as a multi-layer network of direct exposures (default contagion) and indirect exposures (overlapping portfolios) we estimate the mutual influence of different channels of contagion. The method presented here is the first objective data-driven quantification of systemic risk on national scales that includes overlapping portfolios.

Repeated Games, Decisions, and Universal Turing Machines
SPEAKER: Michael Harre

ABSTRACT. Turing machines have been used by Ken Binmore, Vela Velupillai and many others as a tool for understanding the interactions between economic agents with idealised computational abilities. This has led to a better understanding of the limits of the computability of equilibria and has helped define "complexity economics" using concepts from Turing, Shannon, Kolmogorov and Simon. This talk takes a similar approach to that of earlier work on repeated games and establishes some formal results regarding Turing Machines and economic theory. The starting point is agents playing repeated games in which the agents use past choices and outcomes as the information set for their next choice. These repeated games are separated into two distinct but coupled computational steps: the interaction step, in which the agents' joint actions are used to derive payoffs, and the decision step, where agents use past interactions to decide what action to choose next. The decisions in the second step are easily thought of as an agent performing computations, where we assume only that the agents are finite automata, not Turing Machines. On the other hand, it is less common for the underlying game that defines the economic interactions (the first step) to be thought of as a form of computation per se. It is shown that, after a suitable relabeling of the system elements, the combination of interactions and decisions is isomorphic to classical logic gates for a large class of games. This provides a common formal language in which to analyse the relationship between strategies (an agent's 'internal' cognitive process) and the economic interaction (the logical structure of the game). Classical strategies such as 'tit-for-tat' and 'win-stay, lose-shift' will be used as examples.
With this approach a number of interesting properties emerge and the consequences for economic theory are discussed, with a focus on showing that for a certain idealised model of the economy it can be readily shown that such an economy is a Universal Turing Machine.

Study of the opening and closing price dynamics in the NYSE using the Takens Embedding Theorem

ABSTRACT. For a long time there has been a controversy about whether the opening and closing prices in the markets have different dynamics. In effect, the incentives of economic agents differ in each case: at the close of the market, economic agents are under pressure not to leave open positions, while at the opening they are pressured by the uncertainty created by the overnight news in the markets. In this work we use the Takens Embedding Theorem to study the dynamical properties of the associated time series. The fundamental conclusion is that there are no perceptible differences between the two phenomena.
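The core tool here, delay embedding, reconstructs a multidimensional state portrait from a scalar price series. A minimal sketch follows; the embedding dimension and delay below are illustrative choices (in practice they are typically selected via false-nearest-neighbors and mutual-information criteria).

```python
def delay_embed(series, dim, tau):
    """Build delay vectors (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}) from
    a scalar time series, per Takens' theorem: for suitable dim and tau
    the reconstructed portrait is diffeomorphic to the underlying
    attractor, so dynamical invariants can be compared across series."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[t + j * tau] for j in range(dim)) for t in range(n)]
```

Embedding the opening-price and closing-price series separately and comparing invariants of the reconstructed portraits (e.g. correlation dimension or largest Lyapunov exponent) is the kind of comparison the abstract describes.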

16:30-18:30 Session 5C: Infrastructure, Planning and Environment - Urban Flows and Tranport Systems II

Parallel session

Location: Xcaret 3
Immigrant community integration in world cities

ABSTRACT. Immigrant integration is a complex process comprising many different factors such as employment, housing, education, health, language and legal recognition, as well as the building of a new social fabric. In recent years, there have been advances in the definition of a common framework concerning immigration studies and policies, although the approach to this issue remains strongly country-based. The outcome of the process depends on the culture of origin, the culture of integration and the policies of the host country's government. Traditionally, spatial segregation in the residential patterns of a certain community has been taken as an indication of ghettoization or lack of integration. While this applies to immigrant communities, it can also affect minorities within a single country. The spatial isolation is reflected in the economic status of the segregated community and in the social relationships of its members.

Immigrant integration has been the focus of many research studies, traditionally using national census data and similar surveys. In parallel, in the last few years we have witnessed a paradigm shift in the context of socio-technical data. Human interactions are being digitally traced, recorded and analyzed at large scale. Sources are as varied as mobile phone data, credit card transactions, or Twitter data. Going beyond the urban scale, Twitter data have been used to detect the diffusion of human mobility and the languages spoken. Language identification related to the spatial location of Twitter users has been investigated, towards a more complete characterization of local spatial dialects. Finally, Twitter has been used as a statistical database for representations of demographic characteristics of users and language identification patterns. Several attempts have been made to identify, characterize and group international communities in cities based on Information and Communication Technologies (ICT) data and to perform social segregation analyses.

Here we present a novel approach to quantify the spatial integration of immigrant communities in urban areas worldwide by using social media information collected through the Twitter microblogging platform. First, we characterize immigrants through their digital spatio-temporal communication patterns, defining their place of residence and most probable native language. The distribution of residences detected by Twitter has been validated with census data for three major cities: Barcelona, London and Madrid. Then we perform a spatial distribution analysis through a modified entropy metric, as a quantitative measure of the spatial integration of each community in cities and the corresponding relevance within countries. These results have recently been posted in a paper on arXiv (F. Lamanna, M. Lenormand, M. Henar Salas-Olmedo, G. Romanillos, B. Goncalves, Jose J. Ramasco, Immigrant community integration in world cities, arXiv:1611.01056 (2016)). The lower the spatial entropy becomes, the more isolated the communities are. Cities can be classified into three major groups depending on the number of immigrant communities hosted and how well they spatially assimilate them. Along the same lines, one can also study which cultures integrate better in which hosting countries.
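A minimal sketch of an entropy-based spatial integration measure of the kind described (the paper uses a modified entropy metric; this is the plain normalized Shannon version, shown only to fix ideas):

```python
from math import log

def spatial_entropy(counts):
    """Normalized Shannon entropy of a community's residents across
    spatial cells: 1 means the community is uniformly spread over the
    city (integrated), values near 0 mean it is concentrated in a few
    cells (spatially isolated). `counts` holds residents per cell."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * log(p) for p in probs)
    return h / log(len(counts)) if len(counts) > 1 else 0.0
```

Computing this per community and per city allows the cross-city comparison described in the abstract; a correction for the uneven distribution of the overall population across cells would be one natural "modification" of the plain metric.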

Estimating aviation passengers flows and airports catchment areas from geo-located tweets

ABSTRACT. The current practices in Air Transportation Analytics and Management substantially disregard the passengers' movements outside of the airport. This historical lack of perspective is a consequence of the intrinsic limitations, in both accuracy and validity, of the traditional methods used to obtain data on passengers' activities. As a consequence, managers and policy makers are often not provided with sufficient information to correctly weigh their decisions on the basis of the consequences that are ultimately expected for the passengers.

The recent development and popularised use of Information and Communication Technologies offer new alternative sources of information, allowing for the precise derivation of individual mobility at different spatial scales and overcoming many of the limitations of traditional methods. Here, we investigate how one can extract precise information on passengers' trajectories from a large database of geolocated tweets including over 14M users tracked for two years in the European continent.

We first extract statistics on international movements. About 15% of the users tweet from more than a single country, and we associate consecutive tweets in different countries with an international trip. Our estimate is validated by comparing the observed flows for the first quarter of 2015 with two independent datasets: the ticket sales provided by Sabre Airline Solutions and the passenger flows between airports provided by Eurostat. The observed deviations are associated with the use of other modes of transport, as can be observed for neighbouring countries such as Austria and Slovakia. The match is instead closer where flying is the only viable option, such as between Spain and the United Kingdom.

We then perform an analysis at a smaller scale by associating observed long-range displacements with the closest pair of airports (departure and arrival) between which domestic or international commercial flights are regularly available. This rationale allows us to estimate the airports' catchment areas as long as airports do not compete for the same region. This is not the case for large metropolitan areas served by more than one airport. To analyse this more complex scenario, we integrate information on ticket prices and on the travel times between the user's home and the alternative airports (provided by Google's API), and describe the passenger decision behavior using a discrete choice (multinomial logit) model to estimate the value of time in the urban segments of aviation mobility.
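The airport-choice step can be sketched with a standard multinomial logit. The coefficients below are illustrative assumptions, not estimates from the paper; in such a specification the ratio of the time and price coefficients gives the implied value of travel time.

```python
from math import exp

def airport_choice_probs(alternatives, beta_price, beta_time):
    """Multinomial logit over candidate departure airports. Each
    alternative is a (ticket_price, access_time) pair, with utility
    U_a = beta_price * price_a + beta_time * time_a (betas negative).
    Returns the choice probability of each alternative."""
    utils = [beta_price * p + beta_time * t for (p, t) in alternatives]
    m = max(utils)  # subtract the max for numerical stability
    expu = [exp(u - m) for u in utils]
    z = sum(expu)
    return [e / z for e in expu]
```

With negative coefficients, the airport that is both cheaper and faster to reach gets the larger share, and fitting the coefficients against observed choices yields the value-of-time estimate mentioned in the abstract.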

This study shows how the availability of ICT data allows for a new comprehensive perspective on the door-to-door characteristics of aviation mobility. The air transport system interacts with other transport modes, both competing for the same users and being integrated in a complex multi-layer mobility network. While passengers can still be described within the classical rational choice paradigm, new models must be developed to include the influence of these aspects on the passenger's travel decisions.

Efficient boarding and alighting in public transportation systems

ABSTRACT. Millions of people use public transportation systems (PTS) every day. An efficient regulation of passengers and the correct use of the infrastructure can lead to improved performance. Crowded PTS without a suitable regulatory strategy for the boarding and alighting process can be a source of significant delay. In this research we present two main results: a computational simulation to test passenger regulatory strategies for boarding, and the implementation of the best strategy in a station of the Mexico City Metro. Computer simulations of the Social Force Model (SFM) have been used to describe collective patterns that appear in real pedestrian motion. We use the SFM to implement a realistic simulation of passenger flow, considering specifically the boarding and alighting processes. To calibrate the model variables, we performed a study of the dynamics in a Mexico City Metro line, where we obtained the boarding and alighting times, the train station time, and the estimated delay time. We implemented a "default strategy" for boarding, which models current passenger dynamics. We compare this with two alternative regulatory strategies: "dedicated doors", which assigns exclusive doors for boarding and alighting, and "guide lines", which organizes the passengers on the platform to create two flows for entry and one flow for exit. Based on the computer simulation results, we implemented the "guide lines" strategy in a station of the Mexico City Metro. Our results show a reduction of boarding and alighting time by 10% to 15%, and a reduction of station delay by 15% to 25%. This new scheme has been favorably accepted by passengers. Our work evaluates an efficient boarding strategy, reducing passenger waiting times at stations and also the travel times of all trains. The potential adoption of this strategy opens the opportunity for a low-cost, high-impact improvement in public transportation systems.

Biases and errors in the temporal sampling of random movements

ABSTRACT. New sources of ICT data make it possible to track individual trajectories at an unprecedented scale. However, as is the case for any dataset, these new sources of information have limits and biases that need to be assessed.

Here, we study trajectories alternating rests and moves of random durations. Isolating and identifying these intertwined static and dynamic behaviours is an important statistical challenge, and a growing array of segmentation methods based on spatio-temporal characteristics of the trajectories have been tailored to specific datasets. These procedures are however limited by technological constraints that impose a temporal sampling of the trajectory, as one needs a time $\Delta$ between sampled points significantly smaller than the characteristic duration of the rests and moves under analysis to reconstruct the trajectory.

This issue is particularly evident in human mobility data. Currently, the most common sources used are Call Detail Records (CDR) of mobile phone data and geo-located social media accesses, where the flaws described above are amplified by the random and bursty nature of human communications.

In this paper, we discuss the effect of periodical and bursty sampling on the measured properties of random movements. We consider trajectories as an alternating renewal process, a generalisation of Poisson processes to arbitrary holding times and to two alternating kinds of events, moves and rests, whose durations $t$ and $\tau$ are regarded as independent random variables. The sampling time interval $\Delta$ depends on the particular experiment and can be either constant or randomly distributed.

We analytically solve the ideal case of constant sampling and short-tailed distributions of rest and move durations, with the naive assumption that every observed displacement is to be associated with a movement. We obtain explicitly the distribution $P(\ell^\ast)$ of sampled displacements and its first two moments, which also allow us to quantify the difference between the real $\ell=vt$ and sampled $\ell^\ast$ displacement lengths. Moreover, we are able to provide an optimal sampling time $\hat\Delta=1.96\sqrt{\bar{t}\,\bar{\tau}}$ maximizing the fraction of correctly sampled movements. We then extend these results numerically, and show that sampling human trajectories in more realistic settings is necessarily worse. Finally, we use high-resolution (both spatially and temporally) GPS trajectories of humans to verify our predictions on real data. We find that for real cases, characterized by long-tailed rest durations, the fraction of correctly sampled movements is dramatically reduced: constant sampling recovers at best $18\%$ of movements, while even idealized methods cannot recover more than $16\%$ of moves from sampling intervals extracted from real communication data.
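The setup can be explored numerically. The sketch below uses a toy criterion for a "correctly sampled" move, a move that falls entirely within a single sampling interval and is the only move touching that interval, which may differ from the authors' exact definition; the exponential durations and all parameters are illustrative. Sweeping $\Delta$ should then show a maximum of the recovered fraction at an intermediate value, as the analysis predicts.

```python
import random

def correctly_sampled_fraction(mean_move, mean_rest, delta, T=50000.0, seed=3):
    """Simulate an alternating renewal process with Exp(mean_rest) rests
    and Exp(mean_move) moves on [0, T], sampled every `delta`. A move
    counts as correctly sampled when it lies inside one sampling
    interval [k*delta, (k+1)*delta] and no other move touches that
    interval (a simplifying proxy criterion)."""
    rng = random.Random(seed)
    t, moves = 0.0, []
    while t < T:
        t += rng.expovariate(1.0 / mean_rest)   # rest phase
        start = t
        t += rng.expovariate(1.0 / mean_move)   # move phase
        moves.append((start, t))
    # index which moves touch which sampling interval
    touched = {}
    for i, (s, e) in enumerate(moves):
        for k in range(int(s // delta), int(e // delta) + 1):
            touched.setdefault(k, []).append(i)
    ok = sum(1 for (s, e) in moves
             if int(s // delta) == int(e // delta)
             and len(touched[int(s // delta)]) == 1)
    return ok / len(moves)
```

Very small $\Delta$ chops single moves across several intervals, while very large $\Delta$ lumps several moves into one interval; both regimes lower the recovered fraction relative to an intermediate sampling time.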

These figures suggest that, in the sampling of human trajectories alternating rests and movements, it is not possible to successfully reconstruct the real moves from the empirical sequence of displacements observed only through the lens of mobile phone communications. Further studies, taking advantage of the new analytical tools we provide here to evaluate the quality of a sampled individual trajectory, are certainly necessary to assess the bias induced by sampling on statistics aggregated at the individual or collective level.

Quantifying the Relationship between Land-use and Transport Systems for Land Use Planning

ABSTRACT. Designing an efficient transport system for a city that can support the evolving activities of its people and its existing and planned infrastructures requires a proper understanding of the interplay between land use and transport, or how land-use utilization drives transport demand. Here, we quantify the spatiotemporal dependencies of ridership on land-use sector types and amenities using three machine learning methods: 1) decision trees, 2) support vector regression, and 3) an item-based collaborative filtering method based on cosine similarity. We compare and contrast the methods based on accuracy, generalization, efficiency, and ``interpretability'', and discuss the implications of each method for strategic planning and urban design. While the accuracy and generalization of the three methods are comparable (<5% error), we note that decision tree methods are more intuitive and useful for policy makers, as they provide immediate references to critical parameter values. In all cases, our results support the thesis that amenity-related features are better predictors than the more general ones, suggesting that high-resolution geo-information data are essential for transport demand planning. We apply the framework to actual scenarios, specifically looking at Singapore's urban plan toward 2030, which includes the development of "regional centers" across the city-state. Our model reveals that there is an initial increase in transit ridership as the amount of amenities is increased, which eventually reverses with continued strategic growth in amenities. The transition between these two trends for the Singapore example occurs when the increase in amenities is about 55%, a number that is potentially valuable for urban planners and policy makers.

Complexity in Megacities' Public Transport Systems: Measuring Efficiency with the LART Model

ABSTRACT. The objective of this research is to make a comparative analysis of the efficiency of the public transport system of Mexico City versus the transport systems of London and Madrid. Mexico City is comparable to those two cities in terms of its demographic and territorial dimensions, the complexity of its transportation systems, and even, to the surprise of most Mexicans, its purchasing power parity. The research method was observational study and documentary analysis using the case-analysis method, evaluating three variables: incentives to use public transport, disincentives to use private transport, and public policies on transportation. As a result of the investigation, it is concluded that there is high efficiency in the cases of Madrid and London and low efficiency in the case of Mexico City, whose system has undergone a sad involution in recent years. Three key actions are recommended to reverse the process: 1) the creation of a metropolitan consortium, such as those that exist in Madrid and London, that integrates all modes of transportation in the city; 2) earmarking gasoline taxes to finance improvements to public transport; 3) reforming the traffic-restriction system so that it rewards low gas emissions and less-polluting technologies, abandoning the populist drift that has led to the involution of the transport system in Mexico City in recent years. The main public policy should be: privilege and finance public transportation. The relevance of this work is that the monitoring of efficiency is an issue on which there has been no work in Mexico, and this is the first comparative inspection work published in Mexico. A model to measure and compare efficiency in megacities is proposed as a result of this research.

16:30-18:30 Session 5D: Biological and (Bio)Medical Complexity - Ecology and Evolution

Parallel session

Location: Tulum 1&2
Human impacts on multiple ecological networks act synergistically to drive ecosystem collapse

ABSTRACT. Understanding the consequences of biodiversity loss is one of the most urgent tasks faced by ecologists and conservation biologists at present. Highly biodiverse ecosystems worldwide are rapidly losing species diversity as a result of human overexploitation of natural resources. However, it is not known whether there is a critical threshold of species loss at which a particular ecosystem fails to recover, leading to its collapse. This study was conducted in the Tehuacán-Cuicatlán Valley located in south-central Mexico, which is one of the most biologically rich, semi-arid regions of the Western Hemisphere. This area is characterized by a high degree of endemism among different taxonomic groups, such as columnar cacti and Agave species. In the vicinity of the town of Los Reyes Metzontla, human overexploitation of natural resources, such as wood used for firing ceramics and agave species used for mezcal production, has increased considerably. By combining multiple ecological networks (including plant facilitation, by which most plant species recruit under field conditions, cacti and agave pollination, and cacti seed dispersal), we document how an ecosystem may collapse through synergistic disruptions to these networks. We simulated coextinction cascades across these ecological networks by removing from the facilitation network the plant species that were being overexploited by local inhabitants. To do this, a quantitative scenario was used in which nurse species extinction produces coextinction of their facilitated species, which concomitantly affects pollination and seed dispersal services. In addition, we tested simulation accuracy by comparing the species predicted to become extinct or to survive with the species present or absent in the disturbed areas where human overexploitation occurs.
Finally, we experimentally tested whether the coextinction cascades would lead to a shortage of bat dispersers, thereby inhibiting the arrival of new seeds to the ecosystem. We find that coextinction simulations triggered by the removal of only 16% of species show dramatically accelerated extinctions. In addition, we show that the ecosystem collapses when the nurse species' habitat availability is reduced below 76% of its original extent. Although the interdependence of different ecological networks is indicative of ecosystem fragility and low resilience, our findings allow the design of remediation efforts, thereby helping to bridge the gap between ecology and conservation biology.
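The core of a coextinction cascade simulation like the one described is a simple fixed-point iteration over the facilitation network. Below is a minimal qualitative sketch (hypothetical species names; the study itself uses a quantitative scenario coupled to pollination and seed-dispersal networks):

```python
# Hypothetical facilitation network: each facilitated species lists the
# nurse species under which it can recruit.
facilitators = {
    "cactus1": {"nurseA", "nurseB"},
    "cactus2": {"nurseA"},
    "agave1": {"nurseB"},
    "agave2": {"cactus1"},   # facilitated species can themselves act as nurses
}

def coextinction(removed):
    """Propagate extinctions: a species goes extinct once every one of
    its facilitators is already extinct."""
    extinct = set(removed)
    changed = True
    while changed:
        changed = False
        for species, nurses in facilitators.items():
            if species not in extinct and nurses <= extinct:
                extinct.add(species)
                changed = True
    return extinct

# Removing the two nurse species cascades through the whole network.
lost = coextinction({"nurseA", "nurseB"})
```

Comparing `lost` against field presence/absence data in disturbed areas corresponds to the accuracy test mentioned in the abstract.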

Heterogeneous kinship practices generate an apparent cross-dressing in genetic diversity
SPEAKER: Cheryl Abundo

ABSTRACT. Migrations in traditional societies like Sumba and Timor are mostly motivated by marriage and the union of resources. Kinship practices, which define the post-marital residence rules of the villages, structure the way individuals move in the population. Individuals in endogamous villages prefer to marry within the existing clans of their village. Neolocality allows anyone, regardless of gender, to migrate out of their natal village. Patrilocality and matrilocality involve gender-biased movements: females frequently migrate out of patrilocal villages, while males often choose to reside outside of their natal matrilocal village.

An uninformed hypothesis might lead us to expect individuals from matrilocal villages to be more related in the matriline, since females stay after marriage, and individuals from patrilocal villages to have closer kin in the patriline, since males reside in their natal village after marriage. However, genetic data from the Indonesian islands of Sumba and Timor tell us otherwise. This is because villages cannot be considered in isolation: they exist within the context of a network of interacting villages, each possibly having a distinct kinship practice.

Using an n-deme model of kinship-structured migrations, we observe the phenomenon of trans-locality, wherein villages appear to cross-dress. That is, a village that in reality practices matrilocality shows a signal of "patrilocality" in that it appears to be more related in the patriline. Similarly, a patrilocal village may show a signal of "matrilocality" and appear more related in the matriline. This happens when (1) a mixture of different types of kinship practices exists in the population, (2) interactions between villages are limited, such as when migration rates are low, or (3) kinship rules are not strictly followed.

Non-local interactions delay the system's expansion and promote order in a collective motion model in open space
SPEAKER: Martin Zumaya

ABSTRACT. Collective motion is one of the most ubiquitous examples of coordinated behavior in nature and has been studied extensively in recent years, both theoretically and empirically. Most current models of collective motion are defined within periodic boundary conditions or consider the system already in an ordered stationary state, so that when the particles' motion is unconfined and random initial conditions are taken into account, the system is not able to organize into a coherent moving group and all its components end up spread out in space.

Addressing this issue, we propose a model of collective motion in open space based on local and non-local alignment interactions between particles, which is able to build up ordered states from random initial conditions and control the system expansion with very few non-local interactions per particle; the model also shows noise driven spontaneous collective changes of direction, an important feature observed in real systems.

We also show that the inclusion of non-local information in other models allows them to present the same behavior, suggesting that non-local information is an efficient mechanism to maintain the system’s order and cohesion over time.

The need for non-local interactions is at odds with the generally accepted idea that local information alone suffices to build up collective states, and requires further study.
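The effect of the non-local links can be illustrated with a minimal Vicsek-style alignment model in open space (all parameter values below are illustrative, not the authors'): each particle averages the headings of its local neighbours plus a handful of randomly chosen non-local partners, which keeps the interaction graph connected even as the group spreads out.

```python
import math
import random

random.seed(1)

N, STEPS, R = 60, 200, 1.0          # particles, updates, local radius
SPEED, NOISE, K_NONLOCAL = 0.05, 0.05, 2

pos = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(N)]
ang = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def polarization(angles):
    """Magnitude of the mean heading vector (1 = fully ordered)."""
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(c, s)

for _ in range(STEPS):
    new_ang = []
    for i in range(N):
        # Local neighbours within radius R, in open (unbounded) space.
        nbrs = [j for j in range(N) if math.dist(pos[i], pos[j]) < R]
        # A few randomly chosen non-local partners per particle.
        nbrs += random.sample(range(N), K_NONLOCAL)
        c = sum(math.cos(ang[j]) for j in nbrs)
        s = sum(math.sin(ang[j]) for j in nbrs)
        new_ang.append(math.atan2(s, c) + random.uniform(-NOISE, NOISE))
    ang = new_ang
    for i in range(N):
        pos[i][0] += SPEED * math.cos(ang[i])
        pos[i][1] += SPEED * math.sin(ang[i])
```

With only two non-local partners per particle, the sketch builds up a high polarization from random initial conditions; setting `K_NONLOCAL = 0` lets disconnected subgroups drift apart, as described in the abstract.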

Signatures of Criticality in Microbial Communities

ABSTRACT. Many of the most interesting phenomena emerge from interactions among many elements. In particular, understanding the dynamics of complex ecosystems, such as microbiota, is one of the most challenging goals of our time. It has been argued that interactions between bacterial communities are dominated by two opposite regimes: a selection-dominated regime (niche theory) and a stochasticity-dominated regime (neutral theory). In this work we use the rank-abundance distributions from stationary states to show that the data are poised near a critical point between these phases. We use distributions of OTU abundances and analogues of thermodynamic ensembles (Renyi entropies and statistical ensembles) to find free energies. Remarkably, the distributions that emerge from the data are located at a very special point in their parameter space: a critical point. This result suggests there may be some deeper principle behind the behavior of these interactions.
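As an illustration of the entropic machinery involved, the Renyi entropies of a rank-abundance distribution can be computed directly from relative abundances. The OTU counts below are invented; only the computation is meant to be indicative:

```python
import math

def renyi_entropy(abundances, q):
    """Renyi entropy of order q for a species-abundance vector."""
    total = sum(abundances)
    p = [a / total for a in abundances]
    if q == 1:  # Shannon entropy as the q -> 1 limit
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

# Hypothetical OTU abundance vector with a heavy-tailed, rank-ordered shape.
otus = [500, 120, 60, 30, 15, 8, 4, 2, 1]
spectrum = {q: renyi_entropy(otus, q) for q in (0, 1, 2)}
```

The order-0 entropy is simply the log of the species richness, and the spectrum decreases with q; how quickly it decreases reflects the unevenness of the rank-abundance curve.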

Use of Information Theory in population structure analysis

ABSTRACT. There has recently been growing interest in borrowing both concepts and technical results from information theory for analysis in the biosciences. I will briefly review recent efforts at incorporating notions such as entropy and uncertainty, channel capacity, noise and distortion, and mutual information into biological settings. I highlight other manifestations of the notion of information, such as Kolmogorov complexity and Fisher information, of potential relevance in a biological framework beyond their use as metaphorical tools. I then demonstrate novel conceptual and quantitative links between features of population genetic samples and a core information-theoretic property. In essence, long stretches of genetic variants may be captured as typical sequences of a nonstationary source modeled on the source population. This provides motivation for constructing simple typicality-based population assignment schemes. I introduce the concepts of typical genotypes, population entropy rate and mutual typicality, and their relation to the asymptotic equipartition property. Finally, I propose a useful analogy between a communication channel and an inference channel, where channel noise results from fuzzy population boundaries and parameter estimation, and where the channel capacity closely corresponds to informativeness for population assignment.
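A toy version of a typicality-based assignment scheme can make the idea concrete. The sketch below uses invented allele frequencies and a memoryless source (the talk concerns a more general nonstationary source): a sequence is assigned to the population under which its per-site negative log-likelihood is closest to what the asymptotic equipartition property predicts for typical sequences, i.e. lowest for the true source.

```python
import math

def neg_log_likelihood_rate(seq, probs):
    """Average per-site negative log-likelihood (bits) of a variant
    sequence under a memoryless source model; for long sequences this
    approaches the source entropy on typical sequences (AEP)."""
    return -sum(math.log2(probs[s]) for s in seq) / len(seq)

def entropy(probs):
    """Shannon entropy (bits) of a memoryless source."""
    return -sum(p * math.log2(p) for p in probs.values())

# Hypothetical allele frequencies in two candidate source populations.
pop1 = {"A": 0.7, "a": 0.3}
pop2 = {"A": 0.3, "a": 0.7}

# A sample whose allele composition matches pop1 exactly:
sample = "A" * 70 + "a" * 30
score1 = neg_log_likelihood_rate(sample, pop1)
score2 = neg_log_likelihood_rate(sample, pop2)
assign = "pop1" if score1 < score2 else "pop2"
```

For this sample the score under pop1 equals pop1's entropy rate exactly, the hallmark of a typical sequence, while the score under pop2 is strictly larger.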

Spatial dynamics of synthetic microbial hypercycles modulated by resource availability
SPEAKER: Daniel Amor

ABSTRACT. The hypercycle, the simplest model for autocatalytic cycles, provided early major theoretical insights into the evolution of mutualism. However, little is known about how natural environments could shape hypercycle dynamics. To explore this question, we used engineered bacteria as a model system for hypercycles. We recapitulate a variety of environmental scenarios, identifying trends that transcend the specific model system, such as enhanced genetic diversity in environments requiring mutualistic interactions. Interestingly, we show that improved environments can slow down hypercycle range expansions as a result of genetic drift effects preceding local resource depletion. Moreover, we show that a parasitic strain is excluded from the population during range expansions (confirming a classical prediction). Nevertheless, environmental deterioration can reshape population interactions, with this same strain becoming part of a three-species hypercycle in scenarios in which the two-strain mutualism becomes nonfunctional. Our results illustrate evolutionary and ecological implications that will be relevant for the design of synthetic consortia for bioremediation purposes.

The evolution of reproductive helping through resource competition

ABSTRACT. Mathematical models have been widely and successfully applied in understanding the interplay of population structure and the evolution of social behavior. Here we ask whether helping and non-helping behaviour can co-exist in social groups, and importantly, what ecological factors affect this coexistence. We use two types of modelling techniques to examine this question. The first is an individual based model based on the lifecycle of social wasps and other colony founding species which compete for limited resource sites. The second is a mean field approximation derived from the individual based model. Both techniques use simple ecological parameters, such as number of offspring, effect of division of labour and dispersal distance. Using these two techniques, we find that the spatial structure of populations is critically important in allowing helping behaviour to evolve. Our broad approach to investigating helping behaviour highlights the importance of spatial effects in the evolution of social behaviours.

Bringing models of microbial systems closer to reality
SPEAKER: Jan Baetens

ABSTRACT. The functioning, dynamics and spatial structure of many microbial communities can be undoubtedly complex, and seemingly hard to describe mathematically. Still, mathematical biologists and modellers have succeeded in simulating their spatio-temporal dynamics in a convincing way using so-called individual-based models that track the features of every individual explicitly through space and time and account naturally for local interactions and spatial heterogeneities. Besides, it has been demonstrated that similar approaches are applicable to other organisms, such as mussels and nematodes, while it is generally acknowledged that individual-based models are well suited to formalize many types of complex systems. Typically, individual-based models of microbial communities merely incorporate the so-called mechanisms of life, being reproduction, competition and dispersal. At the same time, there is also a strong bias in literature towards communities consisting of only three species whose mutual interactions are governed by a deterministic competition structure, meaning that an individual of species A, for instance, will always outcompete one from species B. In reality, however, microbial communities consist of numerous species, the competitive strength of the species depends on its fitness, which, amongst other things, is governed by the substrate availability and prevailing environmental conditions, there is interaction between the microorganisms and their environment, and so on. Moreover, in microbiology it is now acknowledged that genetically identical bacterial cells in a well-mixed environment may have individually differing phenotypes.

In order to bring the existing individual-based models of microbial communities closer to reality, and hence to further our understanding of these complex systems, we explore in this work the effects of incorporating substrate uptake, varying community evenness and non-deterministic competition outcomes on the simulated community dynamics. In addition, we advance the existing models by accounting explicitly for the dimensions of the tracked individuals, another aspect that has been overlooked by most works in this direction. Our results indicate that long-term system behaviour is strongly dependent on initial evenness and the underlying competition structure. Generally speaking, a system with four species is unstable, but it appears that a higher initial evenness has a small stabilizing effect on the ecosystem dynamics by extending the time until the first extinction. Likewise, we observe a strong impact of introducing stochasticity with respect to the outcomes of competition events on the in silico dynamics in the sense that this stochasticity has a strong negative impact on the coexistence of species. Furthermore, we are able to show that there exists a trade-off between increasing biomass production and maintaining biodiversity, which is in agreement with experimental observations of a net negative biodiversity effect on biomass productivity.

Even though the above extensions make the individual-based models of microbial communities more realistic, they still neglect important mechanisms like adaptation, while also the interplay between the mechanisms of life and the environmental conditions remains veiled. Hence, we will pinpoint during our talk some promising avenues of further research in this exciting and rapidly evolving field.

16:30-18:30 Session 5E: Socio-Ecological Systems (SES) - SES, Adaptation and Resilience

Parallel session

Location: Cozumel 3
Dynamical system modelling of human-environment interactions: the case of the Classic Maya collapse
SPEAKER: Sabin Roman

ABSTRACT. Drought has often been invoked as a key reason for the collapse of the Classic Maya. However, socio-cultural processes must have played an integral part in the development of Maya society and subsequently its collapse. This study investigates the societal development of the Maya in the Southern Lowlands over a span of approximately 1400 years, which includes the Classic Period (300-900 CE). We propose a dynamical system model whose variables represent the major specialisations present within the society, the state of the land and the number of monuments built. Assuming a drastic rise in the practice of intensive agriculture, the model reproduces the time evolution of crude birth rates and population levels over 1400 years. Furthermore, the model also manages to reproduce the building rate of monuments throughout the Classic Period.

Parameter values were chosen to coincide with literature values where available. We define a distance function by which we can measure the deviation of the model output from the empirical time series. We use this distance function to perform a thorough sensitivity analysis with respect to the key parameters in the model. We find that the model lies at a (local) minimum in parameter space, and that for changes in parameter values the deviation from the minimum is gradual and quasi-parabolic. Hence, no fine-tuning is present or needed, and the sensitivity analysis shows that the model output is robust under parameter changes.
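The distance-function idea can be illustrated with a toy growth model. In the sketch below a logistic map stands in for the full Maya model and the "empirical" series is generated at a reference parameter value (everything here is invented for illustration); the deviation then grows smoothly, quasi-parabolically, as the parameter moves away from the minimum:

```python
import math

def logistic(r, k, x0, steps):
    """Discrete logistic growth: a stand-in for a richer societal model."""
    xs, x = [], x0
    for _ in range(steps):
        x = x + r * x * (1 - x / k)
        xs.append(x)
    return xs

def distance(model_series, data_series):
    """Root-mean-square deviation between model output and data."""
    n = len(data_series)
    return math.sqrt(sum((m - d) ** 2
                         for m, d in zip(model_series, data_series)) / n)

# Synthetic "empirical" series generated at the reference parameter r = 0.1.
data = logistic(r=0.1, k=100.0, x0=5.0, steps=80)

# Scan r around the reference: the deviation grows gradually away from
# the minimum, i.e. no fine-tuning is needed.
scan = {round(r, 3): distance(logistic(r, 100.0, 5.0, 80), data)
        for r in (0.08, 0.09, 0.1, 0.11, 0.12)}
```

The scan's minimum sits at the reference value and rises monotonically on either side, the qualitative signature of robustness described above.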

In addition, the model brings into question the role that drought played in the collapse. Our results indicate that a 50% reduction in rainfall does not significantly alter the outcome of the simulation with respect to the population levels. What the model is showing is that the land's production capacity might have already been severely exhausted and a reduction in crops unavoidable even in the absence of drought.

We have not tried to single out any one cause for the collapse of the Classic Maya but aimed at identifying a set of interlocking mechanisms that could spur a positive feedback in population growth and monument building. Also, we do not claim to have settled the long-standing problem of the Classic Maya collapse but rather hope to re-balance the discussion regarding the role of drought and socio-cultural factors.

Scale invariant behavior of cropping area losses

ABSTRACT. This paper shows how agricultural disasters display self-organized critical behavior, which implies that under a wide range of circumstances these disasters exhibit a power-law dependence of frequency on affected area, with scaling exponents whose order of magnitude approximates those reported for extreme climate events. Self-organized critical behavior has been observed in many extreme climate events, as well as in the density and distribution of pests linked to crop production. Empirical proof is provided by showing that the frequency-size distribution of cropland loss fits the Pareto and Weibull models with scaling exponents statistically similar to the expected value. In addition, the test included comparisons of the expected and predicted values of the scaling exponents among different subsystems and among systems of the same universality class. Results show that the Pareto model fits the heavy-tailed distribution of losses mostly caused by extreme climate events, while the Weibull model fits the whole distribution, including small events. The analyses show that crop losses adopt self-organized critical behavior regardless of the growing season and the water provision method (irrigated or rainfed). Irrigated systems show more stable behavior than rainfed systems, which display higher variability. The estimation is robust not only for calculating model parameters but also for testing the proximity to a power-law-like relationship. A long-term risk index by growing season and water provision method is derived as an application of this power-law behavior. The index is flexible, comparable among geographical units regardless of their size, and provides a direct measure of the probability of losing a cropping area.
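The tail-exponent estimation underlying such power-law tests can be sketched with a maximum-likelihood (Hill) estimator on synthetic data. The sample below is generated with a known exponent, not the crop-loss data, so the estimator's recovery of the exponent can be checked directly:

```python
import math
import random

random.seed(7)

def pareto_sample(alpha, xmin, n):
    """Draw n values from a Pareto(alpha) distribution via inverse CDF."""
    return [xmin * (1 - random.random()) ** (-1 / alpha) for _ in range(n)]

def hill_alpha(data, xmin):
    """Maximum-likelihood (Hill) estimate of the Pareto tail exponent."""
    tail = [x for x in data if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic "cropland loss" sizes with a known power-law tail.
losses = pareto_sample(alpha=1.8, xmin=10.0, n=20000)
alpha_hat = hill_alpha(losses, xmin=10.0)
```

With 20,000 synthetic events the estimate lands close to the true exponent of 1.8; in practice one would also test goodness of fit against alternatives such as the Weibull, as the abstract describes.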


ABSTRACT. Understanding the causes of land-use dynamics and land-cover change is fundamental in light of the goal of sustainable development. It requires insight into land-use types and management practices and their complex relations to local farmers' livelihoods. Tropical smallholder livelihoods are based on subsistence and commercial crop production; hence land-use type, management, and crop selection may depend on climatic conditions and socio-economic opportunities or constraints. This study examined the diversity of livelihood types associated with sugarcane production in Laguna del Mante, located in the tropical region of San Luis Potosí, Mexico. In particular, we examined how socioeconomic, political, institutional, and biophysical drivers contributed to differentiation in livelihood development and the respective changes in land use and management. In this locality, farmers' livelihoods depend mainly on the production of sugarcane (60.4%) for a local sugar factory and maize (30.6%) for self-supply. Participatory observations and 70 structured interviews were conducted to identify the key characteristics of different households with specific economic activities. The focus group of interviewees included “ejidatarios” and children of “ejidatarios”, i.e. farmers with certain rights of communal land use. As we were interested in the analysis of different livelihood types in relation to land use and land-use change, we linked land-use variables to the following household categories: age, education, sources of income, and area of total land-holding and land used for agriculture. To identify different livelihood groups based on the selected criteria, hierarchical cluster analysis was applied. Based on the resulting dendrogram we distinguished five types of livelihoods: sugarcane producers without and with irrigation systems, diversifiers, sugarcane and livestock producers, and livestock producers.
In Laguna del Mante, livelihoods depend mainly on sugarcane production in combination with wage-labor opportunities, both in a nearby foreign-owned lime factory and in the sugarcane harvest as employees of the sugarcane mill. 55.3% of the farmers changed from corn to sugarcane production in 1995, after NAFTA came into force, which caused changes in land property rights. 10.6% of the farmers changed from corn to livestock production between 2000 and 2007 because of decreasing corn prices. 76% of the farmers decided to switch crops because sugarcane was more profitable than corn or livestock, while 19.6% were attracted by the social benefits (pension, health insurance) provided by the sugarcane mill. External and internal socioeconomic drivers have been responsible for changes and adaptations in livelihood development and land use in Laguna del Mante. Over the last 30 years, the rise of agribusiness companies has fundamentally changed the nature of farming and thereby transformed diverse landscapes shaped by family farming into monocultures of sugarcane, at the high cost of eradicating the potential for agriculture-based livelihood diversification. Income from wage labor in nearby factories may buffer temporary fluctuations in sugar prices; however, these livelihoods are becoming increasingly vulnerable to unpredictable external drivers such as pest outbreaks, shifting markets, and climate change.
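The hierarchical clustering step can be sketched as follows. The household records and variable values below are invented stand-ins (the study used its own survey variables and presumably standardized them); the sketch implements naive single-linkage agglomeration, the idea behind the dendrogram:

```python
import math

# Hypothetical household records: (age, years of education,
# number of income sources, hectares farmed).
households = {
    "h1": (55, 6, 1, 8.0),   # sugarcane-only profile
    "h2": (60, 4, 1, 7.5),
    "h3": (35, 9, 3, 2.0),   # diversifier profile
    "h4": (38, 11, 3, 1.5),
    "h5": (52, 5, 2, 12.0),  # cane-plus-livestock profile
}

def agglomerate(points, n_clusters):
    """Naive single-linkage agglomerative clustering: repeatedly merge
    the two clusters whose closest members are nearest."""
    clusters = [{k} for k in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] |= clusters.pop(j)
    return clusters

groups = agglomerate(households, 3)
```

Cutting the merge sequence at three clusters separates the diversifier households from the cane-oriented ones, mirroring how cutting a dendrogram at a chosen height yields livelihood types.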

Specialization in a heterogeneous environment: The case of Stack Overflow
SPEAKER: Giacomo Livan

ABSTRACT. The spectacular growth of online knowledge-sharing platforms has created unprecedented opportunities both to access and to create knowledge. The prime and most successful example of such environments is Wikipedia, with a vast number of other platforms (e.g., Quora, Reddit, Yahoo Answers, etc) providing users with a multitude of options to develop knowledge online. Most of such environments are largely decentralized and rely on the voluntary contribution of large numbers of users. This naturally gives rise to very heterogeneous systems, where a small minority of engaged users frequently contribute to the platform, while the vast majority of users contribute occasionally. An additional source of complexity is also related to the interests and specialization of the users: whereas some users develop a broad set of interests and contribute to the production of knowledge in several of them, other users specialize in a limited and well defined set of topics. In this work we study the evolution of specialization in Stack Overflow, the flagship site of the Stack Exchange network. Stack Overflow provides a discussion platform based on questions and answers on a wide range of topics related to computer programming, and it currently boasts more than 4 million registered users. We investigate Stack Overflow data spanning 8 years, going from August 2008 (shortly after the launch of the platform) to July 2016. For each month in the data we form two bipartite networks associated with questions and answers. Links in the networks connect users and tags, i.e. the identifiers associated with questions and answers (e.g., C++, Python, Matlab), and their weights denote the number of questions or answers that a user has posted which contain the tag in question. 
Within this framework, we identify specialization by resorting to network statistical validation techniques: we associate a p-value to each link by measuring the likelihood of observing a link of the same weight under a null assumption of random link reshuffling which, however, takes into account the heterogeneity in the users’ activity. If a link’s p-value falls below a multivariate significance level we label the associated user as a specialist of the corresponding tag. Our results show that the platform is essentially split up between users who specialize in posting questions and users who specialize in answering them, with very small transition rates between the two groups. Furthermore, we show that specialization in the answers network is considerably persistent, i.e. when users specialize in answering questions on a certain topic, they tend to keep doing so for several months in a row. However, when analyzing tags our results show that user specialization has evolved towards an increased concentration on a relatively restricted set of highly popular tags. Symmetrically, we show an increased similarity in the expertise profiles of specialist users, which in turn leads to an increased competition to earn the reputation points awarded by the platform.
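The link-validation step admits a compact illustration: under a null model that preserves the user's total activity and the tag's total popularity, a link weight can be scored against a hypergeometric distribution. The counts below are invented, and the authors' null is a reshuffling procedure rather than this closed form, but the logic of flagging improbably heavy links as specialization is the same:

```python
from math import comb

def pvalue(w, user_total, tag_total, grand_total):
    """P(X >= w) under a hypergeometric null that scatters the user's
    posts at random over all posts, preserving both marginal totals."""
    num = sum(comb(tag_total, k) * comb(grand_total - tag_total, user_total - k)
              for k in range(w, min(user_total, tag_total) + 1))
    return num / comb(grand_total, user_total)

# Hypothetical counts: a user wrote 30 of the platform's 10,000 answers;
# 200 answers overall carry the tag "python"; 12 of the user's answers do.
p = pvalue(12, 30, 200, 10000)
significant = p < 0.01   # flag the user as a "python" specialist
```

Twelve tagged answers against an expectation of 0.6 is astronomically unlikely under the null, so the link is validated; a single tagged answer would not be. In the study the threshold is additionally corrected for multiple comparisons.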

Adaptive Recovery of the Ecosystem on a Remote Mexican Island in the Pacific: Complex Dynamics of the Human Impact

ABSTRACT. Complex dynamic ecosystems can undergo abrupt shifts in the pelagic fish populations around islands and in island wildlife, driven by human impact and by new species introduced by man. Clipperton Island in the Pacific, once Mexican territory, is an excellent example of how a marine ecosystem recovers from the impact of periodic visits by human populations during the last century. The aim of this study is to explain the dynamics of this unique ecosystem near critical points, their generic properties, and possible bifurcations at critical thresholds that could become catastrophic. This will give us warning signs of the impact on ecosystems and of the point up to which they can recover or adapt before collapsing. Clipperton Island has been studied by scientific expeditions from 1880 to the present, and we are using this well-documented source of data on the fauna and flora of the island. New technologies allow us to measure areas by satellite and aerial photography, which can be compared with photographs from the 1900s. Human presence on the island is well defined in time and impact: a 25-year period (1892 to 1917) with the introduction of new species to the island and an impact on species populations; a specific intervention in 1958 to eliminate one species, which changed the ecosystem; and, recently, the presence of rats, new to the island, which has also had an impact on the ecosystem. We are applying models based on Lotka-Volterra equations, time-series analysis, and the Verhulst model to the long-term data available, in order to find critical transition points and possible collapse or transition of the ecosystem on Clipperton. We will make a trip in 2018 to validate this model and its predictive capacity for ecosystems.
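A minimal sketch of the Lotka-Volterra component can show the kind of dynamics being fitted. All coefficients and initial populations below are invented (the study fits long-term expedition data); the sketch reads the introduced rats as predator and a ground-nesting bird colony as prey, integrated with a simple Euler step:

```python
# Hypothetical Lotka-Volterra parameters: prey growth, predation rate,
# predator death, predator conversion efficiency.
ALPHA, BETA, GAMMA, DELTA = 1.0, 0.1, 1.5, 0.075
DT, STEPS = 0.001, 20000

def lotka_volterra(prey, pred):
    """Euler integration of the classic predator-prey equations."""
    traj = []
    for _ in range(STEPS):
        dprey = (ALPHA * prey - BETA * prey * pred) * DT
        dpred = (-GAMMA * pred + DELTA * prey * pred) * DT
        prey, pred = prey + dprey, pred + dpred
        traj.append((prey, pred))
    return traj

# Bird colony starts above, rats below, their coexistence equilibrium
# (prey* = GAMMA/DELTA = 20, pred* = ALPHA/BETA = 10).
traj = lotka_volterra(prey=40.0, pred=9.0)
```

The trajectory cycles around the coexistence equilibrium; in the study, fitted trajectories of this kind are examined for critical transitions rather than for the closed orbits of the idealized model.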

The hybrid modelling of socio-ecological systems: epistemological issues and ontological considerations
SPEAKER: Linda Russell

ABSTRACT. This presentation reiterates Bourdieu's argument concerning the importance of making manifest the epistemological basis for the construction of the research object. The issues arising from the hybrid modelling of SES are inherent to the peculiarity of the Western world view, based on an underlying ontological divide between the nature of being of human entities and that of the rest of the entities in the world. A host of epistemological incommensurability issues consequently arise from any attempt to explain the form of interaction of human and non-human systems using a hybrid model which reflects this basic divide of the naturalist ontology. One type of epistemological approach is characterised by the attempt to sidestep these hybrid issues by adopting the concept of a material economy. One option is to consider humans solely in terms of bodies with material needs, and hence "users" of "resources", and to consider "nature" as the supplier of those resources, so that positive change involves realistically pricing the resources nature "supplies". Another option, under the concept of social capital, is to consider human relations themselves as a resource which can be coupled in a variety of ways with ecosystem resources; any change would then need to be systemic. The second type of epistemological approach chooses to delineate hybrid or asymmetrical SES models, in which nature continues to be considered in terms of material biotic or abiotic systems, whilst human social systems are considered in non-material terms, maintaining, depending upon the theoretical framework, a particular form of interaction with the material world. This second epistemological group can be organised into three main theoretical types.
The first option considers human systems as constituted by rational, autonomous, reflexive agents whose interaction with the natural world is on the basis of rational design with regard to an objective world of empirical facts, so that change must be cognitive, generally at the level of education and public policy. The second option considers human systems as symbolic systems (or symbolic economies) into which humans are born and within which they assimilate the existing forms of interaction with other humans and non-humans, so that any option of change would need to be systemic. The third option, which arises from Kant's formulation of the difference between the thing-in-itself and the thing-for-me, has been described as ineffabilist, and is based on the constructivist position regarding human knowledge, with an ineffable realm beyond the limits of human cognitive systems but possibly not beyond human interaction per se, whether it be artistic, spiritual, religious, or some form of bodily experience. Recently Descola, following Merleau-Ponty, has suggested that the human body at an ontological level is a location of interaction with what Cooper refers to as the ineffable. The presentation considers the relevance of comparative ontological analysis, and also whether particular concepts, such as memory (cognitive, material or genetic), body, or habitus/habitat, might serve as a bridge across both the epistemological and ontological divisions.

Combining Data Science Methods and Mathematical Modeling for Analyzing Complex and Uncertain Socio-Technological Systems: An Application to Technology Based International Climate Change Mitigation

ABSTRACT. Considering social and technological systems in an integrated way is becoming increasingly important as many of our most vexing policy challenges occur at the intersection of society and technology. Yet, contemporary socio-technological systems are inherently complex and deeply uncertain. This paper describes an analysis framework by which Data Science Methods and Mathematical Modeling can be combined to provide useful and policy relevant analysis.

This paper exemplifies this approach in the inherently complex and uncertain context of international climate change mitigation. The findings suggest that the combination of these methods can lead to the identification of robust, adaptive strategies for triggering low-cost international decarbonization. Specifically, the framework helps illuminate under which conditions multi-country technology-based policies can successfully enable the international diffusion of sustainable energy technologies at reasonable costs.

The study combines four interconnected analytical components. First, the optimal climate policy response is determined through an Exploratory Dynamic Integrated Assessment Model (EDIAM) that connects economic agents’ technological decisions across advanced and emerging nations with economic growth and climate change, making the system highly path-dependent (i.e. the chaotic property of sensitive dependence). Second, the EDIAM model is used in a mixed experimental design of a full factorial sampling of 12 general circulation models and a 300-element Latin Hypercube Sample across various technological properties of sustainable and fossil energy technologies (i.e. R&D returns, innovation propensity and technological transferability). Third, this large experimental database is analyzed jointly using two data-mining techniques: scenario discovery methods (Bryant and Lempert, 2010) and high-dimensional stacking (Suzuki, Stern and Manzocchi, 2015; Taylor et al., 2006; LeBlanc, Ward and Wittels, 1990), which are used to characterize quantitatively the vulnerability conditions of different policy alternatives. Finally, unsupervised learning algorithms are used to develop a dynamic architecture of low-cost technology-based climate cooperation. This dynamic architecture consists of adaptive pathways (Haasnoot et al., 2013) which begin with carbon taxation across both regions as a critical near-term action. Then, in subsequent phases, different forms of technological cooperation are triggered in response to unfolding climate and technological conditions.

The application of Data Science Methods and rigorous Mathematical Modeling to the context of climate change demonstrates that optimal climate policy response is not an invariant proposition, but rather a dynamic one which adapts to unfolding climate and technological conditions. The analysis presented in this paper shows that different technological cooperation regimes across advanced and emerging nations are better suited for different combinations of climate and technological conditions, such that it is possible to combine different policies into a dynamic framework for low cost technological cooperation that expands the possibilities of success across the uncertainty space.

16:30-18:30 Session 5F: Complexity in Physics and Chemistry - Nonlinear dynamics

Parallel session

Location: Xcaret 4
Random Focusing in Complex Media - Is it possible to Forecast Tsunamis?
SPEAKER: Theo Geisel

ABSTRACT. Wave flows propagating through weakly scattering disordered media exhibit random focusing and branching of the flow as universal phenomena. Examples are found on many scales from ballistic electron flow in semiconductor nanostructures [1-4] to tsunamis traveling through the oceans. Even for very weak disorder in the medium, this effect can lead to extremely strong fluctuations in the wave intensity and to heavy-tailed distributions [4]. Besides statistically characterizing random caustics and extreme events by deriving scaling laws and relevant distribution functions we have recently studied the role of random focusing in the propagation of tsunami waves [5]. We model the system by linearized shallow water wave equations with random bathymetries to account for complex height fluctuations of the ocean floor and determine the typical propagation distance at which the strongest wave fluctuations occur as a function of the statistical properties of the bathymetry. Our results have important implications for the feasibility of accurate tsunami forecasts.
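For reference, the linearized shallow-water wave equation with variable bathymetry referred to above takes the standard form below (the talk's exact formulation may differ), with η the surface elevation, h(x, y) the local ocean depth and g the gravitational acceleration:

```latex
\frac{\partial^2 \eta}{\partial t^2} = \nabla \cdot \bigl( g\, h(x,y)\, \nabla \eta \bigr)
```

The local wave speed c(x, y) = √(g h(x, y)) thus inherits the fluctuations of the ocean floor, so a random bathymetry acts as a weakly scattering medium for the propagating wave.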

References: 1. Topinka, M. A. et al. Nature 410 (2001) 183. 2. Metzger, J. J., Fleischmann, R., and Geisel, T. Phys. Rev. Lett. 105 (2010) 020601. 3. Maryenko, D., Ospald, F., v. Klitzing, K., Smet, J. H., Metzger, J. J., Fleischmann, R., Geisel, T., and Umansky, V., Phys. Rev. B 85 (2012) 195329. 4. Metzger, J. J., Fleischmann, R., and Geisel, T., Phys. Rev. Lett. 111 (2013) 013901. 5. Degueldre, H., Metzger, J. J., Fleischmann, R., and Geisel, T., Nature Phys. 12 (2016) 259–262.

Synchronization Transitions Induced by Topology and Dynamics

ABSTRACT. We analyze structural transitions to synchronization in evolving complex topologies of Kuramoto oscillators. We give numerical evidence and analytical insight into a phenomenon that is widely seen in nature. By constructing functionally equivalent networks and using mean-field arguments, we are able to quantify the close relation between structural and dynamic perturbations in a quasi-static process, where changes in the macroscopic response can be induced by the coupling strength and the topology of the network.
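For readers unfamiliar with the model, a minimal numerical sketch of Kuramoto synchronization follows; the all-to-all topology, parameter values and Euler integrator are illustrative assumptions, not the evolving topologies studied in the talk.

```python
import numpy as np

def kuramoto_order(A, omega, K, theta0, dt=0.01, steps=5000):
    """Euler-integrate Kuramoto oscillators coupled through matrix A and
    return the final order parameter r = |mean(exp(i*theta))|."""
    theta = theta0.copy()
    for _ in range(steps):
        # diff[i, j] = theta_j - theta_i, so summing over j gives the
        # usual coupling term sum_j A_ij sin(theta_j - theta_i)
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N = 50
A = np.ones((N, N)) / N            # all-to-all, mean-field normalization
omega = rng.normal(0.0, 0.1, N)    # narrow natural-frequency spread
theta0 = rng.uniform(0, 2 * np.pi, N)

r_weak = kuramoto_order(A, omega, K=0.01, theta0=theta0)
r_strong = kuramoto_order(A, omega, K=2.0, theta0=theta0)
print(r_weak, r_strong)  # coupling above threshold synchronizes the network
```

Sweeping K (or rewiring A) in such a sketch is the quasi-static protocol under which structural transitions in the macroscopic response r can be observed.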

Periodic orbits in nonlinear wave equations on networks
SPEAKER: Imene Khames

ABSTRACT. We consider a cubic nonlinear wave equation on a network and show that, by inspecting the normal modes of the graph Laplacian, we can immediately identify which ones extend into nonlinear periodic orbits. Two main classes of nonlinear periodic orbits exist: modes without soft nodes and all others. For the former, the Goldstone and the bivalent modes, the linearized equations decouple. A Floquet analysis, conducted systematically for chains, indicates that the Goldstone mode is usually stable and the bivalent mode is always unstable. The linearized equations for the second class of modes are coupled; they indicate which modes will be excited when the orbit destabilizes. Numerical results for the second class show that modes with a single eigenvalue are unstable below a threshold amplitude. Conversely, modes with multiple eigenvalues seem always unstable. This study could be applied to coupled mechanical systems.

Crossover between statistical-mechanical structures in the dynamics associated with chaotic attractors at band-splitting points

ABSTRACT. We consider both the dynamics towards and within the chaotic attractors at band-splitting points along the route out of chaos in unimodal maps [1, 2, 3]. We find two kinds of statistical-mechanical structures associated with the dynamics, separated by a crossover episode. The structures correspond, respectively, to the dynamics towards and the dynamics within the attractor, and the crossover reflects the arrival at the attractor. In the first regime the partition function consists of the sum of the chaotic-band widths, and the associated thermodynamic potential measures the rate of approach of trajectories to the attractor. The statistical weights are deformed exponentials. In the second regime the partition function is made of position distances within the attractor bands and the statistical weights become exponential. The time duration of the first regime increases with the number of bands 2^N, and in the limit N → ∞, the chaos threshold, its statistics becomes the only one, as phase-space contraction leads to a set of zero measure. We discuss our findings in terms of the approach of a system to equilibrium.

[1] Diaz-Ruelas, A., Robledo, A., Emergent statistical-mechanical structure in the dynamics along the period-doubling route to chaos, Europhysics Letters 105, 40004 (2014).

[2] Diaz-Ruelas, A., Fuentes, M.A., Robledo, A., Scaling of distributions of sums of positions for chaotic dynamics at band-splitting points, Europhysics Letters, 108, 20008 (2014).

[3] Diaz-Ruelas, A., Robledo, A., Sums of variables at the onset of chaos, replenished, European Physical Journal Special Topics 225, 2763 (2016).

16:30-18:30 Session 5G: Movie
Location: Tulum 4
Kubernetes
SPEAKER: Javier Livas

ABSTRACT. Kubernetes is a fiction “edudrama” about the past, present and future of Cybernetics. The writer and producer of the film was, for more than 20 years, a very close friend and disciple of Stafford Beer, the creator of Management Cybernetics and author of many groundbreaking books, among them The Brain of the Firm and The Heart of Enterprise. Stafford Beer was also the chief scientist behind the creation of PROJECT CYBERSYN in the early 1970s. The project died with the abrupt ending of Salvador Allende’s tenure as President of Chile on September 11, 1973.

In this film, many of Stafford Beer’s ideas are discussed, including a look at the iconic “operations room”. The plot reaches back to the 1970s, as a group of beautiful minds with very different cybernetic backgrounds are invited to a meeting to find a way to change the world by changing the way organizations of all types operate. Each of the organizer’s guests brings a statement to the meeting. Once there, they ask questions about complexity, religion, government, Newtonian science and the purpose of human beings, criticize organizations, and reflect on the possible existence of God.

The screening will be followed by a Q&A session with the writer and producer of Kubernetes.

[Mexico, 2017, 95 min. In Spanish with English subtitles]

16:30-18:30 Session 5H: Economics and Finance - Industrial organization & market structure

Parallel session

Location: Xcaret 2
Understanding the dynamics and robustness of multilevel marketing via dendritic growth model

ABSTRACT. A multilevel marketing (MLM) scheme (a.k.a. network marketing) uses a “word of mouth” strategy to sell products and to grow its membership through recruitment, exploiting the network connections of its members. MLM has proven effective: such marketing programs allowed companies to sell products to the tune of 180 billion US dollars in 2014 alone, more than twice the sales of the video-gaming industry and about a dozen times those of the music industry. Here, we demonstrate how biologically inspired dendritic network growth can be utilized to model the evolving connections of an MLM enterprise and develop insights on its inherent dynamics. Starting from agents situated at random spatial locations, a network is formed by minimizing a distance cost function controlled by a parameter, termed the balancing factor bf, that weighs the wiring and the path-length costs of connection. The paradigm is compared to actual MLM membership data and is shown to be successful in statistically capturing the membership distribution, better than agent-based preferential attachment or analytic branching-process models. Moreover, it recovers the known empirical statistics of previously studied MLMs, specifically: (i) a membership distribution characterized by the existence of peak levels indicating limited growth, and (ii) an income distribution obeying the 80-20 Pareto principle. Extensive types of income distributions, from uniform to Pareto to a “winner-take-all” kind, are also modeled by varying bf. Finally, the robustness of our dendritic growth paradigm to random agent removals is explored and its implications for MLM income distributions are discussed. Our research, well anchored in the growth dynamics observed in actual dendrites, provides some groundwork with which the profitability and the equality of earnings among members of an MLM scheme can be evaluated.
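A minimal sketch of a growth rule of this kind follows, under the assumption that the cost is a linear combination of the Euclidean wiring length and the hop distance to the root recruiter, weighted by the balancing factor bf; the paper's exact cost function may differ.

```python
import math
import random

def grow_network(n, bf, seed=1):
    """Sequentially attach agents at random positions to the existing node
    that minimizes wiring length + bf * path cost (hops to the root)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    parent = {0: None}   # node 0 is the root recruiter
    depth = {0: 0}       # hop distance to the root
    for i in range(1, n):
        j = min(range(i),
                key=lambda k: math.dist(pos[i], pos[k]) + bf * depth[k])
        parent[i] = j
        depth[i] = depth[j] + 1
    return parent, depth

# large bf: path cost dominates, everyone attaches near the root (a flat hub)
parent_flat, depth_flat = grow_network(200, bf=5.0)
# bf = 0: pure wiring cost, nearest-neighbour attachment grows a deep dendrite
parent_deep, depth_deep = grow_network(200, bf=0.0)
print(max(depth_flat.values()), max(depth_deep.values()))
```

Varying bf between these extremes interpolates between hub-like and dendrite-like trees, which is the mechanism the abstract invokes to span income distributions from "winner-take-all" to more uniform ones.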

The role of social interactions in market organization

ABSTRACT. What is the role of social interactions in trading? While it is clear that human beings rely on cooperation with others for their survival, economic theory suggested for a while that auction markets, where the information is the same for all actors and there is no possibility of arbitrage, are an efficient way of organizing exchanges. Following on, a vast literature has promoted auction theory. More recently it has been argued that when goods are heterogeneous and there is no signal of quality, a decentralized mechanism (bilateral transactions) allows people to gather information and better evaluate the intrinsic quality of goods. In order to study the role of market structure, one needs to compare centralized and decentralized markets functioning under the same conditions. This rare situation is found in the Boulogne-sur-Mer fish market, where every day the actors can freely choose to exchange either through a bilateral process or through an auction, both sub-markets functioning simultaneously at the same location. This old daily market, which had historically operated in a decentralized way, was led by EU regulations to adopt a centralized structure. The new regulation was firmly rejected by economic actors and, in 2006, it was finally agreed to allow the auction and bilateral-negotiation sub-markets to function in the same place. Since then, detailed data on the daily transactions have been registered, allowing for a comparison of both sub-markets under the same economic, seasonal, climatic and social conditions. Following economic theory, if one market structure were more efficient than the other, one would expect one market to overtake the other. It is then interesting to understand the reasons that explain their coexistence. In this work we focus on the interactions among buyers and sellers; we therefore map the data onto a complex bipartite network.
We analyse these networks using tools developed for the study of mutualistic ecosystems, like plant-pollinator networks. The pattern of interactions observed in such systems displays a particular structure called nestedness. We investigate whether a similar pattern, revealing some degree of organization, is observed in either of the studied sub-markets. This method also allows us to define a loyalty index, measuring the relative frequency of interaction of a seller-buyer pair with respect to their total number of interactions during the period. We show that the loyalty distribution characterizes each market: it is scale-free in the bilateral-negotiation market, whereas in the auction market it shows a characteristic value beyond which loyalty rapidly decreases. On the other hand, the auction market appears to be more robust in the face of targeted attacks consisting in eliminating high-degree agents. Our results show that each market has a characteristic property: the development of trust relationships in the bilateral market and the robustness of the auction one. This complementarity may be at the origin of their observed coexistence.
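A loyalty index of this kind can be sketched as follows; normalizing each pair's transaction count by the buyer's total number of transactions is one plausible reading of the definition, assumed here for illustration.

```python
from collections import Counter

def loyalty(transactions):
    """Loyalty of each (buyer, seller) pair: the pair's transaction count
    relative to the buyer's total transactions over the period."""
    pair_counts = Counter(transactions)
    buyer_totals = Counter(buyer for buyer, _ in transactions)
    return {(b, s): count / buyer_totals[b]
            for (b, s), count in pair_counts.items()}

# toy transaction log: b1 splits purchases between two sellers,
# b2 buys only from s1
tx = [("b1", "s1"), ("b1", "s1"), ("b1", "s2"), ("b2", "s1")]
L = loyalty(tx)
print(L)
```

The distribution of these values over all pairs is what the abstract compares across the two sub-markets.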

Agent-Based Model and Directional Change Intrinsic Time for Market Simulation

ABSTRACT. One of the most attractive topics for researchers in finance is theoretical models of complex networks based on interacting agents whose behaviour coincides with the actions of real market participants. These models can be used to better understand the structure of financial markets and to test different theories on synthetically generated time series. One of the main reasons why agent-based models have often been used for this research question is the well-known fact that time series of real financial markets have several statistical properties, called stylized facts, which cannot be replicated by the simple Geometric Brownian Motion (GBM) used for years as the main benchmark in market analysis. These stylized facts include the fat-tailed distribution of returns, the absence of autocorrelations, volatility clustering and others. In some analytical solutions, authors try to overcome this problem by adding jumps or stochastic volatility to a model based on the GBM. The results achieved by this enhancement are much more precise, but the solution of the problem also becomes much more complex.

In our work, we developed an agent-based model which is also designed to replicate the abovementioned stylized facts, but, unlike many other models, our network of interacting agents is extremely simple. Each agent has only one factor which determines whether the agent is going to buy or sell a fixed volume of the traded asset at a given moment of time. The decisive factor is a tick of the directional-change intrinsic clock, which ticks when the price experiences a reversal of a given fixed threshold from the local extreme. Thus, the only source of information used by the agents is the price itself. All agents in this network receive a new price quote and analyze it using their individual intrinsic-time mechanism. Since the square-root function is a good approximation of the volume impact, the net volume generated by the agents moves the price upward or downward by the square root of this volume.
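The directional-change clock driving the agents can be sketched as a simple reversal detector; using a relative (percentage) threshold measured from the running extreme is an assumption of this sketch.

```python
def directional_changes(prices, delta):
    """Return the indices at which the price reverses by at least a
    fraction `delta` from the running extreme: the ticks of the
    directional-change intrinsic clock."""
    events = []
    mode = "up"        # current trend assumption
    ext = prices[0]    # running extreme (max in an up trend, min in a down trend)
    for i, p in enumerate(prices[1:], 1):
        if mode == "up":
            ext = max(ext, p)
            if p <= ext * (1 - delta):   # downward reversal: clock ticks
                events.append(i)
                mode, ext = "down", p
        else:
            ext = min(ext, p)
            if p >= ext * (1 + delta):   # upward reversal: clock ticks
                events.append(i)
                mode, ext = "up", p
    return events

prices = [100, 101, 103, 100, 99, 101.5, 104]
ticks = directional_changes(prices, delta=0.02)
print(ticks)  # → [3, 5]
```

In the full model, each agent runs this detector with its own threshold and trades on each tick, and the resulting net volume feeds the square-root price impact described above.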

Surprisingly, even such a trivial system is able to successfully pass several benchmarks. First of all, we checked whether the set of prices generated by the intrinsic-event agents replicates the stylized facts mentioned before. In addition, we verified the presence of one more scaling law, first presented by Glattfelder et al. in 2011 and universal for all markets: the overshoot scaling law. This stylized fact states that the average distance between a directional-change point and the previous extreme price (called the overshoot) is equal to the length of the corresponding directional-change price move. Finally, the number of intrinsic-event agents with long and short positions was compared to position information from a big forex and CFD exchange, OANDA. Even in this experiment, the shape of the generated position ratios largely matched the shape we detect in the real market.

In general, the time series generated by the introduced agent-based model and prices from the Forex market demonstrate a striking resemblance in their statistical properties. Taking into account the simplicity and even primitiveness of the constructed model, we conclude that the underlying directional-change intrinsic-event approach can indeed shed some light on the market’s nature, agents’ behaviour and the cumulative impact they have on the structure of financial markets.

Topological characteristics of economic transaction networks

ABSTRACT. Economic networks play a central role in the exchange of goods and services. Studying the network structures underlying these trade relations should help understand the way agents interact and how the trade flows evolve.

The information requested under some administrative tax requirements can be of great help towards analyzing these economic flows. One such example is the information required from buyers and sellers on their exchanges in most VAT legislation. Under this legislation, buyers and sellers are obliged to declare those transactions that exceed a certain level. Since both sides of the trade are obliged to reveal their operations, both statements should coincide. However, we find situations where only one of the sides declares the transaction, and others where both declare different amounts. Using the information on such flows for a region in Spain during the year 2002, we have generated a network of transactions. In this network, all edges are directed from buyer to seller.

There are actually six different networks: (1) Joint: includes both the buyer and seller declarations; an edge appears if an operation between two operators is declared. (2) Matched: an edge appears between a buyer and a seller if their respective declared amounts agree exactly. (3) Matched 10%: similar to the previous one, but the declared amounts may disagree by less than a 10% margin. (4) Differed amount: an edge is defined between operators if their respective declared amounts disagree by more than a 10% margin. (5) Non-reciprocal buyer: an edge appears if a seller declares an operation which is not declared by the buyer. (6) Non-reciprocal seller: an edge appears if a buyer declares an operation which is not declared by the seller. These last two groups need further filtering, since some agents are not obliged to declare, which could justify the existence of many of these “missing” links.
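The six-network classification can be sketched directly from the two declaration tables. The dictionary-based representation and taking the 10% margin relative to the larger declared amount are assumptions of this sketch.

```python
def classify(buyer_decl, seller_decl, tol=0.10):
    """Assign each (buyer, seller) edge to the six networks described above,
    given two dicts mapping (buyer, seller) -> declared amount."""
    nets = {"joint": set(), "matched": set(), "matched10": set(),
            "differed": set(), "nonrec_buyer": set(), "nonrec_seller": set()}
    for pair in set(buyer_decl) | set(seller_decl):
        nets["joint"].add(pair)                 # any declared operation
        if pair not in seller_decl:
            nets["nonrec_seller"].add(pair)     # only the buyer declared
        elif pair not in buyer_decl:
            nets["nonrec_buyer"].add(pair)      # only the seller declared
        else:
            b, s = buyer_decl[pair], seller_decl[pair]
            if b == s:
                nets["matched"].add(pair)       # exact agreement
            if abs(b - s) <= tol * max(b, s):
                nets["matched10"].add(pair)     # agreement within 10%
            else:
                nets["differed"].add(pair)      # disagreement beyond 10%
    return nets

bd = {("A", "X"): 100, ("B", "X"): 50}   # buyer declarations
sd = {("A", "X"): 105, ("C", "X"): 20}   # seller declarations
nets = classify(bd, sd)
```

Each of the six resulting edge sets then defines one of the networks whose topology is analyzed below.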

An initial statistical analysis of the six empirical networks was conducted and some revealing results were found. For example, the different buyer/seller declaration networks present the small-world effect, as usual in many real networks. Moreover, some of them also exhibit a power-law fit. In particular, the Matched network presents a gamma parameter clearly higher than those of the differed or non-reciprocal declaration networks. This shows that the matched-declaration network has a specific topology, pointing to a macroscopic behavior of the individuals in this group that differs from the rest of the operators.

Labor flow network reveals the hierarchical organization of the global economy

ABSTRACT. The global economy is a complex interdependent system that emerges from interactions between people, companies, and geography. Understanding the structure and dynamics of the global economy is critical for adapting to its rapid reorganization. While the network framework has uncovered insights into the evolution of national economies [1], existing frameworks cannot expose the organization of industries that arises across a wide range of geographical scales, nor capture the full spectrum of industries. Here, we construct a global labor flow network [2] from a dataset from LinkedIn, the largest professional networking service, and reveal the deep hierarchical organization of industries. Furthermore, we demonstrate that geo-industrial clusters organically identified by network community detection can serve as a more coherent unit for studying the growth and decline of industrial sectors.

[1] Hidalgo, C. A., Klinger, B., Barabasi, A.-L. & Hausmann, R. The product space conditions the development of nations. Science 317, 482–487 (2007). [2] Guerrero, O. A. & Axtell, R. L. Employment growth through labor flow networks. PLoS ONE 8, e60808 (2013).

Reputation and Success in Art

ABSTRACT. How does an artist's reputation evolve and how does it affect the success of his endeavors? To shed light on these questions, we have collected a unique dataset on 463,632 artists between 1980 and 2016 and across 142 countries. We document the existence of a rich club of prestigious artistic institutions and a poor club of institutions of low prestige. The evolution of an artist's career is strongly shaped by the prestige of the institutions in which his work was first exhibited. His chances of success in the auction market can be improved by exhibiting in more venues, appealing to a more international audience and exhibiting at institutions of higher prestige. These findings have implications for our understanding of the role reputation plays in cultural markets.

Market equilibria with imperfect information and bounded rationality

ABSTRACT. In microeconomics, the First and Second Welfare Theorems assume the existence of perfect information in market prices, that is, prices contain all relevant information for decision making and are permanently available to all agents. However, these assumptions are rather strong. As a consequence, economists have developed diverse approaches aimed at analyzing the behavior of agents in the presence of asymmetries in the information they hold. In this work, we develop an approach that allows for the existence of these asymmetries, while also considering that agents have biases in their decision making and, consequently, develop a learning mechanism on market prices. Through this process they define their roles as buyers or sellers within the market in order to maximize their individual utility. As motivation we use a complex-systems model known as the Naming Game, which allows the implementation of a first, costless learning stage (cheap talk) based on the random interaction of the agents, and a second stage where they make decisions, participate in the market and obtain new information from their particular experiences. Through this trial-and-error mechanism, the agents reach a stable equilibrium. This is integrated into a computational agent-based model, which is run on a single set of valuations, simulating the interactions, decisions and evolution of buyers' preferences over T periods. Since the algorithm has a random component in its calculations, the same case is repeated N times, in order to distinguish those patterns that result from non-deterministic behavior from those that persist due to the restriction of the perfect-information assumption. To draw conclusions from the data, the algorithm had to be repeated numerous times, in order to verify how it behaved for different sets of valuations.
In this sense, the first results are consistent with the traditional approaches: information about prices is very important for achieving a Paretian equilibrium, and this is not often achieved. However, the results also demonstrate that agents can build market power by learning their relative valuation early. As one might intuit, those agents that determine their role as firms within the market and start selling early may have a greater impact on the preferences of agents perceiving themselves as buyers, which gives them an advantage in the long term and, therefore, higher profits at the end of the simulations. This is not surprising, and seems to indicate that it is not just about having accurate information about market prices, but about having it just in time.

16:30-18:30 Session 5I: Foundations of Complex Systems - Complexity 1

Parallel session

Location: Cozumel 2
The Evolution of Complexity seen as a problem of Search: Can we predict a priori which search algorithm will work best on which problem?

ABSTRACT. Much of science, in both the physical and biological domains, can be characterized as a problem of search. The reason is that all systems - physical, biological, ecological and social - are composed of hierarchies of building blocks - atoms, molecules, cells, tissues, individuals, species etc., with corresponding interactions - and the dynamics of such systems is characterised by a search through the space of possible configurations. Nucleosynthesis and biological evolution are examples of search algorithms: among their outcomes are large nuclei, such as iron, formed from many protons and neutrons as constituents, and the human eye. Both were formed via a search algorithm that constructed a hierarchy of intermediate building blocks. Yet we know very little about why one search algorithm is preferred over another. There are, in principle, many ways by which iron nuclei and eyes could be formed. The No Free Lunch theorems assure us that no search algorithm, no matter how complex, is better than any other, and, moreover, no better than random search, when considered over all possible problems. So what characteristics do the “problems” of nucleosynthesis and evolution possess that imply that a given search algorithm exists that is better than another? Here we will consider this problem in the context of machine learning: which search algorithm works best on which problem? In spite of the No Free Lunch theorem, a great deal of research in machine learning is associated with looking for a “magic bullet” algorithm that offers better performance across multiple problem areas, where multiple algorithms are compared and contrasted across multiple test data sets to determine which one performs best on average across the whole spectrum.
Unfortunately, the variance in performance of a given algorithm between different problems is far greater than any average performance enhancement across many problems, as one would expect given the No Free Lunch theorem. The fundamental question is: which search algorithm is appropriate for which problem, knowing that no single one is better than any other across all problems? Can we predict a priori which algorithm will perform better on which problem, and what problem diagnostics will help us to predict this? Here, I will present research that begins to answer this question, using as an example a set of much-used machine learning algorithms: the Naive Bayes approximation and generalisations thereof (AODE, WAODE, HBA), as well as some standard tree-based algorithms. We will present statistical diagnostics that examine the correlation structure between the variables of a search problem, use them to characterise the problem, and then predict which algorithm type will perform best. We thus end up with a meta-prediction algorithm that predicts which algorithm will work best on which problem. We will discuss the opportunities and challenges that arise from this research and relate it back to the problem of understanding both physical and biological evolution as search problems.

The Shifting Traveling Salesman Problem
SPEAKER: Jorge Flores

ABSTRACT. The traveling salesman problem (TSP) is central in the field of combinatorial optimization. A salesman must visit several cities exactly once, return to his starting point, and do so following the shortest path. It is therefore a problem which can be stated in a simple form but which is extremely difficult to solve. It represents a complex system indeed, and many different techniques have been used in the past to deal with the properties of the TSP. In particular, the rank distribution has been studied. To do this, all trajectories the salesman can follow are given a rank according to path length. The lowest rank corresponds to the shortest path (i.e. the one sought in the standard TSP) and the highest rank is given to the longest trajectory. The distribution is found to be a Beta distribution.

The rank distribution is inherently an instantaneous measure, in the sense that it captures ranking at a given time and does not take into account how ranks change under perturbations. To tackle this problem we have introduced the rank diversity, which is a measure of the number of different elements occupying a given rank over a length of time. In order to do this for the TSP, we consider two time-dependent traveling salesman problems.

The first model, which we shall call relocation of sites, consists in picking a subset of sites at random and then placing them at new positions with random coordinates. All trajectories are given a new rank. Proceeding with the new map, the random annihilation-and-creation process is repeated (with the same subset), and a rank is given to the new trajectories. The procedure is repeated several times, and the rank diversity is calculated. It has a semicircular form, with small values for low and high ranks.

The second model is obtained by letting the sites move, as if they were boats instead of cities, with random velocities. The rank of a given trajectory changes with time, and the rank diversity can be obtained. Its shape is the same in all situations and very similar to that obtained with the first model.
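The rank diversity used in both models can be sketched as follows; normalizing the number of distinct occupants of a rank by the number of time steps is one common convention, assumed here.

```python
def rank_diversity(rankings):
    """For each rank position, count the distinct elements that occupied it
    across a sequence of rankings, normalized by the number of snapshots."""
    T = len(rankings)
    n_ranks = len(rankings[0])
    return [len({ranking[k] for ranking in rankings}) / T
            for k in range(n_ranks)]

# three snapshots of a ranking of four trajectories a..d
snapshots = [["a", "b", "c", "d"],
             ["a", "c", "b", "d"],
             ["a", "b", "d", "c"]]
d = rank_diversity(snapshots)
print(d)  # the top rank is frozen, while middle ranks churn
```

A frozen rank yields the minimum value 1/T, while a rank occupied by a different trajectory at every step yields 1, which is how the semicircular profiles described above are read.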

From our calculations, and previous work on languages [PLoS ONE 10(4): e0121898 (2015)] and sports [EPJ Data Science 5:33 (2016)], it seems that rank diversity is a general property of complex systems of very different nature.

Realizing Simpler Quantum Models of Complex Phenomena with present day Quantum Technology

ABSTRACT. Mathematical modelling of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are complex, such that modelling their future behaviour demands immense amounts of information regarding how they have behaved in the past. From a theoretical perspective, such processes do not admit simple models in which there is a succinct characterization of which elements of the past are meaningful for future prediction. From an operational perspective, the adoption of such models in computer simulators requires immense amounts of memory.

In this talk, we discuss recent experimental efforts to reduce this cost by use of quantum technology. We first review recent developments, where it was shown that the statistical complexity of general complex processes - a quantifier of how much information one must store about the past of a process to simulate its future - can be drastically reduced by use of quantum information processing [1]. We then introduce our recent proof-of-principle experiment, where each bit is replaced with a quantum bit (qubit) encoded in the polarization states of a photon [2]. Our quantum implementation achieves a memory requirement of 0.05 ± 0.01, far below the ultimate classical limit of C = 1. We discuss the unique features of such quantum models and potential extensions, such as their capacity to output quantum superpositions of different conditional futures and potential generalization to simulate general input-output processes [3].

The talk is designed to be accessible to audiences with minimal knowledge of quantum theory.

[1] Nature Communications 3, Article number: 762.
[2] Science Advances 3(2), e1601302 (2017).
[3] npj Quantum Information 3, Article number: 6.

Causal Irreversibility in a Quantum World

ABSTRACT. In computational mechanics there is significant interest in how much information must be communicated from past to future in a process. This quantity is known as the process’s statistical complexity, and is a widely adopted quantifier of structure and complexity [1]. Operationally it captures the minimum amount of information any predictive model must record about the past, in order to make statistically accurate predictions about the future.

Surprisingly, the statistical complexity generally displays an asymmetry in time. If you take a process and reverse the temporal order of events, so that the past becomes the future and vice versa, then in general the statistical complexity will change. This divergence has been heralded as a source of time's barbed arrow in complex processes [2,3].

Here we examine what happens to this arrow of time in the quantum domain. Recent advances show the potential for quantum mechanics to yield predictive models that store less past information than any classical counterpart [4,5]. This motivates an interesting possibility: the barbed arrow of time may arise in the process of classicization. Can a stochastic process exhibit an arrow of time when modelled classically, yet have this arrow vanish when quantum models are considered?

In this talk we answer this question in the affirmative, by directly constructing a process where there is a classical arrow of time, but at the quantum level this arrow vanishes. Our work suggests that this arrow of time could be an artefact of forcing classical causal explanations on a fundamentally quantum world.

[1] C. Shalizi and J. Crutchfield, J. Stat. Phys. 104, 817.
[2] J. Crutchfield, C. Ellison, and J. Mahoney, Phys. Rev. Lett. 103, 094101.
[3] J. Mahoney, C. Ellison, R. James, and J. Crutchfield, Chaos 21, 037112.
[4] M. Gu, K. Wiesner, E. Rieper, and V. Vedral, Nat. Commun. 3, 762.
[5] J. Mahoney, C. Aghamohammadi, and J. Crutchfield, Sci. Rep. 6, 20495.

Complex networks, Google matrix and quantum chaos

ABSTRACT. The Google matrix G of a directed network is a stochastic square matrix with nonnegative elements in which each column sums to unity. It describes a Markov chain of transitions of a random surfer performing jumps on a network of nodes connected by directed links, and is the fundamental ingredient behind the original Google search engine. In this talk I will show some spectral properties of this matrix for real networks coming from fields as diverse as computer science and economics, as well as for matrices built from models of chaotic systems. We will use tools from the field of quantum chaos to study these networks [1-3]. We will analyze the eigenvectors of the matrix, which can be related to network communities. Another interesting result is that the number of long-lived eigenvalues of the matrix is associated with the fractal dimension of the network. A two-dimensional ranking of networks will also be defined, using the time inversion of PageRank together with phase-space properties developed for dynamical systems. The relationship between the two fields, complex networks on one side and quantum and classical chaos on the other, will be clarified in this talk with different examples of real networks.
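
As a minimal illustration of the construction just described (column-stochastic normalization of the link structure plus damping, then extraction of the leading eigenvector by power iteration), the sketch below uses an invented four-node network; it is a didactic toy, not the speaker's code:

```python
def google_matrix(adj, alpha=0.85):
    """Build the Google matrix G (column-stochastic) from adjacency lists.

    adj[j] lists the nodes that j links to.  Dangling nodes (no out-links)
    get uniform columns, and damping with alpha makes G irreducible.
    G is returned as a list of columns: cols[j][i] = P(surfer jumps j -> i)."""
    n = len(adj)
    cols = []
    for j in range(n):
        out = adj[j]
        if out:
            col = [0.0] * n
            for i in out:
                col[i] = 1.0 / len(out)
        else:
            col = [1.0 / n] * n          # dangling node: jump anywhere
        cols.append([alpha * c + (1 - alpha) / n for c in col])
    return cols

def pagerank(cols, iters=200):
    """Leading eigenvector of G by power iteration (the PageRank vector)."""
    n = len(cols)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(cols[j][i] * p[j] for j in range(n)) for i in range(n)]
    return p

adj = {0: [1], 1: [0, 2], 2: [0, 1, 3], 3: []}   # tiny directed network
p = pagerank(google_matrix(adj))
print(round(sum(p), 6))  # 1.0: PageRank is a probability distribution
```
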

[1] Google matrix analysis of directed networks, L.Ermann, K.M. Frahm, D.L. Shepelyansky, Rev. Mod. Phys. 87, 1261 (2015).
[2] Ulam method and fractal Weyl law for Perron-Frobenius operators, L.Ermann, D.L. Shepelyansky, Eur. Phys. J. B 75, 299 (2010). 
[3] Spectral properties of Google matrix of Wikipedia and other networks, L.Ermann, K.M. Frahm, D.L. Shepelyansky, Eur. Phys. J. B 86, 193 (2013).

Modeling Open and Closed Cyber-Physical Systems with Graph Automata and Boolean Network Automata
SPEAKER: Predrag Tosic

ABSTRACT. We are interested in suitable abstractions and formal dynamical systems foundations for modeling, analysis and behavior prediction of a broad variety of critical cyber-physical infrastructures. In that context, we have been investigating prediction and characterization of asymptotic dynamics of several classes of Boolean Network Automata (BNA) and Graph Automata (GA) such as Discrete Hopfield Networks, Sequential/Synchronous Dynamical Systems, and (parallel, sequential and asynchronous) Cellular Automata (CA). Within that general framework, one line of our recent research has been on identifying the key differences in behavior between open vs. closed cyber-physical systems (CPSs), abstracted as various types of BNA and GA.

A closed CPS can be formalized as a BNA or GA in which each node is an "agent" whose local interactions, and therefore possible behaviors, we know; the challenge then is to characterize and/or predict the emerging behavior or collective dynamics of an agent ensemble. We note that the typical sizes of such agent ensembles, in the context of the applications of interest to us (such as ensembles of unmanned autonomous vehicles, smart sensor networks, power micro-grids, etc.), range from a few hundred in the smaller-scale CPSs to many thousands. Moreover, in the context of other applications of BNA and GA models, such as the biological and life sciences or the computational social sciences, the underlying "networks" may have millions of autonomously or semi-autonomously (inter-)acting agents. In contrast to closed systems, in an open CPS (or other complex network of interacting agents) not all "nodes" correspond to agents whose local behaviors are known; some nodes may correspond to external agents whose behavior may be unknown, or to other ("non-agent") aspects of the environment that may be exerting influence on our agents in potentially complex and unpredictable ways. Importantly, those who design, analyze and/or monitor the underlying cyber-physical system in general have little or no control over the external agents, the "control nodes" or other behavioral aspects of the "environment".

We identify several interesting aspects of the global behavior and asymptotic dynamics of such distributed cyber-physical and other networked systems, abstracted as BNA, GA or CA. Our focus is on the differences in asymptotic behaviors between such open and closed systems with the same or similar "types" of simple deterministic individual agents and the same or very similar (sparse) network structures. Furthermore, to make the open vs. closed system dynamics differentiation as sharp as possible, we severely restrict the allowable kinds of impact of the "control nodes" or "external environment" on the agents whose collective dynamics we are trying to understand. Our formal study of the system dynamics, therefore, is done by mathematically and computationally analyzing the configuration space properties of the appropriately restricted types of BNA, GA and CA. In this talk, we mostly focus on those properties capturing the underlying system's asymptotic collective dynamics. In that problem setting, we summarize several recent theoretical results that establish a provable "complexity gap" in the dynamics of closed vs. open systems in a formal BNA/GA based setting.
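
A minimal sketch of the kind of configuration-space analysis involved, for a closed synchronous Boolean network, is to iterate the global map until a configuration repeats, which splits the trajectory into a transient and an attractor cycle. The XOR-ring rule below is only an illustrative choice, not one of the classes studied in the talk:

```python
def attractor(update, state):
    """Iterate a synchronous Boolean network until a configuration repeats;
    return (transient, cycle): the path into and around the attractor."""
    seen, traj = {}, []
    while state not in seen:
        seen[state] = len(traj)
        traj.append(state)
        state = update(state)
    return traj[:seen[state]], traj[seen[state]:]

def xor_ring(s):
    """Toy local rule: each node takes the XOR of its two ring neighbours."""
    n = len(s)
    return tuple(s[(i - 1) % n] ^ s[(i + 1) % n] for i in range(n))

transient, cycle = attractor(xor_ring, (1, 0, 0))
print(transient, cycle)  # [(1, 0, 0)] [(0, 1, 1)]: a fixed-point attractor
```

An open system could be sketched in the same framework by pinning one "control node" to an externally supplied input stream instead of the local rule.
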

16:30-18:30 Session 5J: Socio-Ecological Systems (SES) - Politics & Games

Parallel session

Location: Cozumel 5
The biased voter model
SPEAKER: Raul Toral

ABSTRACT. The voter model is arguably the simplest and one of the most widely studied out-of-equilibrium models with application to different scenarios of social interest. The rules are very simple: consider a network such that in each node lies an agent capable of holding one of the possible values of a binary variable. A node is randomly selected, and the agent in that node copies the value of the variable held by the agent in another, randomly selected connected node. Most of the literature assumes that both values of the binary variable are equivalent. In this work we focus on the situation where there is a bias towards one of the two values. This situation has been considered previously as indicating, for example, the lack of symmetry in the social preference for one or another language in a bilingual community. We introduce bias by letting a fraction of the agents copy one of the two options (the preferred option) with a higher probability. We first assume that there is no correlation between the connections of the biased agents, and revisit some of the results about the dependence of the time to reach consensus on the bias parameter. We then ask how the ratio of the density of connections between biased nodes (B) and unbiased nodes (U) influences the behavior of the system. To this end we use two different strategies to connect nodes and compare the results with a random network. Both strategies keep the same average degree and total number of links as a random network of the same size. In case I we cut links between unbiased nodes and draw additional links between biased nodes (i.e. a UU link becomes BB). The strategy in case II is to rewire links to increase the number of biased-biased connections (BU becomes BB) or to decrease the number of unbiased-unbiased connections (UU becomes UB), keeping the degree of each node constant.
It seems that the crucial factor for reaching consensus is the degrees of biased and unbiased nodes, rather than the number of links between pairs of biased or unbiased nodes. Even if the majority of the nodes is biased but weakly connected, the probability to reach consensus in cases I and II cannot be larger than in a random network. In the other extreme case, when biased nodes form a well-organized minority, case I gives a higher probability of ordering in the preferred state. In the thermodynamic limit any non-zero value of the bias leads to consensus in the preferred state. On the contrary, when the network is finite, there is always a chance to order in the non-preferred state. For the random-network case we find that the behavior of the system depends on the effective bias, which is the value of the bias parameter multiplied by the number of biased nodes. When the topology is not random, that scaling disappears. Our analytical results are supported by numerical simulations.
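
A minimal simulation sketch of the biased update rule follows, assuming one common implementation in which a biased agent resists copying the non-preferred state with probability equal to the bias. The network construction, function name and parameter values are illustrative, not the authors' exact model:

```python
import random

def run_biased_voter(n=100, p_link=0.08, f_biased=0.3, bias=0.2,
                     max_steps=200_000, seed=7):
    """Biased voter model on an Erdos-Renyi network (toy sketch).

    A fraction f_biased of the agents prefers state 1: when such an agent
    is about to copy state 0 from a neighbour, the copy fails with
    probability `bias` and the agent keeps its current state.
    Returns the consensus value (0 or 1), or None if none was reached."""
    rng = random.Random(seed)
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_link:
                nbrs[i].add(j)
                nbrs[j].add(i)
        if not nbrs[i]:                      # ensure every node has a neighbour
            j = (i + 1) % n
            nbrs[i].add(j)
            nbrs[j].add(i)
    nbrs = [list(s) for s in nbrs]
    biased = set(rng.sample(range(n), int(f_biased * n)))
    state = [rng.randint(0, 1) for _ in range(n)]
    ones = sum(state)
    for _ in range(max_steps):
        i = rng.randrange(n)
        s = state[rng.choice(nbrs[i])]
        if s == 0 and i in biased and rng.random() < bias:
            continue                         # biased agent resists state 0
        ones += s - state[i]
        state[i] = s
        if ones == 0 or ones == n:
            return ones // n
    return None
```

Averaging the returned consensus value over many seeds estimates the probability of ordering in the preferred state.
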

Dismantling the generalized dumbing process: a key to the mitigation of climate change

SPEAKER: Luis Tamayo

ABSTRACT. As described by numerous studies (Kolbert, 2014), humanity is on the way to the sixth extinction of species. This phenomenon is not only anthropogenic but derives from the social control and dumbing processes described in various ways by numerous authors (Diamond, 2007; Klein, 1999, 2007, 2014; Tamayo, 2010). Understanding the nature of such a process is a key element in the formulation of viable proposals for global warming mitigation.

Partisanship or corporatism: a comparison of models and actual data in Mexican elections

ABSTRACT. How we vote, and what influences it, has received a lot of attention from physicists and mathematicians during the last two decades. Models and searches for "power laws" in electoral results abound in the current literature. However, politicians are involved, and deviations from the models appear. In this work we discuss how corporatism, more than partisanship, plays a role in elections in Mexico. We use actual data from the federal elections of the last fifteen years. We also look for evidence of such behaviour in Indian and Argentinian elections.

Shadow Capital: Emerging Patterns of Money Laundering and Tax Evasion
SPEAKER: Lucas Almeida

ABSTRACT. Among the many social structures that cause inequality, one of the most jarring is the use of loopholes both to launder money and to evade taxation. Such resources fuel the "offshore finance" industry, a multi-billion-dollar sector catering to many of those needs. As part of the push towards greater accountability, it is crucial to understand how these decentralized systems are structured and operate. As is usually the case with emergent phenomena, the efforts of law enforcement have had limited effects at best.

This challenge is compounded by the fact that they run under the logic of "dark networks", avoiding detection and oversight as much as possible. While there are legitimate uses for offshore services, such as protecting assets from unlawful seizures, they are also a well-documented pipeline for money stemming from illegal activities. These constructs display a high degree of adaptiveness and resilience, and the few studies done so far had to rely on incomplete information, mostly from local sources of criminal proceedings.

The goal of this work is to analyze the network of offshore accounts leaked in the "Panama Papers" report by the International Consortium of Investigative Journalists. The leak records the activities of the Mossack Fonseca law firm in Panama, one of the largest in the world in the offshore field. It spans over 50 years and provides one of the most complete overviews thus far of how these activities are networked. There are over 3 million links and 1.5 million nodes, with accompanying information including time of operation, ownership and country of registry.

The preliminary analysis, already performed by cleaning the dyadic relations, allows for a snapshot of this universe, which is very receptive to the metrics already current in network science. The betweenness centrality of nodes is extremely skewed, with fewer than a hundred nodes forming the "backbone" of the system, mostly in countries already known to be tax havens (such as the Cayman Islands, the Bahamas and Jersey). The degree distribution is very similar to the power law produced by the Bianconi-Barabasi model of preferential attachment with varying fitnesses. These patterns will be explored in order to better understand the evolution of the system. This work will also model how different strategies of law-enforcement intervention can disrupt the flow of illegal resources, by testing local (and network-level) targeting metrics.
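
For readers unfamiliar with the fitness-based growth mechanism mentioned above, the sketch below grows a small network in which new nodes attach preferentially to high-fitness, high-degree nodes. Uniform random fitnesses and the small sizes are illustrative assumptions; the actual analysis fits the leaked network's degree data rather than simulating it:

```python
import random

def bianconi_barabasi_degrees(n=1000, m=2, seed=42):
    """Grow a Bianconi-Barabasi-style network; return its degree sequence.

    Each new node attaches to m distinct existing nodes with probability
    proportional to fitness * degree.  Fitnesses are uniform on (0, 1]."""
    rng = random.Random(seed)
    fitness = [rng.random() + 1e-9 for _ in range(n)]
    degree = [0] * n
    for i in range(m + 1):                   # seed clique of m + 1 nodes
        degree[i] = m
    for t in range(m + 1, n):
        weights = [fitness[i] * degree[i] for i in range(t)]
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(range(t), weights=weights)[0])
        for j in targets:
            degree[j] += 1
        degree[t] = m
    return degree

deg = bianconi_barabasi_degrees()
# A few hub nodes acquire degrees far above the mean: the heavy tail
# that the Panama Papers degree distribution resembles.
```
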

By crossing methods from data analysis with the public-policy perspective, we expect to contribute to the literature on compliance, as well as to the growing field of dark networks. This work can also provide an important baseline for understanding other recently uncovered schemes, such as the "Car Wash" scandal in Brazil. The perspective of complexity is uniquely well positioned to shed light on this enigma, which neither economics nor law alone has been able to tackle.

Is the coexistence of Catalan and Spanish possible in Catalonia?

ABSTRACT. The dynamics of language shift have received a lot of attention during the last decades. Non-linear differential equations inspired by ecological models [1] have been widely used to reconstruct how two tongues come to coexist or how a hegemonic language pushes another, minority one towards extinction [2]. Some attempts have been made at prediction in scenarios still far from a steady state [3], cases which often involve a multifactored political scenario. The additional social complexity should be a further incentive for us to test simple mathematical models: by finding out how far our equations can hold, and when they fail, we gain the ingredients needed to advance the theory.

With this spirit, we study the stability of two coexisting languages (Catalan and Spanish) in Catalonia (north-eastern Spain). There, a very complex political setup is confronting nationalistic forces of diverse sign within one of the most prominent European regions (Catalonia ranks 4th among European regions by nominal GDP, by GDP in Purchasing Power Standards, and by population size [4]). Our analysis [5] relies on recent, abundant empirical data that is compared to an analytic model of population dynamics. This model contemplates the possibilities of long-term language coexistence or extinction, both plausible outcomes of the socio-political system under research. We establish that the most likely scenario is a sustained coexistence. However, the data need to be interpreted under very different circumstances, some of them leading to the extinction of one of the languages involved. We delimit the cases in which this can happen, and find that the fostering of a broad bilingual group would be a key stabilizing element. As an intermediate step, model parameters are obtained that convey important information about the prestige and interlinguistic similarity of the tongues as perceived by the population. This is the first time that these parameters have been quantified rigorously for this pair of languages. Limited, spatially segregated data allow us to examine the dynamics within two broad sub-regions, better addressing the likely coexistence or extinction. Finally, the variation of the model parameters across regions conveys important information about how the two languages are perceived in more urban or rural environments.
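
The backbone of such population-dynamics models is the Abrams-Strogatz equation [2]; the talk's model extends it with a bilingual group, but the two-state ancestor already exhibits the coexistence-versus-extinction question. The parameter values and Euler integration below are an illustrative sketch, not the authors' fitted model:

```python
def abrams_strogatz(x0=0.4, s=0.6, a=1.31, dt=0.05, steps=40_000):
    """Euler integration of the Abrams-Strogatz language-shift model.

    x is the fraction of speakers of language X, s its relative prestige,
    a the volatility exponent.  dx/dt = (1-x)*s*x**a - x*(1-s)*(1-x)**a."""
    x = x0
    for _ in range(steps):
        dx = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
        x += dt * dx
    return x

# With prestige s > 0.5 language X takes over from x0 = 0.4;
# with s < 0.5 it dies out: no stable coexistence in the two-state model.
print(round(abrams_strogatz(s=0.6), 3), round(abrams_strogatz(s=0.4), 3))  # 1.0 0.0
```

It is precisely because the two-state model forbids stable coexistence that adding a bilingual compartment, as in the talk, matters for the Catalan-Spanish prognosis.
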

[1] Kandler A and Steele J. Ecological models of language competition. Biol. Theor. 3(2), 164-173 (2008).
[2] Abrams DM and Strogatz SH. Modelling the dynamics of language death. Nature 424, 900 (2003).
[3] Kandler A, Unger R, and Steele J. Language shift, bilingualism and the future of Britain's Celtic languages. Philos. T. Roy. Soc. B 365(1559), 3855-3864 (2010).
[4]
[5] Seoane LF, Loredo X, Monteagudo H, and Mira J. Is the coexistence of Catalan and Spanish possible in Catalonia? Under review.

Identifying patterns of global change in attitudes and beliefs using the World Values Survey
SPEAKER: Damian Ruck

ABSTRACT. The World Values Survey (WVS) is an international, cross-sectional survey of the beliefs and attitudes of a thousand people in each of 107 different nations, administered since 1990. We extract a reduced set of cultural units using Exploratory Factor Analysis, which offers the best explanation for the high-dimensional survey data, and use Bayesian regression to estimate recent cultural change. Using multilevel Granger causality we investigate the direction of causation between cultural values and economic development. We then present a dynamic non-linear relationship between religious subscription and secularization which has the hallmark of cultural inheritance.

Agent-based Modeling to Understand Brexit

ABSTRACT. UK subjects' "swarming away" from the EU is not only a political but also a legal concern. Indeed, a court judgment was solicited: the United Kingdom Supreme Court ruled on the UK Secretary of State's authority to decide, without hearing Parliament, for Britain to exit the EU. The judgment runs to 96 printed pages. This document constitutes our data.

As legal scholars we are aware that adequate comprehension (cf. Bobbitt [1982]) of almost all complex situations involving deliberate human behaviors requires blending the arts with the sciences. Our goal is to provide a proof of concept for how two agent-based models of the same situation can support this. Our working hypotheses are (i) that the UK Supreme Court judgment cannot be properly understood in different (alpha, beta) cultures without mitigation, and (ii) that agent-based models can be designed, combined and used for simulations that support constructive cross-disciplinary analysis and comprehension.

Assuming that agent-based models create toy worlds, our main point of departure is this (apart from experimentation with valid laws, which we consider contempt of democratic/legislative procedure): we cannot reasonably discuss the capabilities of the law to help a complex social system survive in the real world without having a model of how a toy complex system will react to internal and external adaptations in technology, culture, economics and law. No single discipline is capable of finding the best solution to such a toy complex's behaviors. Finding and designing working examples of adequate mechanisms is the next best thing. Agent-Based Model Simulation (ABMS) can help deliver those.

Our approach is new. We use the 96-page judgment document as source material for study. From it we harvest modeling requirements through two different disciplinary filters, alpha and beta (for arts and sciences respectively), in a manner that takes de Marchi [2005] seriously. Running these models leads to repeatable stochastic encounters between agents. The encounters translate into working towards the selection of the best strategy sequence, conditional on the "political season" in which they evolve. Inspired by Alexander [2007], the games that can dynamically form in this manner are prisoner's dilemmas, stag hunts and bargaining games. "Political seasons" reflect stable political periods as presented in the judgment. The two (alpha, beta) collections of available strategy-payoff combinations are also harvested from the document.
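
As a sketch of how such evolutionary-game dynamics are commonly simulated, the snippet below runs replicator dynamics for a stag hunt. The payoff values are illustrative and unrelated to the judgment-derived payoffs the authors harvest:

```python
def replicator_step(x, payoff, dt=0.1):
    """One replicator-dynamics step for a symmetric 2-strategy game.

    x is the fraction playing strategy 0; payoff is the 2x2 matrix.
    Strategies that earn above the population average grow."""
    f0 = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f1 = payoff[1][0] * x + payoff[1][1] * (1 - x)
    fbar = x * f0 + (1 - x) * f1
    return x + dt * x * (f0 - fbar)

# Stag hunt: hunting stag pays 4 if the partner cooperates, 0 otherwise;
# hunting hare always pays 3.  The unstable interior equilibrium is x = 0.75.
STAG_HUNT = [[4, 0], [3, 3]]

def evolve(x, steps=500):
    for _ in range(steps):
        x = replicator_step(x, STAG_HUNT)
    return x

print(round(evolve(0.8), 3), round(evolve(0.6), 3))  # 1.0 0.0
```

The bistability shown here (the population locks into all-stag or all-hare depending on initial trust) is what makes the "political season" in which a game unfolds decisive.
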

Our results show that we can use the UK Supreme Court judgment to design working toy versions of the UK subjects, the UK and the EU as a dynamic complex social system, both from an arts and from a sciences perspective, and that the evolutionary-game-theoretic simulation approach allows these perspectives' expectations to be blended in a rational manner.

References
J. McKenzie Alexander. The Structural Evolution of Morality. Cambridge University Press, 2007.
Philip Bobbitt. Constitutional Fate: Theory of the Constitution. Oxford University Press, 1982.
S. de Marchi. Computational and Mathematical Modeling in the Social Sciences. Cambridge University Press, 2005.
United Kingdom Supreme Court. R (Miller) v Secretary of State for Exiting the European Union [24 January 2017]. UKSC, (5), 2017.

19:00-22:00 Session : Welcome Cocktail

Reception cocktail

Location: Terraza Akumal