
17:00-20:30 Session 1: Posters (all week)
Location: Cozumel A
Energy efficiency in Mexican electrical technology manufacturers

ABSTRACT. The objective of this study is to evaluate energy efficiency among Mexican electrical technology manufacturers. By 2015, 78% of Mexican energy production was based on techniques with high environmental impact (PROSEDEN, 2016). Energy demand from the general population increases every day (SENER, 2015), so it is crucial for producers of energy-using products to manage the optimization and adoption of new technologies that decrease overall energy use, as a strategic plan of action to reduce energy demand and build a bridge to energy efficiency. The method used in the study focuses on some of the principal producers of electrical technology, organized into 10 focus groups. The study also approached two associations of energy-use technologies and included an observational study of the process methods at the facilities of one of the principal refrigeration manufacturers. The method is a heuristic analysis using the Atlas.ti program, and the research is based on the analysis of complex systems. The theoretical framework is based on the theory of reasoned action by Fishbein and Ajzen (1980), the Technology Acceptance Model (TAM) by Davis et al. (1989), and the Yong-Rivas model (2008) of technology acceptance. For the complex-systems analysis of the information obtained from the research and interviews, a causal loop diagram is presented following the methodology proposed by Vensim in 2016. The results, as shown in Figure 1, relate the leading factors that conduce to energy efficiency in the studied subjects, centered on four variables of study: Mexican and international legislation, regulatory body supervision, commercial policies, and component integration, which together bear on the principal subject of this work, energy efficiency.
The principal findings of this study are that energy efficiency depends on international standards, which give rise to official legislation imposed by local government that drives the creation of more energy-efficient products. Another finding is that there are three kinds of efficient technologies: disruptive energy-saving technologies, mature technologies such as electric motors whose energy-saving rates have increased significantly, and result-optimizing technologies. It is also very relevant that one of the principal barriers to studying energy efficiency is the aversive culture present in Mexican institutions, which leads to a culture of secrecy impregnated in private and governmental organizations. A lack of regulation makes it impossible to take metrics of energy efficiency, for instance for refrigeration chambers, in matters such as the importation of products from countries that export inefficient technologies at low cost, which affects both the environment and local markets.

Cognitive significance of adaptive self-organization of Bali’s ancient rice terraces

ABSTRACT. In their paper, Lansing et al. (2017) propose an evolutionary game to show that the spatial patterning of Balinese rice terraces is caused by the feedback between farmers' decisions and the ecology of the paddies, creating a self-organized process that maximizes harvests and explains the emergence of cooperation in a complex social system. Although the authors do not mention it explicitly, their adaptive self-organizing model rests on several cognitive assumptions from game theory, such as rational choice, utility maximization, intentionality, and common knowledge. We propose to interpret the model's implications as support for a dynamical-enactive approach, which consists in explaining the traditional Balinese philosophy of “Tri Hita Karana” and the ideology of the Balinese people as the result of environmental constraints and daily activities. With this objective, this presentation will be divided into four parts. First, we present the essentials of Lansing et al.'s model, focusing on how the model was constructed and its general implications according to the authors. The intention of the second part is to reveal the model's underlying assumptions about cognition and to show how, although they are rooted in game theory, they also break with it. The following part delves into these divergences from game theory to argue that the assumptions can be related to a dynamical-enactive interpretation of cognition that actually supports the authors' claims. Finally, the conclusion remarks on some open questions and on desirable refinements of the model that could improve our understanding of cognition.

Lansing, J. Stephen; Thurner, Stefan; et al. (2017) “Adaptive self-organization of Bali’s ancient rice terraces”, PNAS, Vol. 114, No. 25: 6504-6509.

Quantifying multi-scale variability during dyadic embodied interaction: High-functioning autism as a case study.

ABSTRACT. High-functioning autism (HFA) is one of the so-called social interaction disorders. In consequence, two-person experimental setups are required to objectively and systematically describe the aspects of interaction that are impaired in these patients. Using a minimalistic paradigm known as the perceptual crossing experiment (PCE), we studied real-time interaction in mixed pairs of healthy participants and HFA individuals, as well as in pairs of healthy participants. This constrained setup aims to isolate the interaction aspect of reactivity (action contingency). However, it has also been applied to assess complex forms of alignment found in real-life social interactions. In the PCE, pairs of blindfolded participants are embodied as avatars in a one-dimensional, looped virtual space and move their avatars with a mouse. A tactile vibration stimulus is delivered whenever the avatar crosses another object in the space. Each player can encounter three objects: a static decoy, the avatar of the other player, and a mobile “shadow” that copies the other player’s avatar movements at a constant distance and that is not reactive, in the sense that the other player does not receive any feedback when her “shadow” is encountered. Hence, the only event in which both partners receive feedback simultaneously is when they cross each other’s avatar. The task is to mark these encounters, but not those with the decoy or the partner’s “shadow”, via button press. We adopted a multi-scale time-series perspective and analysed the participants’ movement trajectories during a PCE. We applied intra-daily variability (IV), a method that quantifies how much small-scale and large-scale components account for the variance of a time series. Additionally, IV excludes the trend of the signal by computing the derivative (X') of the original time series (X). Accordingly, the players’ positions in the virtual space were differentiated, yielding their velocity profiles.
By applying IV we aimed to assess which scales (P) contribute the most to the variance of the time series: IV(P) = Var(X'_P) / Var(X'). Since IV is a relative measure normalized to the variance of X', it makes comparisons within and between samples possible. This preliminary analysis shows that individuals of the control-control pairs converge towards the same type of dynamics, i.e., their trajectories tend to be less variable between successive sessions and across all scales. In contrast, the controls of the HFA-control pairs remain rather stable, neither significantly increasing nor decreasing their variability across sessions, while the variability of the HFA participants' movements increases significantly for larger scales but decreases for smaller scales between successive sessions. Such behavior leads to an increase in the variability gap between participants, reflecting quite different strategies for solving the task, not only within the HFA-control pairs but also between the two types of pairs. Our findings support previous work on the quantification of embodied interaction and provide a characterization of the movement profiles of HFA patients. Consequently, this research places both the PCE and multi-scale time-series analysis as suitable tools for objectively studying psychopathologies, with the potential implication of complementing the intrinsic subjectivity of clinical assessments in the realm of neuropsychiatry.
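As a rough illustration of how such a scale-wise variance ratio can be computed, the sketch below aggregates a trajectory into non-overlapping windows of size P before differentiating. The binning scheme and the toy random-walk signal are our own assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def iv(x, scale):
    """Scale-wise variability ratio IV(P) = Var(X'_P) / Var(X'):
    aggregate x into non-overlapping windows of size `scale`,
    differentiate, and normalize by the variance of the first
    differences of the original series."""
    n = (len(x) // scale) * scale
    coarse = x[:n].reshape(-1, scale).mean(axis=1)  # X at scale P
    return np.var(np.diff(coarse)) / np.var(np.diff(x))

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(10_000))  # toy random-walk "trajectory"
profile = {p: iv(x, p) for p in (1, 5, 25, 125)}
```

At scale 1 the ratio is 1 by construction; a profile that rises with P indicates that large-scale components dominate the variance of the signal.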

Simple local rules underlie collective foraging movements of spider monkeys.

ABSTRACT. Collective decisions underlie group coordination and are the result of the choices made by each group member. Unlike members of cohesive animal groups, individuals living in groups with high fission-fusion dynamics can separate temporarily from their group when there is a divergence of interests. It is not clear how collective decisions are made in these kinds of groups; collective coordination could emerge from simple rules and localized interactions, in turn giving rise to complex, adaptive properties at the group level. We evaluated the collective foraging decisions made during food-limited conditions (dry seasons of 2015 to 2017) by a group of spider monkeys (Ateles geoffroyi) in a tropical forest in the northeastern Yucatan peninsula, México. We used observations from two different contexts of collective foraging: focal subgroup observations during foraging movements, to explore leadership processes, and observations of novel food patches, to identify knowledgeable and naïve individuals and their use of social information. We assessed how each individual’s centrality in the social network, age (adult, subadult, juvenile), sex (female, male) and years living in the area (0-3, 3 to 10, ≥ 10) affected its rate of initiating collective movements, as well as the proportion of times that a knowledgeable individual was followed by naïve ones to novel food patches. Additionally, we examined the likelihood of being followed in both contexts as a function of these attributes, considering kinship and the leader-follower association value. Leadership was partially shared between sexes, residency classes and age classes, and individuals seemed to follow others that initiated collective movements without any intentional recruitment by the leaders. In both collective contexts, the most central individuals in the social network stood out as leaders (by the number of followers they had), but adults and subadults initiated collective foraging movements more frequently.
These characteristics, modulated by the social relationship between leaders and followers, influenced the decision of each individual to follow certain leaders. In the dynamics of visits to novel food patches, males and more central individuals were more likely to be followed by naïve individuals; neither the age class of the leader nor the social relationship between knowledgeable and naïve individuals was relevant to the individual choices. As in the self-organized collective movements of other species with fission-fusion dynamics, such as fish schools, human crowds and hyaena clans, our results suggest that collective foraging movements in our study group could emerge from simple local rules. Each individual could be following these behavioral rules according to its current ecological knowledge: when it is not possible or necessary to follow a knowledgeable group partner, an individual will decide to follow another according to their social affinity; similarly, when a naïve individual needs to find a new foraging patch, this individual will follow the group members with greater centrality and/or a male. Thus, the flexibility and coordination observed in adaptive, collective behavior such as fission-fusion dynamics could be the outcome of local, simple rules used by individuals to make foraging decisions.
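The local rules suggested above can be caricatured as a tiny decision function. Everything here (the attribute names, the priority given to centrality over sex, and the toy individuals) is a hypothetical illustration, not a model fitted to the spider monkey data.

```python
# Toy sketch of the local following rules described above.
def choose_leader(follower, candidates, knows_patch):
    if not knows_patch:
        # naive individual seeking a new patch: prefer high network
        # centrality, breaking ties in favor of males
        return max(candidates,
                   key=lambda c: (c["centrality"], c["sex"] == "male"))
    # otherwise: follow the partner with the greatest social affinity
    return max(candidates, key=lambda c: follower["affinity"][c["id"]])

candidates = [
    {"id": "a", "centrality": 0.9, "sex": "female"},
    {"id": "b", "centrality": 0.9, "sex": "male"},
    {"id": "c", "centrality": 0.4, "sex": "male"},
]
follower = {"affinity": {"a": 0.1, "b": 0.5, "c": 0.9}}
```

A naive follower here picks the central male "b", whereas a knowledgeable one picks its closest affiliate "c": one rule set, two context-dependent outcomes.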

Data mining: Using spatial analysis and socio-economic factors to solve the location problem in non-profit organizations.

ABSTRACT. Network localization problems occur when new public or private services in networks are planned. In other studies, methods have been developed to locate clinics or hospitals by means of optimal allocation models using geographic and cost information. In this article, data mining techniques were used to determine the socioeconomic factors that characterize the different profiles of patients with ophthalmological conditions who have preferred the third-sector option over public or private options. The correlation of socioeconomic characteristics with each of the ophthalmological diseases attended by FDHZ was determined through the Naive Bayes classifier. Data mining techniques and spatial analysis were combined to predict where communities matching these profiles are most probable. In this way, a solution is proposed to the location problem for new non-profit ophthalmological clinic headquarters, together with a characterization of socioeconomic risk factors for three ophthalmological conditions. The study used data from the FDHZ, which has provided consultations and ophthalmological services funded by donations from 2006 to 2017, on patients operated on for three categories of condition (cataracts, pterygium and phacovitrectomy) in Orizaba, Veracruz, Mexico.
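As a hedged sketch of the classification step, the following implements a minimal Gaussian Naive Bayes from scratch on invented community features. The study's actual variables, their encoding, and the exact Naive Bayes variant are not specified in the abstract, so everything below is illustrative.

```python
import numpy as np

# Hypothetical socioeconomic features per community (income index,
# schooling years, marginalization score); labels mark the condition
# most frequently treated there. Purely illustrative numbers.
X = np.array([[0.2, 6, 0.8], [0.3, 7, 0.7], [0.7, 11, 0.2],
              [0.8, 12, 0.1], [0.4, 8, 0.6], [0.9, 13, 0.1]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = cataracts, 1 = pterygium

def fit_gnb(X, y):
    """Per-class feature means, variances and priors."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(y))
    return stats

def predict_gnb(stats, x):
    """Pick the class maximizing the log posterior, assuming
    conditionally independent Gaussian features."""
    def log_post(c):
        mu, var, prior = stats[c]
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(stats, key=log_post)

model = fit_gnb(X, y)
```

A new community's feature vector can then be scored against each condition profile, which is the ingredient the spatial prediction step would consume.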

Towards a model for centriole biogenesis regulation by Polo-like kinase 4 activity and first evidence for its dimerization in live cells

ABSTRACT. Cellular processes are often tightly regulated by a complex network of proteins, but in a few cases the core of a complex regulation loop is found within one single species. This is the case of centriole biogenesis regulation by the Polo-like kinase 4 (Plk4). Plk4 is considered the master regulator of centriole biogenesis; its over-expression leads to abnormal centrosome formation, resulting in defective chromosome segregation. Plk4 is subject to a positive-negative regulation loop that operates within Plk4 itself, as two regions of Plk4 are among its own phosphorylation targets. In both cases auto-phosphorylation occurs via a trans mechanism, which suggests the formation of Plk4 dimers; one phosphorylation leads to Plk4's activation while the other labels the kinase for degradation. We built a two-compartment self-regulatory model of Plk4's activity based on ordinary differential equations, which considers an auto-phosphorylation process based on dimerization (Figure 1). The dimerization event involves association, phosphorylation and dissociation, resulting in a monomeric active Plk4 population. The model allows the determination of the major species at steady state and specifically predicts the prevalence of particular dimeric species of Plk4, which are predicted to be restricted to the centrosome. Plk4 dimerization has never been reported in live cells. Here, we provide evidence for Plk4 dimerization in live human cells using single-molecule confocal spectroscopy techniques. Our experimental findings corroborate the major predictions of the Plk4 model; namely, Plk4 indeed forms dimers in the centrosome of living cells, which is consistent with the proper regulation of its activity at the centrosome.
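A minimal numerical caricature of such a dimerization-mediated regulation loop might look like the following. The species, reactions, and rate constants are illustrative placeholders, not the authors' fitted two-compartment model.

```python
# Illustrative ODE sketch of dimerization-mediated trans-autophosphorylation
# (all species and rate constants are hypothetical):
#   inactive monomer I: synthesized at rate s, consumed by dimerization ka*I^2
#   dimer D: phosphorylates and dissociates at rate kp into active monomers
#   active monomer A: degraded at rate kd (degron phosphorylation)
def step(state, dt, s=1.0, ka=0.5, kp=0.2, kd=0.3):
    I, D, A = state
    dI = s - 2 * ka * I**2           # two monomers consumed per dimer
    dD = ka * I**2 - kp * D
    dA = 2 * kp * D - kd * A          # two active monomers released per dimer
    return (I + dI * dt, D + dD * dt, A + dA * dt)

state = (0.0, 0.0, 0.0)
for _ in range(200_000):              # forward-Euler integration to steady state
    state = step(state, dt=1e-3)
I, D, A = state
```

At steady state the analytic balances give I = sqrt(s / (2 ka)), D = ka I^2 / kp and A = 2 kp D / kd, which is the kind of "major species at steady state" question such a model can answer.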

On the evolution of western modern art: Characterizing a paradigm shift in the production of painting artists.

ABSTRACT. In this paper we characterize a paradigm shift in modern Western (European) painting between the late 1800s and early 1900s, in which avant-garde movements and more abstract concepts developed broadly across painting. On the one hand, this transition can be measured in terms of complexity measures of the painters' community, such as the number of artists and their heterogeneity, focusing on art movements and art specialties by painter. On the other hand, we show that the cultural trait of art production can be traced by extracting concepts from pictures via Image Content Analysis and then applying Structural Topic Modeling to these concepts, in order to see how painters developed emergent ideas in their works in this revolutionary epoch. This work provides evidence that transitions in sociocultural evolution can be described in terms of complexity measures of systems, and shows how certain cultural traits change in response to changes in other critical variables that motivate them.

Sensory substitution devices as a means to assess dynamic systems approaches to cognition

ABSTRACT. The dynamic systems approach to cognition has gained momentum in the last decades, with research from a wide scope of different disciplines [1]. Here we outline the theoretical reasons for conducting three experiments to empirically develop this perspective, currently under development in the 4E Cognition Group of the National Autonomous University of Mexico. We make use of the “Enactive Torch” (ET), a sensory substitution interface equipped to give real-time data of human sensorimotor behavior [2]. The ET allows for the collection of time series of human movement and sensation by taking the input of an infrared sensor and converting it into a vibrational motor output delivered to the user’s hand, so that the presence of distal objects can be felt via the skin. In that way, the ET makes it possible to control the distal sensory input of the user. The experiments we propose are: 1. Multi-scale time series collection: The long-term effects of sensory substitution devices have been studied as augmented perception, and certain effects on behavior have been identified [3]. Nevertheless, their use as a gateway to understanding the multi-scale dynamics of human behavior is yet to be exploited. Although many features of behavior are well captured as dynamical phenomena [4], their long-term dynamics at the scale of months, or at multiple scales, are still poorly understood. We propose to compare the learning rate obtained in short-scale experiments with the learning rates of iterated trials (separated by a fixed number of days) of the same experimental setup, after training with the device between trials. The study of scale-free properties could serve the purpose of finding long-term memory in motor activity associated with changes in perception, and of correlating it with perceptual efficiency in the sensorimotor tasks. 2.
Sensorimotor complexity of perception: Although transfer entropy has been employed in robotics [5] and artificial life studies [6], the methods used to collect information in real time from sensors and actuators are rarely applicable to humans. Analyzing the minimal complexity features for acquiring a perceptual skill as the emergent effect of sensorimotor contingencies would require very complex data collection, but this can be achieved via the ET. In this experiment, we devise a navigation task in which the participants need to use the ET to avoid obstacles, and apply information-theoretical methods to study the emergent complexity of the sensorimotor dynamics needed to exploit distal perception [5,7]. 3. Intentional behavior as a complex system: According to [8], intentional behavior can be characterized as a complex system. Nevertheless, the effect of intentional movement under identical information exposure has not been studied with sensory substitution interfaces. In the third experiment, we expose two subjects to the same sensory array using the ET. One of them can move volitionally and the second follows the movement in passive mode. Then, we measure their success in a perceptual decision task.

Time series analysis of human sensorimotor behavior using human-computer interfaces

ABSTRACT. Sensorimotor theory conceives perception as an emergent effect of interaction dynamics between brain, body and environment, as opposed to passive computing [1]. This view holds that perception is an active behavior that agents perform to cope with their environment, arising from sensorimotor coupling, so that action and perception in biological and artificial agents can be understood as parts of the same dynamic process. Moreover, sensorimotor theory suggests that the quality of perceptual capacity is constituted by sensorimotor contingencies; that is, by lawful relations between motor behavior and associated sensory stimuli. The question of how to quantify the acquisition of a perceptual skill as the result of mastering a sensorimotor contingency has attracted widespread interest among cognitive scientists, and has led to information-theoretical [2] and dynamic systems accounts of embodiment [3]. This, however, implies the collection of data from both sensation and motor behavior, and has sparked interdisciplinary research in robotics [2], artificial life [3], philosophy of cognition [4] and psychology [5,6]. Psychology has followed unconventional research methods, namely the use of sensory-substitution interfaces to study human behavior. Here we analyze the time series of arm movement and sensory output collected from human-machine interaction experiments to find empirical support for sensorimotor theory. For that purpose, we use a distal-haptic sensory substitution interface named the “Enactive Torch” (ET). The ET transforms the input from an infrared sensor into a motor output, such that a participant pointing the ET at a distant object receives a vibration in proportion to that object’s distance. For our experiment, we devised a modified version of the maze navigation task used in [7]. Blindfolded participants were instructed to follow target sounds located in the corners of a closed space full of obstacles by using two ETs.
The subjects were told they would “win” if they managed to pass five intervals without stopping or touching anything; otherwise the experiment stopped after 40 minutes of navigation. Task success was measured as the number of intervals passed without error. To analyze the time series, information-theoretical methods were used to compute the transfer entropy between motor and sensory signals within and between the two ET devices. This method is used to measure information flows in complex systems [8] and was proposed to quantify embodiment in a previous study using a quadruped robot [2]. We also use a complexity measure [9], in this case for the emergence of movement dynamics when they are modulated by sensory input, and their correlation with task proficiency. Our research highlights the complexity of human behavior as studied through time series, and the use of human-computer interfaces as a means to collect complete sensorimotor data from human subjects of a kind normally only available from experiments in robotics.
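For readers unfamiliar with the measure, a plug-in estimate of transfer entropy for discrete signals can be sketched as below. The binary toy signals stand in for the (continuous, presumably binned) motor and sensory time series of the actual experiment.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y), history length 1, in bits:
    sum over (y_next, y_now, x_now) of
    p(joint) * log2( p(y_next | y_now, x_now) / p(y_next | y_now) )."""
    trip = Counter(zip(y[1:], y[:-1], x[:-1]))
    pair_yx = Counter(zip(y[:-1], x[:-1]))
    pair_yy = Counter(zip(y[1:], y[:-1]))
    hist = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in trip.items():
        p_full = c / pair_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_hist = pair_yy[(y1, y0)] / hist[y0]   # p(y1 | y0)
        te += (c / n) * np.log2(p_full / p_hist)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000).tolist()  # toy "motor" signal
y = [0] + x[:-1]                       # "sensory" signal: copies x with one-step lag
```

Because y simply copies x one step later, the estimated flow X → Y approaches 1 bit per step while the reverse direction stays near zero, which is the asymmetry the method exploits.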

Characteristic Spatial Interaction between Shops and Facilities in Japan

ABSTRACT. We empirically investigate the spatial distributions of shops and facilities (hereafter called establishments) observed in Japanese telephone directory (Yellow Pages) data on a nationwide scale. This data set contains comprehensive individual listings of about 7 million establishments (nearly all shops, firms, hospitals, schools, parks, etc.). The name, address, latitude and longitude, phone number, and industrial sector of each establishment are also included. The industrial sectors are divided into 39 categories, and each category is further divided into 735 subcategories. This allows us to study and discuss systematically the geographic concentrations that are associated with various aspects of agglomeration. In order to measure the concentration of different types of subcategory, we use the M index introduced by Marcon and Puech (2010). For each establishment of subcategory A, we count the total number of establishments (t) and the number of establishments of subcategory B (n) located within a distance r from it. We compute the ratio n/t relative to the global ratio N/T, where N and T are, respectively, the number of establishments of subcategory B and the total number of establishments in the whole region. The M index between A and B is defined as the average value of this ratio over all establishments of subcategory A. If the M value is larger than 1, establishments of subcategory B are relatively more concentrated within a distance r of subcategory A than in the whole area. The spatial interaction between A and B can then be defined as I = log(M), which is positive if there is attraction and negative otherwise. To identify characteristic location patterns of different types of establishment, we characterize subcategories by a 735-dimensional vector whose elements consist of the values of I. Then we perform a cluster analysis with Ward's method using Euclidean distance and classify the subcategories into groups. The obtained dendrogram illustrates the hierarchical structure and defines groups at different levels.
We show that the obtained groups differ from the category classification and help to characterize the spatial distributions of establishments, implying that important spatial information about urban structure can be extracted from the geographic interactions between different types of establishment.
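A brute-force sketch of the M index and the interaction I = log(M), run on toy coordinates (the clustered point sets are invented; the study uses real establishment locations):

```python
import numpy as np

def m_index(points_a, points_b, points_all, r):
    """Marcon-Puech M index of subcategory B around subcategory A:
    the average over establishments of A of the local ratio n/t,
    divided by the global ratio N/T."""
    N, T = len(points_b), len(points_all)
    ratios = []
    for p in points_a:
        t = np.sum(np.linalg.norm(points_all - p, axis=1) <= r) - 1  # exclude self
        n = np.sum(np.linalg.norm(points_b - p, axis=1) <= r)
        if t > 0:
            ratios.append((n / t) / (N / T))
    return float(np.mean(ratios))

# Toy example: subcategories A and B co-located in one cluster, C far away
rng = np.random.default_rng(0)
a = rng.normal(0, 0.5, (5, 2))
b = rng.normal(0, 0.5, (5, 2))
c = rng.normal(100, 0.5, (10, 2))
everything = np.vstack([a, b, c])
m = m_index(a, b, everything, r=10.0)
i = np.log(m)  # positive: B is attracted to A
```

Here every A point sees 5 B neighbours out of 9 establishments within r, against a global share of 5/20, so M = (5/9)/(1/4) > 1 and I > 0, matching the attraction reading in the abstract.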

Embodied language and the endo/exo effect: Toward a big data analysis

ABSTRACT. There is increasing evidence for the complex interactions between language and embodiment. One example is oral movements, which are related to digestive functions and speech. Ingestion and expectoration are the oldest oral functions to have emerged. Outwardness is associated with expectoration whilst inwardness is associated with ingestion. These movements can be placed on a sagittal plane ranging from the lips to the rear of the mouth, where consonantal strictures can also be located as precise articulation points. Topolinski et al. (2014b) showed that participants in experiments preferred inward words (the “endo effect”) over outward words (the “exo effect”) when presented with positively-associated words, whilst the effects vanished when presented with negatively-associated words. This preference was modulated by oral affordance (e.g. lemonade is to-be-ingested whereas mouthwash is to-be-expectorated). Yet, when an object is situationally induced, it can elicit both effects, as happened when participants were presented with bubblegum. We wish to construct a computational tool to analyze the contents of, which includes a large fish-related noun corpus in several hundred languages, and to label exemplar species as to-be-ingested or as to-be-expectorated even if reports of such exemplars’ suitability for use as food are lacking or nonexistent. For this purpose, we have begun codifying certain phonation repertoires and their sagittal plane of consonantal strictures, and then testing for inwardness or outwardness over a set of archetypal words associated with positive, neutral and negative attitude objects. German was the first language to be codified, along with a small set of words (for instance: Milch, Papier, Gift). Once testing is over, the results may have to be further analyzed based not only on certain dialectal variations, but also on the phylogenetic aspect of the selected words.
This effect may be seen as a novel instrument for the analysis of language, semantics and phonation regulation.

Complex musical imagination in an ecological landscape: an epistemological framework for neuromusicology

ABSTRACT. The suggestion that cognition has to be explained in terms of the organism-environment system (with affordances for action), rather than by localizing functions to specific brain structures, has been solidly established. This is a result of the maturation and viability of enactivism as a legitimate alternative to traditional computationalist approaches. In this presentation we explain why an enactive neuroscience is needed to give a better explanation of the first-person experience of musical imagination. We argue for such a claim based on problems that have arisen in cognitive neuroscience with the functionalist (and uni-causal) project, given that structural connectivity strongly suggests that the functioning of the entire nervous system in interaction is dynamic, changing over time and involving complex causation. We explore this in the context of the problem of trying to determine the function of a brain area underlying musical imagination. We argue that neuroscientists need to look at the agent as characterized by a circular causality of global and local processes of self-regulation in an ecological and social environment, rather than following cognitive psychology in its analysis of isolated psychological functions. We point out how the dynamical conceptualization of causality, described as multidirectional across multiple scales of organization, may be a better explanatory unit. Also, an epistemological notion of the first-person experience of musical imagination will be introduced, defined in terms of the insufficiency of any single descriptive modality to provide a complete description of this kind of complex system.

A second aim of our paper is to develop a particular definition of Motor Musical Imagination (MMI) based on this enactive neuroscience perspective. Following this perspective (Thompson and Varela, 2001; Froese et al. 2014; Barandiaran, 2016), our argument centers on the idea of the inseparability of imagination and motor engagement in the nervous system (in interaction with a musical environment). We argue that Motor Musical Imagination is best understood in terms of imaginative affordances in the context of the agent’s ongoing skillful engagement with the musical environment. MMI involves the whole living body of the organism, and is elicited by relevant possibilities for action in the environment that matter to the organism. We argue that the weight of evidence has now shifted in support of the view that MMI can be explained by appealing to principles of nonlinear dynamics, which introduce emergent thought and an epistemic, relational point of view. We review current research on structural and functional brain organization in music cognition that defends a one-directional causal explanation, and we reject neuroreductionism in neuromusicology, arguing that neurodynamical theories, which propose a bidirectional or reciprocal relationship between embodied experience and MMI, provide a coherent framework for understanding this kind of first-person experience. Finally, we also identify a research agenda that naturally arises from our proposal. In this way we hope to provide an impetus for musical cognitive neuroscientists to pursue an enactivism-inspired research program.

Cherry-Picking Sensors and Actuators in Topologically-Evolving and Uncertain Dynamic Networks
SPEAKER: Ahmad Taha

ABSTRACT. A defining feature of dynamic networks is the prevalence of reliable realtime sensing and actuating devices---sensors sampling physical data in realtime and actuators driving networks to a specific state given the sampled data. For example, water quality sensors measure contamination levels in tanks and pipes in drinking-water networks. This information is utilized to determine immediate control signals for actuators such as decontaminant injections or valves flushing out contaminated water. Other dynamic networks such as smart energy systems, transportation and gene regulatory networks operate in a similar fashion---they all obtain information from sensors and thereby determine optimal signals for actuators to follow.

Systems-theoretic studies addressed a plethora of problems that explore optimal sensing and actuating algorithms. These classical studies, however, have two major limitations. First, the combinatorial selection of sensors and actuators (SaA) given the realtime physics and uncertainty is often ignored---all SaAs are activated which results in higher energy costs and oversampling. Second, the network topology is often static, that is, the selection of SaAs does not consider changing network structure. Physics-based studies have shown that network-level objectives can be met with fewer sensors and actuators, hence learning to activate specific sensors or actuators has plenty of merits. This work focuses on the interplay between the physical dynamics and the network evolution through the realtime SaA selection in uncertain dynamic networks.

With a large number of SaAs in uncertain complex networks, four research problems are investigated. (P1) How can the most influential SaAs be identified, and how does their selection change as network conditions evolve? (P2) Given a specific network-level metric such as stability, minimal energy, or resiliency against attacks, how can SaAs be optimally selected in realtime? (P3) How does the network metric vary as the topology of the network evolves? (P4) How can key infrastructures such as drinking-water networks and distributed energy systems benefit from the aforementioned theoretical advancements?

The objective of this work is to initiate a model that addresses the above questions, while providing preliminary interpretations and answers to some of these challenging problems. The outcome is a new, hybrid network-dynamic mathematical model that formulates the above questions via tractable computational algorithms. First, we prove that the combinatorial problem of selecting SaAs can be relaxed into a tractable convex optimization routine. Second, we illustrate that robust selections can be obtained if the network evolution is bounded. Third, numerical simulations on random networks with unstable nodes show that the proposed solutions are able to bound the optimal combinatorial solution to the SaA selection problem in evolving networks. Finally, real-world applications of the model to power and water networks are discussed.
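As an illustration of the kind of combinatorial SaA selection the abstract relaxes, the sketch below greedily activates sensor rows that maximize the log-determinant of a finite-horizon observability Gramian, a common surrogate for estimation quality. The dynamics matrix, candidate sensors, and the greedy heuristic are all illustrative assumptions, not the authors' algorithm (which uses a convex relaxation of the combinatorial problem).

```python
import numpy as np

def obs_gramian(A, C):
    """Finite-horizon observability Gramian: sum_k (A^k)^T C^T C A^k, k = 0..n-1."""
    n = A.shape[0]
    W, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(n):
        W += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return W

def greedy_sensor_selection(A, candidate_rows, k):
    """Greedily pick k of the candidate sensor rows, maximizing the log-det
    of the (regularized) observability Gramian."""
    n, chosen = A.shape[0], []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(candidate_rows)):
            if i in chosen:
                continue
            C = np.array([candidate_rows[j] for j in chosen + [i]])
            _, val = np.linalg.slogdet(obs_gramian(A, C) + 1e-6 * np.eye(n))
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen
```

A convex relaxation would replace the greedy loop with an optimization over relaxed binary selection variables; the greedy version is shown only because it is self-contained.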

Dynamical analysis of agents that were evolved for referential communication

ABSTRACT. Referential communication is a complex form of social interaction whereby agents manage to coordinate behavior with respect to features that are not immediately present during their interaction. A famous example from nature is the bee waggle dance. Williams et al. (2008)[1] proposed an evolutionary robotics approach to create an agent-based model in which referential communication emerges from evolution and the dyadic interaction between the agents. With this approach as inspiration, we propose a model of reduced complexity that permits a full dynamical analysis, while remaining complex enough that the results provide a useful perspective on the processes that could be involved in natural referential communication. We also use the same structural copy of the artificial neural network (a Continuous Time Recurrent Neural Network, Beer (1996)[2]) to control the sensorimotor system of each agent, in order to stay closer to the natural example, where bees can take on different roles during their lifetime. The task is for two embodied agents to interact in a “hive” area such that one of the agents (the receiver) is able to move to a specific “target”, the location of which is known only to the other agent (the sender). The task implicitly requires adopting the right role (sender vs. receiver), disambiguating between translational and communicative motion, and switching from communicative to target-seeking behavior. Similar to the waggle dance, the best solution involved a correlation between duration of contact and distance to be traveled. The full dynamical analysis revealed surprising results: (1) There is only one fixed-point equilibrium attractor, which is in different positions for each role. (2) The position of the attractor changes while the agents move in space and interact with each other. (3) Their behavior cannot be attributed to the agents in isolation.
(4) The separate roles are clearly distinguishable from the time series of the neural states of both agents while they interact with their environment and with each other. Our model, therefore, reveals that referential communication can be studied as a complex system at the level of the sender-receiver interaction as a whole.
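The agents' controller is a Continuous Time Recurrent Neural Network; a minimal Euler-integrated sketch of the standard CTRNN equation is given below. Network size, weights, and parameters are placeholders, not the values evolved in the study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, theta, tau, I, dt=0.01):
    """One Euler step of the CTRNN equation: tau * dy/dt = -y + W @ sigmoid(y + theta) + I."""
    return y + (dt / tau) * (-y + W @ sigmoid(y + theta) + I)

# Two agents sharing the same structural copy of the controller would each run
# their own instance of (W, theta, tau); a single network is stepped here.
rng = np.random.default_rng(0)
n = 3                                  # placeholder network size
W = rng.normal(0.0, 1.0, (n, n))       # placeholder weights (the study evolves these)
theta, tau = np.zeros(n), np.ones(n)
y = np.zeros(n)
for _ in range(1000):
    y = ctrnn_step(y, W, theta, tau, I=np.zeros(n))
```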

A computational model of developing neuronal circuits driven by activity-dependent plasticity

ABSTRACT. A developing brain can be thought of as a dynamical system in delicate balance. From birth to maturation, circuits of neurons in the cortex have to change dramatically how they are wired, without interrupting the basic functions of the brain, like sensing and responding to stimuli. Figuring out how these changes take place is an active area of research in neuroscience. Novel genetic manipulations and optical techniques allow the creation of detailed connectivity maps at different points in time, but it is nearly impossible to study the slow structural transitions from one developmental stage to the next experimentally. Here, we use computer models to test the hypothesis that these changes might be facilitated by neuronal activity-dependent mechanisms. In particular, we focus on an early thalamocortical circuit between neurons in the thalamus and two neuronal populations in the cortex (somatostatin-positive (SST+) interneurons in layer 5b and spiny stellate neurons (SSNs) in layer 4). This circuit is transient: shortly after the first postnatal week, these connections disappear, giving way to more mature brain circuitry.

We built a thalamocortical circuit model of 1700 integrate-and-fire neurons, comprising thalamic, SST+, and SSN cells, to study the effect on the network structure of varying activity levels, the reversal potential of GABA (EGABA), and the two spike-timing-dependent plasticity (STDP) rules. Of the 21,600 parameter combinations we tested, 21.7% allowed the network connectivity to evolve similarly to the experimental results, indicating the emergence of robust architectural features. The crucial parameters for successful network construction were the GABAergic reversal potential and the parameter controlling the excitatory plasticity rule’s bias towards strengthening or weakening a connection weight. Interestingly, the initial structure and synaptic weights of cortical populations were a key factor in predicting the fate of the network. Our results indicate that in addition to genetically encoded connectivity changes, activity-dependent mechanisms -- as well as the current structure of the network itself -- are substantially contributing factors in the creation of functional networks in the cortex. As a next step, we can now predict the mechanistic effects of cell-specific or network-wide pharmacological or genetic manipulations, such as knocking out genes, blocking specific transmitters, or transecting a nerve. These predictions can then guide experiments to further understand how these changes lead to deviations from normal brain development in structure and activity.
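A minimal sketch of the model's two ingredients, a leaky integrate-and-fire neuron and a pair-based STDP rule acting on one synapse, is given below. All constants (time constants, learning rates, input statistics) are illustrative placeholders, not the parameters of the 1700-neuron thalamocortical model.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 1.0, 1000                          # ms per step, 1 s total
tau_m, v_th, v_reset = 20.0, 1.0, 0.0          # membrane parameters (placeholders)
tau_stdp, a_plus, a_minus = 20.0, 0.01, 0.012  # a_minus > a_plus biases weakening

w = 0.5                    # one plastic synapse, pre -> post
x_pre, x_post = 0.0, 0.0   # exponentially decaying spike traces
v = 0.0
for _ in range(steps):
    pre_spike = rng.random() < 0.02                        # ~20 Hz Poisson input
    v += dt / tau_m * (-v) + w * pre_spike + 0.05 * rng.random()
    post_spike = v >= v_th
    if post_spike:
        v = v_reset
    x_pre += -dt / tau_stdp * x_pre + pre_spike
    x_post += -dt / tau_stdp * x_post + post_spike
    # pair-based STDP: potentiate when post follows pre, depress when pre follows post
    w += a_plus * x_pre * post_spike - a_minus * x_post * pre_spike
    w = min(max(w, 0.0), 1.0)                              # hard weight bounds
```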

Emergence of DNA replication by dissipation of UV-C light

ABSTRACT. This work aims to establish a theoretical basis for a primitive mechanism of enzymeless RNA and DNA denaturing leading to replication during the Archean. This mechanism, called UVTAR, is associated with the dissipation of a generalized chemical potential, the solar photon potential. The dissipative structuring of the molecules now known as the fundamental molecules of life can explain the emergence and proliferation of life in physical-chemical terms. This theoretical work consists of understanding the UV-C photon-DNA interaction process on time scales as short as fractions of picoseconds to nanoseconds, and of simulating with molecular dynamics, using empirical potentials, the dissipation of excitation energy along the double helix of a 25-base-pair DNA molecule due to the absorption of a photon (260 nm) in a single base pair. The absorbed energy of the photon breaks the hydrogen bonds that bind the complementary single strands of DNA, thus beginning the basic process of denaturing. In addition, the work considers all possible forms of energy transfer along the DNA, such as the formation of excitons and charge transfer along the chain, which could be important in fixing the initial conditions of the replicative-dissipative system based on the absorption of UV-C light. Experimental data exist favoring the proposed mechanism.


Mathematical model for the temporal organisation and envelope of Ca2+-spike trains in sea urchin sperm flagellum

ABSTRACT. Fertilisation is one of the most important events for sexually reproducing species. Organisms with external fertilisation, such as sea urchins, have been widely used as models for studying processes relevant to reproduction. The composition of the signalling network responsible for steering sea urchin spermatozoa in response to egg-released peptides, known as SAPs (Sperm-Activating Peptides), remains largely unresolved. It is by now clear that upon stimulation by SAPs, several interconnected electrophysiological and biochemical events ensue within a sperm cell: increases in cyclic nucleotides (e.g. [cGMP]) and intracellular pH (pHi), as well as membrane potential changes caused by regulated ionic fluxes of K+, Na+ and Ca2+. The upshot of these events is fluctuations in intracellular Ca2+ concentration ([Ca2+]i) that control flagellar beating asymmetry, which in turn steers the cell. Here we used a differential-equations model of the signalling network to ask which set of channels can explain the characteristic envelope and temporal organisation of [Ca2+]i-spike trains. The signalling network model comprises an upstream module that links SAP activity to downstream cGMP and membrane potential via receptor activation, cGMP synthesis and decay, hyperpolarisation and repolarisation. The outputs of this module were fitted to kinetic data of cGMP activity and the early membrane-potential response measured in bulk cell populations. Two candidate modules featuring voltage-dependent Ca channels link these outputs to the downstream dynamics, and each can independently explain the characteristic decaying envelope and the progressive spacing of [Ca2+]i spikes. [Ca2+]i spike trains are explained by the concerted action of a classical CaV-like channel and BK in the first module, and by pH-dependent, [Ca2+]i-inhibited CatSper dynamics alone in the second module.
The model predicts that these two modules would interfere with each other to produce unreasonable dynamics, which suggests that one of the modules may predominate over the other in vivo. We further show that [Ca2+]i dynamics following sustained alkalinisation, or in the presence of low extracellular [Ca2+], would make it possible to identify whether the CatSper module or a pH-independent CaV-and-BK module predominates.

Self-modeling in continuous-time Hopfield neural networks
SPEAKER: Mario Zarco

ABSTRACT. Discrete-time Hopfield neural networks, whose dynamics present multiple fixed-point attractors, have been widely used in two settings: associative memory (Hopfield, 1982), based on learning a set of training patterns which are represented by attractors formed as the weights are updated, and optimization (Hopfield & Tank, 1985), based on mapping a constraint satisfaction problem onto the network topology such that the attractors represent solutions to that problem. In the latter case, the network energy function has the same form as the function to be optimized, so that minima of the former are also minima of the latter. Although it has been proven that low-energy attractors tend to have large basins of attraction (Kryzhanovsky & Kryzhanovsky, 2008), networks usually get stuck in local minima. Watson, Buckley, and Mills (2011b) have demonstrated that discrete-time Hopfield networks can converge onto globally optimal attractors by enlarging the best basins of attraction. The network combines reinforcing its own attractors by Hebbian learning, hence increasing their basins of attraction, with randomizing the neuron states once the network has learnt its current configuration. Given that the global optimum has sub-patterns in common with many local optima, reinforcing low-energy attractors through learning has the potential of simultaneously reinforcing lower-energy attractors even before the network converges onto the latter for the first time (Watson, Buckley, & Mills, 2011a). This so-called self-modeling process was restricted to symmetric weight matrices without self-recurrent connections, so as to ensure the existence of only fixed-point attractors, and therefore the decrease of energy as the network relaxes into a stable state. However, these conditions narrow the space of possible complex systems that could be represented by the network.
In this work, we face the challenges involved in relaxing the constraints of this self-optimizing process by using continuous-time Hopfield neural networks with asymmetric weight matrices and self-recurrent connections. Continuous-time Hopfield neural networks can exhibit many different limit sets depending on their topology (Beer, 1995). Also, using a continuous activation function has important consequences for the network dynamics when Hebbian learning is applied. According to Zhu (2008), attractors move toward the corners of the phase-space hypercube as the patterns learned by the network are reinforced. Although the attractors are no longer binary, nor stabilized in the same way as in discrete Hopfield networks, the associative memory allows the continuous Hopfield network to generalize over the learned patterns such that reinforcing local optima also reinforces superior optima with regard to constraint satisfaction. Here we show that the self-modeling process can exploit the structure of the network in order to find globally optimal configurations, even though a positive correlation between the energy of attractors and the number of satisfied constraints was not found.
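For readers unfamiliar with the self-modeling procedure of Watson et al., the following sketch shows the discrete-time version that this work generalizes: relax from a random state to an attractor, then reinforce that attractor with a small Hebbian update. Network size, learning rate, and iteration counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
W = rng.choice([-1.0, 1.0], size=(n, n))
W = (W + W.T) / 2.0                       # symmetric, per the original scheme
np.fill_diagonal(W, 0.0)                  # no self-recurrent connections

alpha = 0.0005                            # small Hebbian learning rate
W_learned = W.copy()
for _ in range(200):
    s = rng.choice([-1.0, 1.0], size=n)   # randomize the neuron states (restart)
    for _ in range(10 * n):               # asynchronously relax to an attractor
        i = rng.integers(n)
        s[i] = 1.0 if W_learned[i] @ s >= 0.0 else -1.0
    W_learned += alpha * np.outer(s, s)   # reinforce the attractor just visited
    np.fill_diagonal(W_learned, 0.0)
```

The present work drops the symmetry and zero-diagonal constraints and replaces the discrete update with continuous-time dynamics; this discrete sketch only conveys the relax-and-reinforce loop.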

A novel method to assess Caenorhabditis elegans pharyngeal pumping time series through Digital Image Processing as a measure of functional decline in disease and ageing

ABSTRACT. A biological system is characterized by a set of interdependent scales which interact non-linearly. It has therefore been proposed that the dynamics of physiological variables reflect the underlying modulation mechanisms. Among all physiological variables in humans, heart rate variability (HRV) is the most studied and has been shown to serve as an independent predictor of mortality in some chronic degenerative diseases. Our research team proposes Caenorhabditis elegans as an animal model to explore the relationship between the dynamics of physiological variables and the underlying modulation mechanisms. Here we aim to test the hypothesis that the variability of pharyngeal pumping could be a relevant index of functional decline in disease and ageing in this organism. In C. elegans, feeding is achieved through pharyngeal muscular contractions (pharyngeal pumping) controlled by pacemaker neurons. Its pharynx has been compared to the human heart because of their similar electrical properties and development, which are controlled by homologous genes. Furthermore, the C. elegans lifespan is only around two weeks; hence physiological alterations can be visualized over the course of ageing. Age-related changes in tissue morphology and function, and a decline in C. elegans health, are strongly correlated with a reduction in pharyngeal pumping rate (number of pumps / total recording time) and thus with a decline in survival probability. Traditionally, pharyngeal pumping has been assessed by eye, and therefore the underlying variability has not yet been taken into account. To obtain pharyngeal pumping time series, C. elegans individuals were filmed with a high-speed camera coupled to a microscope. The videos were then segmented through Digital Image Processing (DIP). The change in the area comprising the pharyngeal grinder (the contracted grinder comprises fewer pixels than the relaxed one) was used to construct the time series.
A crucial step in the statistical analysis of the time series is to separate the worm's body movement (trend) from the pharyngeal pumping (fluctuations). The statistical method Singular Spectrum Analysis (SSA) allows one to separate trend from fluctuations self-consistently and data-adaptively. In this work we established an experimental and theoretical method for measuring C. elegans pharyngeal pumping events with the aid of DIP and based on time-series self-similarity.
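A minimal SSA decomposition of the kind described — embed the series in a trajectory matrix, take its SVD, and Hankelize each rank-1 term back into a series — can be sketched as follows. The window length and the synthetic trend-plus-oscillation signal are illustrative stand-ins for the pharyngeal recordings.

```python
import numpy as np

def ssa_components(x, L):
    """Decompose series x into additive SSA components via SVD of the trajectory matrix."""
    N, K = len(x), len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])    # L x K Hankel trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # anti-diagonal averaging turns Xk back into a length-N series
        yk = np.array([Xk[::-1].diagonal(i).mean() for i in range(-L + 1, K)])
        comps.append(yk)
    return comps

# synthetic stand-in: slow trend (body movement) plus fast oscillation (pumping)
t = np.linspace(0.0, 10.0, 500)
x = 0.5 * t + np.sin(2.0 * np.pi * 5.0 * t)
comps = ssa_components(x, L=50)
trend = comps[0] + comps[1]   # the leading components capture the slow tendency here
```

The components are additive by construction, so summing all of them recovers the original series; separating trend from fluctuations amounts to grouping the leading components.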

Lender Roles in Online Peer-to-Peer Lending Networks

ABSTRACT. Online peer-to-peer (P2P) platforms of economic exchange enable users to take advantage of an excess capacity of resources (goods, capital, time, etc.) through free or fee-based sharing directly between individuals. P2P lending platforms such as Prosper, Lending Club, and Bitbond provide users who are in need of funds (borrowers) access to users who have idle capital and are interested in lending it (lenders). Like other types of decentralized platforms, online P2P lending markets offer participants a high degree of autonomy to participate as they wish: lenders can choose how much to lend, when, and to whom, according to a variety of relevant data that they themselves interpret. Yet despite the basic division between lenders and borrowers, little is known about lenders’ distinct paths to participation. These paths are important because they are tied to lenders’ investment decisions, and can have a significant impact on borrowers - many of whom are unable to find funding elsewhere if they are underbanked or unbanked. This study uses loan transaction logs from a P2P Bitcoin lending network, Bitbond, to identify four structural-behavioral roles enacted by lenders. We then examine how these emergent roles reflect different investment strategies with implications for the lending community at large.

Data from 5,819 loan transactions were collected through Bitbond’s API and website. This data was used to create a static, directed graph where source nodes represented lenders, target nodes represented borrowers, and edges represented the loans that flowed between them. In order to identify structural-behavioral roles, we first partitioned the network into modules using modularity optimization techniques. We then classified lender nodes into four structural-behavioral roles - Provincial Lenders, Connector Lenders, Non-Hub Lenders, Peripheral Lenders - according to their patterns of intra- and inter-module activity. Each plays a different structural role in the network; for example, Provincial Lenders are important to the coherence of their particular communities, whereas Non-Hub Lenders are important for network coherence.

We found that lenders in different roles tended to invest in different numbers of loans, of differing amounts, and of differing quality. For example, Non-Hub Lenders were relatively conservative: they made few, low-risk loans that were relatively small, yet they had a high rate of repayment and lost relatively little on average. Provincial Lenders, on the other hand, made many relatively large loans that had a moderate rate of success in terms of repayment, though they still lost a high amount on average. Taken together with what we know about lenders’ structural roles within the network, these results can help us to understand the value of different types of lenders to a peer-exchange community, and how they may be engaged to improve the welfare and sustainability of the larger community. At a higher level, this study is an important first step in a research agenda for exploring P2P lending systems as socio-technical systems of networked exchange.
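The role assignment can be sketched with a Guimera-Amaral-style classification based on node degree and a participation coefficient computed over precomputed modules. The thresholds and the hub criterion below are hypothetical stand-ins for the paper's actual classification rules, and module membership is assumed to come from a prior modularity optimization.

```python
import numpy as np

def participation(adj, modules, n):
    """1 - sum over modules of (within-module degree / total degree)^2."""
    nbrs = adj[n]
    if not nbrs:
        return 0.0
    counts = {}
    for nb in nbrs:
        counts[modules[nb]] = counts.get(modules[nb], 0) + 1
    return 1.0 - sum((c / len(nbrs)) ** 2 for c in counts.values())

def lender_roles(adj, modules, lenders, p_cut=0.3):
    """Classify lenders by degree (hub vs non-hub) and participation coefficient."""
    degs = np.array([len(adj[n]) for n in lenders], dtype=float)
    hub_cut = degs.mean() + degs.std()        # hypothetical hub criterion
    roles = {}
    for n in lenders:
        k, p = len(adj[n]), participation(adj, modules, n)
        if k > hub_cut:
            roles[n] = "Connector Lender" if p >= p_cut else "Provincial Lender"
        else:
            roles[n] = "Non-Hub Lender" if p >= p_cut else "Peripheral Lender"
    return roles
```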

What drives transient behaviour in complex systems?
SPEAKER: Jacek Grela

ABSTRACT. We study transient behaviour in the dynamics of complex systems described by a set of non-linear ODEs. Transient phenomena are ubiquitous in nature whenever a system is directional, and they become increasingly important in non-linear systems. Motivated by ecological (food-web) and neural-network dynamics, where interactions are intrinsically non-symmetric, we discuss robust properties of transients in these systems.

We discuss the destabilizing nature of transient trajectories and its connection with the eigenvalue-based linearization procedure. The complexity is realized as a random matrix drawn from a modified May-Wigner model. Based on the initial response of the system, we identify a novel stable-transient regime. We calculate exact abundances of typical and extreme transient trajectories, finding both Gaussian and Tracy-Widom distributions known from extreme value statistics. We identify the degrees of freedom driving transient behaviour as connected to the eigenvectors and encoded in a non-orthogonality matrix T_0. We accordingly extend the May-Wigner model to contain a phase in which typical transient trajectories are present. An exact norm of the trajectory is obtained in the vanishing T_0 limit, where it describes a normal matrix.
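The basic phenomenon — transient growth of trajectories in a linearly stable but non-normal (directional) system — can be illustrated in two dimensions. The matrix below is an arbitrary example chosen for illustration, not drawn from the May-Wigner ensemble studied in the talk.

```python
import numpy as np

# linearly stable (eigenvalues -1 and -2) but non-normal: the strong
# off-diagonal coupling makes the dynamics "directional"
A = np.array([[-1.0, 10.0],
              [0.0, -2.0]])

def expm_via_eig(M, t):
    """Matrix exponential e^{Mt} via eigendecomposition (M assumed diagonalizable)."""
    vals, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(vals * t)) @ np.linalg.inv(V)).real

x0 = np.array([0.0, 1.0])
norms = [np.linalg.norm(expm_via_eig(A, t) @ x0) for t in np.linspace(0.0, 5.0, 200)]
# the trajectory grows well above its initial norm before decaying to zero,
# even though every eigenvalue predicts monotone decay
```

Eigenvalue-based linearization alone misses this growth; it is the non-orthogonality of the eigenvectors that drives the transient, which is the effect the T_0 matrix quantifies.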

Resilience: An emergent property for complex agroecosystems
SPEAKER: Ismael Quiroz

ABSTRACT. Based on General System Theory (GST), an agroecosystem (AES) is defined as a representation of interactions between abiotic, biotic, technological, and cultural elements related to crops, where farmers define and make decisions on AES management based on their experiences and local knowledge. Resilience is an agroecosystem's capacity to recover or maintain its structure, without losing its functions, after being impacted and damaged by external phenomena. Therefore, there is a need to develop resilient systems as an alternative to diminish losses due to extreme events. The aim of this work was to analyze the concept of AES resilience as a property or attribute of complex systems. The systems approach is used as a methodological framework to find solutions to diverse problems through a conception of the world in terms of the irreducibility attached to systems, highlighting the whole as a result of complex interrelations among its elements. The AES, as an abstraction of reality, is thus composed of elements (social, environmental, economic, and technological) in an agricultural context, and its boundary is defined according to the researcher's objective. Properties emerge as a result of the interrelations of these elements, and they can be divided into structure and attributes. Resilience is an attribute expressed as a function of the vulnerability of the elements that make up the AES structure; it has the disadvantage, in some cases, of not being directly observable due to its unpredictability. In the context of agroecosystems, the complexity involved in studying resilience increases with the number of elements integrated at each hierarchical level. The study of the AES can be organized vertically, for example a geographical region, a farm, or a crop or livestock system; or horizontally, between geographical regions, farm systems, or crops.
Another feature that must be taken into account in the resilience process is the varying nature of the elements that make up the AES, because in a scenario where the system is damaged, the recovery times of the components can differ. For example, if soil suffers erosion, the formation of a few centimeters can take at least 100 years, whereas other elements, such as technology (for example an irrigation system), could be replaced in a short time. It is concluded that resilience is an emergent property of the AES, based on the interactions of the components of the system and their nature; vulnerability thus depends very much on the system's strength and the kind of event, and therefore analyzing resilience is a matter of probability.

Consequences of changes in global patterns of human interactions

ABSTRACT. We address the very rapid and extensive changes in global patterns of interaction between individuals resulting from exponential growth in the proportion of populations participating in social media and other interactive online applications.

While already significant, we anticipate further rapid growth in both participation rates and in the amount of time people spend online.

We see a number of possible consequences from these changes in social interaction networks, some of which are concerning for the future of democratic societies and for the stability of global order.

We draw insights from the scientific study of collective phenomena in complex systems. Changes in interaction patterns often bring about system re-arrangements that are sudden and transformative, through the emergence and self-amplification of large-scale collective behaviour.

In physical systems these are called phase transitions – whereby the dynamics of the system takes on significantly new characteristics and many degrees of freedom become correlated.

We see a parallel here in the possible effects of changes in human interaction patterns on the structure of global social systems – including the traditional structures of nation states, and national and cultural identities.

In particular, the instantaneous and geographically agnostic nature of online interaction is enabling the emergence of new forms of social groupings with their own narratives and identities which are no longer necessarily confined by traditional geographic and cultural boundaries.

Moreover, the growth of these new groupings is largely driven by the recommender algorithms implemented in social applications, whereby people become more and more connected to like-minded others and have less and less visibility of alternate perspectives. The trend we are concerned about is thus towards a global “factorisation” of society into large numbers of disjoint groupings, accompanied by an erosion of national identities and a weakening of the democratic base.

We study these new long-range interactions and their disruptive potential through both historical analysis and modelling of the dynamics, and draw conclusions about the risks and their consequences, and about the possibility of mitigating the risks through modest levels of policy initiatives – for example by fostering the growth of weak cross links between the emergent social structures.

Social Network Formation Based on Endowment Exchange and Social Representation
SPEAKER: Yuan Yuan

ABSTRACT. The formation and evolution of social networks is a fundamental but poorly understood problem in social network analysis. Large-scale datasets from widely adopted communication technologies, such as cellphone communications, open up new possibilities for studies of social networks. In this paper, we build a novel model of social network formation based on individual characteristics and rationality, and by leveraging large-scale datasets we test the effectiveness and predictive power of our model.

Our model contains three components. First, we utilize a key concept named endowment, well known and useful in economics, which represents all attributes of an agent, such as assets, abilities, capacities, and qualities. In our model, we represent each agent's endowments by a fixed-length real-valued vector (hereafter referred to as the "endowment vector"). Second, we use a utility function to decide whether or not a pair of individuals who happen to "interact" should form a social tie, based on the benefits and costs associated with forming a tie given their endowment vectors. An agent gains positive utility by forming social ties with people who have greater values in beneficial endowment dimensions. However, differences in some other dimensions lead to high communication costs, hindering the formation of new social ties. The utility function thus determines the willingness of each agent to communicate with another. Last, we use large-scale empirical communication datasets to infer the underlying endowment vector of each agent. Using optimization methods, we calculate the endowment vector for each individual that maximizes the likelihood of reproducing the observed ground-truth social network given our model's dynamics. As with representation learning techniques, popular in machine learning, the inferred endowments can be used to predict network dynamics and individual attributes.

The results on both synthetic and real datasets demonstrate the effectiveness of our model. The synthetic dataset is a community with 1,000 people and randomly generated endowment vectors. We simulate the dynamics based on our model. The resulting dynamics demonstrate several well-known sociological properties, such as social hierarchy and homophily. Moreover, we are able to successfully recover the underlying endowment vectors for each agent from the network dynamics based on our inference algorithm.

In addition, we use our model to fit a nationwide large-scale mobile communication network in Andorra: by fitting past communication patterns, our model was able to predict future network formation with accuracy significantly higher than that reported by competitive machine learning algorithms. Supporting the conjecture that our model provides a causal explanation for social tie formation, the learned endowment vectors can also be used to predict attributes that are likely to be related to endowments, including location, cost of phone model, cellular usage, and special event attendance. Endowment vectors learned by our model predict these attributes with better accuracy than state-of-the-art algorithms for network embedding, like DeepWalk, indicating that the learned vectors successfully capture the underlying attributes of agents. This suggests that our model of social tie formation can be used to shape the diversity and usefulness of the network.
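A toy version of the tie-formation dynamics can be sketched as follows: each agent has an endowment vector, and a tie forms on a random encounter only if both sides derive positive utility. The benefit and cost weight vectors are hypothetical, and the endowments are drawn at random here, whereas the paper infers them from data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50, 4
E = rng.normal(0.0, 1.0, (n, d))        # endowment vectors (random stand-ins)
b = np.array([1.0, 1.0, 0.0, 0.0])      # benefit weights (hypothetical)
c = np.array([0.0, 0.0, 0.5, 0.5])      # cost weights on mismatch (hypothetical)

def utility(i, j):
    """Agent i's utility from a tie with j: benefit from j's endowments minus
    a communication cost growing with their endowment differences."""
    return b @ E[j] - c @ np.abs(E[i] - E[j])

# a tie forms on a random encounter only if both sides gain positive utility
edges = set()
for _ in range(2000):
    i, j = rng.integers(n, size=2)
    if i != j and utility(i, j) > 0.0 and utility(j, i) > 0.0:
        edges.add((min(i, j), max(i, j)))
```

Inverting this generative process, i.e. searching for the E that makes the observed edge set most likely, is the inference step the paper performs on the real communication logs.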

Addictions from an enactive and complex systems perspective

ABSTRACT. Addiction has posed a challenge for therapists, health professionals, public policy planners, philosophers, and self-control researchers, due not only to the reluctance of addicted individuals to start treatment and the difficulty of maintaining abstinence in the long run, but also to some of the phenomena associated with recovery, such as increases in substance use that result from attempts to suppress thoughts related to the addictive substances, the false-hope syndrome regarding self-change, denial, spiral relapse, or weight gain. This poster suggests that the above-mentioned phenomena can be understood in the light of an approach to addictions based on recent proposals within enactivism that generalize the concepts of autonomy, autopoiesis, and adaptivity from metabolic processes to habits. According to these proposals, habits are complex structures that generate and sustain their own identity under precarious conditions. They are composed of processes involving basic bodily functions, neural mechanisms, intersubjective aspects, and interactions with the environment. One important point of this approach is that habits cannot be understood in isolation, but only as part of a complex network of regional identities that mutually enable and restrain each other, giving rise to a global form of identity (a self). Accordingly, preservation of this net of habits becomes a norm that, along with metabolism-based normativity, guides the agent's behavior. However, habits may conflict with each other and with some of the basic metabolic processes. This could explain the occurrence of so-called “bad” habits, which from an enactive perspective would not be bad in themselves, but for some of the other identities that constitute the agent, such as the metabolic and social identities. We propose to conceive of addictions as a kind of bad habit.
Under this perspective, bad habits may be difficult to eliminate because their dynamics influence the formation and maintenance of other habits, as well as the development of a global identity, making it necessary to change many other regional identities. Furthermore, some bad habits, like addictions, can be so deeply entrenched that they may affect the autonomy of metabolism, making it dependent on habits that get incorporated in the agent’s physiology. This perspective, which implies a different understanding of self and habits, also sheds light on the success of therapies such as treatments that include the use of psychedelics or mindfulness.

A new way of archiving and developing scientific software

ABSTRACT. We describe here the novel SARA Systems Analytics platform, a new form of computational support for the scientific community, both for individuals and for research groups. The main idea behind this new platform and its associated software archive is that scientific software has a very short lifetime compared to commercial software, usually because of the short funding cycles within the scientific community. Software associated with research activities rapidly becomes irrelevant because it is not kept up to date after its first release, and its distribution never reaches a wide audience because it is designed for a limited purpose. These limitations stand in contrast to the need to solve a variety of scientific computational problems of quite similar nature. Here we propose to put more effort into an international structure that works on a commercial basis, and can therefore guarantee the proper archiving and further development of scientific software, independent of funding cycles but in cooperation with scientific institutions.

SPEAKER: Ismael Quiroz

ABSTRACT. The concept of the agroecosystem represents the interaction of biotic, abiotic, technological, and cultural elements around a crop, in which the producer defines and makes management decisions based on traditional knowledge and life experience, in terms of covering market demands and social needs. However, agroecosystems have usually been approached with positivist-deterministic criteria, so knowledge was generated from a linear perspective under the assumption that, if the initial conditions of a system were known, its final global behavior could be predicted; nevertheless, phenomena such as climate change call such predictability into question, driving the functioning of agroecosystems toward imbalance. Current complex phenomena involve a variety of relationships and variables that have a negative impact on the agroecosystem, leading to the development of new approaches to tackle them. This document aims to analyze the functioning of agroecosystems as systems with emergent properties of a complex nature. In order to approach the functioning of the agroecosystem from the Luhmannian complexity perspective, it is important to clarify the cultural dynamics of the social system under study, which drive the process of social autopoiesis and the reproduction of agricultural processes. The central theme is culture seen as a factor that reproduces itself in social systems, its process of adaptation to the environment, and therefore the agricultural practices built throughout the adaptation and reproduction process, considering the economic, social, technological, political, and environmental elements present, among others. Another aspect of complex agroecosystem functionality is the state of criticality (extrapolated from the genetic to the social context), which allows the system to reorganize itself based on its properties of robustness and innovation, keeping a rational level of productivity while affecting ecosystem stability as little as possible.
The agroecosystem's controller then evolves in its practices and in the ability of the system to produce and reproduce its own elements so as to differentiate itself from its environment. In this sense, the information necessary to self-reproduce and evolve is found in the culture of the society to which the agroecosystem's controller belongs. From the preceding approach a redefinition of interdisciplinarity arises: "interdisciplinary research" under a complex-systems study approach. What integrates an interdisciplinary team for the study of a complex system is a common conceptual and methodological framework, derived from a shared conception of the relation between society and science, which allows the problems to be studied to be defined under the same approach, as a result of the specialization of each member of the research team. It is concluded that the methodological tool needed to address agroecosystem complexity is interdisciplinarity; in addition, the state of criticality stimulates the producer so that his agroecosystem is dynamic and its management is self-produced through social autopoiesis.

The structure of mythologies explains the human expansion out of Africa
SPEAKER: Hyunuk Kim

ABSTRACT. Mythologies are collections of myths, or corpora of traditional stories, that human cultures commonly use to explain our customs and place in the universe and/or the origins of the universe. Historically, comparative mythology has recognized remarkable consistencies in mythological themes and structures across the world. Often, these commonalities have been explained by one of two mechanisms: a shared common origin, or a shared, Jungian, collective imagination. This universality and these structural distinctions, however, have not been tested in a comprehensive and systematic way, as they are difficult to quantify. Here, we focus on the global distribution of mythemes, mythological motifs that are irreducible thematic units identified within and across individual myths. We analyze the statistical properties of co-occurrences of mythemes in traditions and thereby identify ten distinctive clusters of traditions. These clusters align remarkably well with biogeographic regions, as shown in Fig 1. Furthermore, information transfer from one cluster to another exhibits good agreement with human migration trajectories. The result suggests a deep evolutionary history of human mythologies, originating with modern humans in Africa < 60k yBP and diversifying as humans expanded out of Africa across the planet. Finally, we show how the underlying structures of mythemes explain the structural characteristics of each cluster and the direction of their information transfer.

Building technology space from microscopic dynamics to macro structure
SPEAKER: Hyunuk Kim

ABSTRACT. The combination of existing ideas has moved into a prominent role in technological innovation [1]. These dynamics are well captured in US patent data, where individual technological capabilities are codified in a classification system (USPC), the accumulation of which accounts for the large repertoire of combinations. Using statistical analysis, we measure the extent to which each pair of technology codes deviates from the expected random configuration (Z-score) and assign a novelty profile to each invention. Analyzing the statistics of patents' novelty profiles uncovers a secret recipe for high-impact innovation: conventional (typical) pairs of ideas with a novel (atypical) twist [2]. Building on this empirical result that inventive activity can be considered a search process for novel connections between technological building blocks, we provide a mechanistic model to explain how technology space evolves by strengthening conventionality and creating novelty. We find that the structural characteristics generated by the suggested model are in good agreement with the observed macro structure of technological innovation, including the rich-get-richer phenomenon, modular structures, and the formation of high-novelty links.

[1] H. Youn, D. Strumsky, L. M. A. Bettencourt, and J. Lobo, Journal of The Royal Society Interface 12 (2015), 10.1098/rsif.2015.0272. [2] D. Kim, D. B. Cerigo, H. Jeong, and H. Youn, EPJ Data Science 5 (2016), 10.1140/epjds/s13688-016-0069-1.
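As an illustration of the Z-score construction described above, here is a minimal sketch in Python. The "patents" are invented toy sets of technology codes (the real analysis uses USPC codes from US patent data), and the independence baseline used here is one plausible choice of random configuration, not necessarily the authors' exact null model.

```python
import itertools
import math
from collections import Counter

# Invented toy "patents": each is a set of technology codes.
patents = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "C"}, {"A", "B"}]

n = len(patents)
code_freq = Counter(c for p in patents for c in p)
pair_obs = Counter(frozenset(q) for p in patents
                   for q in itertools.combinations(sorted(p), 2))

def z_score(pair):
    """Deviation of a pair's co-occurrence count from independence."""
    a, b = tuple(pair)
    p_a, p_b = code_freq[a] / n, code_freq[b] / n
    expected = n * p_a * p_b
    std = math.sqrt(expected * (1 - p_a * p_b)) or 1.0
    return (pair_obs[pair] - expected) / std

for pair in sorted(pair_obs, key=sorted):
    print(sorted(pair), round(z_score(pair), 2))
```

A strongly positive Z-score marks a conventional pair, a negative one an atypical pair; an invention's novelty profile is then the distribution of Z-scores over its pairs.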

Quantifying airborne dispersal routes of pathogens over continents to safeguard global wheat supply
SPEAKER: Marcel Meyer

ABSTRACT. Infectious crop diseases spreading over large agricultural areas pose a threat to food security. Aggressive strains of the obligate pathogenic fungus Puccinia graminis f.sp. tritici (Pgt), causing the crop disease wheat stem rust, have been detected in East Africa and the Middle East, where they lead to substantial economic losses and threaten the livelihoods of farmers. The majority of commercially grown wheat cultivars worldwide are susceptible to these emerging strains, which pose a risk to global wheat production, because the fungal spores transmitting the disease can be wind-dispersed over regions and even continents. Targeted surveillance and control require knowledge about the airborne dispersal of pathogens, but the complex nature of long-distance dispersal (LDD) poses significant challenges for quantitative research. We combine international field surveys, global meteorological data, a Lagrangian dispersion model and high-performance computational resources to simulate a set of disease outbreak scenarios, tracing billions of stochastic trajectories of fungal spores over dynamically changing host and environmental landscapes for more than a decade. This provides the first quantitative assessment of spore transmission frequencies and amounts amongst all wheat producing countries in Southern / East Africa, the Middle East, and Central / South Asia. We identify zones of high airborne connectivity that geographically correspond with previously postulated wheat rust epidemiological zones (characterized by endemic disease and free movement of inoculum), and regions with genetic similarities in related pathogen populations. We quantify the circumstances (routes, timing, outbreak sizes) under which virulent pathogen strains such as ‘Ug99’ pose a threat from LDD out of East Africa to the large wheat producing areas in Pakistan and India.
Long-term mean spore dispersal trends (predominant direction, frequencies, amounts) are summarized for all countries in the domain (Supplementary Data). Our mechanistic modelling framework can be applied to other geographic areas, adapted for other pathogens, and used to provide risk assessments in real-time.
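The Lagrangian trajectory idea can be caricatured in a few lines: spores take a biased random walk in a mean wind with turbulent kicks. Every number below (wind, spread, duration, distance threshold) is invented for illustration and bears no relation to the actual dispersion model used in the study.

```python
import random

random.seed(4)

# Spores drift with a mean wind plus Gaussian turbulent kicks per hour;
# all parameter values here are invented placeholders.
WIND = (1.0, 0.2)        # mean hourly displacement, east/north (km)
SIGMA = 2.0              # turbulent spread per hour (km)
HOURS, SPORES = 48, 10_000

far = 0
for _ in range(SPORES):
    x = y = 0.0
    for _ in range(HOURS):
        x += WIND[0] + random.gauss(0.0, SIGMA)
        y += WIND[1] + random.gauss(0.0, SIGMA)
    if (x * x + y * y) ** 0.5 > 50.0:
        far += 1

print(f"{100 * far / SPORES:.1f}% of spores ended up more than 50 km away")
```

The real framework replaces the constant wind with meteorological reanalysis fields and adds deposition, release timing, and host landscapes, but the stochastic-trajectory backbone is the same.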

Empirical correction of the percolation threshold using complement networks

ABSTRACT. Models of percolation processes on networks currently assume locally tree-like structures at low densities, and are derived exactly only in the thermodynamic limit. Finite-size effects and the presence of short loops in real systems, however, cause a deviation between the empirical percolation threshold and its model-predicted value. Here we show the existence of an empirical linear relation between the percolation threshold and its model-predicted value across a large number of real and model networks. Such a putatively universal relation can then be used to correct the estimated value of the percolation threshold. We further show how to obtain a more precise relation using the concept of the complement graph, by investigating the connection between the percolation threshold of a network and that of its complement.
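To make the gap between the two thresholds concrete, the following sketch estimates both on a synthetic graph: the model value from the degree moments, p_c = ⟨k⟩/(⟨k²⟩ − ⟨k⟩), and the empirical value from the emergence of a giant component under random node occupation. The graph, sizes, and the 2% giant-component criterion are arbitrary illustrative choices, not the paper's protocol.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic test network: an (approximate) Erdos-Renyi graph built by
# sampling random edges; all sizes and thresholds here are arbitrary.
n, k_mean = 10_000, 4.0
edge_set = set()
while len(edge_set) < int(n * k_mean / 2):
    u, v = random.randrange(n), random.randrange(n)
    if u != v:
        edge_set.add((min(u, v), max(u, v)))
edges = list(edge_set)

def giant(keep):
    """Largest component size among occupied nodes (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        if u in keep and v in keep:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = defaultdict(int)
    for x in keep:
        sizes[find(x)] += 1
    return max(sizes.values(), default=0)

# Model-predicted site-percolation threshold from degree moments.
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
k1 = sum(deg.values()) / n
k2 = sum(d * d for d in deg.values()) / n
p_c_model = k1 / (k2 - k1)

# Empirical threshold: first occupation probability at which the giant
# component exceeds 2% of all nodes (a crude, illustrative criterion).
p_c_emp = None
for step_i in range(1, 100):
    phi = step_i / 100
    keep = {i for i in range(n) if random.random() < phi}
    if giant(keep) >= 0.02 * n:
        p_c_emp = phi
        break

print(f"model p_c ~ {p_c_model:.3f}, empirical p_c ~ {p_c_emp:.2f}")
```

On a nearly tree-like graph like this the two values almost coincide; on loopy, finite real networks they separate, which is the deviation the linear correction targets.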

Who is the shepherd? Small city follows larger city’s trajectory in urban economy
SPEAKER: Inho Hong

ABSTRACT. Identifying the evolutionary paths of urban economies is key to assessing, maintaining, and forecasting a city's future growth and success. Although path dependency in economic trajectories has been alluded to in the literature, it still lacks comprehensive empirical evidence. Here, we study the evolution of urban economies by analyzing all U.S. industries in individual cities over two recent decades. The industrial character of a city is quantified by revealed comparative advantage (RCA), and the temporal change of industrial similarity between cities describes how the urban economy evolves. We find that small cities come to resemble large cities more closely over time, as Fig. 1a shows; that is, urban industrial evolution repeats itself in individual cities. Figure 1b shows that, when a group of the largest cities is fixed in time, the remaining smaller cities become more similar to them with a time lag. It is indeed the case that small cities follow the industrial footprints of large cities. We show that these dynamics are a relatively general characteristic, not entirely driven by a few industry sectors. Finally, we identify a structural transition in urban economies as a crossover of dominant industries, and the transition point is analytically explained by the distribution of scaling exponents.
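Revealed comparative advantage, as used above, is the share of an industry in a city's employment divided by that industry's share nationally; RCA > 1 marks local specialization. A minimal sketch with invented employment counts:

```python
# Invented employment counts: city -> industry -> number of jobs.
emp = {
    "A": {"tech": 50, "farm": 10, "retail": 40},
    "B": {"tech": 5, "farm": 60, "retail": 35},
}

total = sum(sum(city.values()) for city in emp.values())
ind_tot = {}
for city in emp.values():
    for ind, jobs in city.items():
        ind_tot[ind] = ind_tot.get(ind, 0) + jobs

def rca(city, industry):
    """Local share of an industry divided by its national share."""
    city_tot = sum(emp[city].values())
    return (emp[city][industry] / city_tot) / (ind_tot[industry] / total)

print(round(rca("A", "tech"), 2))    # > 1: city A specializes in tech
print(round(rca("B", "farm"), 2))    # > 1: city B specializes in farming
```

The vector of RCA values over all industries then characterizes a city, and the similarity between such vectors at different times is what the study tracks.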

The Relationship of Social Network Connectivity to Positive Emotion Word Use and Other LIWC Word Categories

ABSTRACT. Linguistic Inquiry and Word Count (LIWC) is a quantitative text analysis program that uses word count strategies to extract psychological and social meaning from the words people use (Pennebaker et al. 2003). Another method of studying word use is to analyze social network structures and predict general language patterns through network analysis and information theory (Vinson and Dale 2016). This study inspects datasets from Yelp and examines the relationship between measures of social network connectivity and positive emotion word use (e.g., love, nice, sweet). The number of friends each user has is a simple measure of social network connectivity. Positive emotion word use is measured by analyzing the business reviews of Yelp users using LIWC. We run a linear correlation between users' number of friends (connectivity) and the percentage of positive emotion words in their respective reviews. Results suggest that users with more friends tend to write reviews with fewer positive emotion words. We further explore relationships between other LIWC word categories and the number of friends, and run linear regression models using these LIWC word categories to determine whether we can predict the number of friends from the words people use. Extending these findings to other social networks such as Twitter and Facebook will generalize the results and make a stronger theoretical case.
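The core analysis is a linear correlation between connectivity and the share of positive-emotion words. A self-contained sketch with invented per-user numbers (the study uses real Yelp reviews scored by LIWC):

```python
import math

# Invented per-user pairs: (number of friends, % positive-emotion words).
users = [(2, 6.1), (5, 5.8), (9, 5.2), (14, 4.9), (30, 4.1), (55, 3.8)]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

friends, posemo = zip(*users)
r = pearson(friends, posemo)
print(round(r, 3))   # negative: more friends, fewer positive-emotion words
```

The toy data are constructed to show the direction of the reported effect; the study's effect size comes from the full Yelp dataset, not from numbers like these.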

Is tumor evolution neutral?

ABSTRACT. Cancer is the result of a somatic evolution process. During tumor growth, however, evolution continues within the tumor: new mutations arise and the tumor divides into more and more parts. It is an interesting question whether the individual subclones grow according to the same dynamics, or whether selective processes also contribute, making some subclones more widespread and others more limited. According to recent results, in several cases there are no differences between the subclones from the point of view of selection: the subclones grow at the same rate. As we will see in the presentation, however, a closer examination raises serious questions about the significance of these results, and more generally about the empirical verifiability of theoretical predictions, taking into account the limits of current and near-future technologies. On the other hand, there is a quantity among the employed empirical data which is less significant from the point of view of subclone selection: the number of frequent mutations within the tumor. The empirical values of this quantity are hard to reconcile with the usual picture of the accumulation of somatic mutations. In this talk, I sketch the possible alternatives and their implications.

El Niño, Food Insecurity, and Challenges to Resilience in Highlands Papua New Guinea
SPEAKER: Jerry Jacka

ABSTRACT. The 2015 El Niño severely impacted horticulturalists in highlands Papua New Guinea as accompanying frosts and droughts devastated their subsistence food crops. Responses to previous El Niño events have typically resulted in large-scale migration to lower altitude areas. However, with economic development stemming from large-scale gold mining, population pressures and intergroup conflicts, and changes in access to natural resources in the destinations where people are migrating, customary, resilient responses of highland social-ecological systems are being challenged. The research uses the heuristic of the panarchy to understand how cross-scalar variables interact with peoples’ food systems and decision making processes in a time of crisis. Results are based on field research conducted in 2016 and highlight vulnerabilities and the limits of resilience in certain coupled social-ecological systems to extreme climatic changes and socioeconomic development.

What Darwin didn't know: the arrival of the fittest
SPEAKER: Chico Camargo

ABSTRACT. In evolutionary biology, the expression “survival of the fittest” is often heard as a summary of Charles Darwin’s idea of natural selection. This idea has definitely helped us understand how life is shaped by variation (and selective survival), but even Darwin himself acknowledged our lack of understanding of what causes all the variation he observed. Hugo de Vries, one of the first geneticists, famously said: “Natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest.” More recently, genomics and bioinformatics have added pieces to the puzzle: there is large redundancy in the genome, caused by neutral mutations, implying that multiple genotypes can produce the same phenotypes. That naturally raises questions about how genotypes are distributed over phenotypes, and about biases in that distribution.

In this work, we address these questions using computational models of gene regulatory networks. In particular, we look at the gene network that regulates the fission yeast cell cycle. Working with a coarse-grained model of this gene network, we find that the design space of gene networks has a large bias in the distribution of genotypes mapping to phenotypes, which is related to properties such as mutational robustness and evolvability. Moreover, we find that this bias can also be characterised by applying concepts from algorithmic information theory, such as Kolmogorov complexity and Levin's coding theorem, which suggest that the most likely phenotypes will be the ones with lower complexity.
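The kind of genotype-phenotype bias described above can be seen even in a tiny toy model, far smaller than the fission yeast cell-cycle network: enumerate every rule table for a 2-node Boolean network, take the attractor reached from one fixed initial state as the "phenotype", and count how many genotypes map to each. This model and phenotype definition are illustrative simplifications, not the coarse-grained model of the poster.

```python
import itertools
from collections import Counter

# Toy genotype-phenotype map: a genotype is a pair of rule tables for a
# 2-node Boolean network; the phenotype is (the state set of) the
# attractor reached from the fixed initial state (0, 1).
states = list(itertools.product((0, 1), repeat=2))

def attractor(rule_a, rule_b, state=(0, 1)):
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = (rule_a[state], rule_b[state])
    first = seen[state]              # entry point of the cycle
    return tuple(sorted(s for s, t in seen.items() if t >= first))

counts = Counter()
for bits_a in itertools.product((0, 1), repeat=4):
    rule_a = dict(zip(states, bits_a))
    for bits_b in itertools.product((0, 1), repeat=4):
        rule_b = dict(zip(states, bits_b))
        counts[attractor(rule_a, rule_b)] += 1

# All 256 genotypes map onto far fewer phenotypes, and very unevenly.
for pheno, c in counts.most_common(3):
    print(pheno, c)
```

Even here the distribution is strongly biased: simple fixed-point phenotypes absorb a disproportionate share of genotypes, the same qualitative pattern that the complexity bound predicts at scale.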

Analysis of Mexico's drug-cartels network
SPEAKER: Ollin Langle

ABSTRACT. Violence linked to drug trafficking in Mexico has increased over the last ten years, and the reasons for this spread are difficult to quantify. However, a relevant feature to take into account is the large number of drug cartels and their fragmentation into small violent cells.

Many strategies have been proposed to dismantle the operation networks of these criminal groups, the most common being attempts to capture cartel leaders. This strategy has not had a significant positive outcome in decreasing either the influence of these groups or the violence around the country. In this sense, complex network theory emerges as an alternative approach to understanding the dynamics underlying this non-trivial phenomenon. In this approach, a network is composed of nodes, such as people, places, or cities, and links that represent any kind of relationship between those nodes.

In this work, by means of a semi-automated text-mining tool, we construct a network of the characters of Anabel Hernandez's book "Los señores del narco" in order to analyze its topological and dynamical properties. By performing directed attacks on the most relevant nodes of the network using different centralities, we measure the robustness of the network in terms of the size of the giant component, i.e., optimal percolation. We also analyze the resulting network communities after these attacks and observe the exact number of removed characters needed to dismantle the giant component.

With this approach it is possible not only to propose a minimal set of characters to be removed from the network to dismantle it, but also to determine whether there are differences between the most socially influential nodes and those that are important to the network topology. This kind of approach could become relevant for developing strategies to disable complex criminal structures.
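A degree-centrality attack of the kind described can be sketched as follows; the character network here is an invented placeholder, not the network extracted from the book.

```python
from collections import defaultdict

# Hypothetical toy character network (edges would come from co-mentions).
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("F", "G"), ("G", "H"), ("H", "E")]

def giant(edges, removed):
    """Size of the largest connected component after node removal."""
    adj = defaultdict(set)
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            size += 1
            stack.extend(adj[x] - seen)
        best = max(best, size)
    return best

# Degree-centrality attack: remove the most connected characters first.
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
order = sorted(deg, key=deg.get, reverse=True)
for k in range(4):
    print(f"top-{k} removed -> giant component size {giant(edges, set(order[:k]))}")
```

Repeating the same loop with betweenness or other centralities, and comparing how fast the giant component collapses, is the essence of the robustness comparison in the abstract.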

Political dynamics of the Mexican senate

ABSTRACT. With the vast amounts of data freely available about virtually any field of knowledge, one of the greatest challenges for today's scientists is to store, organize, and analyze these data and to use them for novel and useful applications. Political science and legislation are fields that have seen such an increase. The Mexican government and several of its dependencies have made many of their databases publicly available online, with the Chamber of Senators being the main focus of this work. New opportunities are arising to be aware of the actions that decision makers are taking and to show whether a truly representative democracy is being upheld. In this work we present a framework for automatic data acquisition, the construction of a graph-oriented database, and the statistical modeling of data, taking advantage of the capabilities of cloud computing. The data were collected through the Mexican Senate's official website, so everything is completely open. The information gathered consists of the names of the senators and their alternates, the party and commissions they belong to, the entity they represent, the edicts, how the senators voted, attendance, and the dates on which all of the above happened. A graph-oriented database was then built, which allows an analysis of the senators' actions and the detection of communities on a temporal basis. A distance matrix between senators was created from the votes and used to perform statistical analyses, such as multidimensional scaling for the projection of the vectors associated with the senators, and the construction of a weighted network in order to find communities among the senators and study its topological properties. Another network was also built from the joint proposals of the edicts by the commissions, because each edict is proposed by one or more commissions.
A new community analysis was carried out for this network, finding three large groups that, after a manual review, we determined to reflect the following topics: governance, foreign affairs, and social issues. The final part of the paper determines whether it is possible to predict the vote of a particular senator from his or her history and the metadata we have. The best accuracy, 0.7082, was obtained by means of logistic regression, surpassing the 0.6608 achieved by always predicting a vote in favor.
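The vote-based distance matrix that feeds the multidimensional scaling and the weighted network can be sketched as a simple disagreement (Hamming) fraction; the roll-call data below are invented, and the actual study may use a different metric.

```python
# Invented roll-call records: 1 = vote in favor, 0 = against.
votes = {
    "s1": [1, 1, 0, 1, 0],
    "s2": [1, 1, 0, 0, 0],
    "s3": [0, 0, 1, 0, 1],
}

def distance(a, b):
    """Fraction of roll calls on which two senators disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = sorted(votes)
matrix = [[distance(votes[a], votes[b]) for b in names] for a in names]
for name, row in zip(names, matrix):
    print(name, row)
```

Small distances (s1 vs. s2) suggest a shared voting bloc; a distance near 1 (s1 vs. s3) marks systematic opposition, and thresholding such distances yields the weighted network used for community detection.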

Modeling the Spatio-Temporal Dynamics of Worm Propagation in Smartphones based on Cellular Automata

ABSTRACT. In recent years, the worldwide market for smartphones has grown dramatically. Smartphone users can share programs and data with each other, surf the web, send and receive emails, and shop online. However, the availability of these mobile services increases smartphones' vulnerability to malware attacks. Consequently, modeling worm propagation in smartphones, in order to predict the side effects of a new threat and understand the complex behavior of the modeled malware, has received significant attention in recent years. One possible communication channel for the penetration of mobile malware is the Bluetooth interface, where the malware infects devices in its proximity as biological viruses do. Due to this strong similarity in self-replication and propagation behavior between mobile malware and biological viruses, most investigations of malware propagation in smartphones focus predominantly on modeling malware propagation with classical theories from epidemiology. Cellular Automata (CA) models have emerged recently as a promising alternative for characterizing worm propagation and understanding its behavior. However, most existing CA models for mobile malware assume that all smartphones are homogeneous and that transmission of the worm occurs in one time cycle. Moreover, there are few models dealing with the propagation of mobile worms by means of Bluetooth connections, and most of them study only the temporal evolution; it is also of interest to simulate the spatial spreading, given the main characteristics of Bluetooth. In this work, a mathematical model based on cellular automata and compartmental epidemiological models is introduced to study the spatio-temporal propagation dynamics of Bluetooth worms.
The model takes into account the local interactions between smartphones and is able to simulate the individual dynamics of each mobile device. Furthermore, the model considers the effect of the mobility of smartphone users on the propagation of the infection. Simulation results indicate that the model captures the spatio-temporal dynamics of Bluetooth worm propagation and facilitates predictions of the evolution of malware spreading. In particular, the results indicate that, while Bluetooth viruses could reach all susceptible devices within a given time period, human mobility and the Bluetooth antenna range are crucial factors in stopping the spread of malware. In addition, the computational cost of the model is low in comparison with other existing models, making it suitable for understanding the behavior of a modeled malware and predicting the spreading curves of Bluetooth worm propagation over large areas.
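A stripped-down version of such a CA model, assuming an SIR compartmental scheme on a square grid with a 4-neighbour "Bluetooth range" and invented rates (the poster's model also includes user mobility, which is omitted here):

```python
import random

random.seed(1)

# Grid of N x N "smartphones"; infection reaches only the 4 nearest
# neighbours (a crude stand-in for Bluetooth range). Rates are invented.
N, BETA, GAMMA, STEPS = 20, 0.3, 0.1, 50
S, I, R = 0, 1, 2
grid = [[S] * N for _ in range(N)]
grid[N // 2][N // 2] = I            # a single initially infected device

def step(grid):
    """One synchronous update of the cellular automaton."""
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j] == S:
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                exposed = any(0 <= a < N and 0 <= b < N and grid[a][b] == I
                              for a, b in nbrs)
                if exposed and random.random() < BETA:
                    new[i][j] = I   # susceptible device gets the worm
            elif grid[i][j] == I and random.random() < GAMMA:
                new[i][j] = R       # device is patched / recovers
    return new

for _ in range(STEPS):
    grid = step(grid)

infected = sum(row.count(I) for row in grid)
recovered = sum(row.count(R) for row in grid)
print(f"after {STEPS} steps: {infected} infected, {recovered} recovered")
```

The spatial wavefront emanating from the seed cell is exactly the effect that purely temporal compartmental models miss; adding user mobility amounts to letting cells swap states between updates.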

Non-linear analysis of the occurrence of hurricanes in the Gulf of Mexico and the Caribbean Sea

ABSTRACT. Hurricanes are complex systems that carry large amounts of energy. Their impact usually produces natural disasters involving the loss of human lives, and losses of materials and infrastructure accounted for in billions of US dollars. However, not everything is negative, as hurricanes are the main source of rainwater for the regions where they develop. Great progress has been made in studying hurricanes in order to predict their number, intensity, and trajectories from year to year. Despite this progress, we do not yet have the ability to fully predict their behaviour, so some questions remain to be answered; some of them have to do with their chaotic and non-linear behaviour. In this study we perform a non-linear analysis of the time series, from 1749 to 2012, of hurricane occurrence in the Gulf of Mexico and the Caribbean Sea. The hurricane time series was constructed from The North Atlantic-basin Hurricane Database (HURDAT) and from published historical information. The Lyapunov exponent indicated that the system presents chaotic dynamics, and the spectral analysis of the time series, together with the non-linear analysis, showed behaviour at the edge of chaos. One possible explanation for this edge is the chaotic behaviour of individual hurricanes, whether grouped by category or taken individually regardless of category, combined with regular behaviour of the occurrence series as a whole.
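The diagnostic mentioned above, the largest Lyapunov exponent, can be illustrated on a system where the answer is known analytically. For a one-dimensional map it is the orbit average of log|f'(x)|; a positive value signals chaos. The sketch below uses the logistic map at r = 4 (exponent ln 2), not the hurricane series itself, whose exponent must be estimated from data by embedding methods.

```python
import math

def lyapunov(r, x0=0.3, n=100_000, burn=1_000):
    """Orbit average of log|f'(x)| for the logistic map f(x) = r x (1 - x)."""
    x, total = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
        x = r * x * (1 - x)
        x = min(max(x, 1e-12), 1 - 1e-12)  # guard against float collapse
    return total / n

lam = lyapunov(4.0)
print(round(lam, 3))  # close to ln 2 ~ 0.693, the known value at r = 4
```

A positive average, as here, is the signature of sensitive dependence on initial conditions that the study reports for the hurricane occurrence series.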

Time Dependence of Meme Popularity Distributions

ABSTRACT. In this work we present a simple model of the spreading behaviour of memes on a social platform or network structure such as Twitter. The dynamics of the model can be described quite simply in terms of a (re)tweeting or innovating process. On a Twitter-like network, memes or tweets propagate in a unilateral direction determined by the so-called friend-follower relationships. With innovation probability $\mu$, a Twitter user generates a new unseen meme, which overwrites the meme currently on the user's screen and is broadcast to all of the user's followers, overwriting the screens of the followers [1]. On the other hand, with probability $1-\mu$, a user may decide to (re)tweet the meme currently occupying his/her screen and broadcast it, again overwriting the memes currently on the followers' screens. In this work, we consider the density, or total number, of screens occupied by a specific meme at a given time. Via a branching process approximation, we determine a simple governing model for the users' actions in the form of an advection equation. The time-dependent probability distributions of the screen occupancies are calculated numerically; their structure depends on the follower degree distribution. Asymptotic analysis permits the analytical construction of the large-time probability distribution, corroborating the results obtained numerically. It is shown analytically that the distribution of cascade sizes consists of two components: a static or steady-state segment and a component which continues to evolve. We highlight the two components by rescaling the distribution of cascade sizes and show the collapse of the distributions, observed at different ages, onto a single curve, thus demonstrating the self-similarity of our model on large time-scales. We also obtain the distribution of lifetimes of memes, and the results are in good agreement with those appearing in [2].

[1] J. P. Gleeson, J. A. Ward, K. P. O'Sullivan and W. T. Lee, ``Competition-Induced Criticality in a Model of Meme Popularity'', Phys. Rev. Lett, 112, 048701, (2014).\\ [2] K. I. Goh, D. S. Lee, B. Kahng, and D. Kim, ``Sandpile on Scale-Free Networks'', Phys. Rev. Lett, 91, 148701, (2003).
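A toy simulation of the screen-overwriting dynamics, with an invented random follower structure rather than real Twitter data, might look like this:

```python
import random
from collections import Counter

random.seed(2)

# Invented follower structure: each user is followed by Z random users
# (possibly including themselves; harmless for this sketch).
N, Z, MU, EVENTS = 500, 3, 0.05, 20_000
followers = {u: random.sample(range(N), Z) for u in range(N)}
screen = list(range(N))              # every screen starts with its own meme
next_id = N

for _ in range(EVENTS):
    u = random.randrange(N)
    if random.random() < MU:         # innovate: a brand-new meme
        screen[u] = next_id
        next_id += 1
    meme = screen[u]                 # (re)tweet whatever is on u's screen
    for f in followers[u]:
        screen[f] = meme             # broadcast overwrites followers' screens

popularity = Counter(screen)         # screens currently showing each meme
print("distinct memes on screens:", len(popularity))
print("largest occupancy:", max(popularity.values()))
```

Tracking `popularity` over many runs gives the time-dependent occupancy distributions that the branching-process approximation describes analytically.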

Finite size effects in the glass transition: a field-theory approach

ABSTRACT. Super-cooled liquids are out-of-equilibrium systems in which a material remains in the liquid phase at temperatures lower than its melting point. At even lower temperatures these materials stop flowing, passing through the glass transition and solidifying into an amorphous glassy phase. In this work, we present the effects caused by changes in the system size, using a field-theory approach. We compactify one dimension of the system, leading to a quasi-two-dimensional film. We show that the transition temperature increases with decreasing thickness, diverging as the thickness vanishes, which suggests a fundamental difference between two- and three-dimensional systems. We also present preliminary numerical results that reinforce the analytical ones.

Numerical observations of an Ornstein-Uhlenbeck process for the velocity process of an $N$-particle system interacting stochastically

ABSTRACT. We consider a $3$-dimensional $N$-particle system with mass $m$ and no potential energy. The interaction is modeled as random momentum exchange between particles, obeying conservation of energy. The dynamics of the system is given by a single stochastic differential equation for a $D = 3N$ dimensional velocity vector ($\mathbf{V}$) driven by a $D$-dimensional Brownian noise term. A quick look at this evolution equation shows that a single component of $\mathbf{V}$, when it is small enough, evolves independently of the remaining directions according to a one-dimensional Ornstein-Uhlenbeck process driven by a single noise term along the same direction (corresponding to a particle moving in ``white noise'' with friction). Our interest, however, is to study the limiting process for the components of $\mathbf{V}$ when these are of order one. Let $A$ be the noise amplitude and $k_{\mathrm{B}} T /2$ be the total energy per degree of freedom. We thus consider the component $V_1$ of $\mathbf{V}$ and the component $AB_1$ of the noise term driving $\mathbf{V}$. Call $U_1$ the Ornstein-Uhlenbeck process driven by the noise $AB_1$ in a viscous bath with friction rate $mA^2 /(k_{\mathrm{B}} T )$. We prove that $V_1$ converges in probability to this $U_1$ as $N \to \infty$. The proof easily extends to any finite number $n (< 3N)$ of components of the velocity vector $\mathbf{V}$; these $n$ components become independent and identically distributed (i.i.d.) in the limit $N \to \infty$. Furthermore, we show that our model relates to the class of Kac systems. If we impose total momentum conservation, the three-dimensional velocities of individual particles converge in probability to independent three-dimensional Ornstein-Uhlenbeck processes as $N \to \infty$.
Furthermore, a change of velocity variables, proposed in \cite{KL06}, allows us to analyse the $N$-particle system with total energy and momentum conservation in terms of an $(N-1)$-particle system with only energy conservation.
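The limiting one-component dynamics is an ordinary Ornstein-Uhlenbeck process, which can be checked numerically with an Euler-Maruyama scheme; the parameter values below are illustrative, and the stationary variance should approach $A^2/(2\gamma)$.

```python
import math
import random
import statistics

random.seed(3)

# Euler-Maruyama integration of dV = -gamma V dt + A dB with
# gamma = m A^2 / (kB T); all parameter values are illustrative.
m, A, kBT = 1.0, 1.0, 1.0
gamma = m * A ** 2 / kBT
dt, steps = 0.01, 200_000

v, samples = 0.0, []
for i in range(steps):
    v += -gamma * v * dt + A * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if i > steps // 2:               # discard transient, keep late samples
        samples.append(v)

# The stationary variance of this OU process is A^2 / (2 gamma).
print(round(statistics.variance(samples), 2))
```

The simulated variance settling near $A^2/(2\gamma)$ illustrates the stationary behaviour of the limiting process $U_1$; the theorem itself concerns the $N \to \infty$ convergence of the particle system, which this sketch does not reproduce.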

SPEAKER: Pilar Pena

ABSTRACT. Public administration is a complex system: complexity theory requires a new conceptual framework to characterize nonlinear dynamic systems in organizations, where change is part of their nature and interactions. Instability is perceived everywhere, along with dissipative structures and non-random behaviors close to chaos, which require a new field of knowledge and application of the administrative process, redesigning the organizational systems in which individuals, processes, and the organization participate, as in the strategic management model of complexity used to explain the new dynamics of our time. Because we are in the knowledge society, a paradigm shift is required in public administration, along with a redefinition of the roles and responsibilities of the public entities in charge of a given activity. Therefore, there is a need for recurrent innovation in water management, as well as in the training of public administrators, regardless of public sector reforms.

Complexity and Control of the Nonlinear Vibrational Dynamics of the HCN Molecule

ABSTRACT. We study the complexity of the vibrational dynamics of a model for the HCN molecule in the presence of a monochromatic laser field. The variation of the structural behavior of the system as a function of the laser frequency is analyzed in detail using the smaller alignment index, frequency maps, and diffusion coefficients. It is observed that the ergodicity of the system depends on the frequency of the excitation field, especially in its transitions from and into chaos. This provides a roadmap for the possibility of bond excitation and dissociation in this molecule.

Molecular vibrational dynamics has been the subject of intense research activity in recent years, giving rise to numerous publications in this field. The theoretical framework for these kinds of studies is based on both classical and quantum mechanics, with profound roots in the characterization of chaos in Hamiltonian systems. This topic was nicely addressed in the seminal work of Kolmogorov, Arnold, and Moser, which produced the celebrated KAM theorem. The study of dynamical chaos theory has substantially flourished thereupon, becoming an area of active research within the dynamical systems community.

One topic of much interest in this branch of chemical dynamics is the active control of molecular nonlinear dynamical systems and chemical reactivity, typically using lasers. An extensive literature has been produced on this subject. In relation to our work with the HCN molecule, the laser control of bond excitation, bond dissociation (typically of the strong CN bond), and the isomerization of HCN have been extensively considered in the literature. Brezina and Liu considered the possibility of controlling the CH and CN excitations and dissociation with laser pulses. For this purpose, they used a widespread classical-mechanics vibrational model consisting of two kinetically coupled Morse bond functions, with the bending frozen at its equilibrium value. Special attention was paid to the role played by IVR, considering different values of the laser frequency and amplitude. These authors found that simple linearly chirped pulses are effective in exciting and dissociating the CH bond, while this is more difficult for the stronger CN bond. Recently, Sethi and Keshavamurthy revisited the same problem, concentrating on only one of the laser frequencies. This work began the identification of the main aspects of the dissociation dynamics and mechanism in phase space, and the characterization of the system in terms of the classical dynamical resonance (Arnold) network. They identified the importance of two regions of frequency space: the dissociation hub, which constitutes a gateway for dissociating trajectories, and the noble hub, characterized by very irrational frequency ratios, which constitutes a very sticky area where trajectories remain trapped for long times.

In this paper, we extend previous work by considering the influence of the laser frequency on the dynamics, using it as a possible control parameter that varies the dynamical structure of the system. In this way, we can be more precise than previous works in predicting which laser frequencies are best for promoting dissociation.
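The smaller alignment index (SALI) used in the abstract above distinguishes regular from chaotic orbits by following two deviation vectors under the tangent dynamics. A minimal, hedged sketch of the idea, using the Chirikov standard map as a stand-in for the HCN Hamiltonian (the map and all parameter values are illustrative assumptions, not the model of the paper):

```python
import numpy as np

def sali(K, x, p, n_iter=200):
    """Smaller alignment index (SALI) of an orbit of the standard map
    p' = p + K sin(x), x' = x + p' (mod 2*pi), following two normalized
    deviation vectors under the tangent map.  SALI decays exponentially
    for chaotic orbits and only as a power law for regular ones."""
    d1 = np.array([1.0, 0.0])
    d2 = np.array([0.0, 1.0])
    for _ in range(n_iter):
        c = K * np.cos(x)                       # Jacobian entry at the current point
        p = (p + K * np.sin(x)) % (2 * np.pi)   # one step of the map
        x = (x + p) % (2 * np.pi)
        J = np.array([[1.0 + c, 1.0], [c, 1.0]])
        d1 = J @ d1; d1 /= np.linalg.norm(d1)   # renormalize to avoid overflow
        d2 = J @ d2; d2 /= np.linalg.norm(d2)
    return min(np.linalg.norm(d1 + d2), np.linalg.norm(d1 - d2))

s_chaotic = sali(9.0, 1.0, 1.0)   # strongly chaotic regime
s_regular = sali(0.1, 1.0, 0.5)   # near-integrable regime
```

For the chaotic orbit the two deviation vectors align exponentially fast and SALI collapses to (numerically) zero, while for the regular orbit it stays orders of magnitude larger.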

Innovative Educational Intervention Based on Self-observing Interactive Pupils

ABSTRACT. New educational approaches try to foster, within a school group, peer recognition, self-observation and the pupils' freedom to express themselves reflectively about their experience in the classroom.

The objective of this research was to investigate how the teacher's interventionist strategy facilitated these interactions and how such interactions favored the group's self-organization.

To that end, a questionnaire was applied to the pupils with questions regarding the classroom environment and the teacher's innovative strategies, to identify the differences the pupils observed between this course and other courses, as well as changes in the pupils themselves.

Reorganizing the classroom space, from a traditional layout of rigid rows of benches facing forward to a flexible arrangement in which all students could make visual and physical contact with each other, was the most innovative strategy and the one that promoted the greatest interaction. In this case, it is interesting to link these forms of interaction derived from spatial reorganization to the enactive approach (Varela et al., 1991) and the enactive approach to social interaction (Froese, 2015), especially regarding "sense-making" and "embodiment". According to the multi-agent system schema of Froese and Di Paolo (2011: 12), this "... is possible when two adaptive agents, who share an environment, are involved in a sensor-motor coupling, in which their activities are intertwined so that mutual interaction results in a process of interaction characterized by an autonomous organization ..."

In the repeated interactions of self-observation among the agents of the system, perceptions of trust, physical contact, group recognition and interaction emerged, as well as the discovery of meaning and group reflection on experience. Hence, we can affirm that these are elements of the domain of enaction in the sense of Froese (2011). Such emergences are new properties of the system (the school group), since the observers distinguish them as outside the normality and expectations they held at the beginning. From this study some important results emerged: group connection, recognition of the other, freedom of expression, construction of meaning, and the outline of a definition of a model of formative intervention. These results will be presented through the discourse expressed by the students in their answers to the questionnaire.

Phyllotactic pattern formation driven by curvature and stress

ABSTRACT. The arrangement of repeated lateral plant organs, such as leaves, floral structures, ribs in cacti or scales in a pine cone, is a phyllotactic process. The emergence of these phenomena has fascinated humans for centuries; phyllotaxis can be considered the oldest branch of mathematical biology and remains one of the open questions in developmental biology. The positioning of lateral organs around the plant roots, the organization of plant tissues to resist environmental stresses, and the choice of oriented cell division planes seem to share a common response to mechanical stress and its feedbacks. These phenomena suggest that the physics, genetics and biochemistry of development are not separate issues for an organism as it grows and develops. In this sense, we propose a model that integrates some intrinsic features of growth, physical forces, and mechanical and geometrical constraints that feed back into a phyllotactic process. Our proposal is a novel mechanism that couples a Turing system on growing domains with phase-field theory to measure changes in curvature and stress.

Cameroon's security architecture: A complex adaptive system?
SPEAKER: Manu Lekunze

ABSTRACT. This article evaluates Cameroon's security architecture against the literature on complex adaptive systems. It finds that Cameroon's security architecture is made up of multiple actors which are relatively autonomous, heterogeneous and interdependent, and which could follow simple rules. The security architecture is hierarchically organised, anti-reductionist, auto-catalytic, co-evolutionary, dynamic, capable of learning, and adaptive. It consists of many systems within a larger security system. It is therefore concluded that Cameroon's security architecture is a complex adaptive system. This lays the foundation for the use of the complexity sciences in the study of security. The use of complexity sciences signifies a paradigm shift in security studies, which have traditionally focused on linear analysis of actors, threats and consequences for the referent.

A complexity perspective of the Bitcoin economy: design, reality and emergence

ABSTRACT. Bitcoin is an original attempt to create a currency without a central issuer. In contrast to fiat currencies, where the money supply is dictated by the policies of a central authority, Bitcoin creation follows a stringent rule that makes bitcoins laborious to create, while any user can participate in the supply. Users connect to a peer-to-peer network, hide themselves behind multiple aliases (so-called addresses) and issue transactions to others. To ensure validity, transactions are recorded in a public ledger so that Bitcoin ownership can be independently verified. Altogether, the mechanism of supply, the users, and the transactions constitute the Bitcoin economy. While anonymised, it is possible to aggregate aliases into users. Doing so, we reveal the main characteristics of the Bitcoin economy. This closed system, having followed a technocratic approach in its immutable design, is the only case of an economy where all monetary transactions can be traced back in full detail. Our analyses show that the number of Bitcoin users has been growing at least exponentially since the introduction of Bitcoin, while the number of bitcoins mined has grown linearly. This creates a mismatch between large-scale adoption and scarce supply, partly explaining the explosive price increase against fiat currencies in recent years. One of the main advertised characteristics of Bitcoin is the decentralised nature of mining. However, the proportion of users who participate in the supply has become vanishingly small. This leads to: (i) increasing accumulation of wealth in the hands of a few, far exceeding the wealth inequality observed in real countries; and (ii) the emergence of a hierarchical flow of bitcoins emanating from the miners, a phenomenon only visible at the level of the user network. Interestingly, as our analysis shows, this fixed incentive scheme has led to the emergence of high levels of centralisation in the economic flow.

Complexity at the nanoscale: "A new era in nanotechnology. Graphene, the new horizon"

ABSTRACT. Graphene, a recently discovered material, is a two-dimensional honeycomb lattice consisting of a single atomic layer of sp2-hybridized carbon atoms arranged hexagonally. Around each carbon atom, three strong σ bonds are established with the three surrounding carbon atoms. Graphene has opened a new era in nanotechnology. The outstanding mechanical, electrical and physical properties of graphene warrant its use in a variety of areas such as hydrogen technology, electronics, sensing and drug delivery, among many others. The zero band gap of graphene sheets renders the construction of graphene-based field-effect transistors very difficult. Therefore, several groups have proposed different methods to open a band gap in graphene. We studied the electronic structure of a pristine graphene sheet and of H2S adsorbed at three different sites on the graphene sheet. Calculations show that the adsorption of H2S on the top site opens a very small direct energy gap. Comparing the angular-momentum decomposition of the atom-projected electronic density of states of the pristine graphene sheet with that of H2S-graphene for the three different sites (bridge, top and hollow), we found significant influence and strong hybridization between the H2S molecule and the graphene sheet. Thus the pristine graphene sheet is a very good adsorbent material for the H2S molecule. In addition, the linear and nonlinear optical susceptibilities of pristine graphene and of H2S adsorbed at the three different sites on the graphene sheet were calculated so as to obtain further insight into the electronic properties. Calculations show that the adsorption of H2S on the top site causes significant changes in the linear and nonlinear optical susceptibilities.
This is attributed to the fact that adsorbing H2S onto the graphene sheet causes significant changes in the electronic structure and strong hybridization between the H2S molecule and the graphene sheet; as a result of the strong hybridization, strong covalent bonds are established between the C, H and S atoms. DFT calculations based on the all-electron full-potential linearized augmented plane wave (FP-LAPW) method were used. In order to understand the adsorption properties of the H2S molecule adsorbed onto graphene, all possible adsorption configurations (top, bridge and hollow sites) were considered.

Crowdsourcing as a source of popularity for mobile applications

ABSTRACT. What drives consumers to choose and download mobile applications from the over 2.8 million available on Android? Is the decision-making process different when we pay for an application rather than download a free one? In what situations are consumers willing to share an opinion about an application? How strongly do other users' opinions affect the number of downloads? The study covered 5,881 of the most popular applications across each of 49 categories. Statistical analysis made it possible to identify statistically significant factors. The analysis was carried out separately for paid and free applications, including the number of downloads, the number of ratings and their values, and the application's history.

Computational cell lineage dynamics to understand embryogenesis

ABSTRACT. Digital cell lineages reconstructed from 3D+time imaging data provide unique information to unveil mechanical cues and their role in morphogenetic processes. Our methodology based on a kinematic analysis of cell lineage data reveals deformation patterns and quantitative morphogenetic landmarks for a new type of developmental table. The characteristic spatial and temporal length scales of mechanical deformation patterns derived from a continuous approximation of cell displacements indicate a compressible fluid-like behavior of zebrafish gastrulating tissues. The instantaneous deformation rate at the mesoscopic level of the cell's neighborhood is spatially and temporally heterogeneous. The robustness of mechanical patterns results from their cumulative history along cell trajectories. Unsupervised classification of mechanical descriptor profiles was used to assess the homogeneity of biomechanical cues in cell populations. Further clustering of cell trajectories according to their cumulative mesoscopic biomechanical history during gastrulation revealed ordered and coherent spatiotemporal patterns comparable to that of the embryonic fate map.

The chaos of coral reefs: an assumption-free approach to causality, dynamics and predictions in ecosystems

ABSTRACT. Although there is a high risk of continued coral reef loss on a global scale, responses to widespread stressors at local and regional scales indicate that resilience to chronic stress is possible, but also variable. Coral reefs are complex systems that exhibit nonlinear behavior, including chaos, feedbacks, multistabilities, cascading effects, adaptation and emergent phenomena. Using traditional models to resolve the dynamical processes that control resilience is problematic due to error from excluded variables, incorrectly identifying mirage correlations as system drivers, and untestable assumptions about relationships between variables. Alternatively, a changing coral reef can be considered a trajectory through different states, whose change over time depends on previous states and is determined by a set of rules. We present a promising technique for understanding and forecasting ecosystems that is adapted from single-species Empirical Dynamic Modeling, using time series data to reconstruct nonlinear state space. This reconstruction preserves the topology of its chaotic attractor manifold, which represents a trajectory of linked variables through state space, allowing us to correctly discern shared causal drivers from interactions. Without the need for error-inducing model assumptions, this approach also outperforms many other tested models for forecasting system dynamics. We show ecosystem-scale predictions from simulated and real data that demonstrate the tremendous value of this tool for improving our understanding and management of coral reefs in a changing world.
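The state-space reconstruction step described above can be illustrated with a minimal sketch in the spirit of Empirical Dynamic Modeling: a time-delay embedding of a scalar chaotic series (here the logistic map as a toy stand-in for reef time series) followed by a nearest-neighbour one-step forecast. All parameter choices (series length, embedding dimension, library size) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def delay_embed(x, E, tau=1):
    """Time-delay embedding: rows are (x_t, x_{t+tau}, ..., x_{t+(E-1)tau})."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(E)])

# Scalar chaotic series from the logistic map (toy stand-in for reef data).
x = np.empty(600)
x[0] = 0.4
for t in range(599):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

E = 2
emb = delay_embed(x, E)          # reconstructed state space
targets = x[E:]                  # value one step ahead of each embedded point
lib, lib_y = emb[:400], targets[:400]        # "library" half of the data
qry, qry_y = emb[400:598], targets[400:598]  # points to forecast

# One-step forecast: predict the observed successor of the nearest
# library neighbour in the reconstructed state space.
preds = np.array([lib_y[np.argmin(np.sum((lib - q) ** 2, axis=1))] for q in qry])
rho = np.corrcoef(preds, qry_y)[0, 1]        # forecast skill
```

Because the reconstructed attractor preserves the deterministic structure of the map, even this crude single-neighbour forecast achieves high skill, which is the basic intuition behind attractor-based prediction.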

Dynamical approach to the phase transition in kinetically constrained spin glasses via population dynamics

ABSTRACT. By means of a population-dynamics algorithm we explore the statics and dynamics of the Fredrickson-Andersen (FA) and North-East (NE) kinetically constrained models in two-dimensional set-ups with L^2 sites; they correspond to strong and fragile glasses, respectively. We investigate these systems with three different boundary conditions: the FA with all the sites on the left boundary active (FAL), the FA with the corner active (FAC), and the NE with the corner active (NEC). The so-called cloning algorithm allows us to explore the dynamical phase transition occurring at a finite value of the biasing field s in the modified evolution of the master equation of the process, which scales as s = 1/L for the FAL and s = 1/L^2 for the FAC and NEC. With such algorithms we have access to, in principle, unphysical configurations corresponding to rare events. In particular we explore a 'less-active-than-average' regime where mobile defects propagate along the system more slowly than under the unmodified dynamics corresponding to the physical system. Finite-size effects are discussed and some heuristics about the transition are given.

A new approach to detecting sub-community structure in high-dimensional data
SPEAKER: Sehyun Kim

ABSTRACT. For complex systems described as networks, modularity maximization has emerged as a community detection method, alongside techniques such as PCA and network analysis, thanks to its intuitive concept and its potential for application to real systems, in spite of the resolution limit. With traditional methods, however, the sub-community structure may not be clearly revealed in many cases. For complex systems whose nodes are expressed as high-dimensional feature vectors, we propose a new procedure using archetypal analysis (AA) and a newly devised quality function for uncovering the multiscale community structure of the system; we visualize the structure with t-SNE and also use machine learning techniques for the optimization issues. In this study, we reveal macro- and sub-community structures of various complex systems, including a synthetically generated system, a financial system and a bioinformatics system, with the proposed approach and other traditional methods, and show that the proposed approach can overcome the limits of traditional ones in community detection.
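As a hedged aside on the modularity maximization mentioned above (this sketches the standard Newman-Girvan quality function, not the authors' newly devised one), modularity can be computed directly from an adjacency matrix; the toy graph below, two triangles joined by a single edge, is an illustrative assumption:

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j)."""
    k = A.sum(axis=1)                 # node degrees
    two_m = A.sum()                   # 2m for an undirected adjacency matrix
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles (nodes 0-2 and 3-5) joined by the single edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

good = np.array([0, 0, 0, 1, 1, 1])   # one community per triangle
bad = np.array([0, 1, 0, 1, 0, 1])    # arbitrary split across the triangles
q_good = modularity(A, good)          # 5/14, rewards the natural partition
q_bad = modularity(A, bad)            # negative, penalizes the arbitrary one
```

The natural partition scores Q = 5/14 while the arbitrary one scores below zero, which is the property that modularity maximization exploits when searching over partitions.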

Chemistry complexity

ABSTRACT. Complexity is a subject that is beginning to be important in chemistry. Historically, chemistry has emphasized the approximation of complex nonlinear processes by simpler linear ones. Complexity is now becoming a profitable approach to a wide range of problems, especially the understanding of life.

On A Generative Revolution in the Age of Complexity

ABSTRACT. Living in the Age of Complexity, “we are running against the hard wall of complexity” (Barabási, 2003, p. 6; emphasis added). This describes nicely the problem posed by complexity for both science and society at large. This state of the art describes the very ‘crisis of knowledge’ we are in nowadays (Cilliers, 1998, 2011; Jörg, 2014). The knowledge available has very much become a problem itself instead of the solution (see Müller-Prothmann, 2006, p. 14) to the deep problem of complexity for science and society. We seem to be very much imprisoned in our regular description of society (Wierzbicka, 2014). Helga Nowotny has described this state of the art as “the embarrassment of complexity” (Nowotny, 2013, 2016). We not only do not know what we do not know; we also do not know how to know what we do not know. According to the Santa Fe Institute, we still have no general theory of complexity available (SFI, 2015) for an adequate description of the complexity of our society. Some scholars, like Stuart Kauffman (2009), have recognized that we urgently need “a radically altered account of reality” (p. xv). It may be argued that for such an altered account of reality we really need to go beyond mainstream science (see Mitchell, 2011, p. 303). We urgently need a new science of complexity. The concept of complexity as we know it is insufficiently complex, unable to deal with the true nature of the complexity of reality, which is dynamic, multi-dimensional, and web-like. The so-called ‘paradigm change’ involving complex systems and complexity (Nowotny, 2016, p. 42) and the corresponding science of complexity “is still in its early stages” (Mitchell, 2011, p. 303), waiting to be developed. To deal with complexity, and to develop an adequate science of complexity which fully recognizes the true nature of the complexity of reality, we may actually need a new scientific revolution: that is, a generative revolution (Jörg, 2017).
This generative revolution may show the possibility of a new foundation for science: that is, a generative foundation (Jörg, 2017). This generative foundation implies a generative approach to complexity: an approach which fully recognizes complexity as generative, emergent complexity (Jörg, 2011, 2016; cf. Lichtenstein, 2014). We may open up new spaces of complexity in which complexity may operate as self-potentiating and self-perpetuating (see Rescher, 1998; and Arthur, 2015) within networks and their “dense and highly dynamic web of interconnections” (p. 132). From this we may derive a radically altered account of reality: an account in which the boundaries between “a reality that exists and a reality that is being made become blurred” (ibid., p. 132). Reality, then, may be taken as a new kind of generated reality (cf. Nowotny, 2016, p. 132). Finally, we may develop a science of complexity with “new kinds of complexity” (ibid., p. 132). Interestingly, such a science of complexity may be viewed as a verb-based science (Arthur, 2015, p. 25).

Life's butterflies: a simulation-based analysis of Conway's Game of Life's sensitivity to random perturbations

ABSTRACT. In this paper, I utilize a general-purpose GPU (GPGPU) implementation of Conway's Game of Life (GoL) to conduct a highly computer-intensive assessment of how sensitive GoL is to randomly introduced perturbations. Since GoL is fully deterministic and dependent on the game's initial configuration, I argue that it is relevant to differentiate perturbations of two types: 1) random slight changes in the initial conditions of a given model run, and 2) random alterations to the live cells of the simulation space at randomly chosen iterations, which resemble so-called butterfly effects. I run thousands of different GoL runs, each with its own pseudo-random number generating seed, which I call benchmark runs. Then I repeat each of these runs, also thousands of times, but introducing random changes at random iterations. Such changes can be just flipping the live-dead state of one or a few given cells, or introducing an oscillator or a glider into the simulation space. In all cases, I store the type of introduced perturbation, the iteration at which it was introduced, its relative position in relation to the mass of live cells at the moment of introduction, as well as the percentage of cells in each subsequent iteration that differ from their counterparts in the same-seed benchmark run. Doing so means processing dozens of millions of GoL cells, which is why I resort to highly parallel computing using Nvidia's CUDA to implement GoL on the GPU. The basic idea of the simulation is to measure the extent to which GoL is indeed sensitive to changes in initial conditions but, more importantly, to the occurrence of later butterfly effects. The more elaborate goal is to measure how much space and time modulate the effect of such butterflies: that is, to what extent are GoL iterations altered depending on where butterfly effects happen, and how long does it take for big changes to manifest relative to when perturbations were introduced?
Since it is well known that GoL's self-organizing criticality is itself sensitive to the size of the environment grid, to increase the robustness of the simulation I aim at implementing GoL grids whose sides are big enough that no living cell ever reaches the boundaries, which is a technical way to implement unbounded grids on the GPU. Also, I vary the pseudo-random generation of the initial conditions by using uniform, normal and Poisson distributions. If time permits, I also plan to test the simulation on Perlin-noise-based initial conditions and on well-known complicated initial configurations.
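The core of the perturbation experiment can be sketched serially in plain NumPy (a toroidal grid rather than the unbounded CUDA implementation described above; grid size, soup density, flipped cell and perturbation step are all illustrative assumptions):

```python
import numpy as np

def life_step(g):
    """One Game of Life step on a toroidal (wrap-around) grid."""
    # Sum of the 8 neighbours via shifted copies of the grid.
    nbrs = sum(np.roll(np.roll(g, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    return ((nbrs == 3) | ((g == 1) & (nbrs == 2))).astype(np.uint8)

rng = np.random.default_rng(42)
init = (rng.random((64, 64)) < 0.3).astype(np.uint8)   # random "soup"

bench, pert = init.copy(), init.copy()
diffs = []                                  # cells differing after each step
for it in range(60):
    if it == 10:                            # the "butterfly": flip one cell
        pert[32, 32] ^= 1
        flipped = int((bench != pert).sum())
    bench, pert = life_step(bench), life_step(pert)
    diffs.append(int((bench != pert).sum()))
```

Before the flip the two runs are bitwise identical (GoL is deterministic), so `diffs` is zero up to the perturbation step; the trajectory of `diffs` afterwards is exactly the divergence curve the paper proposes to measure at scale on the GPU.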

From To Be or To Have in the Situational Adaptive Complex System: An epistemological critique of the neoliberal theoretical path

ABSTRACT. The scientific ideal of the seventeenth to nineteenth centuries conceived the world as a completely deterministic system, and economic theories were consequently erected on that basis and remain valid for its study and application. In contrast to this method, in this work we address socio-economic interrelationships as a complex adaptive system, which we call a situation (or situational system).

The operation of a system as an interrelated set of teleological subsystem functions generates a self-organized dynamic interaction towards an emerging order. Given these interrelationships, emerging processes arise, making the system complex; and because it is homeostatic, we can speak of a complex adaptive system.

Taking the previous considerations into account, we tackle neoliberalism, which is characterized by economic, political and social positions against regulation or economic intervention from the state; in the face of inequality and individualism, it recovers the notion of individual freedom and laissez faire, laissez passer (the free market).

The situational trajectory is the concept of our system in time, its structure, and its adaptive capacity. In the first part of this work, we will deal with the historical reconstruction of this path.

Neoliberalism gained access to the previous situational structure through the ideological and cultural subsystem, as a catalyst that modifies the legal and political subsystem. The consequences that were pursued (and still are) were, apparently, purely economic. And yet, as is empirically verifiable, these consequences spread throughout the systemic structure.

In the context of the economic policy that started in the eighties, neoliberalism in Mexico was characterized by the liberalization of markets, including the financial sector; with the opening of this sector, we could observe the devaluation of productive work, an effect that spread to the ideological and cultural field. The concept of work broke down, as did its relation with the State; money and its flow to the productive field propagated virtual values completely removed from the creative source of value. An internal structure was generated which currently shows the symbolism of money and the ideological flow between two concepts: To Be and To Have.

Along with this vision, it becomes evident that unidisciplinary theoretical fetishism has had an impact on everything: political fetishism dissociates itself from the society it is meant to represent; economic fetishism deals in prices but not in social problems; and cultural-ideological fetishism sustains broadcast stereotypes of ethnic or racial segregation.

We assert the hypothesis that complex (social) reality cannot be understood from the theoretical approaches that support the neoliberal ideological model. It cannot be understood from unidisciplinarity, and therefore it cannot be tackled and treated from such approaches. Public sector problems are not economic problems; they are not political problems; nor are they problems of any other particular kind; they are holistic, transdisciplinary and dynamic. They are complex problems.

Competitiveness Analysis of Maize Producers in Chicontepec, Veracruz, Using Agent-Based Modeling

ABSTRACT. The objective of this research is to evaluate maize production in Chicontepec using agent-based modeling that describes the relationships between organization, support policies, climatic conditions, production costs, yield and marketing, in order to design a model to ensure competitiveness. The study considers the state (Veracruz) and local (Chicontepec municipality) levels, as well as the production levels of the most prominent countries, the leading Mexican states, the municipalities of greatest production in the state of Veracruz, and the characteristics of maize production in the municipality of Chicontepec. The research subjects were Chicontepec maize growers from Tejeda, Veracruz. The method was complex system analysis; the variables used in this research are based on the theoretical and contextual framework and the state of the art presented previously, which serve to evaluate maize production in the municipality of Chicontepec and to simulate its competitiveness through agent-based modeling with the NetLogo software. The results determined that the main maize-producing states in Mexico, by volume of production, are Sinaloa, Jalisco, Mexico, Michoacán, Chihuahua, Guanajuato and Veracruz: although Sinaloa sows fewer hectares than Chiapas, its productivity rate is the country's largest, 9.95 vs 1.62 (Agri-food and Fisheries Information Service, 2017). The Veracruz countryside is one of the engines that promote the development of the country; historically, the state of Veracruz is among the top seven maize producers in Mexico, and its main producing areas are San Andrés, Papantla, Soteapan, Isla, José Azueta and Playa Vicente, among which Chicontepec ranked 15th in 2015. The municipality of Chicontepec is one of the most productive and has great potential; however, the low producer price has led growers to switch to the cultivation of oranges.
The relevance of these findings is that ours is the first study of maize production in Chicontepec using agent-based modeling.

Complexity and pattern formation in wind energy in Oaxaca

ABSTRACT. The objective of this investigation is to show the interrelation between the economic, social and environmental impacts of a wind project. Wind energy is one of the most environmentally friendly sources of energy. However, there are questions about its benefits in the economy of the communities where it is installed, and in the case of the Isthmus of Tehuantepec, the uncertainty about its impacts on human health, flora, fauna and wáter. The lack of information about of wind farms impacts and the deficiency in the inclusion in decision-making have generated serious social conflicts in the Isthmus of Tehuantepec, which have culminated in the division of communities, legal processes, acrimony, and fights with the police and even death of people. The research method started from the literature review to identify the main economic, social and environmental impacts of a wind project and its interrelation. Qualitative analysis was used for the treatment of interviews carried out to development companies, local authorities, opposition organizations and academics to know the relationships that keep the impacts studied. This analysis offered information to develop the Forrester diagrams, and to model the relationships that exist between the impacts studied of the wind sector. The results show the existence of pattern formation between the economic, social and environmental impacts of wind energy. This information reveals that decision-making must occur from a systemic perspective, not only under the environmental or economic emphasis as has occurred in the study area. Complex systems are one opportunity to analyze the evolution of impacts across the time, to analyze wind energy development through different scenarios even before building a wind farm. The wind energy development can be analyzing as complex system, where a lot stakeholders are integrated, and anyone have different knowledge, interests, and have different perception about the wind energy impacts. 
The relevance of this research is that it is the first study of the economic, social and environmental impacts of wind energy in the Isthmus of Tehuantepec.

Pattern Analysis of Drug Trafficking at the International Airport of Mexico City

ABSTRACT. The objective is to analyze the concealment patterns that passengers use when attempting to bring drugs through the airport, in this particular case Mexico City. The research was carried out by conducting a network analysis and, above all, an analysis of the new methods of concealment that have been detected by the airport authorities and are currently used in our country. The importance of the study lies in the analysis of these patterns, using mostly public statistical data from the customs authorities and data compiled from the country's main newspapers. The method is network analysis between airlines, countries and drug-dealer profiles, using the software CYTOSCAPE 3.2. The results reveal that the entry of these products into Mexico, mainly cocaine, cannabis and pseudoephedrine, increasingly relies on innovative methods of concealment, from being introduced into the body to utensils made from the drug itself. It is certainly a difficult task for the customs authorities, because increasingly sophisticated instruments are needed to detect them in time. The relevance of this research is underlined in the government's fourth report of 2016: in the first seven months of 2016, 414 tonnes of cannabis were seized, a reduction of 25% compared to the 558 tonnes seized in 2015 (4th Government Report, 2016). It is undoubtedly a subject of great relevance at the national level. This is the first analysis using complex systems analysis of drug traffic in Mexican airports.

Complexity Analysis of Air Quality Management in Guanajuato State (Mexico)

ABSTRACT. The objective of this research is to study the models of air quality management applied in the five megacities of the state of Guanajuato. Air quality management has taken on a new and relevant approach with the Paris Agreement of 2015, because it includes a greater number of countries ratifying the agreement as well as other stakeholders such as civil society, industry, financial institutions, cities and subnational authorities. The importance of the study lies in treating air quality management models as complex systems formed by subsystems, using unconventional methods such as complex systems theory and structural modeling to explain their impact on air quality management in these cities. The method used was mixed: a) documentary research was applied to build the theoretical model of air quality management from official documents such as clean air acts, the UN climate change convention and the 2015 Paris Agreement; and b) structural modeling was applied, together with a comparative analysis with the case of Spain, to a set of thirteen manifest variables obtained from the public database of the World Bank. The results were parameters examined by applying the LART 2009 environmental management model, identifying eight instruments: international, legal, economic, political, cultural, educational, social and technological, which explain the alignment of the 2015 Paris agreements with local management in the cities; the case of Spain is taken for this study. Data were collected from the World Bank on greenhouse gas emissions, urban density, and agricultural and wildland extension to construct a structural model identifying the main factors related to climate change management in countries reporting their emissions in 1995, using the STATA software.
We also found two formative factors of two latent variables that explain: CO2 emissions per capita, urban population, use of fossil fuels and access to solid fuels, as well as agricultural land, jungle areas and exposure to PM2.5 emissions. The interpretation of this result, observing the indicators of 1995, reveals that the management of climate change was insufficient, because one of the two manifest variables, formed by the indicators agricultural surface, forest area and exposure to PM2.5, presents positive loadings for the agricultural area and exposure and a negative loading for the wildland area. The relevance of the findings is that no studies have been found regarding models of air quality management that include management variables; most refer to mathematical models for the prediction of pollution, taking meteorological aspects and criteria pollutants as variables.

Network Analysis of Scientific Collaboration among Mexican Environmental Researchers

ABSTRACT. The objective is the pattern formation analysis of an environment network from the National Polytechnic Institute in Mexico. Networks have acquired an important relevance as a vehicle for collaboration and knowledge generation with regard to finding solutions to environmental problems. The importance of the study lies in the analysis of the network, using graph theory to explain the network's structure. The coauthorship network reveals important structures that compose the scientific community's social network. The method was network analysis of three variables: betweenness centrality, clustering coefficients and node degree distributions. We collected information on the network members' scientific production in order to examine their collaboration with other researchers and analyzed their coauthorships. The collaborations of 231 researchers were analyzed in the scientific production of articles, books, book chapters and thesis direction in the 2011-2013 period. The researchers belong to 14 research centers of the IPN, which are members of the Environment Network (REMA). The software used was CYTOSCAPE 3.2. The results reveal that the network is in its beginnings, and the parameters betweenness centrality, clustering coefficient and degree distribution are low (Freeman, 2000, & Newman, 2003) with respect to a fully connected network. The results suggest that it will be necessary to review institutional policies in terms of resource allocation to encourage collaborative work, in order to increase the values of betweenness centrality, clustering coefficient and node degree.
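The three network measures named above can be computed with any standard graph library. A minimal sketch in Python with networkx (the study itself used CYTOSCAPE 3.2; the tiny graph below is purely illustrative, standing in for the REMA coauthorship data):

```python
import networkx as nx

# Toy coauthorship graph standing in for the REMA network (illustrative only).
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # a fully connected trio
    ("C", "D"), ("D", "E"),               # D bridges the trio to E
])

betweenness = nx.betweenness_centrality(G)   # who brokers between groups
clustering = nx.clustering(G)                # local triangle density
degrees = dict(G.degree())                   # collaborations per researcher
```

In this toy graph, node D sits on every shortest path to E, so it scores high on betweenness despite belonging to no triangle, which is exactly the kind of broker role the abstract's parameters are meant to detect.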


ABSTRACT. According to the catalogue of the National Institute of Indigenous Languages in Mexico, indigenous languages are an integral part of the cultural heritage of the nation and give the Mexican nation greater expressions of multiculturalism. This leads us to the questions: Should we save the speakers to promote their culture, or should we support the culture to help the speakers? What at first seems the same can sometimes be very different. In this paper we deal with the problem of Spanish in conflict with the indigenous languages of Mexico and ask whether language shift away from these languages will lead to disadvantages for their speakers. Our purpose is to analyze whether disadvantage is identical to inequality from a complex perspective. Using our theoretical framework of 'ecology of pressure', we analyze whether indigenous language speakers experience disadvantage in contrast with monolingual speakers of Spanish and, if so, whether disadvantage always coincides with inequality.

Recommendation Algorithm of Social Policy Based on Risk Analysis and Early Warning Systems.

ABSTRACT. In order to improve the targeting of social programs, the System of Integral Social Information (Sistema de Información Social Integral - SISI) strives to create a platform to help analyze multi-dimensional data not usually taken into account when developing social policy in Mexico. The proposed approach is to create various compound indicators based on different early warning systems tailored to tackle areas of interest in social policy; as such, all indicators would create a profile of geographical areas and help target social programs in a more thorough manner.

Complex systems for studying the history of educational institutions

ABSTRACT. Complex systems have made advances in the social sciences and the humanities in recent decades. This paper presents a proposal to study the origin and evolution of one of the most important educational institutions in our country, the National Preparatory School (ENP), considered the origin of the National Autonomous University of Mexico. The ENP was created in a social, political and economic environment on the edge of chaos, between order and disorder, where the teaching staff worked as a small world network generating a process of self-organization, through which their educational model was reproduced in other regions of the Mexican nation and with the capacity to help in the creation of other institutions.

Complex Interactions between Social and Environmental Processes in the Valley of Mexico
SPEAKER: Edgar Gaytan

ABSTRACT. I researched, through an interdisciplinary and complex adaptive systems approach, the explanatory weight that the environment has in the manifestation of specific diseases associated with chronic stress. The study focuses on environmentally and socially vulnerable human populations with recurrent exposure to disaster risk, particularly severe flooding. I propose to describe the interwoven complexity in the interaction between environmental and social systems. Chronic stress is an emerging bio-cultural process that affects differential urban co-morbidity and mortality in human populations exposed to socio-environmental risk factors. With the support of multivariate statistical modeling procedures, I analyzed the relationship between adaptive responses and the socio-environmental factors that contribute to the expression of comorbidity associated with living conditions in vulnerable and non-vulnerable urban contexts. Allostatic load was measured in two urban communities located within designated vulnerable zones at high risk of flooding in the region of Valle de Chalco, Mexico, and in two control populations located in the Coyoacán and Tlalpan municipalities of Mexico City. The methodology combines different qualitative and quantitative data analyses. The variables I considered were constructed from anthropometric and physiological parameters associated with the presence of stress as a factor in chronic diseases such as hypertension, cardiovascular disorders and obesity, among others. The methodology of the thesis focuses on the review and analysis of the processes that result from the interactions between the social system and the environment. I qualitatively analyzed the perceptions and social representations of spatial context at the local level in four neighborhoods and extracted physiological biomarkers from a sample of 160 people to measure the response to perceived stress and environmental pressures.
I concluded that the environment plays an important role in the manifestations of non-communicable diseases such as hypertension and obesity.

Comparative institutionalization for Open Government.

ABSTRACT. This paper proposes to compare how Open Government (OG) has been implemented by the countries participating in the Open Government Partnership (OGP), as well as to analyze institutionalized practices related to the issue in all other countries.

Autophagy is essential for the symbiotic relationship between Phaseolus vulgaris and both rhizobia and arbuscular mycorrhizal fungi

ABSTRACT. Most plants establish symbiotic interactions with soil microorganisms such as arbuscular mycorrhizal (AM) fungi and/or rhizobacteria. Such interactions provide nutrients that are not readily available in the soil for plant uptake. These mutualistic interactions require the regulation of different intracellular trafficking pathways. Phosphatidylinositol 3-kinase (PI3K) Class III is a key trafficking regulator, mediating the synthesis of phosphatidylinositol 3-phosphate (PI3P). PI3K is known to be regulated by Autophagy Gene 6 (ATG6) through the formation of a complex. Although both proteins have been studied as key regulators in plant immunity, their role in symbiotic interactions is poorly understood. Here, we report that down-regulation of Phaseolus vulgaris PI3K (PvPI3K) severely impairs both the nodule and mycorrhiza symbiosis pathways. Concordantly, PvPI3K down-regulation affects early stages of the symbiotic responses involved in root colonization. Furthermore, we show a drastic reduction in transcript accumulation of autophagy-related genes in the PvPI3K down-regulated background. Likewise, loss of function of the autophagy gene Beclin1/Atg6 phenocopies PvPI3K down-regulation in transgenic roots. Our findings show that autophagy-related proteins are crucial for the mutualistic interactions of P. vulgaris with beneficial microorganisms.

Complex Social Network Analysis of Key Player’s Role on Conversation Dynamics

ABSTRACT. The new Online Social Networks (OSN) are examples of complex social systems that reflect continuous interacting behavior among individuals and organizations. All the prominent actors of the Mexican Entrepreneur System are part of Twitter (an example of an online social network platform), and they constantly interact through this network by publishing the most relevant content (ideas) that each of these actors produces or shares.

In this paper, we define an explanatory model of the behavior of the Mexican Entrepreneur System through the Twitter online social network. We aim to model a complex system of Mexican Entrepreneur System actors that interact through microblogging (tweets) in order to grasp an emergent social phenomenon: a behavior pattern that correlates the conversation topics and the flow of information through the network by understanding the network's central key players.

To achieve this, we rely on a Data Product Architecture (DPA) that allows us to model thousands of conversations gathered from tweets of Mexican Entrepreneur System actors on a streaming basis. The idea is to extend the advantages of a data science product architecture that can extract patterns from large data volumes and model them as a heterogeneous multiplex network in a graph-database engine.

Like many real-world networks, the graph model that we propose is defined by multiple types of actors and distinct types of interactions, which together define a complex graph. We demonstrate that conversation flows and the spread of information are well defined by central key players in the system. These so-called key players give the network a hierarchical structure, making the propagation of information asymmetrical across different layers. Using our model, we show that key players perform a central role in the information dynamics of the network.

In addition, by capturing the conversations of each key player in this network, we perform Natural Language Processing (NLP) analysis to obtain the topics and sentiments of conversations. By following the propagation of conversation topics through the network, we can understand not only how the information flows but also what kind of topic it comes from. Moreover, by applying dynamic network analysis we detect and represent structures of the social system that help explain the occurrence of the collective behavior as a whole.

Dynamics of saturated granular media discharged from a silo

ABSTRACT. The flow rate of grains discharged from a silo is independent of the height of the granular column above the outlet. In contrast, for a liquid, the flow rate depends on the hydrostatic pressure. If both materials are discharged simultaneously, however, a model describing such dynamics is missing from the literature. On one hand, Darcy's law is used to describe steady-state water flow through sand columns. It states that the water flux in saturated porous media is linearly proportional to the hydraulic gradient. Nevertheless, for low-permeability porous media, Darcy's law is not adequate because of the strong fluid-solid interaction, which results in non-linear flux-gradient relationships. On the other hand, the Kozeny-Carman relation is used to calculate the pressure drop of a fluid flowing through a packed bed of solid particles. This approach considers laminar flow in a collection of capillaries, where Poiseuille's law for laminar viscous flow can be used. However, if the packed bed is also moving, there is no simple relation to describe the granular or liquid flows.

Here we studied the dynamics of saturated spherical glass beads discharged from a vertical silo. We found that in this case the flow rate depends on the column height, but also on the grain size and aperture diameter, with a non-linear behavior. We estimated the permeability of each packed bed experimentally, then solved numerically the Navier-Stokes equations for an incompressible laminar flow in the cylinder, and finally compared the theoretical results with the experiments.
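As a concrete reference point for the permeability estimates mentioned above, the textbook Kozeny-Carman relation for a packed bed of monodisperse spheres, k = ε³d²/(180(1−ε)²), together with the Darcy flux it feeds into, can be evaluated directly. This is a sketch of the standard formulas only, not of the experimental fits in this work (the bead diameter and porosity below are illustrative values):

```python
def kozeny_carman_permeability(d, porosity):
    """Permeability k [m^2] of a packed bed of spheres of diameter d [m],
    via the Kozeny-Carman relation k = eps^3 d^2 / (180 (1 - eps)^2)."""
    return porosity**3 * d**2 / (180.0 * (1.0 - porosity)**2)

def darcy_flux(k, mu, dp_dx):
    """Darcy flux q [m/s] for permeability k [m^2], dynamic viscosity
    mu [Pa s] and pressure gradient dp_dx [Pa/m]: q = -(k/mu) dp/dx."""
    return -k / mu * dp_dx

# 1 mm glass beads at a typical random-packing porosity of 0.36,
# saturated with water (mu ~ 1e-3 Pa s) under a hydrostatic-scale gradient.
k = kozeny_carman_permeability(1e-3, 0.36)
q = darcy_flux(k, 1e-3, -9.81e3)
```

For these values k comes out in the 10⁻¹⁰ m² range, a useful sanity check against the experimentally estimated permeabilities before moving to the full Navier-Stokes solution.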

Towards social network analysis for student project team formation

ABSTRACT. In some undergraduate courses in computer science or engineering, it is necessary to form student teams to carry out projects. For many teachers, classroom projects are a learning strategy that can help develop the skills proposed for the course. An important skill to practice is teamwork, especially in engineering, where the student is expected to learn how to collaborate with peers. There are many ways to partition a student group into teams, but we would like each team to be qualified to develop the proposed tasks successfully. One way is to create groups randomly, but some of them could be unproductive and fail. Another way is to let groups form voluntarily, but equal success for all cannot be assured. There are also strategies based on student profiles, such as personality traits, learning styles or other information, which can be grouped through clustering methods. Finally, we can use social networks to analyze the relationships between students and discover other ways to create groups for teamwork. A sociogram is a visual social network description of individuals' preferences for their peers. It is used to build a social network in order to find social traits or identify notable individuals. For example, we can use it to discover structures, applying social network analysis algorithms to find triads of individuals with similar attributes. Triads express the local structures in social networks, and many theories about social relations can be tested using hypotheses about the triad census. In this work-in-progress paper, we used sociograms to represent social links between students. In our case study, each student expressed their preferences for working with three other peers at the beginning of the course, and this information was used to detect cliques within the social network using network analysis algorithms.
Although we first focused on clique structures in order to describe the naturally formed groups, we then tried to find triad structures, because we are especially interested in the behavior of three-member teams. Finally, we compared the network analysis results against groups formed by a teacher in a real course, and we discuss the advantages and disadvantages of student project teams from a complex networks analysis approach.

Modelling immune response dynamics as predator-prey systems

ABSTRACT. Predator-prey models, especially Lotka-Volterra-type models, are one of the oldest mathematical approaches to ecological dynamics. They are even used, in addition to other techniques, to model immune response dynamics, in which the immune system acts as the predator and the parasite (or parasites) acts as prey. In this work, we analysed five immune functional responses and four parasite growth functions from existing and new predator-prey-type models. The models we considered for the immune system functional response are: Lotka-Volterra, Holling type II, Holling type III, DeAngelis-Beddington and Crowley-Martin. The last two had not previously been proposed as immune system models. For the parasite growth functions we considered exponential growth and the logistic model, both with different parameters for micro- and macro-parasites. Our preliminary results show that the stationary stable states for the parasite remain the same for every parasite growth type, in spite of changes in the immune system functional response; however, the immune response stable states change depending on both aspects of the function. Finally, we performed a symbolic regression approach using time-series data from immune and parasite populations inside an individual. To our knowledge, this is the first method capable of inferring such mathematical models directly from data. Given the difficulty of calculating some of the parameters involved in our equations, this technique proved to be the most exact and achievable.
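The simplest combination listed above, a Lotka-Volterra functional response with logistic parasite growth, can be integrated numerically in a few lines. A minimal sketch with SciPy, using illustrative parameter values rather than anything fitted in this work:

```python
from scipy.integrate import solve_ivp

def immune_predator_prey(t, y, r=1.0, K=10.0, a=0.5, b=0.2, d=0.1):
    """Parasite P grows logistically; the immune response E acts as predator."""
    P, E = y
    dP = r * P * (1 - P / K) - a * P * E   # logistic growth minus immune killing
    dE = b * P * E - d * E                 # expansion on contact minus decay
    return [dP, dE]

# Start from a small infection and a resting immune response.
sol = solve_ivp(immune_predator_prey, (0, 500), [0.1, 0.1], rtol=1e-8)
P_end, E_end = sol.y[:, -1]
# Interior steady state: P* = d/b = 0.5, E* = (r/a) * (1 - P*/K) = 1.9
```

With logistic prey growth the interior equilibrium is a damped spiral, so the trajectory settles at the stationary parasite load P* = d/b, consistent with the observation above that the parasite's stable state is set independently of the immune functional response details.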

Dipole-Dipole And Dipole-Ion Intermolecular Interactions: A Complex Approach

ABSTRACT. Dipole-dipole and dipole-ion intermolecular interactions play an important role in a wide variety of phenomena in different areas, e.g. the behaviour of dielectric materials in electronics, as well as in biological processes such as a polypeptide's choice of structure.

This project presents a model of molecular interactions of the aforementioned type, accompanied by a simulation based on cellular automata. It also includes a comparison of this model and its results with one determined by molecular dynamics (MD) calculations, as well as with experimental data.

Its main goals were to gain a better understanding of the larger-scale phenomena that may arise from this kind of interaction, so as to have a more complete picture of the systems involved at a larger scale, and to determine whether cellular automata modelling can contribute to their study.

The complexity of the educational phenomenon. The case of the basic mathematics teaching

ABSTRACT. The collective (students, teachers and their interactions) as a complex phenomenon has been studied in education in recent years. A complex approach considers the classroom as an environment where the students cover their learning needs, and where the prevailing knowledge, teacher participation and ways of communication are established.

In the case of mathematics education, several studies have been interested in the collaborative learning of students, including creativity and problem solving. One way of approaching the teaching-learning process is to study the development of collective ideas while depersonalizing individual contributions; this implies the emergence of new forms of assessment.

This work shows some empirical results about the tendencies of subjects involved in a mathematical problem situation and a characterization of the obstacles to solving it; for example, the presence of obstructions coming from semantics over syntax and vice versa, the provision of intermediate senses, and the impossibility of triggering operations that could be performed moments before, among others.

Using agent-based modeling, we propose an abstract model of a classroom to analyze collective learning, and we propose a mechanism to model the obstacles found in the empirical results. It considers two representative agents, the student and the teacher. The relations among them are established by communication channels through which "knowledge units" flow. Assessments are the feedback mechanism that measures the performance of the agents.

The Phenotype-Genotype-Phenotype Map

ABSTRACT. Here we introduce a robust mathematical and data analytic framework for a mechanistic explanation of phenotypic evolution that is conceptually rooted in developmental evolution theory. We respond to the lack of evolutionary models that integrate multiple simultaneously-occurring mechanisms of inheritance with developmental mechanisms in order to explain the origins of evolutionary novelty. We explore a re-conceptualization and an associated mathematical formalism of the Phenotype-Genotype-Phenotype (PGP) Map, which is based on Laubichler & Renn’s framework for extended evolution. Conceptually, rather than to begin with the genotype, as is the case with the genotype-phenotype map, we instead begin with a phenotype—an agent in Laubichler and Renn’s extended regulatory network model. A phenotype can be a single trait, a complex of traits, an organism, or a system at any scale. The phenotype is then “decomposed” into a unit of inheritance (genotype, or “features”) which passes the generational divide and is then “reconstructed” via developmental processes. Examples of features include, but are not limited to, gene regulatory network motifs, specific interactions between molecular agents (e.g. transcription factor modules), developmental mechanisms, epigenetic interactions, and of course, an organism’s genotype. This abstraction avoids later post-hoc assumptions about the genotype-phenotype map in exchange for a model of phenotypic evolution that places the explanatory power in the processes of inheritance and development. The PGP Map framework is thus capable of uniting the proximate/mechanistic explanation with the evolutionary explanation by providing a mechanistic explanation of phenotypic evolution. 
To accomplish this, we have developed a mathematical and associated computational framework for the PGP Map based on digital signal processing (DSP) and wavelet analysis, as it ensures that the conceptual framework, mathematics, and computational implementation are as identical in structure and logic as possible. The framework integrates concepts and methods from wavelet theory, machine vision, and graph theory and is thus a flexible tool that facilitates the conceptual interpretation and multi-scale modeling of known phenomena of phenotypic evolution (e.g. multiple mechanisms of inheritance, gene regulatory network dynamics, among others). The PGP Map is implemented in TensorFlow, a machine learning interface used for data analysis via custom designed computational graphs. This makes the PGP Map amenable to empirical test by allowing for the integration of multiple types of biological data, such as single-cell genomics and epigenomics data, gene expression data, and/or phenotype-environment interaction data, to list a few.

Knowmap: Antidisciplinary cartographies for tackling complex problems

ABSTRACT. Approaching complex problems for research and collective intervention purposes requires a self-organization process between heterogeneous agents that regularly gives rise to conflicts in terms of ideology, language, terminology, techniques and methods, mostly because of epistemological contrasts among fields of knowledge. These conflicts have a direct impact on the coordination of agencies and on the way strategies are varied, selected and adapted, enabling or obstructing the flow of interactions and its productive results.

This paper conceives of such conflicts as disciplinary interaction processes that bring into play a series of forces, interests and positionings of both individual and collective human identities related to a disciplinary formation. Disciplinary interaction has different orientations, such as mono-, inter-, multi-, trans- and metadisciplinarity, and it represents in itself a complex problem related to the modes of knowledge production and small-group dynamics.

To face this scenario, the author proposes to design a strategic instrument for disciplinary interaction called the "Knowmap", based on the following assumptions: (i) research and intervention for complex problems require the formulation of systems-oriented strategies to be developed collaboratively; (ii) in disciplinary interaction processes, human agents' profiles are not determined by their disciplinary formation, so it seems pertinent to describe them as "knowmads", a term coined by John Moravec as an evolution of Peter Drucker's "knowledge workers"; (iii) cartographic techniques of collective mapping are useful for deploying approaches to, and representations of, problems that involve a multiplicity of agents, systems and interactions; (iv) the metaphor of the stratified map allows the identification of different modalities of human creativity in which agents can interact. As a reference, we take the four modalities deployed in Neri Oxman's Krebs Cycle of Creativity (KCC): Science, Engineering, Art and Design. In the same way that the KCC proposes to understand human knowledge as intellectual energy (CreATP), this paper aims to track and represent the flow of strategies, interests, conflicts and agreements among diverse agents by using visual thinking tools such as cartographies developed in group sessions to tackle complex problems.

Using network theory to characterize natural languages
SPEAKER: Diego Espitia

ABSTRACT. The study of natural languages from the complex systems point of view has attracted the interest of researchers in different fields in recent years. In this work we use network theory to elucidate some of the complex behaviour behind natural languages. After constructing the co-occurrence and visibility word networks, we use different network-based measurements, such as the degree distribution, distance distribution and clustering coefficient, to study texts written in natural languages (Spanish, German, English, Arabic, Turkish, Russian, French) as well as a text written in an unknown alphabet or language (the Voynich manuscript). We show how the analysis of these networks captures some statistical properties of the languages.
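A co-occurrence word network of the kind used here links words that appear near each other in the text. A minimal construction sketch in Python with networkx, using a toy sentence in place of the corpora of the study (the window size and the network measures shown are standard choices, not necessarily those of this work):

```python
import networkx as nx

def cooccurrence_network(words, window=2):
    """Link each word to the distinct words within `window` positions of it."""
    G = nx.Graph()
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            if w != words[j]:
                G.add_edge(w, words[j])
    return G

text = "the quick brown fox jumps over the lazy dog".split()
G = cooccurrence_network(text, window=2)   # window=2: adjacent words only

degree_dist = sorted(d for _, d in G.degree())   # degree distribution
avg_clustering = nx.average_clustering(G)        # clustering coefficient
```

Even in this tiny example the repeated word "the" already accumulates a higher degree than the words that occur once, which is the mechanism behind the heavy-tailed degree distributions typically reported for word networks.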

Hurst Exponent and its use in Financial Markets. An application to exchange rate yields.

ABSTRACT. In this research, the behavior of MXN/USD (Mexican Peso / US Dollar) exchange rate yields is studied in terms of the conditions needed for them to be analyzed with conventional risk analysis methods, in order to validate the use of those methods.

This research involves the analysis of the following features:

• the kind of distribution of the statistical sample,
• the similarity between its variations and simple Brownian motion, and
• its persistent behavior, if any.

In order to analyze these features, techniques involving statistical and fractal analysis were used: heavy-tailed (a.k.a. leptokurtic) distributions, fractal geometry, fractional Brownian motion and, especially, the Hurst exponent, including how to obtain it by means of computer algorithms using the rescaled-range method, to determine the behavior of the analyzed information: random, persistent or anti-persistent.
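The rescaled-range (R/S) estimation of the Hurst exponent can be sketched in a few lines. This is a minimal textbook implementation, not the software developed for this work; the doubling window sizes and the log-log regression step are standard choices:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one window of a (return) series."""
    dev = np.cumsum(x - x.mean())
    return (dev.max() - dev.min()) / x.std()

def hurst_exponent(series, min_window=8):
    """Estimate H as the log-log slope of mean R/S against window size.
    H ~ 0.5: random; H > 0.5: persistent; H < 0.5: anti-persistent."""
    x = np.asarray(series, dtype=float)
    sizes, rs = [], []
    size = min_window
    while size <= len(x) // 2:
        windows = [x[i:i + size] for i in range(0, len(x) - size + 1, size)]
        rs.append(np.mean([rescaled_range(w) for w in windows if w.std() > 0]))
        sizes.append(size)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h_noise = hurst_exponent(rng.standard_normal(4096))  # near 0.5 for white noise
h_walk = hurst_exponent(np.cumsum(rng.standard_normal(4096)))  # strongly persistent
```

Note that the plain R/S estimator is biased upward for short series, which is why applying it to actual yield data, as done here, usually calls for the kind of validation against known processes sketched above.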

The results obtained show that the MXN/USD yields have a heavy-tailed distribution, in contrast with the normal distribution assumed as an initial condition by some risk analysis methods, and that the behavior of their variations is similar to fractional Brownian motion, as determined by the associated Hurst exponent; the rescaled-range method as a way to calculate this variable is also emphasized.

The contribution of this research lies in the addition of algorithms and concepts related to fractal geometry to support our hypothesis, including their interaction with statistical concepts and the software developed to estimate the Hurst exponent; the use of this software could be extended to analyze further financial variables.

Finally, it is worth mentioning the relevance of the Hurst exponent as a valuable tool for the evaluation of financial assets' risk, based on its relation to models that evaluate this variable, such as VaR (Value at Risk).

Building rhetorical bridges for real collaboration: the epistemic value of metaphors in interdisciplinary spaces and its pertinence for better practices in multidisciplinary groups

ABSTRACT. When the relations between human knowledge and what we call the real or "the world" are ignored in scientific research, and this debate is considered a matter only for philosophers, epistemological obstacles and their methodological consequences can alter our findings. Two restrictions follow in practice: (i) several limitations on our research objects and their construction, and (ii) quality deficiencies in interdisciplinary groups. The first can be seen as a negative effect on our aspiration to ideal, holistic, cross-disciplinary approaches to complex systems; for the second, knowing the similarities among the disciplinary lifeworlds (Lebenswelten) present in our academic collaborative groups could be very helpful for methodological handbooks. The use of tropes, specifically metaphors, as a classical form of epistemic access to the world, is in this context a relevant object of research. Thus, we look into two interacting complex systems: (i) the neuronal and symbolic scaffolds involved in certain forms of cognitive access to the world, relevant for meaning construction in disciplinary frameworks and scholars' points of view, and (ii) collaborative "spaces" and the ways of naming those sites of cross-disciplinary work, in many cases institutionalized and immeasurable. In that sense, both systems and their external and internal interactions are conceptualized through metaphor structures, in line with the spotlight of our epistemic interest.
Our findings are the result of a one-year study sponsored by two universities and carried out by three junior scholars. One of its products is presented at this Congress: a conceptual matrix showing nodes of conceptualization of the word 'space' across several case studies within the sponsoring institutions, Universidad Nacional Autónoma de México and Universidad Autónoma Metropolitana, and among the agents (individuals, groups of individuals, or particular disciplines) involved in their crossdisciplinary, collaborative academic groups. Finally, our discussion of the conceptual matrix in the poster presentation explores the possible influence of these nodes of rhetorical figures, as a powerful epistemic tool, on future handbooks of mixed and interdisciplinary methodologies and practical collaboration, and their incidence on better practices for multidisciplinary groups on the way toward crossdisciplinary investigations.

Discovering Epistatic Interactions of Continuous Phenotypes Using Information and Network Theory

ABSTRACT. Epistatic genetic interactions are typically ignored in genome-wide association studies because of the underlying mathematical and computational complexities. One proposed method, from Hu et al. 2011, uses information and network theory to identify important single nucleotide polymorphisms (SNPs) that engage in epistatic behavior. Hu uses Information Gain (McGill 1954) as a measure of the amount of information gained only from the combined effect of two SNPs, as shown in the equations below. A and B represent the allelic values of two SNPs, C is a discrete variable representing the phenotypic status, I(x;y) is the mutual information of x and y, and I(x|y) is the conditional entropy between x and y.
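For the discrete case, the Information Gain IG(A;B;C) = I(A,B;C) − I(A;C) − I(B;C) (the interaction-information form) can be sketched directly from sample counts; this is an illustration under that standard identity, not the authors' implementation.

```python
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    """I(X;Y) in bits from paired samples, via empirical frequencies."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def information_gain(a, b, c):
    """IG(A;B;C): information about phenotype C available only from
    the joint genotype (A,B), beyond the two main effects."""
    ab = list(zip(a, b))
    return mutual_info(ab, c) - mutual_info(a, c) - mutual_info(b, c)
```

On a purely epistatic (XOR-like) toy genotype, the main effects vanish while the pairwise term carries one full bit.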

This method works well for discrete phenotypes but does not extend to continuous phenotypes. We use the Kullback–Leibler (KL) divergence as an approximation of the continuous entropy. The second form of Information Gain can be thought of as the difference between the mutual information of A and B with a fixed C and with a variable C. This form has a natural extension using the KL divergence, shown below, where P is the probability distribution from the data.

We combine the method from Hu with the KL divergence and test our model using a toy dataset of 3000 samples and 30 SNPs. The allele values for each sample were chosen assuming Hardy–Weinberg equilibrium, using a random value less than 0.5 as the minor allele frequency. We then calculate the Information Gain between every pair of SNPs. We purposely assign phenotypes as a linear combination of SNP1, SNP2, and SNP1*SNP2, with respective coefficients 0.1, 0.1, and 5, along with a small random normal error. This process forces the continuous phenotype to be very large if both SNP1 and SNP2 are present, but very small if not.

We then build a network for each dataset in which the nodes are SNPs and the edge weights are the Information Gain between them. We analyze the network by first thresholding the edge weights and calculating basic network properties, such as the size of the network, the connectivity, the size of the largest component, and the node and edge distributions. The final cutoff for thresholding is decided by a 100-fold permutation test as the null model. In each permutation, the phenotype class is randomized, the Information Gain is recalculated, and the network is rebuilt. The cutoff is determined by finding the threshold at which the topological properties differ most from the null model, with a p-value < 0.5.
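A bare-bones sketch of the thresholding step and one of the monitored properties (largest connected component) follows; it is illustrative only, since the study also tracks connectivity and degree distributions against the permutation null.

```python
def threshold_network(ig, cutoff):
    """Keep only SNP pairs whose Information Gain exceeds the cutoff.
    `ig` maps (snp_i, snp_j) pairs to edge weights."""
    return [pair for pair, w in ig.items() if w > cutoff]

def largest_component(edges, nodes):
    """Size of the largest connected component of an undirected graph,
    found by depth-first search over an adjacency map."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp += 1
            stack.extend(adj[v] - seen)
        best = max(best, comp)
    return best
```

Sweeping the cutoff and comparing such properties between real and permuted networks selects the final threshold.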

We then evaluate the thresholded network and rank the SNPs using degree, betweenness, and closeness centrality. In our simulation, SNP1 and SNP2 were identified as the nodes with the most epistatic influence, thus matching our linear model. To further validate our model, we will apply our method to a dataset with a continuous phenotype and then investigate the associated genes and pathways of each ranked SNP for interactions reported in the biology literature.


ABSTRACT. A water jet impacting on a pool generates bubbles due to air entrapment during the deformation of the air–liquid interface. In this work, we study the formation of air bubbles when a jet of grains hits the water surface. Once a bubble is created, the grains stick to its surface by capillarity, forming a granular capsid. Using fluid dynamics, we describe the bubble formation and analyze the interaction of the three media involved in the phenomenon: liquid, air, and grains. We performed experiments at different impact velocities, grain sizes, and liquid surface tensions, and analyzed the different regimes in terms of the Weber and Froude numbers.

Dynamics of a repulsive granular gas

ABSTRACT. When a container with solid beads is shaken, the particles move in a way that recalls the typical image of a gas in kinetic theory. Vertically vibrated granular materials are called "granular gases" and exhibit a wide variety of phenomena: clustering, bouncing beds, undulations, phase coexistence, etc.; these phenomena have been extensively reviewed in the literature. In a typical granular gas, the particles interact only through contact forces. In this work we study experimentally the physics of a granular gas formed by repelling magnetic particles. The system consists of cylindrical magnets inside a two-dimensional cell, with their dipoles oriented in the same direction to produce repulsive interactions among them, so that the particles never collide. We analyze the dynamics and the collective phenomena that appear under this long-distance interaction, as a function of the vibration amplitude and the volume fraction.

Analyzing the Malaysian Twitter Community in Response to the 2014 Floods

ABSTRACT. Malaysians are highly prolific on Twitter; indeed, Malaysia has been said to have the second-highest per-capita Twitter usage of any country. Twitter has great potential to be useful in times of disaster, when accurate information is of utmost importance. We investigate the use of Twitter for this purpose in Malaysia, as well as the behavior and patterns of Twitter usage in the face of such events. On 27 December 2014, Malaysia was hit by its worst flooding in 30 years, displacing an estimated 160,000 people from their homes. Twitter data with the keyword 'banjir' (flood) was obtained for 4 days starting 27 December 2014. Floods were occurring in both Malaysia and Indonesia, so the first classification step was to separate Malaysian from Indonesian tweets. The second was to distinguish emotional from informational tweets, and the last was to determine whether the informational tweets were facts or rumors. We see different patterns of emotional and informational tweets emerging from the Malaysian data in contrast to the Indonesian data, due to the different flood occurrences in the two countries and the time of day at which the tweets were posted. The results reflect local tendencies: one can see the trend of lunchtime tweets and night owls. We also identified some rumor-debunking tweets, as well as some key players in the network obtained from the dataset.
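The first classification step (Malaysian versus Indonesian Malay) could be sketched as a count of variety-specific marker words; the word lists and function below are illustrative placeholders, not the classifier actually used in the study.

```python
def classify_tweet(text,
                   my_markers=("kat", "takde", "macam mana"),
                   id_markers=("nggak", "banget", "gimana")):
    """Tag a 'banjir' tweet as Malaysian ('MY') or Indonesian ('ID')
    Malay by counting marker words typical of each variety.
    The marker lists are toy examples for illustration only."""
    t = text.lower()
    my = sum(m in t for m in my_markers)
    idn = sum(m in t for m in id_markers)
    if my > idn:
        return "MY"
    if idn > my:
        return "ID"
    return "unknown"
```

A production classifier would use user location, language models, or supervised learning rather than raw substring counts.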

An axiomatic proposal for cooperative ad hoc networks

ABSTRACT. In response to several of the challenges posed by the coordination of the tasks that people and their social organizations carry out today, a vast range of methodological proposals has been made with the aim of collectivizing knowledge creation and decision-making processes [1] [2]. In the solutions, diffuse lines separating individual interests and preferences from global ones [3] intervene, opening the way to well-known social paradoxes that expose the tensions between individual and collective rationality [4]. It is then that promoting the emergence of cooperation between individuals (or communities) becomes important for achieving high-value objectives when, in a spontaneous, random, or planned way, an event of common interest arises.

The mechanisms that seek to promote the emergence of cooperation result in a set of formal models with possible strategies for rational decision-making, coupled with reputation mechanisms: based on incentives (or virtual payments, which have been modeled as credits), on emergent links, on mixed mechanisms, on punishment, and so on. Serious challenges remain in addressing the problems of distribution, and of justice in distribution [5], of the resources required for the execution of a specific task. This article proposes to model the provision of a service in a communications network through the execution of collective actions, where a community of agents manages the service by generating coordination processes and cooperation among its members.

This proposal aims to explore an axiological model for the dynamics of knowledge within the decision-making process between two or more communities of socially inspired artificial agents. From a conceptual model of each agent, its computational implementation is evaluated through a block structure that defines a finite automaton demarcating the agent's rational behavior within the limits imposed by the group's negotiation, adapting for this purpose two conditioning factors widely studied in the social sciences: trust and social reciprocity. A second dimension of analysis is proposed for evaluating the inter-community negotiation scheme, where a coalitionist view is established and observations are made regarding the description of the negotiating community's activities, its hierarchical chain, its level of centralization/decentralization, its communication, and its autonomy of decision. These concepts are treated within the design of an ad hoc telecommunications system known as "TLÖN".

[1] H. Nurmi, "Approaches to collective decision making with fuzzy preference relations," Fuzzy Sets Syst., vol. 6, no. 3, pp. 249–259, 1981. [2] K. A. McHugh, F. J. Yammarino, S. D. Dionne, A. Serban, H. Sayama, and S. Chatterjee, "Collective decision making, leadership, and collective intelligence: Tests with agent-based simulations and a Field study," Leadersh., 2016. [3] B. Arfi, "Linguistic fuzzy-Logic social game of cooperation," Ration. Soc., vol. 18, no. 4, pp. 471–537, 2006. [4] A. K. Sen, "Rational Fools: A Critique of the Behavioral Foundations of Economic Theory," Philos. Public Aff., vol. 6, no. 4, pp. 317–344, 1977. [5] A. Sen, "Rawls versus Bentham: An axiomatic examination of the pure distribution problem," Theory Decis., 1974.

Modelling interaction using BDI agents

ABSTRACT. New technology has revolutionized the availability of information, making the interaction between users and systems more complex. The learning process is essential to the construction of new knowledge in the pursuit of improved user experience. In this paper, the interruption factor is considered as part of interaction quality, because it emerges during the learning process. We take the users of a children's museum in Mexico as a case study. We model the interaction of an interactive exhibition using BDI agents; we adapt the BDI architecture with a Type-2 fuzzy inference system to add perceptual, human-like capability to the agents, in order to describe the interaction and the interruption factor in the user's experience. The resulting model makes it possible to describe content adaptation by creating a personalized interaction environment. We conclude that managing interruptions can enhance the interaction, producing a positive learning process that influences user experience. We can achieve better interaction if we offer the right kind of content in view of the emergent interruptions.
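A minimal sketch of the perceive–deliberate cycle underlying a BDI agent is given below; the class and rule are illustrative only, and the actual model replaces the crisp interruption flag with a Type-2 fuzzy inference stage.

```python
class BDIAgent:
    """Toy belief-desire-intention loop (illustrative sketch; the
    paper's architecture grades interruptions with Type-2 fuzzy
    inference instead of this crisp flag)."""

    def __init__(self):
        self.beliefs = {"interrupted": False}
        self.desires = ["deliver_content"]
        self.intention = None

    def perceive(self, event):
        # A fuzzy perception stage would grade the interruption here;
        # we reduce it to a boolean for illustration.
        self.beliefs["interrupted"] = (event == "interruption")

    def deliberate(self):
        # Choose an intention consistent with current beliefs:
        # adapt the content when an interruption has been perceived.
        if self.beliefs["interrupted"]:
            self.intention = "adapt_content"
        else:
            self.intention = "deliver_content"
        return self.intention
```

The adaptation intention is where a personalized interaction environment would be assembled for the museum visitor.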

Multi-criteria Planning Analysis of the Use of Hydroelectricity Surplus in Paraguay based on the Analytic Network Process (ANP)

ABSTRACT. The abundance of electric energy, generated mainly by the binational hydroelectric dams of Itaipú and Yacyretá, constitutes a strategic asset for the development of Paraguay. It has a great impact on the economic growth and social progress of the country, through planned infrastructure growth and the development of the productive sector, mainly industry, based on a greater share of electricity in the energy matrix. Indeed, it can be said that Paraguay, from different perspectives, urgently needs to take advantage of the large amounts of clean energy available, encouraging the penetration of hydropower into the energy demand matrix to replace biomass and oil. In this context, a wide public debate on the use of the hydropower surplus has been going on in the country for many years. The different alternatives for its use are often characterized by conflicts between objectives from political, social, economic, technical, and environmental points of view. Under these circumstances, an approach based on multi-criteria decision analysis (MCDA) models is required. In recent studies, an analysis of the problem was carried out using the analytic hierarchy process (AHP), probably the most popular method for prioritizing alternatives. The present proposal generalizes the hierarchical analysis to the Analytic Network Process (ANP) in order to develop a decision-making tool for the best use of Paraguay's hydroelectric surpluses within the framework of a sustainable policy, considering quantitative and qualitative aspects that are difficult to capture through the usual evaluation approaches. This tool has a strong scientific and avant-garde component for making essential decisions that would produce the greatest benefits for the integral development of the country.
This case study analyzes four energy policies: (A1) a trend scenario, (A2) a scenario of high hydroelectric power export, (A3) a scenario of high penetration of the electro-intensive industry, and (A4) a scenario of strong development of national industries. These strategies are evaluated against economic, technical, environmental, social, and implementation-feasibility criteria. The result of the integral ANP model, taking into account the interaction between environmental, economic, social, technical, and political criteria, as well as energy consumption in Paraguay and the possible strategies for this case study, showed that the most appropriate strategy for Paraguay is the development of its industrial sector through the use of the available electric energy, which would bring great benefits in many respects compared with the other alternatives.
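The core computation shared by AHP and ANP, deriving a priority vector as the principal eigenvector of a pairwise-comparison matrix, can be sketched as follows; the function is an illustration, not the study's model.

```python
def ahp_priorities(M, iters=100):
    """Priority vector of a pairwise-comparison matrix M: the
    normalized principal eigenvector, found by power iteration,
    as used to weight criteria in AHP/ANP."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]        # renormalize to sum to 1
    return w
```

In ANP the same eigenvector computation is applied to a supermatrix that also encodes dependencies among criteria, which is what lets interactions between the environmental, economic, social, technical, and political criteria enter the ranking.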

Measuring Poverty Governance with Fisher Information

ABSTRACT. We propose to use the concept of poverty governance in the same way it has been used for water governance: as the political, social, economic, and administrative system in place that influences poverty phenomena. In a more abstract sense, we conceptualize governance as system controllability, and hence system observability. We posit that greater levels of observability (governance) are achieved only with sufficiently large Fisher Information (FI) values. We use the formalism developed by Frieden, Cabezas, and others to compute the FI of poverty time series generated following CONEVAL's official methodology used in México. We analyzed the complete multidimensional poverty data, as well as each component, in order to understand whether there are specific dimensions with low levels of governance, and their relation to what the poverty literature calls poverty traps.
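A sketch of a discretized Fisher Information calculation in the Frieden–Cabezas spirit is shown below: bin the series, take amplitudes q_i = sqrt(p_i), and sum squared successive differences. The binning choices are illustrative, not CONEVAL's or the authors'.

```python
from math import sqrt

def fisher_information(series, nbins=8):
    """Discrete Fisher Information of a time series in the
    Frieden-Cabezas amplitude form: FI = 4 * sum_i (q_{i+1} - q_i)^2,
    with q_i = sqrt(p_i) and p_i the empirical bin probabilities."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / nbins or 1.0      # guard against a flat series
    counts = [0] * nbins
    for x in series:
        i = min(int((x - lo) / width), nbins - 1)
        counts[i] += 1
    q = [sqrt(c / len(series)) for c in counts]
    return 4 * sum((q[i + 1] - q[i]) ** 2 for i in range(nbins - 1))
```

An orderly, tightly distributed series yields high FI (high observability in the governance reading), while a series spread evenly over its range yields FI near zero.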

On the Agreement Between Small-World-Like OFC Model and Real Earthquakes from Different Regions

ABSTRACT. Despite all the existing knowledge about the production of seismic waves through slips on faults, much remains to be discovered regarding the dynamics responsible for these slips. A key step in deepening this knowledge is the study, analysis, and modeling of seismic distributions in space and time. The concept of self-organized criticality (SOC), widely used in statistical physics, refers, generally, to the property that a large class of dynamical systems has of organizing spontaneously into a dynamically critical state without any fine-tuning of an external control parameter. A signature of self-organized criticality in a system is the invariance of temporal and spatial scales, observed through power-law distributions and finite-size scaling. Aiming to contribute to the understanding of earthquake dynamics, in this work we implemented simulations of the model developed by Olami, Feder and Christensen (the OFC model), which incorporates characteristics of self-organized criticality and has played an important role in the phenomenological study of earthquakes, because it displays a phenomenology similar to that found in actual earthquakes. We applied the OFC model on two different topologies, regular and "small-world", where in the latter the links are randomly rewired with probability p. On both topologies, we studied the distribution of time intervals between consecutive earthquakes and the border effects present in each. In addition, we characterized the influence of the probability p on certain characteristics of the lattice and on the intensity of the border effects. Furthermore, in order to contribute to the understanding of long-distance relations between seismic activities, we built complex networks of successive epicenters from synthetic catalogs produced with the OFC model, using both regular and small-world topologies.
In our results, distributions arise belonging to a family of non-traditional distribution functions (Tsallis' family), as we can see in Fig. 1. We also performed the complex-network analysis for real earthquakes in two different ways: first, considering regional earthquakes separately (regions of high seismicity, such as Japan and California, and of low seismicity, such as Brazil); second, considering events for the entire world with magnitude greater than or equal to 4.5 on the Richter scale. It is noteworthy that we found good agreement between the results obtained for the OFC model with small-world topology and the results for real earthquakes. Our findings reinforce the idea that the Earth is in a self-organized critical state, and further point toward temporal and spatial correlations between earthquakes in different places.
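A toy version of the OFC model on a regular lattice with open boundaries is sketched below (the small-world rewiring and the epicenter-network construction are omitted, and the parameter values are illustrative).

```python
import random

def ofc_avalanche(L=16, alpha=0.2, steps=200, seed=1):
    """Toy Olami-Feder-Christensen model on an L x L lattice with open
    boundaries: drive every site uniformly until the most stressed one
    reaches threshold 1, then topple, passing a fraction alpha of the
    released stress to each existing neighbour (alpha < 0.25 makes the
    dynamics dissipative). Returns the catalogue of avalanche sizes."""
    rng = random.Random(seed)
    F = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(steps):
        # uniform slow drive up to the global maximum stress
        imax, jmax = max(((i, j) for i in range(L) for j in range(L)),
                         key=lambda ij: F[ij[0]][ij[1]])
        gap = 1.0 - F[imax][jmax]
        for row in F:
            for j in range(L):
                row[j] += gap
        F[imax][jmax] = 1.0          # guard against float round-off
        active, size = [(imax, jmax)], 0
        while active:
            i, j = active.pop()
            if F[i][j] < 1.0:
                continue
            size += 1
            released = F[i][j]
            F[i][j] = 0.0
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    F[ni][nj] += alpha * released
                    if F[ni][nj] >= 1.0:
                        active.append((ni, nj))
        sizes.append(size)
    return sizes
```

The small-world variant replaces a fraction p of the nearest-neighbour links with random long-range ones before running the same toppling rule.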

The price of anarchy in smart energy networks: designed versus emergent behaviours
SPEAKER: Oliver Smith

ABSTRACT. Please see the attached PDF.

Confronting Current Century's Problems: Interdisciplinary Methodologies for Complex Systems

ABSTRACT. The science of complexity and the constructivist theory of complex systems are two different scientific approaches applied to the study of complex phenomena. This poster describes an integration of both approaches into a practical workshop designed to share, learn, discuss, and apply interdisciplinary methodologies oriented toward effectively confronting the current century's problems. The aim for each person participating in the workshop is to be part of a multidisciplinary team that applies a proposed method to identify and describe a proposed problem as a complex system. The aim for the workshop is to set up real and virtual interactive spaces where the proposed problems meet and match with proposed interdisciplinary methodologies. The expected outcome of the workshop is twofold: to identify and describe as complex systems some of the problems faced by the participants, and to facilitate the beginnings of geographically distributed networks among participants to confront those problems. Furthermore, these networks will plausibly induce the creation of a specialized interdisciplinary organization to address each specific problem identified as a complex system during the workshop. Such an organization should then apply a complex-systems approach to deal with its chosen problem. A concrete step in this direction is the creation of the workshop's virtual interactive spaces, each containing at least three elements: first, the summary of the proposed interdisciplinary method and the summary of the proposed problem, as matched by the networked participants during the workshop; second, the description of the problem as a complex system; third, future steps to address the problem.

New Visual Strategies to Teach Programming

ABSTRACT. It is common to encounter discouragement in students learning a programming language when they cannot make sense of its logic. This often results in desertion.

Practice is one of the strategies used to achieve learning, but if the proposed exercises are not stimulating, the student may be driven to monotony, with the result mentioned above.

The research group GRIAS, of the Department of Systems at the University of Nariño, conducted a study of the behaviour of a group of students in their first programming course. New pedagogical strategies were applied to their learning process, aimed at reinforcing the following elements:

Motivation: achieving student participation in the learning process in a more dynamic way.

Simplicity: reducing the number of required instructions, avoiding irrelevant elements that could cause distraction from the central theme.

Precise verification of results: enabling prompt discovery of the source of an error, without drifting away from the concept being studied.

Ease of error correction: offering tools for error correction as part of the learning process.

Computer-graphics-related exercises were created in order to reinforce the four elements mentioned above.

The investigative process was conducted with second-semester Systems Engineering students from the University of Nariño, who were randomly separated into two groups. The first group (experimental) was given the First Programming Workshop (Taller de Programación I) using the new pedagogical strategies. The second group (control) was taught the same course using traditional teaching methods.

The investigation was divided into three stages:

Phase I (Design): in this stage, the investigative instruments were created, course themes were restructured, a new software tool was created for support, and the exercises to be used were defined.

Phase II (Application): in this stage, the First Programming Workshop (Taller de Programación I), was given to both groups, according to the design proposed in the first phase.

Phase III (Results and Conclusions): in this stage, data was collected through investigation instruments, and conclusions were obtained according to the experience of the process.

In all of the 14 categories evaluated, the students belonging to the experimental group achieved higher grades than those in the control group, obtaining an average of 4.2, in comparison to 3.5.

Teaching was also easier for the professor, because the exercises were didactic, more specific, and easier to evaluate.

The students expressed great satisfaction with the inclusion of exercises related to computer graphics. This resulted in greater student participation, owing to their motivation to solve the proposed exercises.

Agent Based Modelling of action efficiency increase and entropy reduction in self-organization

ABSTRACT. Self-organization is defined as the spontaneous emergence of order arising out of local interactions in a complex system. Central to the idea of self-organization is the interaction between the agents, particles, or elements that constitute the system. In biological systems, particle–particle or particle–field interactions are often mediated by chemical trails (chemotaxis) or by swarm behaviour that optimizes system efficiency. In non-biological systems, particle–field interaction plays the crucial role, as these interactions modify the surrounding field, or often the topology of the energy landscape. Interactions within a system, or between a system and its surroundings, are governed by energetic and entropic exchanges, either in terms of forces or in terms of statistical information. Since energy and time together describe all motion, we look to the Principle of Least Action for answers, as it involves both in its formulation. Because the Action Principle minimizes action and directs the system's elements along least-action trajectories on the energy landscape (the surrounding field), a one-to-one correspondence can be expected between the Second Law of Thermodynamics and the Action Principle. In a system at equilibrium, the particles can occupy all possible microstates, whereas in a self-organizing, out-of-equilibrium system only certain microstates are accessible to them. In such systems, in order to organize efficiently, the particles interact locally and coordinate globally, in a way that lets swarms of agents uniformly follow least-action trajectories while degrading their free energies to maintain the organizational structure of the system, at the expense of entropy export along the least-action paths.
To address these issues, we perform agent-based simulations, looking for the dependence of the rate of increase of action efficiency on the number of agents, in order to compare it with similar data for CPUs that we studied previously. We found that the rate of self-organization depends on the number of agents in the system. In this simulation it is also possible to calculate the entropy reduction by counting the reduction in the number of possible microstates of the system. Other dependencies are studied as well, such as the rate of increase of action efficiency as a function of the size of the system, of the separation between source and sink, and others. We aim to compare the experimentally measured values of these dependencies from our previous research with results from the simulations, and to find any universalities between the two.
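The microstate-counting step can be sketched as a Boltzmann-style calculation; the function below and its name are an illustration of the idea, not the simulation code.

```python
from math import comb, log

def entropy_reduction(cells_total, cells_accessible, n_agents):
    """Boltzmann-style entropy S = ln W, with W the number of ways to
    place n indistinguishable agents on a set of cells. The reduction
    is S(equilibrium, all cells accessible) minus S(organized, fewer
    cells accessible)."""
    s_eq = log(comb(cells_total, n_agents))
    s_org = log(comb(cells_accessible, n_agents))
    return s_eq - s_org
```

As self-organization confines agents to least-action paths, the accessible-cell count drops and the computed reduction grows, which is the quantity exported as entropy in the abstract's picture.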

Attractors landscape restructuring of the GRN-Arabidopsis thaliana root stem cell niche
SPEAKER: Jorge Posada

ABSTRACT. The gene regulatory network (GRN) of the Arabidopsis thaliana root stem-cell niche (SCN) proposed by Azpeitia et al. 2010 explains the gene expression observed in the cell types of the root SCN. The network converges to 4 attractors that correspond to the principal cell types of the root SCN: vascular initial cells (VI), cortex-endodermis initial cells (CEn), the quiescent center (QC), and lateral root cap-epidermis initial cells (EpC). In Davila-Velderrain 2015, a method was proposed for analyzing the impact of decay-rate values on the attractors landscape (AL). The objective of this work was to explore the ability of the different state values ([0-1]) of each of the 9 nodes of the A. thaliana root SCN-GRN to produce changes in the AL. To achieve that goal, we analyzed the dynamics of the discrete model of the root SCN-GRN and of its equivalent continuous model. Finally, a bifurcation analysis was carried out on each of the 4 attractors through a systematic increase in the decay-rate value of each node. We found that 5 of the network's nodes (SHR, SCR, Auxin, WOX5 and MGP) were capable of producing changes in the AL, suggesting them as control points for the cell in the differentiation route.
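Fixed-point attractors of a synchronous Boolean network can be found by exhaustive enumeration, as sketched below on a toy two-node network (not the actual 9-node SCN-GRN; cyclic attractors would require a separate trajectory search).

```python
from itertools import product

def fixed_point_attractors(update, n):
    """Exhaustively find the fixed points of a synchronous Boolean
    network with n nodes; `update` maps a state tuple to the next
    state tuple. Only point attractors are detected."""
    return [state for state in product((0, 1), repeat=n)
            if update(state) == state]

# Toy example: two mutually activating genes.
def mutual_activation(state):
    a, b = state
    return (b, a)
```

For the SCN-GRN, each attractor found this way is then checked against the expression profile of a root cell type; the continuous counterpart perturbs decay rates around these states to detect bifurcations.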

Modeling Structure in Macaque Monkey Movement
SPEAKER: Kelly Finn

ABSTRACT. Movement data has long been used as an indicator of health and wellbeing, as seen with the recent explosion of self-tracking devices. Similar technologies have become increasingly popular for long-term data collection of animal behavior. Multiple features of an animal’s behavior can be measured, though often we only capture summary statistics such as frequency, count, speed, or duration. In this light, the temporal structure of animal movement is an untapped yet fundamental source of information. Indeed, structure in behavior patterns appears to vary across individuals and within individuals across different behavioral or health states. However, little is known about which pattern properties might be relevant or might reveal such differences. To address this, we investigate the temporal structure of macaque movement using statistical measures of correlation and dependence adapted from information theory. Data include 56 activity sequences from 14 Japanese macaques on Koshima Island, Japan, (with observation length ranging between 17-68 minutes) and 17 activity sequences from 5 captive Japanese macaques in field enclosures at the Primate Research Institute (observation length of 34 minutes), recorded using continuous focal animal sampling. Activity was recorded as binary sequences of discretely categorized locomotion, and the location, terrain, and behavioral contexts (foraging, social, travel) of each observation were noted. From these time series, we estimated Shannon entropy rates using (i) block entropy from binary sequence distributions, (ii) Bayesian structural inference (BSI) of hidden Markov models, and (iii) closed-form expressions for estimated alternating renewal processes. As a calibration for unstructured behaviors, we also estimated entropy rates of simulated binary Poisson sequences and of randomized versions of the macaque movement sequences. 
Preliminary analysis reveals substantial differences between the entropy rates of real and randomized data, indicating that macaques perform nonrandom, structured behavior. While all estimation methods converge on the same entropy-rate values for simulated Poisson sequences, they yield different values on real macaque data. Further analyses will include measures of past–future mutual information (excess entropy) and memory (statistical complexity), and will employ BSI to infer epsilon-machines of the renewal-process and alternating-renewal-process model classes. By identifying the kinds of structure in macaque movement, we hope eventually to explore movement patterns within the context of individual characteristics and behaviors.
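The block-entropy estimate (method i) can be sketched as follows; the window sizes and test sequences are illustrative, not the study's data.

```python
from collections import Counter
from math import log2

def block_entropy(seq, k):
    """Shannon entropy (bits) of the empirical distribution of
    length-k blocks in a binary sequence."""
    blocks = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in Counter(blocks).values())

def entropy_rate(seq, k):
    """Estimate the entropy rate as the block-entropy difference
    H(k) - H(k-1), which converges as k grows."""
    return block_entropy(seq, k) - block_entropy(seq, k - 1)
```

A perfectly alternating sequence has one bit of single-symbol entropy but an entropy rate near zero, the signature of fully predictable structure; a randomized version of the same sequence keeps its rate near one bit, which is the real-versus-shuffled contrast used in the abstract.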

Creativity and Popularity of Fanfictions in Fandoms
SPEAKER: Elise Jing

ABSTRACT. Many creative products in human history are recreations, remixes, and modifications of archetypal products. For example, The Iliad was written by Homer on the basis of oral traditions existing before his time; Little Red Riding Hood was developed into various versions starting in the 10th century. In contemporary pop culture, such practices of recreation and remixing can be better established and more easily shared through the Internet, particularly in the form of fanfictions, creative works made by fans based on existing original works. Emerging in the 1970s, this subculture has gained attention in cultural studies and gender studies, but few quantitative, data-driven analyses have been carried out. Here we analyze data from the website Archive of Our Own (AO3) to study the relationship between the popularity and creativity of fanfictions in a fandom (a community consisting of fanfiction authors and readers). We model each fanfiction with a unigram model and use the Kullback-Leibler divergence to evaluate the distance between fictions. A "typical" fiction for a given time period is constructed, and the other fictions are compared to it. We show that fictions that are more similar to the typical fiction receive more kudos; in other words, being close to the "mainstream" of the fandom is an indicator of a fiction's higher popularity. This result reveals a relationship between creativity and audience acceptance, which may extend to other creative works.
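The unigram-plus-KL comparison can be sketched as below; the add-alpha smoothing is our assumption (the abstract does not specify a smoothing scheme), introduced here only to keep the divergence finite.

```python
from collections import Counter
from math import log2

def unigram_model(text, vocab, alpha=1.0):
    """Unigram distribution over a fixed vocabulary with add-alpha
    smoothing (an assumption of this sketch; smoothing keeps the KL
    divergence finite when a word is absent from one fiction)."""
    counts = Counter(text.lower().split())
    total = sum(counts[w] for w in vocab) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl_divergence(p, q):
    """D_KL(P || Q) in bits, for two distributions over one vocabulary:
    the distance of a fiction's word use from the 'typical' fiction."""
    return sum(p[w] * log2(p[w] / q[w]) for w in p)
```

A fiction whose unigram model sits at small KL divergence from the period-typical model is "mainstream" in the abstract's sense.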

Testing nestedness metrics against a novel null model

ABSTRACT. No more than two decades ago, the translation of mutualistic communities into the language of networks revealed the existence of a non-trivial, structured organization of interactions. Positive links, defined as mutualistic when they involve individuals of different types and benefit both (as happens between plants and pollinators), do not simply match random partners, but actually display a property known as nestedness. A perfectly nested structure appears when the counterparts of a species of degree k constitute a subset of the counterparts of all species of degree k'>k, this being true for both guilds. So special a configuration seems, furthermore, to entail special dynamics. Indeed, nestedness has been claimed to play a relevant role in the assembly, stability, and resilience of socio-ecological communities.

The interest aroused by these discoveries has created the need for a clear and unified measure of nestedness. In a previous work, also presented at this conference, we addressed the question of how nested structures emerge. We showed that nestedness naturally appears as a consequence of solely local constraints (in particular, the number of interactions per species), without the need to include any global mechanism shaping the configuration of interactions. By employing the randomizing methods developed by Squartini and Garlaschelli, we found that a null model fixing the average degree sequence is enough to reproduce the observed nestedness of real mutualistic networks. Interestingly, this implies that the non-random appearance of mutualistic communities is a macroscopically emergent phenomenon, whose origin can be traced back to microscopic rules. It also provides us with a new set of sufficient requisites for randomizing nestedness measures.

Here we exploit this novel null model to test the performance of various metrics. To achieve this, we produce a large sample of the randomised ensemble using the probability of existence of each mutualistic link, which we had previously obtained by the aforementioned methods. Within the ensemble, networks may differ slightly in their degree sequences as well as in their size and fill, since the constrained degree is only matched on average. Our aim is to take advantage of such variability as a way to weigh the robustness of the measuring system. For a robust metric, we expect the measure not to be sensitive to any matrix feature apart from nestedness. Our development allows us to check whether these conditions are actually met, by quantifying the fluctuations of the measure within the randomized sample. Large deviations indicate a great disparity between measures and thus a strong responsiveness to tiny variations in the network. Moreover, in some cases we are able to compare the sampling outcome with the analytical result of the randomization, for instance for the NODF metric by Almeida-Neto et al. In this particular case we found that the two randomised values are significantly different, and also that the metric's sensitivity is generally exacerbated for small dimensions. Beyond NODF, we applied these ideas to a diverse set of commonly used metrics and studied the connection of our results with the networks' characteristics.
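As a reference point for the metric discussed above, the NODF measure of Almeida-Neto et al. can be sketched as follows (a minimal, order-independent implementation written for this summary, not the authors' code; the example matrix is illustrative):

```python
import numpy as np
from itertools import combinations

def nodf(M):
    """NODF nestedness of a binary matrix: average paired overlap over
    all row pairs and all column pairs, on a 0-100 scale."""
    M = np.asarray(M, dtype=bool)

    def paired(A):
        scores = []
        for u, l in combinations(range(A.shape[0]), 2):
            ku, kl = A[u].sum(), A[l].sum()
            if ku == kl or min(ku, kl) == 0:   # equal degrees score zero
                scores.append(0.0)
            else:                              # overlap vs. smaller degree
                scores.append(100.0 * (A[u] & A[l]).sum() / min(ku, kl))
        return scores

    s = paired(M) + paired(M.T)
    return sum(s) / len(s)

perfect = np.array([[1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 0]])
print(nodf(perfect))   # 100.0 for a perfectly nested matrix
```

Sampling the null ensemble then amounts to recomputing such a score over many randomised matrices and examining the fluctuations.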

Banking Networks and Urban Dynamics: The Bogota Case

ABSTRACT. This article evaluates the urban transformation of the city of Bogota through the relationship between the population and banking, which has built extensive networks of branches and services, becoming part of the everyday life of people within the city's complexity. Over the last six decades, banking institutions have constructed a series of interfaces that transform urban space and thus participate in the development of localities, which are not constant. To understand the city's complexity and urban dynamics, it is necessary to use complex adaptive systems, open systems, swarm intelligence and network theory, in order to integrate the instability of social structures and the changes that banking has brought to the places where the population lives, works or agglomerates. This is the result of a financial activity that offers products and services in branches, which has transformed the habitability and everyday life of the population of Bogota. The city is characterized by permanent movement, which brings it close to being a living organism. It is likewise formed by a group of sub-systems that need to be analyzed together to better understand its evolution; in this case, the banking sector, represented by a network, is part of one sub-system, to be integrated with the urban dynamics of Bogota in order to understand how they have acted together and how they have harmonized spaces in the city. Although this methodology can be used in other cities, I will apply it to Bogota from 1949, the moment the city began its recovery after the massive riots. To achieve this goal, I first establish the city as an open system that interacts externally and permanently, and second, the banking networks as a sub-system that adapts to the processes of the financial business and to the interaction with the population through its service supply.
However, banking networks have carried out their expansion under the logic of urban transformation and agglomeration, not of human mobility. This logic offers a way to research the future changes that bank networks will bring to the city of Bogota and to the city's own design. To visualize the data on the expansion of the banking networks, I will use ArcGIS, modeling a complex network that adapts and integrates to the changes of the urban system. For this, I will use selected years as layers to simulate the transformation the city has undergone as it integrated the banking networks. Similarly, digital services and interaction with machines have determined a different form of expansion and transformation of urban space, towards the increasing recreation of artificial systems where interactions will require convergent technologies.

Inducing transitions between alternative stable states of microbial communities by transient invaders
SPEAKER: Daniel Amor

ABSTRACT. The microbial communities that live in or on our bodies are complex ecosystems exhibiting alternative stable states. Perturbations such as changes in diet, infections or exposure to antibiotics can threaten the stability of these ecosystems, with important health consequences. However, little is known regarding the mechanisms driving long-term community dynamics after short-term perturbations. Here, we study transitions between stable communities in a bistable laboratory ecosystem that we expose to short-term perturbations. We find that a broad range of different perturbations favor one of the two stable community states, indicating that some states can be much more robust than others. In our case, this difference in robustness is driven by the need for one species to grow cooperatively, thus limiting its ability to re-establish following a severe shock. Moreover, we demonstrate that the introduction of an invader species can also lead to transitions between the stable states. Interestingly, in many cases the invading species did not survive in the final community state, making these species what we call “transient invaders.” This suggests that short-term invasions (such as infections) could be a common mechanism driving transitions between stable states in microbial communities.
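The way cooperative growth limits recovery after a shock can be caricatured by a one-species model with an Allee threshold (a toy sketch, not the authors' experimental system; all parameter values and the dilution scenario are assumptions):

```python
def simulate(x0, r=1.0, A=0.2, K=1.0, dilution_at=None, factor=0.01,
             dt=0.01, steps=20000):
    """Euler integration of dx/dt = r*x*(x/A - 1)*(1 - x/K):
    a population with an Allee threshold A below which it collapses."""
    x = x0
    for t in range(steps):
        if dilution_at is not None and t == dilution_at:
            x *= factor            # transient shock (e.g. strong dilution)
        x += dt * r * x * (x / A - 1.0) * (1.0 - x / K)
    return x

print(simulate(0.8))                    # settles near carrying capacity K
print(simulate(0.8, dilution_at=5000))  # shock pushes x below A: collapse
```

The undisturbed run converges to the high-density state, while the same shock applied to a non-cooperative (logistic) grower would merely delay recovery; this is the qualitative asymmetry in robustness the abstract describes.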

The social dynamic of tourism in Real de Catorce, Mexico, as a complex system

ABSTRACT. This research is an approach to the social dynamic of tourism in Real de Catorce, located in the state of San Luis Potosí, México, proposed as a complex system.

Real de Catorce was founded in 1772 and during the 18th and 19th centuries was an important mining center. At the beginning of the 1900s, the mining industry collapsed and the town was abandoned in 1921. At that time, Catholic pilgrims began to arrive to venerate San Francisco de Asís, as they still do today. In the sixties and seventies, foreign visitors began to arrive and the municipality established itself as a tourist destination.

In this system, many heterogeneous elements interact, such as neo-rural residents, Catholic Franciscan pilgrims, indigenous Wixaritari, and national and foreign tourists. It is also composed of several subsystems, such as the Transport, Lodging, Food, Handicrafts, Tourist Guides and Government subsystems. They fulfill different functions and are interdependent and interdefinable, because together they maintain the operation of the system.

First-level processes take place in the system, but it is also determined by second- and third-level processes, which occur at the national and international scales; therefore the system has a hierarchical structure. It is also self-organized, adaptable to a complex environment, and exhibits emergent behaviors. It is autopoietic, because it has generated the structures that perpetuate its operation, such as the establishment of a network of people who offer tourist services and the implementation of means of transportation for visitors inside the town, such as horses and vans.

This phenomenon presents the characteristic of fractality, since it occurs in a similar way at different scales. The system is open, because it has a great flow of exchanges with the environment. It also has the property of multifinality because, despite starting from initial conditions very similar to those of tourism in other regions, it has had different results.

Towards better User Experience: The Case of Universities’ Academic Portal

ABSTRACT. Nowadays, how to promote User Experience (UE), also known as the interaction experience of the use of digital artifacts such as Web sites, virtual worlds and personal digital assistants, has become a concern (Hassenzahl and Tractinsky, 2006). This is because the 'loyalty decade', in which interaction experience becomes the main success factor (Nielsen, 2008), is here. Hence the need for models that measure the attributes of UE, in order to track the extent to which UE correlates with and positively influences high-quality user-system interactivity. For universities' websites it is necessary to draw on the concept of UE, with the aim of providing a utility-driven UE to users in a manner that better supports their work (Pennington, 2015; 2016). This goal will be pursued through the specific objectives to (i) identify appropriate scales or items to measure UE, (ii) elicit ordinal data using the scale in (i), (iii) formulate a measurement model for UE, and (iv) validate the model. To identify appropriate scales or items to measure UE, the relevant literature will be consulted for existing scales. These scales, administered through questionnaire and interview fact-finding techniques, will be used to elicit ordinal data from stakeholders. The stakeholders will be drawn from within the university community, with a sample of 1000 regular users of the university portal. Factor analysis, together with inferential statistics, will be employed to formulate the intended measurement model. The model will be validated using appropriate statistical fit indices (Lee, 2010). This work will contribute to knowledge in the area of user experience evaluative modelling by providing useful information on how to identify appropriate scales or items to measure UE, validate a user experience measurement model, and elicit ordinal data from stakeholders.
It will also develop a measurement model for the perception of users based on their experience of digital artifacts such as a university website. This has implications for how UE should manifest as a quality in design, in interaction and in value, with diverse measures from many methods and instruments (Law and van Schaik, 2010). Researchers will be able to resolve issues concerning the challenges of UE, in particular how to select appropriate measures for the particularities of an evaluation context. Professionals will be able to learn about measures that enable the benchmarking of competing design artifacts and the selection of appropriate design options. It will also be useful in guiding stakeholders on how to model users' experience as a basis for producing design guidance (Law and van Schaik, 2010). This work is limited in that a larger sample will be needed to validate the result, and wherever the approach employed here is deployed, the context of deployment must be well understood.
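The factor-analytic step can be sketched on simulated ordinal data (an illustrative principal-axis-style extraction in place of the authors' procedure; the item structure, loadings and factor count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-point Likert responses: two latent factors, three observed
# items each -- the structure is invented for illustration.
n = 1000
f = rng.normal(size=(n, 2))
load = np.array([[.8, 0], [.7, 0], [.9, 0], [0, .6], [0, .5], [0, .7]])
items = f @ load.T + 0.4 * rng.normal(size=(n, 6))
likert = np.clip(np.round(2.5 + items), 1, 5)      # ordinal scores 1..5

# Factor extraction via eigendecomposition of the correlation matrix.
R = np.corrcoef(likert, rowvar=False)
vals, vecs = np.linalg.eigh(R)                     # ascending eigenvalues
order = np.argsort(vals)[::-1]
loadings = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])
print(np.round(loadings, 2))     # two factors recovered from six items
```

By the Kaiser criterion, exactly two eigenvalues exceed 1 here, matching the simulated two-factor structure; a real study would follow this with rotation and fit statistics.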

Dynamics of unidirectionally-coupled ring neural network with discrete and distributed delays
SPEAKER: Bootan Rahman

ABSTRACT. In this research, we consider a ring neural network with one-way distributed-delay coupling between the neurons and a discrete delayed self-feedback. In the general case of the distribution kernels, we are able to find a subset of the amplitude death regions depending on whether the number of neurons in the network is even or odd. Furthermore, in order to show the full region of amplitude death, we use particular delay distributions, including the Dirac delta function and the gamma distribution. Stability conditions for the trivial steady state are found in parameter spaces consisting of the synaptic weight of the self-feedback and the coupling strength between the neurons, as well as the self-feedback delay and the coupling strength between the neurons. It is shown that both Hopf and steady-state bifurcations may occur when the steady state loses stability. We also perform numerical simulations of the fully nonlinear system to confirm the theoretical findings.
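A generic form of this model class, with a discrete self-feedback delay and a distributed coupling delay, can be written as follows (the exact equations used by the authors may differ; this form is an assumption based on the description above):

```latex
\dot{u}_j(t) = -u_j(t) + a\, f\bigl(u_j(t-\tau_s)\bigr)
             + b \int_0^{\infty} g(s)\, f\bigl(u_{j-1}(t-s)\bigr)\, ds,
\qquad j = 1, \dots, N \ (\text{indices mod } N),
```

where $a$ is the synaptic weight of the delayed self-feedback, $b$ the coupling strength between neighbouring neurons, and $g(s)$ the distribution kernel: the Dirac delta $g(s)=\delta(s-\tau_c)$ recovers a discrete coupling delay, while the gamma kernel $g(s)=\gamma^{p}s^{p-1}e^{-\gamma s}/(p-1)!$ gives a genuinely distributed delay.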

Social interaction dynamics in adaptive working environments

ABSTRACT. The collective performance of a community or society is the result of all the interactions among its members, internal and external.

In recent years, tools such as Internet Relay Chat (IRC) have facilitated text-based communication between people. The success of platforms built on this concept among internet communities of all kinds needs no mention, but their adoption in academic and workplace environments is remarkable.

Understanding how these interactions happen between members makes it possible to streamline information flows that lead to the success of projects undertaken by the community, or to correct bottlenecks in the flow of information that cause workplace problems.

This can be achieved when social interactions are mapped to a graph and analyzed using the tools that have been developed to study complex systems. The properties obtained from these graphs can help make decisions that could be critical to the community.

An analysis was performed of the interactions of a small community (fewer than 50 members) that maintained communications using IRC-like tools. To this end, a data model was constructed to relate members with their communications (messages, emoticons, etc.) in a directional way and to evaluate their influence in the community.

For this, the interactions were mapped to graphs. The information was stored using graph databases (Neo4J) and explored using graph theory and categories.
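Influence in such a directed message graph can be estimated, for instance, with a PageRank-style score (a minimal sketch outside Neo4j, not the study's actual pipeline; the member names and message counts are invented):

```python
import numpy as np

# Hypothetical message counts: edges[i][j] = messages from member i to j.
members = ["ana", "bo", "chen", "dee"]
edges = np.array([[0, 4, 1, 0],
                  [2, 0, 6, 1],
                  [0, 3, 0, 5],
                  [1, 0, 2, 0]], dtype=float)

def pagerank(W, d=0.85, iters=100):
    """Power iteration on the column-stochastic transition matrix:
    an edge i -> j transfers a share of i's influence score to j."""
    out = W.sum(axis=1, keepdims=True)   # every member here has out-edges
    P = (W / out).T                      # P[j, i]: prob. that i messages j
    n = len(W)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * P @ r
    return r

for name, score in zip(members, pagerank(edges)):
    print(f"{name}: {score:.3f}")
```

In the graph-database setting, the same adjacency information would be pulled from Neo4j with a query over the message relationships before scoring.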

Results are presented both from the computational perspective, with the technological implications and limitations, and from the professional perspective of applying such constructions to make internal workflows more efficient.

Meaningful Information in Multi-Agent Learning Systems

ABSTRACT. In this work, we propose to use information-theoretic tools such as rate distortion and maximum entropy in order to include bounded environmental information in multi-agent learning systems, where the decision process follows an evolutionary game specification for large-scale networks. One of the cornerstone challenges in large-scale networks is dealing with the amount of information needed to guarantee the proper operation of the system, along with the high computational burden of the decision process in such large-scale systems. Environment information is commonly used in agent learning algorithms, as we can observe in the machine learning literature, mostly focused on single-agent cases. We can also find research directions in which the multi-agent context or environment is included, as shown in some reinforcement learning algorithms and evolutionary game implementations. However, most of these works are not concerned with the amount of information from the environment needed to make decisions, which leads to large amounts of data of questionable value. Therefore, it is necessary to define a boundary identifying the point where environmental information becomes redundant or misunderstood, i.e., the point at which every agent has just the information it needs about its surroundings. In this regard, we propose a method that uses rate distortion theory and maximum entropy to find a border at which the environment information is neither misunderstood nor redundant. Additionally, based on ecological principles, we assign a fitness value to this environment information in order to improve agents' strategy selection inside a population ruled by an evolutionary game learning process.
The combination of information theory and evolutionary game theory is the basis of the proposed approach, which defines a multi-agent learning framework where redundancy and misunderstanding delimit a border within which we obtain valuable environment information that is used to improve the learning process of every agent in the system. This valuable information can be understood as meaningful information in the learning process, which aims to improve the versatility of the multi-agent system in responding to fluctuations in its environment.

Criticality in Complex Networks: A Hybrid Systems Approach

ABSTRACT. Technology today has made infrastructures smarter and more interdependent. Smart infrastructure uses data in feedback loops, which provides evidence for coordinated decision-making, and includes agents that can monitor, measure, analyze, communicate and act without human intervention. Moreover, different kinds of infrastructure are coupled together, such as water systems, communications, transportation, fuel, financial transactions and power grids. In particular, for power grids, failures due to interdependency and newer smart energy resources could have a severe impact on services, economic or production losses, citizens' well-being, and even the proper functioning of governments. Thus the impact of interdependency and smarter technologies on network operation is worth investigating. Under this motivation, there is nowadays an increasing interest in the research of power grids from the network perspective. New techniques of modeling and analysis have been developed to capture the full complexity of the power network structure. These methods address requirements in both structure and dynamics. A thorough understanding and application of the complex network framework in power systems is essential to advancing the design and control of these smarter infrastructures. Complex networks deal with the behavior of natural and human systems displaying signs of order and self-organization. Mostly, they have a large number of interacting parts, whose collective behavior cannot be inferred from the behavior of the components; the interaction between components can be as important as the parts themselves. Such systems may exhibit predictable behavior close to their regular dynamical mode, but certain events can drive them away from such modes, leading to large-scale flow rearrangements and possibly collapse.
Although network science approaches allow finding some general properties for understanding and controlling network dynamics under these phenomena, analytical tools for understanding and dealing with these operational changes are lacking. Moreover, most of the models found in the literature are conceptual approaches whose analysis is not directly applicable in engineering. This work studies transitions in network dynamics by methods of control theory, proposing a model that includes the complete system behavior and facilitates its analytical study. We use a hybrid systems approach, which produces complex behavioral patterns through different discrete transitions triggered by stochastic events and the interaction of various types of discrete and continuous dynamics. Findings from the analysis of this model could have significant implications for the design of control mechanisms and resiliency in engineering applications.
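A minimal instance of the hybrid-systems viewpoint, continuous flows interleaved with stochastically triggered discrete transitions, can be sketched as follows (a toy two-mode scalar system, not the grid model studied here; all rates and dynamics are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two operating modes for a scalar "flow" state x: nominal relaxation
# toward a setpoint, and a faulted mode that drives the state away.
modes = {0: (-1.0, 1.0),    # dx/dt = a*x + b : stable, setpoint x = 1
         1: ( 0.5, 0.0)}    # unstable excursion after a discrete event

def simulate(rate=0.05, repair=0.5, dt=0.01, steps=4000):
    """Continuous dynamics punctuated by Poisson-triggered discrete
    transitions (failure at `rate`, recovery at `repair`)."""
    x, mode, trace = 1.0, 0, []
    for _ in range(steps):
        a, b = modes[mode]
        x += dt * (a * x + b)                       # continuous flow
        p = (rate if mode == 0 else repair) * dt    # jump probability
        if rng.random() < p:
            mode = 1 - mode                         # discrete transition
        trace.append(x)
    return np.array(trace)

trace = simulate()
print(trace.max(), trace[-1])
```

The trajectory hovers near the nominal mode but shows intermittent excursions: the kind of stochastic departure from the regular dynamical mode that the abstract proposes to analyse with control-theoretic tools.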

Fixed points and periodic orbits in the Q2R cellular automaton

ABSTRACT. The Q2R model is a dynamical variation of the Ising model for ferromagnetism that possesses a reversible and conservative dynamics. Because of its complex dynamics, Q2R appears to be a good benchmark for testing the principles of statistical physics. Moreover, as the number of degrees of freedom increases, this conservative and reversible system behaves like a typical macroscopic system, showing typical irreversible behavior, sensitivity to initial conditions, a kind of mixing, etc. However, the phase space is finite, hence the dynamical system only possesses fixed points and periodic orbits; therefore it cannot be ergodic, at least in the usual sense of continuous dynamics. Nevertheless, for large enough systems the phase space becomes huge, and it has been observed numerically that the periodic orbits may be exponentially long and thus, in practice, of infinite period. For all practical purposes, observing a short periodic orbit is highly improbable in large systems with random initial conditions. In general, there is a huge number of initial conditions that are almost ergodic.

The Q2R model is based upon the two-step rule: $y^{t+1} = x^{t}$ and $x^{t+1} = y^{t} \Phi(x^t)$, where $\{x^t,y^t\}$ is the state of the system at time $t$ described by a square $L\times L$ matrix with even rank. Further, $\Phi(A)$ is a nonlinear Heaviside-like function that depends on its argument $A$ by the following rule: at every site $k$, $[\Phi(A)]_k= +1$ if $\sum_{i\in V_k}A_i \neq 0$, and $[\Phi(A)]_k= -1$ if $\sum_{i\in V_k}A_i = 0$, where $V_k$ is the von Neumann neighborhood of the site $k$. As shown by Pomeau [1], the Q2R automaton preserves an energy-like function. Notice that, because the system is conservative, there are neither attractive nor repulsive limit sets; all orbits are fixed points or cycles.
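The two-step rule above can be implemented directly; the following sketch also checks reversibility, since swapping the two layers runs the dynamics backwards (the lattice size, periodic boundaries and random initial condition are choices made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    """[Phi(x)]_k = -1 iff the von Neumann neighbourhood of k sums to 0."""
    s = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
         np.roll(x, 1, 1) + np.roll(x, -1, 1))
    return np.where(s == 0, -1, 1)

def step(x, y):
    """Q2R two-step rule: y' = x, x' = y * Phi(x)."""
    return y * phi(x), x

L = 8                                   # even lattice size
x = rng.choice([-1, 1], size=(L, L))
y = rng.choice([-1, 1], size=(L, L))

# Reversibility: applying the rule to the swapped pair undoes the step,
# because step(y', x') = (x1 * Phi(x), y * Phi(x)) = (y, x).
x1, y1 = step(x, y)
xb, yb = step(y1, x1)
assert (xb == y).all() and (yb == x).all()
```

Since every state has exactly one successor and one predecessor, iterating `step` from any initial condition must eventually close into a cycle, which is the structure the counting results below exploit.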

The phase space of the Q2R system of $N$ sites possesses $2^{2N}$ states and, in this oral presentation, we partition it into different subspaces of constant energy and, more interestingly, into a large number of smaller subspaces of periodic orbits or fixed points. In this context, we derive exact results concerning the number of fixed points and period-2 orbits. A fixed point is characterized by a state $\{x,x\}$ such that $\Phi(x)=1$ (one at every site of the lattice), and the cardinality of this set is $N_{FP}= K^2$, where $K$ is the total number of configurations $x$ without a null von Neumann neighborhood in their even (or odd) sites, i.e., $x$ restricted to its even (odd) sites must satisfy $\Phi(x)=1$. The period-2 cycles are characterized by $\{x,y\}\to \{y,x\}\to \{x,y\}$, and we show that the cardinality of this set is $N_2 = N_{FP}(N_{FP}-1)$. Finally, we discuss general results concerning orbits with larger periods.

From Information-Theoretic Causality Measures to Dynamical Networks of Financial Time-Series

ABSTRACT. Mutual Information from Mixed Embedding (MIME) and Partial Mutual Information from Mixed Embedding (PMIME) were developed by Vlachos and Kugiumtzis [1, 2] to assess non-linear Granger causality. Compared to other non-linear measures, PMIME offers three main advantages:
- No significance test is needed, as PMIME = 0 if there is no significant causality;
- No pre-determination of embedding parameters is required, as this is part of the measure;
- There is no curse of dimensionality, as additional confounding variables have no effect on statistical accuracy.

PMIME is therefore a good candidate for causality analysis of complex real-world systems, as also shown by independent analyses [3]. In particular, its application to EEG data has been extensive and very promising, while financial time-series have only been marginally analysed, albeit with interesting results.

In this work we first test PMIME's accuracy on a synthetic dataset generated by the Tangled Nature model [4], further demonstrating its ability to capture such complex dynamics. We then apply PMIME to global financial time-series, constructing dynamic networks of interactions and analysing them in light of the underlying economic and financial relationships.
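As a simpler linear stand-in for the PMIME measure, a bivariate Granger-style variance-ratio test can be sketched as follows (illustrative only: PMIME itself is non-linear and embedding-based, and the synthetic coupling below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

def granger(x, y, lag=2):
    """Log variance-ratio: how much the past of y improves a linear
    prediction of x beyond x's own past (a crude Granger measure)."""
    n = len(x)
    rows_res, rows_full, target = [], [], []
    for t in range(lag, n):
        rows_res.append(x[t - lag:t])
        rows_full.append(np.concatenate([x[t - lag:t], y[t - lag:t]]))
        target.append(x[t])
    target = np.array(target)

    def rss(rows):
        A = np.column_stack([np.ones(len(rows)), rows])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return ((target - A @ beta) ** 2).sum()

    return np.log(rss(np.array(rows_res)) / rss(np.array(rows_full)))

# Synthetic pair: y drives x with one step of delay, never the reverse.
y = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.3 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()
print(granger(x, y), granger(y, x))   # y -> x large, x -> y near zero
```

Thresholding a matrix of such pairwise values over sliding windows is one way to build the dynamic directed networks described above; PMIME replaces the linear regression with a nonuniform mixed embedding.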

[1] Dimitris Kugiumtzis. Direct-coupling information measure from nonuniform embedding. Physical Review E, 87(6):062918, 2013.
[2] Ioannis Vlachos and Dimitris Kugiumtzis. Nonuniform state-space reconstruction and coupling detection. Physical Review E, 82(1):016207, 2010.
[3] Xiaogeng Wan. Time series causality analysis and EEG data analysis on music improvisation. PhD thesis, 2014.
[4] Kim Christensen, Simone A. Di Collobiano, Matt Hall, and Henrik Jeldtoft Jensen. Tangled Nature: A Model of Evolutionary Ecology. Journal of Theoretical Biology, 216(1):73–84, 2002.

New Metrics Characterising Global Migration Flows

ABSTRACT. In the International Migration Outlook 2016 from the OECD, Stefano Scarpetta, Director of the OECD Directorate for Employment, Labour and Social Affairs, wrote that “The public is losing faith in the capacity of governments to manage migration” and concluded his editorial stating that “Unless systematic and coordinated action is taken … migration policy will continue to seem abstract and elitist, at best trailing behind the problems it is supposed to be addressing. And, as is already apparent, the result is likely to be a more strident political populism.” Very recently Pope Francis, in an interview with Die Zeit, also warned of the dangers of rising populism in western democracies. Indeed, recent times have shown the rise of this new political trend in Western countries, coinciding with the rise of migration into Europe, both as regular migration and as a refugee crisis from Syria, Libya, Iraq and Afghanistan.

Migration is a phenomenon that can be characterized by its motivation: there is labour migration, which can be temporary or long-term, and migration for education, for family reasons, or due to natural disasters or war. Migration at a global scale has a long-term positive effect through the efficient reallocation of human resources, thus improving the global economy. But, if not properly managed, it can have unwanted short-term local effects due to the strain on labour, housing, transport, health and education systems. This inability to manage, in itself, might induce social and political aversion and is fertile ground for the emergence of populism and xenophobia.

In this poster we address the issue of migration from the perspective of complexity sciences. Building on the recent release of the 2015 International Migration Flows Dataset from the United Nations, together with other economic and geographic data, we applied network-based metrics to the migration flows between countries at a global level. Although the drivers of migration are mainly local and specific both in time and in space (the so-called episodic ‘push and pull’ factors), we were able to obtain some wide-ranging results. One of these is the expansion over time of the reach of regular migrants, probably due to the progressive reduction of transport and communication costs. Other results concern measuring migration flexibility. We propose a new metric, the Migration Flexibility Index (MFI), inspired by an economic complexity measure previously developed by Hausmann and Hidalgo. We found that some regions are prone to migrate to very diverse destinations, which in turn receive migrants from very diverse origins. Other regions are instead rigid, with fewer migration destinations and much less cultural diversity. We found that, with the notable exceptions of China and India (which have exceedingly large populations) and the UK, all the ‘New World’ countries that emerged from the 15th-century maritime discoveries belong to a different class of global migration (see Figure 1).
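The Hausmann-Hidalgo-style construction behind such an index can be sketched on a toy flow matrix (the MFI's exact definition is not reproduced here; the RCA-style filter, the iteration depth, and all numbers are assumptions made for illustration):

```python
import numpy as np

# Hypothetical migration-flow matrix: F[i, j] = migrants from origin i
# to destination j (numbers are invented).
origins = ["A", "B", "C"]
F = np.array([[0, 120, 80, 60],
              [0,  10,  0, 90],
              [5,   0,  0,  0]], dtype=float)

# Binarise with an RCA-style filter: keep link i-j if the observed flow
# share exceeds what row and column totals alone would predict.
share = F / F.sum()
expected = np.outer(F.sum(axis=1), F.sum(axis=0)) / F.sum() ** 2
M = (share > expected).astype(float)

# Method of reflections (Hidalgo & Hausmann): alternate averages of the
# partner quantities, seeded with diversification and ubiquity.
k_o, k_d = M.sum(axis=1), M.sum(axis=0)
for _ in range(4):
    k_o_new = (M @ k_d) / np.maximum(M.sum(axis=1), 1)
    k_d = (M.T @ k_o) / np.maximum(M.sum(axis=0), 1)
    k_o = k_o_new
print(dict(zip(origins, np.round(k_o, 2))))
```

Origins with many significant destinations (and destinations drawing from many origins) score as flexible; rigid origins concentrate on few, common destinations.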

Information, Computation and Linguistic System

ABSTRACT. Since the advent of molecular biology, the cell has been described as a kind of 'machine' that stores its specification inside itself. Although the perspective of systems biology derived from this understanding prevails, we still lack a way to address the cellular system deductively, due to the lack of mathematical insight into the system. Here, I propose a conceptual framework in which it is possible to abstract the essential features of the system and project them onto a purely mathematical problem. The framework comprises three main concepts: information, computation, and linguistic system. Each concept can be understood independently, explaining specific features inherent to biological systems; nonetheless, the intersection of these concepts can provide fertile results for understanding their relationship and hierarchy. In this framework, the 4 bases (A, T, G, C) in biology correspond to symbols in information theory, which enables us to discuss the probability of occurrence of each symbol, channel capacity and entropies. DNA-protein interaction, one of the most important chemical reactions within cells, corresponds to computation in automata theory, which leads to the understanding of the genome as a formal language. What the molecular interactions (cascades, pathways, protein complexes and so forth) correspond to in the framework is the linguistic system, which I introduce as a genuinely new concept in order to explain the interaction between matured components. The apparent discrepancies among these three concepts can be resolved by mathematical explanation. Long-standing questions, such as whether viruses are to be categorized as life or not, can be illuminated by viewing them as a mere set of strings lacking the function of computation. In this paper, I aim to explain biological systems from a perspective completely different from previous ones.
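The information-theoretic side of this correspondence, treating the four bases as symbols of a source, can be illustrated with an elementary Shannon-entropy calculation (the sequences below are invented):

```python
from collections import Counter
import math

def entropy_bits(seq):
    """Shannon entropy (bits/symbol) of the empirical symbol distribution."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(entropy_bits("ATGC" * 25))   # 2.0: uniform over the 4 bases
print(entropy_bits("AAAT" * 25))   # lower: a biased source
```

A uniform source over {A, T, G, C} attains the maximum of 2 bits per symbol; real genomes, with biased base composition, carry less, which is exactly the kind of quantity the proposed framework makes discussable.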

An Extension of Artificial Communication System for Self-Organized System

ABSTRACT. Despite the obvious relationship, it is rather difficult to apply the whole picture of information theory to the essential processes in biology (i.e., transcription, translation, replication, etc.) due to the lack of some components. According to the original paper by C. Shannon (1948), a general communication system is composed of the following 5 parts: 1. an information source, 2. a transmitter, 3. the channel, 4. the receiver and 5. the destination. As pointed out in the paper by S. Ito & T. Sagawa (2015), there is no explicit channel coding within a cell if one focuses on the robustness of signal transduction against a noisy environment. Likewise, if one regards the genome (DNA) as an information source, then there is no exact counterpart to channel coding or decoding; on the other hand, if the cellular state (the set of proteins expressed at a given time within a cell) is regarded as the information source, it is hard to find in the central dogma the correspondent of information coding in information theory. One plausible reason for this discrepancy might be that one is an artificial system and the other a self-organized system. In physical systems where power-law distributions appear, it is widely accepted that the scaling domain of the q-exponential, a one-parameter extension of the exponential, can correctly explain the phenomena. Deduced from the q-exponential, some important formulas for power-law systems can be given, including the q-product, the q-multinomial coefficient and the Tsallis entropy. In particular, the non-additivity of the Tsallis entropy (called 'pseudo-additivity') has the potential to be applied to systems where statistical independence fails due to nontrivial interactions among the components, including biology.
Here, I will review the difference between artificial communication systems and self-organized or biological communication systems, and show the possibility of extending the artificial communication system to include the latter as a special case. In doing this, I will evaluate various entropies for self-organized communication systems, including the Rényi entropy, transfer entropy and Tsallis entropy, and show why the Tsallis entropy is the better choice for such systems. This result should stimulate research on self-organized communication systems, covering not only biological phenomena but also natural language.
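The pseudo-additivity property mentioned above, S_q(A,B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B) for statistically independent subsystems, can be verified numerically (the distributions and the value of q are arbitrary choices):

```python
import numpy as np

def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

q = 1.5
pA = np.array([0.6, 0.4])
pB = np.array([0.2, 0.3, 0.5])
joint = np.outer(pA, pB).ravel()     # statistically independent A and B

lhs = tsallis(joint, q)
rhs = (tsallis(pA, q) + tsallis(pB, q)
       + (1 - q) * tsallis(pA, q) * tsallis(pB, q))
print(lhs, rhs)                      # pseudo-additivity: the two agree
```

In the limit q -> 1 the cross term vanishes and ordinary Shannon additivity is recovered, which is why Tsallis entropy contains the standard framework as a special case.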

Characterization of the development of the Zurich road infrastructure network from a topological and planning perspective
SPEAKER: Ylenia Casali

ABSTRACT. Infrastructure systems are the backbone of cities through the services they provide. Their structures evolve in space and time with changes in demography and in the development programs of administrative planning. Although there are about 200 years of experience in planning “modern” infrastructure systems, we do not understand well how infrastructure networks grow and how this growth is related to demographic and economic development. The purpose of this contribution is 1) to characterise the growth of the road network infrastructure, and 2) to explore whether there are signatures linking topological network growth to economic and demographic development. We use the period 1955 to 2012 to explore the development of Zurich, for which we have high-quality data. The study yields that 1) centrality patterns make it possible to recognise infrastructure development, and 2) they enable the identification of densification processes in the whole city. Future work will investigate the development of network infrastructures across different cities in the world.

Modeling agglomerations of stores

ABSTRACT. One of the most intriguing phenomena in the retail world is the cause of the different spatial distributions of retail stores according to their trade (the products they sell). For some trades it appears to be more convenient for retail stores to aggregate in a small region, while for other trades the tendency is to stay far away from each other, forming a homogeneous distribution throughout the space of the city. Why does this happen? In the case of agglomerations, how many stores can be part of an agglomeration? Where do these agglomerations form? In order to answer these and other questions, we need to understand how stores interact. This interaction has its origin in the customers, so we need to understand their behavior. We take two aspects of the customer's decision into account: one depends on the costs of buying the product under consideration, and the other characterizes the heterogeneity of the consumers. The model we propose allows us to calculate the profits for a store in an agglomeration and for a monopoly. We compare these profits to estimate the best place for a new business to be established. In this work we propose a discrete-space model in which sites can hold more than one store, so that agglomerations of stores may form. We intend to model the price competition at each agglomeration as arising from the Nash equilibrium between stores.

On the interplay between topology and controllability of complex networks

ABSTRACT. Complexity theory has been used to study a wide range of systems in biology and nature, but also business and socio-technical systems, e.g., see [1]. The ultimate objective is to develop the capability of steering a complex system towards a desired outcome. Recent developments in network controllability [2], reworking the problem of finding minimal control configurations, allow the use of the polynomial-time Hopcroft-Karp algorithm instead of exponential-time solutions. Subsequent approaches built on this result to determine the precise control nodes, or drivers, in each minimal control configuration [3], [4]. A browser-based analytical tool has been developed in [5] which can be used by stakeholders in collaborative decision-making, and has been applied to policy-making in industrial networks. One key characteristic of a complex system is that it continuously evolves, e.g., due to dynamic changes in the roles, states and behaviours of the entities involved. This means that in addition to determining driver nodes it is appropriate to consider an evolving topology of the underlying complex network, and to investigate the effect of removing nodes (and edges) on the corresponding minimal control configurations. The study of the ability of each node to control the network in [6] showed that control centrality is determined by the degree distribution and in some cases (absence of loops in directed weighted networks) by the node's layer index. In our work we take this one step further and investigate the effect of removing nodes (and edges) on the corresponding minimal control configurations. We propose a classification scheme that combines existing work on topological features [7, 8] with principles of controllability in complex networks.
In particular, we consider three categories of nodes based on the effect their removal has on controllability, in terms of the cardinality of the maximum matching (CMM) in the network: a node is delete-redundant iff the CMM is unchanged; delete-ordinary iff the CMM is reduced by one; and delete-critical iff the CMM is reduced by more than one. We experimented with randomly generated directed networks of varying size, and studied pertinent characteristics such as in- and out-degree, average degree distribution, connectivity and isolated nodes. Firstly, nodes from each category were removed and we examined the effect on the control configurations of the network. As the edge probability approaches 1, delete-critical nodes decrease rapidly and all nodes tend to become delete-ordinary. Secondly, some nodes (5%, 10%) were randomly removed and we examined the effect this has on the different categories of nodes in the network (for N=500, N=1000). It transpires that the delete-redundant category is the least stable while delete-ordinary is the most stable as the edge probability increases. The results of our analysis confirm our hypothesis: structural control theory provides information which is orthogonal to network analysis. The combined information provides a solid basis for the behavioural modeling of a complex system and can provide an important dimension when it comes to scenario appraisal for collaborative decision-making.
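The node classification described above can be sketched directly from its definition: compare the cardinality of the maximum matching (CMM) before and after removing a node. The following minimal sketch (not the authors' code) uses a simple augmenting-path bipartite matching in place of the faster Hopcroft-Karp algorithm; the tiny chain graph is an illustrative example.

```python
# Sketch: classify a node as delete-redundant, delete-ordinary or
# delete-critical by comparing the CMM of a directed network before and
# after its removal. A plain augmenting-path matching (Kuhn's algorithm)
# stands in for the polynomial-time Hopcroft-Karp algorithm.

def max_matching(nodes, edges):
    """CMM of the bipartite graph out-copies -> in-copies of a digraph."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # in-copy -> out-copy it is matched to

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in nodes)

def classify(node, nodes, edges):
    full = max_matching(nodes, edges)
    rest = [n for n in nodes if n != node]
    reduced = max_matching(rest, [(u, v) for u, v in edges
                                  if node not in (u, v)])
    drop = full - reduced
    if drop <= 0:
        return "delete-redundant"
    return "delete-ordinary" if drop == 1 else "delete-critical"

# Tiny directed chain 0 -> 1 -> 2: the matching {(0,1), (1,2)} gives CMM = 2.
nodes, edges = [0, 1, 2], [(0, 1), (1, 2)]
print(max_matching(nodes, edges))  # 2
print(classify(1, nodes, edges))   # removing the middle node drops CMM by 2
```

Removing the middle node of the chain destroys both matched edges at once, so it is delete-critical, while removing an endpoint reduces the CMM by exactly one (delete-ordinary).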

Inferring Existence of Physical Feedback Mechanisms from Short-lived Synchronization Events in Complex Systems

ABSTRACT. Phase synchronization of periodic dynamical systems is a common phenomenon observed in systems ranging from coupled mechanical pendulums to complex biological systems. In biological organisms, synchronization may confer fitness advantages for foraging, mating and other necessities essential to sustain life. Equations of system dynamics that include a physical feedback mechanism coupling the phases of interacting oscillators result in phase synchronization of the oscillators under constraints on noise levels, initial differences and so on. An example of a generic model for phase synchronization with such a feedback mechanism is the Kuramoto model, which exhibits global synchronization of a collection of independent oscillators above a critical coupling (feedback) strength. This model has been extended to include the effect of noise in a stochastic environment, such as might be expected in a biological setting. In such scenarios, we often see synchronized behaviour only for a very short duration, from which one needs to decide whether the observed synchronization event was the result of an underlying feedback mechanism or of pure chance. This question is of critical relevance in the context of complex systems, where it may be very difficult to postulate feedback-generating mechanisms due to the large number of degrees of freedom.

In this context, we will describe experiments with a model biological organism, Caenorhabditis elegans, a cylindrical microscopic worm roughly 1 mm long and 50 microns wide, to look into swim-stroke synchronization between worms swimming close to one another. These worms, when put into a drop of water, start to swim randomly in an undulatory fashion. When two worms come close to each other, they synchronize their swim strokes for a brief period of time before moving away from each other. From these observations one may conclude the existence of a feedback mechanism leading to swim-stroke synchronization. For instance, the worms may be sensing the strokes of their neighbours and modulating their own strokes to avoid collisions. It is rather difficult to search for and demonstrate the exact mode of feedback due to the complexity of the problem. Therefore, before embarking on such a laborious task, one would like to gain some confidence in the hypothesis that a feedback mechanism exists.

Intuitively, the observation of long periods of synchronization may seem like good evidence supporting feedback. However, our numerical simulations show that oscillators with significant inertia, leading to a persistence time scale in their frequency fluctuations, can produce synchronization events without any feedback that are comparable in length to those arising from systems with physical feedback. Therefore the distribution of the lengths of synchronization events is not a robust metric to infer the existence of a physical feedback mechanism. Subsequently, we have explored extensively the statistics of synchronization without feedback in such systems. Based on these studies, we propose a metric based on the behavior of the fraction of time the system spends in a synchronized state as a function of external noise as a more robust way to infer the existence of a physical feedback mechanism.
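The Kuramoto model mentioned above can be sketched in a few lines. This is a generic illustration, not the authors' simulation: N mean-field-coupled phase oscillators with an optional noise term of strength D, summarized by the order parameter r in [0, 1] (r near 1 means phase synchronization).

```python
import math, random

# Minimal Kuramoto sketch (an illustration, not the authors' model):
# N phase oscillators with mean-field coupling K and optional white noise
# of strength D; the order parameter r measures phase synchronization.

def kuramoto_r(N=30, K=2.0, D=0.0, T=10.0, dt=0.02, seed=1):
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    omega = [rng.gauss(0.0, 0.1) for _ in range(N)]  # natural frequencies
    for _ in range(int(T / dt)):
        new = []
        for i in range(N):
            # mean-field coupling (the feedback term)
            coupling = K / N * sum(math.sin(theta[j] - theta[i])
                                   for j in range(N))
            noise = math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
            new.append(theta[i] + dt * (omega[i] + coupling) + noise)
        theta = new
    # order parameter r = |(1/N) * sum_j exp(i * theta_j)|
    re = sum(math.cos(t) for t in theta) / N
    im = sum(math.sin(t) for t in theta) / N
    return math.hypot(re, im)

print(kuramoto_r(K=2.0))  # coupling well above critical: r near 1
print(kuramoto_r(K=0.0))  # no coupling: r stays small
```

Above the critical coupling the oscillators phase-lock and r approaches 1; without coupling the phases stay incoherent and r fluctuates around 1/sqrt(N).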

Degree distribution of the six main taxonomic classification trees of human languages.

ABSTRACT. We built the taxonomic classification trees of human languages for the six main families from the information on the Ethnologue website and analyzed their statistical properties. Six linguistic families were analyzed (Afro-Asiatic, Austronesian, Indo-European, Niger-Congo, Sino-Tibetan, and Trans-New Guinea), which cover more than 85 percent of the number of speakers and represent more than 60 percent of the living languages of the world. We obtained the degree distribution for the whole taxonomic classification trees, not only for some levels. We found that the six language families have similar degree distributions and that these can be fitted with a power-law distribution, all fits having correlation coefficients above 0.98. Another important feature found in these trees is that the ratio between leaves and nodes is close to 0.7, i.e., a considerable part of the structure consists of leaves.
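The power-law fit with a correlation coefficient, as reported above, is commonly done by least squares on the log-log degree histogram. The sketch below (an illustration of that standard approach, not the authors' code) recovers the exponent and the correlation coefficient on synthetic data.

```python
import math

# Sketch: estimate a power-law exponent for a degree distribution by a
# least-squares fit of log(frequency) against log(degree), the common
# approach behind reporting "correlation coefficients above 0.98".

def powerlaw_fit(degrees, counts):
    xs = [math.log(k) for k in degrees]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx                      # fitted exponent
    r = sxy / math.sqrt(sxx * syy)         # correlation of the log-log fit
    return slope, r

# Synthetic check: counts proportional to k^(-2) recover slope -2, |r| = 1.
ks = list(range(1, 50))
cs = [k ** -2.0 for k in ks]
slope, r = powerlaw_fit(ks, cs)
print(round(slope, 3), round(abs(r), 3))  # -2.0 1.0
```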

Enhancement of the early warnings in the Kuramoto Model and in an Atrial Fibrillation Model by noise introduction.

ABSTRACT. When a system is externally disturbed, some statistical moments are affected depending on the nature and the amplitude of the perturbation. This leads us to the following question: how does a perturbation affect the behavior of the early warnings (EWs)? The purpose of this work is to determine how EWs are enhanced depending on the amplitude and complexity of the external perturbation introduced into the system, and to show which kinds of alerts are more sensitive to these effects. For our purpose we have analyzed several EWs, based on the metric and the memory of the system, for two models: the Kuramoto model, which is a paradigm of synchronization for biological systems, and an atrial model based on cellular automata, which has been used to diagnose and treat reentry fibrillation. For each model we have introduced different kinds of perturbations by changing either their nature or their intensity. We have observed that, regardless of its nature, a system can present EWs; furthermore, we have found that the stronger and more complex the perturbation introduced into the system, the more the EWs are enhanced, helping to detect them better.
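Two early-warning indicators commonly based on the "metric" and the "memory" of a system are the sliding-window variance and lag-1 autocorrelation. The sketch below is a generic illustration of how such indicators are computed, not the authors' specific alerts.

```python
# Sketch of two common early-warning (EW) indicators, variance and lag-1
# autocorrelation, computed over a sliding window; a rising value of
# either is the kind of alert discussed in the abstract.

def variance(w):
    m = sum(w) / len(w)
    return sum((x - m) ** 2 for x in w) / len(w)

def lag1_autocorr(w):
    m = sum(w) / len(w)
    num = sum((w[i] - m) * (w[i + 1] - m) for i in range(len(w) - 1))
    den = sum((x - m) ** 2 for x in w)
    return num / den if den else 0.0

def sliding_ews(series, window):
    out = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        out.append((variance(w), lag1_autocorr(w)))
    return out

# A signal whose fluctuations grow over time: the variance EW should rise.
series = [((-1) ** i) * 0.1 * (1 + i / 10) for i in range(60)]
ews = sliding_ews(series, 20)
print(ews[0][0] < ews[-1][0])  # True: the variance indicator increases
```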

Non-stationary individual and household income of poor,

ABSTRACT. Despite the Mexican peso crisis of 1994, followed by a severe economic recession, individual and household income distributions in the period 1992–2008 always exhibit a two-class structure: a highly fluctuating high-income class fitted by a Pareto power-law distribution, and a low-income class (including the poor and middle classes) fitted by either log-normal or Gamma distributions, where poor agents are defined as those with income below the maximum of the uni-modal distribution. The effects of the crisis on the income distributions of the three classes are then briefly analysed.

Distance distributions of human settlements

ABSTRACT. We study the spatial distribution of human settlements based on the distance between pairs of cities and in terms of their populations. The pair-wise distance distributions can be described by a variety of shapes ranging from quasi-symmetrical to left-skewed. Next, we evaluate the degree of inequality by means of a Geo-Gini index, which is defined in terms of the distances between pairs of cities and their sizes. Moreover, the spatial correlation between cities is calculated using the Geary index. We find that countries with high spatial correlation also exhibit a low level of inequality in the spatial distribution of cities, with a more symmetrical distance distribution. Finally, we discuss these results in the context of some socioeconomic indexes.
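The abstract does not spell out the Geo-Gini formula, so as a purely illustrative assumption (not the authors' definition) the sketch below shows the classical Gini coefficient built from pairwise absolute differences, the form that distance- and size-based variants such as the Geo-Gini generalize.

```python
# Classical Gini coefficient from pairwise absolute differences:
#   G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x)).
# This is an illustrative stand-in; the Geo-Gini of the abstract replaces
# the |x_i - x_j| terms with pairwise city distances weighted by sizes.

def gini(values):
    n = len(values)
    mean = sum(values) / n
    diff = sum(abs(a - b) for a in values for b in values)
    return diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))             # 0.0: perfect equality
print(round(gini([0, 0, 0, 10]), 3))  # 0.75: highly unequal
```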

An artificial agent model to achieve emergence of social behaviours in self-organized communications networks

ABSTRACT. Nowadays, communication systems require a high level of self-organization in order to build scalable networks consisting of a huge number of heterogeneous and autonomous components. However, several challenges need to be addressed before achieving systems that can face the inherent complexity of large-scale networks, heterogeneous architectures, resource constraints and dynamic environments. Until now, many self-organization methods have been developed to face these problems, particularly in the context of ad hoc networks. Most of the proposed solutions are based on bio-inspired computing and successfully solve problems related to routing, synchronization, security and distributed search. Nevertheless, there is a new challenge related to developing mechanisms to improve the cooperation processes among a set of agents. It is necessary to design strategies to achieve mutual benefit and enhance the system's ability to solve problems through collective actions. These models are required to handle common-pool resources, to control selfish behaviours and, in general, to achieve new kinds of social organization that improve the overall system satisfaction. To do this, we can use self-organization mechanisms of human society, like trust, justice and reputation, to inspire computational models that increase the social ability of artificial systems.

In this paper, we propose an artificial agent model that exploits both self-organizing principles: biological and social. We argue that many challenges of modern communication networks can be faced through the massive use of self-organization as a control paradigm: bio-inspired computing to achieve efficient and scalable networking under uncertain conditions, and socio-inspired computing to generate new behaviour patterns that solve problems through collective actions. These mechanisms allow a collection of agents to adapt their behaviour towards an optimal organization according to the operating context and, at the same time, to face the tension between individual and collective rationality common in social dilemmas. An artificial agent architecture based on a hierarchy of behaviours is proposed. We formalized the model using a finite state machine and used the builder software pattern to implement the agent in a real ad hoc network. To test our proposal, we modelled a non-cooperative game and used the concept of social reciprocity and genetic algorithms to adapt the cooperative behaviour of the agent during network operation. This test scenario is completely distributed, and the results showed that the agent adapts effectively to environmental changes. Furthermore, the agent behaviour is extensible from the implementation point of view, making it possible to use it in other contexts.

Computation and Robustness in Spatially Irregular Cellular Automata

ABSTRACT. Dynamic, spatially-oriented systems found in nature such as plant stomatal arrays can produce complex behavior without centralized coordination or control. Instead, they communicate locally, with neighborhood interactions producing emergent behavior globally across the entire system. Such behavior can be modeled using cellular automata (CA) because of the system’s ability to support complex behavior arising from simple components and strictly local connectivity. However, CA models assume spatial regularity and structure, which are not present in natural systems. In fact, natural systems are inherently robust to spatial irregularities and environmental noise. In this work, we relax the assumption of spatial regularity when considering cellular automata in order to investigate the conditions in which complex behavior can emerge in noisy, irregular natural systems. We conduct both density classification and lambda parameter experiments on CA systems with spatially irregular grids that are created through a Voronoi process using generator points derived from plant stomatal arrays. We then quantify their behavior and compare them to traditional cellular automata.

In the density classification experiments, we see that the task performance profiles of the irregular grid automata are similar to those of standard automata, suggesting that irregular grids could be suitable for supporting complex behavior much like standard, square grids. Note that this result is achieved despite the lack of periodic boundary conditions in the irregular grids, which decreases neighborhood connectivity and hinders information transfer across the grid space. For comparison, local majority task performance for a regular grid is reduced significantly when periodic boundary conditions are removed.

We also generate lambda-entropy profiles of irregular cellular automata in order to detect the presence of critical transition points. Despite the irregularity in cell orientation and non-periodic boundary conditions in these grids, these lambda-entropy profiles exhibit the same shape and jumps in entropy indicative of a transition event in regular CA systems. Furthermore, the transition characteristics are retained when we degrade the connectivity of these grids, an indication of the robustness of computation conditions even in irregular grids.

Though we have relaxed many of the spatial assumptions made in traditional CA systems, both our density classification and lambda experiments have produced remarkably similar results to their regular grid counterparts. It appears that features of regular grids that initially seem crucial for supporting CA behavior such as uniform neighborhood sizes and periodic boundaries are in fact not necessarily required for supporting complex behavior. These results help explain how natural dynamic systems can support emergent computation while handling environmental imperfections.
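The local-majority dynamics on an irregular neighborhood structure can be sketched as follows. This is a toy stand-in, assuming an arbitrary neighbor graph with varying neighborhood sizes rather than the authors' Voronoi tessellation of stomatal generator points.

```python
import random

# Sketch of a local-majority cellular automaton on an irregular grid,
# represented here as an arbitrary neighbor graph with non-uniform
# neighborhood sizes (an illustrative stand-in, not the Voronoi-based
# grids of the study).

def step(state, neighbors):
    new = []
    for i, s in enumerate(state):
        votes = [state[j] for j in neighbors[i]] + [s]  # self included
        new.append(1 if sum(votes) * 2 > len(votes) else 0)
    return new

def density_classify(state, neighbors, steps=30):
    for _ in range(steps):
        nxt = step(state, neighbors)
        if nxt == state:  # reached a fixed point
            break
        state = nxt
    return state

# Irregular ring: each cell sees 1 or 2 cells on each side (seeded).
rng = random.Random(0)
n = 40
neighbors = []
for i in range(n):
    k = rng.choice([1, 2])
    neighbors.append([(i + d) % n for d in range(-k, k + 1) if d != 0])

state = [1 if rng.random() < 0.7 else 0 for _ in range(n)]  # 1-majority
final = density_classify(state, neighbors)
print(sum(final) / n)  # fraction of 1s after relaxation
```

With self-inclusion and odd vote counts there are no ties; whether the dynamics settle on the correct majority depends on the neighbor structure, which is exactly the kind of question the experiments above probe.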

Dynamical analysis of wind data for detection and classification

ABSTRACT. Each year the Santa Ana Winds (SAW) are present in California and Baja California as dry, Foehn-type desert winds that blow from the northeast or east quadrant. These winds flow with intense magnitude and strong gusts that circulate along the sides of the mountains and plateaus. The SAW reach that region with a velocity greater than 4 m/s, a sustained wind direction in the 0-100 degree quadrant, and a decrease in relative humidity.

The empirical method of identification takes into account four factors: Speed, direction, relative humidity drop and atmospheric pressure increase. Experimental data was obtained from an automatic station that captures information every 10 minutes, for 5 years, operated by the National Water Commission (CONAGUA). This station is located in the mountainous zone of ``La Rumorosa'' (1263 m a.s.l.), Baja California, México.

The aim of this research is to identify the presence of the SAW automatically by means of a clustering algorithm. For clustering, Gaussian mixtures model is used. A mixture model can be defined as a probabilistic model to represent the existence of subgroups contained in a group. Mixing methods are used to create statistical inferences, approximations, and predictions about the properties of subgroups. The Gaussian mixture model (GMM) is a parametric probabilistic density function represented as the sum of densities of Gaussian components. Each group is modeled by a density function, which represents a family of density functions.

The use of GMM to represent characteristic distributions in data obtained from meteorological stations is motivated by the possibility that the individual component densities may capture subsets of wind modes hidden in the data. There are previous studies in regions of the USA where the k-means algorithm has been used to identify winds. One problem with using k-means or GMM is that there is no direct way to choose the optimal number of clusters, because the number of clusters is an input parameter of the algorithm. However, there are two criteria (AIC and BIC) that assign a score to each clustering, so that by comparing these scores it is possible to determine how many groups are better.

The analysis was performed on experimental data for the years 2010-2014, together with an empirical analysis of the SAW and the characterization of the winds in the ``La Rumorosa'' region. The results obtained by the GMM were compared with those obtained by the empirical analysis. The comparison shows agreement rates above 85 % in some cases, reaching up to 94 % in the best ones.
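The GMM-plus-BIC model-selection step described above can be sketched with a toy one-dimensional EM fit. This is an illustrative stand-in for the full pipeline, not the study's actual implementation; the data, component count and BIC formula (parameter count times log n minus twice the log-likelihood) are the standard textbook versions.

```python
import math, random

# Toy 1-D Gaussian-mixture EM with BIC model selection, sketching the
# GMM + AIC/BIC step described in the abstract (an illustration, not the
# study's implementation).

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_gmm(data, k, iters=200):
    lo, hi = min(data), max(data)
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]  # spread init
    variances = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            ps = [w * normal_pdf(x, m, v)
                  for w, m, v in zip(weights, mus, variances)]
            s = sum(ps) or 1e-300
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = max(sum(r[j] * (x - mus[j]) ** 2
                                   for r, x in zip(resp, data)) / nj, 1e-6)
            weights[j] = nj / len(data)
    loglik = sum(math.log(sum(w * normal_pdf(x, m, v)
                              for w, m, v in zip(weights, mus, variances)))
                 for x in data)
    n_params = 3 * k - 1  # k means + k variances + (k - 1) free weights
    bic = n_params * math.log(len(data)) - 2.0 * loglik
    return sorted(mus), bic

# Two well-separated "wind modes" around 0 and 10 (synthetic, seeded).
rng = random.Random(42)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(10.0, 1.0) for _ in range(200)])
mus, bic2 = fit_gmm(data, 2)
_, bic1 = fit_gmm(data, 1)
print([round(m, 1) for m in mus])  # means near 0 and 10
print(bic2 < bic1)                 # True: BIC prefers two components
```

Comparing the BIC scores across candidate values of k is exactly the "compare these scores to determine how many groups are better" step.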

Econophysics Study of World Income.

ABSTRACT. The main objective of this work is the comparison of the world distributions of income over several years, as obtained by Branko Milanovic and by Martín I. Salas. The purpose of this comparison is to know how far apart the two ways of calculating these distributions are, and then to determine a limit distribution to which all distributions tend, if such a distribution exists. We obtain the exponents that the tails of the Pareto distributions are believed to follow at the high-income end. This is expected to give a better approximation of the income distribution, on which a fit is subsequently made; the model to be used (regardless of whether a limit distribution exists or not) is based on the analogy between economic transactions and particles in a system sharing kinetic energy through collisions. These transactions also include saving factors, since in reality (and in general) an economic agent does not lose all its capital (a particle all its energy) in a single interaction.

The network topology and the agility of a supply chain

ABSTRACT. We analyze the influence of the network structure on the agility of a supply network, which is understood as the “ability to respond rapidly to unpredictable changes in demand or supply”.

Previous results point to the prevalence of scale-free structures and derivatives as the most convenient topologies for agile supply networks. Our main hypothesis is that this is not the case for supply networks where every agent is supplied by agents in the preceding tier and supplies agents in the subsequent tier. This condition corresponds to the case of some real food supply chains.

In order to test our hypothesis, we build a model that represents a supply network with three tiers (suppliers, wholesalers and distributors) and specific rules to allocate orders and supplies. We assume that the agents' degree distribution in every tier can follow three probability distributions (regular, zero-truncated Poisson, zero-truncated power-law). We simulate a sudden demand change and measure the order fulfillment rate (OFR). The results show that the OFR when assuming power-law degree distributions is lower than when assuming the other, more homogeneous distributions. Thus, the highest agility of the supply network is achieved when degree distributions are homogeneous (regular or Poisson). These results illustrate that the most efficient topology in a supply network is not necessarily scale-free, but depends on the conditions of the specific supply chain.

The model is tested with a real fish trade market, the Mercado del Mar (MM) in Guadalajara, Mexico. The data was collected from several interviews with a sample of 10 wholesalers in the MM. The results show that the pattern of interrelationships in the MM is consistent with the aspects of supply chain efficiency analyzed theoretically.

Stochastic risk model for risk management of the peso/dollar exchange rate.

ABSTRACT. The exchange rate represents the relationship between two currencies, and its behavior is a key factor in the economic performance of a country. During the last few years there has been a significant increase in the peso/dollar exchange rate. From the point of view of risk management this situation is a challenge, since entities should have sufficient capital to face any sustained increase in the price of the dollar. This paper presents a stochastic model, based on the Brownian motion of Robert Brown and Norbert Wiener, for the management of exchange rate risk. It uses historical exchange rate information from the 2015-2016 period to simulate possible trajectories over a year and determines the 0.01, 0.05, 0.5, 0.95 and 0.99 percentiles for each time instant, resulting in different confidence bands. It is observed that the behavior of the exchange rate in each year follows a trend consistent with the expected value of the model and, in moments of high volatility, remains within the risk bands obtained. Therefore, the proposed model can be a useful tool within an entity for risk management purposes.
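The percentile-band construction described above can be sketched by simulating geometric Brownian motion paths and taking empirical percentiles at each horizon. The drift and volatility values below are illustrative placeholders, not the parameters estimated in the paper.

```python
import math, random

# Sketch: simulate geometric Brownian motion paths for an exchange rate
# and take empirical percentiles of the simulated values (the "confidence
# bands" of the abstract). Drift mu and volatility sigma are illustrative,
# not the historically estimated 2015-2016 values.

def gbm_percentiles(s0, mu, sigma, T=1.0, steps=12, n_paths=2000,
                    qs=(0.01, 0.05, 0.5, 0.95, 0.99), seed=7):
    rng = random.Random(seed)
    dt = T / steps
    finals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * z)
        finals.append(s)
    finals.sort()
    # empirical percentile: the value at rank q * n_paths
    return {q: finals[min(int(q * n_paths), n_paths - 1)] for q in qs}

bands = gbm_percentiles(s0=18.0, mu=0.05, sigma=0.15)
print(bands[0.05] < bands[0.5] < bands[0.95])  # True: bands are ordered
```

Repeating this at every intermediate time step, rather than only at the final horizon, yields the band for each time instant as in the abstract.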

SPEAKER: Marivel Zea

ABSTRACT. Currently, the city of Queretaro requires a count and a comparative analysis of the emissions produced by the vehicle fleet, since the population is growing exponentially every year, causing an increase in pollutants.

There are different studies to obtain this type of information; one of them is the vehicular census, which uses traffic gauges to gather data from an area that is representative of a city.

The aim of this project is to develop a system for the detection of moving objects through artificial intelligence, in order to classify and count the vehicle fleet in real time, which will allow decision-makers to carry out innovative actions in favor of better mobility.

The study will be carried out on Avenida Universidad, which is one of the busiest avenues in the city of Querétaro.

For the development of this system, different tools were integrated, such as image processing, artificial vision, Gaussian models and feed-forward artificial neural networks. Gaussian models are a parametric probability density function represented as a weighted sum of Gaussian component densities. We integrated artificial neural networks, which made it possible to recognize the vehicles and classify them as automobile or microbus.

The data used for the training and verification of the neural network were derived from an experimental design. The variables examined were the number of video frames, the number of Gaussian mixtures and the learning rate.

The main findings after implementing this new system were a reduction of at least 70 percent in the time needed to carry out the vehicular census and a reduction of at least 50 percent in its costs, mainly with regard to the personnel that is usually required.

A chaotic time-series study of the average speed of a Metrobus route

ABSTRACT. This poster presents the results of the chaotic analysis of the time series obtained from the average hourly speed data of Route 5 of the Bus Rapid Transit (BRT) of Mexico City during May 2014. The idea of studying the velocity data of a transport system arose from Shang and Kamae's paper ``Chaotic analysis of traffic time series'' (2004), in which the authors assert that such series are complex entities. In this work a statistical study was carried out, the most relevant characteristics of the series were examined, and the techniques of chaotic analysis described by Takens's theorem were applied. The results obtained are as follows: the series is stationary and its mean is 38.94 km/h. For the phase-space reconstruction, the delay time is 2 hours and the embedding dimension is 8. The correlation dimension is 0.1063152, an indicator of low-dimensional chaotic behavior. The maximum Lyapunov exponent is 0.02231185, a positive and finite value implying that the trajectories diverge exponentially and that there is chaotic behavior. Thus, the reconstructed attractor turned out to be chaotic. Studying the time series with the mathematical and computational tools of chaotic analysis allows us to determine which stations have the greatest influx and at what time it happens, or makes us question whether the data provided are correct. Route 5 is suggested as an alternative transportation system for the Metropolitan Zone of the Valley of Mexico along with other forms of mobility. Some problems in its operation are discussed: the BRT units reach speeds of more than 100 km/h after 11 pm and very early in the morning, and the speed is very low during heavy-traffic hours. The suggestions for these scenarios are penalties for speeding and the introduction of intelligent traffic lights.
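The phase-space reconstruction step above (Takens's theorem) amounts to building delay vectors from the scalar series. A minimal sketch, using the abstract's values tau = 2 and embedding dimension m = 8 on a stand-in series:

```python
# Sketch of Takens delay embedding: build delay vectors
#   x(t) = [s(t), s(t + tau), ..., s(t + (m - 1) * tau)]
# from a scalar time series, with the abstract's tau = 2 and m = 8.

def delay_embed(series, m, tau):
    span = (m - 1) * tau
    return [[series[t + k * tau] for k in range(m)]
            for t in range(len(series) - span)]

series = list(range(100))  # stand-in for the hourly average speeds
vectors = delay_embed(series, m=8, tau=2)
print(len(vectors), len(vectors[0]))  # 86 8
```

The correlation dimension and maximum Lyapunov exponent reported above are then estimated on these reconstructed vectors.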

A Principle of Decreasing Average Unit Action in Self-organization of Complex Systems

ABSTRACT. First principles determine the equations of motion and the conservation laws in physics. Those same principles should determine the evolution of complex systems towards more organized states. The principle of least action states that all motions occur with the least amount of action compared to the alternatives. This can be restated as: motions occur with maximum action efficiency, if we divide by the action necessary for the occurrence of one event. Since this is a first principle in physics, there is no reason for it to be violated in complex systems. We therefore investigate the action efficiency of organized systems as a measure of their level of organization, and find that they evolve toward more action-efficient states. Hence the principle of least action determines not only all motions and all conservation laws in physics, but also the evolutionary states of complex systems. In order to measure action efficiency in complex systems, the principle of least action needs to be modified: from the minimum of action along a single trajectory to the minimum of the average action per event in a complex system within an interval of time. This is an extension of the principle of least action for complex systems. We measure that the increase of action efficiency in the evolution of complex systems happens in a positive feedback with the other characteristics of complex systems, such as the total amount of action for all events, the total number of elements in the system, the total number of events, the free energy rate density and others. This positive feedback leads to exponential growth in time of all of those characteristics, and to a power-law dependence between each pair of them, which is supported by experimental data. This causal loop, we find, is the mechanism of self-organization in complex systems.
Further, the tendency for action minimization expresses itself by decreasing action not toward a fixed value, as in simple motions, but further without an obvious limit. This can be termed a principle of decreasing average unit action, which expresses the tendency to action minimization through bringing the endpoints of trajectories closer, achieved by increased density in complex systems. Other mechanisms include decreasing the size of the agents in a system, as in CPUs. Therefore, instead of reaching a state of least action and stopping its evolution there, a system keeps decreasing its action without any obvious bound. This makes it an open-ended process with an attractor: the least possible average action per event in the system. Thus, systems can self-organize and evolve, starting from simple physical systems, through chemical, biological, social and future ones. Action efficiency can theoretically be increased at least up to Planck's limit, i.e. when one Planck constant of action is necessary for the occurrence of one event in a system. It is possible that quantum computing can overcome even this barrier.

On the basic reproduction number for vector-borne disease in a metapopulation system with human mobility: a dynamical network approach

ABSTRACT. The basic reproduction number (Ro) is an index whose value defines a threshold that determines whether a given disease will spread in a completely susceptible population. Furthermore, in the context of dynamical systems, the basic reproduction number is related to the stability of the disease-free equilibrium point of an epidemic compartmental model such as the well-known Susceptible-Infected model. Specifically, if Ro < 1 the disease-free point is locally asymptotically stable, and if Ro > 1 it is unstable. From the mathematical analysis it is possible to determine an explicit form for Ro via the next-generation matrix or by direct linearization of the model around the disease-free equilibrium point. Moreover, for vector-borne diseases such as those transmitted by mosquitoes (Dengue, Zika, or Chikungunya), an expression for Ro can be obtained in terms of mosquito entomological parameters. However, it is not straightforward to obtain an expression for Ro in a metapopulation system, in which the disease propagates over a set of multiple connected communities between which humans can travel. In this work we use the formalism of dynamical networks to obtain Ro in a metapopulation model using two methodologies: the next-generation matrix and the master stability function. We address this problem from the Lagrangian perspective, in which human mobility is introduced via the resident dwell-time parameter pij, which stands for the fraction of time that residents of community i spend in community j. In this context, the system is modeled by a dynamical network with weighted, unidirectional links given by pij, and the dynamics of each node is described by a Susceptible-Infected compartmental model for both humans and mosquitoes.
We prove that Ro depends on the following parameters: mosquito entomological parameters (effective biting rate, per-capita mortality of adult female mosquitoes, etc.), human features (recovery rate), and human mobility (pij). We analyze the ranges of these parameters for which Ro < 1, and we investigate how the network's topology and the distribution of the values of pij affect Ro.
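As a minimal illustration of the next-generation-matrix recipe mentioned above, consider a single community (no mobility) with one host and one vector compartment; all parameter values below are invented for the sketch.

```python
# Toy parameters (invented) for a single-patch host-vector SI model.
beta_hv = 0.30    # vector -> human transmission rate
beta_vh = 0.25    # human -> vector transmission rate
gamma   = 0.10    # human recovery rate
mu      = 0.08    # adult female mosquito mortality rate

# Next-generation matrix K = F V^{-1}, with new-infection matrix
# F = [[0, beta_hv], [beta_vh, 0]] and transition matrix V = diag(gamma, mu).
k12 = beta_hv / mu
k21 = beta_vh / gamma

# Ro is the spectral radius of the antidiagonal matrix [[0, k12], [k21, 0]],
# whose eigenvalues are +/- sqrt(k12 * k21).
R0 = (k12 * k21) ** 0.5
print("Ro =", R0)    # > 1 here, so the disease-free point is unstable
```

In the metapopulation setting of the abstract, F and V become block matrices indexed by community, with the dwell times pij entering the infection terms, but the threshold is still the spectral radius of F V^{-1}.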

Improved Characterization of Naturally Fractured-Vuggy Carbonate Reservoirs Through a Statistical Multivariate Analysis

ABSTRACT. The characterization of naturally fractured carbonate reservoirs has been a major challenge in the oil, gas, and underground water industries. The world's largest oil and gas reserves are found in these giant carbonate reservoirs. Dynamic reservoir modeling is still under study, especially because the triple porosity adds another dimension of complexity to the characterization of these reservoirs, with multiple possible solutions.

In order to characterize these reservoirs through well-test analysis, we use a triple-porosity double-permeability model and a triple-porosity single-permeability fractal model. Using the analytical solution of the model, a statistical sensitivity analysis of a large number of synthetic cases is performed, and the behavior of the different parameters involved in the model is studied by averaging the effect of each parameter on the pressure and the magnitude of this effect. The importance of each parameter over different time ranges is also evaluated. By unifying this information, an ad hoc methodology for reservoir characterization (with the studied model) is proposed.

This methodology significantly reduces the estimation error of the reservoir parameters, from 29% to 11% in the median overall error for the least sensitive parameters, making it possible to estimate these parameters from changes in wellbore pressure. It also predicts the range of values in which each parameter has a higher estimation error, and it can eliminate the multiple solutions that an optimization algorithm can reach due to numerical precision.

We present results on Mexican carbonate reservoirs and compare them with results from commercial dual-porosity software. We show that our characterization increases the amount of information about the reservoir and achieves a more accurate fit, demonstrating that considering triple porosity is crucial for these types of reservoirs.

Comparative analysis of the molecular evolution of the Gene Regulatory Network underlying trichoblast formation in over 850 ecotypes of Arabidopsis thaliana.
SPEAKER: Alma Piñeyro

ABSTRACT. Plants are sessile organisms that contend with environmental changes through the dynamic modification of their physiology, gene expression, and morphology. While these changes occur at the individual level, advantageous changes can diffuse and eventually become fixed in a population. Thus, populations of a given species subjected to different environmental cues can adapt locally, giving way to distinct varieties or ecotypes that can in turn have contrasting phenological, developmental, or growth patterns. The comparative study of natural variation in response to different environmental cues across the ecotypes of a particular species makes it possible to empirically assess the robustness and evolvability of particular nodes within a Gene Regulatory Network (GRN). In this work we analyze the effects that potential local adaptation to contrasting environments can have on trichoblast patterning in the root, through a comparative analysis of the molecular evolution of the genes recovered as necessary and sufficient in the GRN underlying trichoblast and atrichoblast patterning in plants. The case study presented here involves a comparative analysis of the mutational changes fixed in over 850 ecotypes of the model plant species Arabidopsis thaliana. These ecotypes have evolved in different parts of the world, some under contrasting ecological and climatic conditions. We therefore use an ecological evolutionary developmental biology (eco-evo-devo) approach to survey which genes within the trichoblast GRN have incorporated more nonsynonymous mutations, as well as which ecotypes present divergent mutational patterns with respect to the A. thaliana ecotype most widely used for genetic and functional studies: Columbia-0. Furthermore, we present preliminary data pertaining to correlations between select A. thaliana ecotypes with extreme patterns of gene variation and their ecological settings.

Propagation of cascading overload failures in interconnected networks

ABSTRACT. Cascading failures are frequently observed in networked systems and remain a major threat to the reliability of network-like infrastructure. To assess system resilience, we analyze the effect of link failure on the propagation of sandpile avalanches through interconnected networks. We observe a positive feedback between link failure due to overuse and the sandpile dynamics, where damage spread is controlled by the link strength and the density of interlayer connections. Our work provides insight into the problem of optimal robustness of systems of interconnected networks. We consider a classic model of cascading failure, the BTW sandpile model, on a system of interdependent networks, and additionally assume that links in the system fail after they have transported more than θ grains of sand. For simplicity and ease of visualization, we consider a system of two square lattices with periodic boundary conditions in each layer, where both inner and interlayer links are characterized by the same strength θ. In a system of weak links (low θ), structural damage to the network propagates radially from the site of initial failure, causing an abrupt collapse of the entire system (Fig. 1; top left). An increase of the link strength θ causes more gradual and uncorrelated damage spread, with different parts of the system failing at different times (Fig. 1; bottom left). In both cases an increase in the coupling P between layers increases the number of sites at which failures originate, followed by simultaneous destruction of the remaining links (Fig. 1; right). Strong and weak links, however, affect system resilience in diametrically different ways. Increasing the coupling P between layers in a system with weak links leads to a greater diversity of link failure times, with the abrupt collapse of the network occurring at later times (Fig. 1; top right).
Thus, when operating a system built of weak components, increasing the coupling between layers is a strategy that improves resilience to failures. On the other hand, optimal resilience for a system of strong components is reached at low connectivity, where greater variability of failure times is observed. These results are in line with observations of numerous natural and man-made networks characterized by a modular structure, in which clusters of strongly connected nodes are weakly coupled to each other. Our work suggests that such a mixed topology may be the most robust with respect to failures propagating through the system.
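A minimal sketch of the model ingredients described above, BTW toppling plus links that wear out after a finite number of grain transports, on a single periodic lattice; the lattice size, drive length, and θ are arbitrary toy choices, and grains sent over failed links are simply dissipated here.

```python
import random

random.seed(1)
L, theta = 10, 50            # lattice size and link strength (toy values)
h = [[0] * L for _ in range(L)]          # sand heights

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def link(a, b):              # canonical key for an undirected link
    return (a, b) if a < b else (b, a)

cap = {}                     # remaining grain transports per link
for i in range(L):
    for j in range(L):
        for nb in neighbors(i, j):
            cap.setdefault(link((i, j), nb), theta)

failed = 0
for _ in range(4000):        # drive: add grains at random sites
    i, j = random.randrange(L), random.randrange(L)
    h[i][j] += 1
    unstable = [(i, j)]
    while unstable:          # relax: topple every site with h >= 4
        x, y = unstable.pop()
        if h[x][y] < 4:
            continue
        h[x][y] -= 4
        if h[x][y] >= 4:
            unstable.append((x, y))
        for nb in neighbors(x, y):
            e = link((x, y), nb)
            if cap[e] > 0:
                cap[e] -= 1          # the link wears out with use
                if cap[e] == 0:
                    failed += 1
                h[nb[0]][nb[1]] += 1
                if h[nb[0]][nb[1]] >= 4:
                    unstable.append(nb)
            # a grain sent over a failed link is dissipated

print(failed, "of", len(cap), "links failed")
```

The two-layer version of the abstract adds a second lattice and interlayer links with coupling density P; the positive feedback is visible even in this single-layer toy, where avalanches wear links down and worn links reshape subsequent avalanches.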

A simple model of friendship rank-distribution on a closed social network and its academic impact.

ABSTRACT. We present an analysis of an ordinary classroom as a (closed) social network. We show that the friendship relations amongst the agents of the classroom are given by a degree distribution that follows a strongly random trend, with a connection probability distribution function P(k) of a Gaussian kind. By contrast, the animosity relations amongst the agents display highly correlated behavior, with a connection probability distribution described by power laws. We then analyze the impact of the level of harmony (or disharmony) on the academic performance of the agents as a whole, and propose an analytic model that describes this impact by a Beta-like function. Finally, we conclude that as the degree distribution of the agents of the network becomes more homogeneous, the academic performance of the agents improves.

Regulating loggerhead sea turtle fishing bycatch in Gulf of Ulloa Mexico: An exploratory modeling approach

ABSTRACT. The case of incidental fishing bycatch of loggerheads epitomizes the challenges of policymaking regarding highly migratory endangered species. Sea turtles are protected by multilateral agreements that identify fishing bycatch as a major threat. In our case, the U.S. government notified Mexico of possible trade sanctions because of the lack of proper regulations to address this threat. In Mexico, authorities, scientists, and stakeholders faced deep uncertainty arising from a combination of very down-to-earth factors: limited information, conflicting descriptions of causal relationships, disagreement about evidence, restrictions embedded in formal spatial models, linguistic imprecision, statistical variation, and measurement error. In addition, “politically induced uncertainty” exacerbated conflict as government agencies used disciplinary authority and knowledge to narrow the scope of the available scientific data and downplayed knowledge gaps in favor of their institutional mandates. Accordingly, we implemented a transdisciplinary approach to address the conceptual, institutional, and social barriers that had historically impeded consensus building about regulations to curtail fishing bycatch of loggerheads.

We addressed the analytical and methodological challenges of tackling such an urgent, political, and contested issue through exploratory modeling. In particular, our research was framed in conformity with the Mexican legal requirements of duly grounded and reasoned decision making. Hence, we combined two techniques. The first aimed to generate an early-warning indicator and entailed the implementation of ecological risk assessment. The second aimed to elicit an optimal zoning scheme and entailed the implementation of the area-oriented multiple-use framework. One innovative aspect of the latter was the use of multicriteria modeling to estimate the relative costs of segregating fishing or conservation activities in each zone. The results constituted the backbone of the regulations aimed at setting a bycatch cap and a refuge area for loggerheads in the Gulf of Ulloa, which were considered comparable in effectiveness to those the U.S. government applies in Hawaii.

We argue that our results identified the short-term course of action consistent with the long-term goal of preserving the Eastern Pacific loggerhead population, in accordance with Mexico's international commitments. We generalize that this case shows how exploratory modeling can be used to enhance the ethical and epistemological dimensions of transdisciplinary inquiry. The corollary of the whole process is that an exploratory modeling rationale enabled a structured interpretation of the stakeholders' positions, which not only exposed some key logical fallacies but also improved governance of the loggerhead-halibut-fishermen system.

Urban Growth and Complexity related to Social Energy and Food Consumption.
SPEAKER: Regnier Cano

ABSTRACT. Many branches of science try to define complexity, yet no unified definition exists; some physicists, however, have attempted to construct one. This definition fixes a way of measuring complexity: the energy rate density, the amount of energy flowing through a system per unit time and per unit mass. This free energy is related to the entropy changes in a system (Chaisson, 2011). In this work we propose to understand the complexity of human societies through their urbanization processes. In particular, we analyze the effect of these processes on energy consumption and how it is related to complexity. This research explores the idea that the more urban (complex) a society is, the greater its energy consumption; this process is also related to an increase in food consumption. Many countries around the world were studied, and the data analyzed were: (i) the total primary energy supply, obtained from the International Energy Agency; and (ii) the urban population, total food, and animal food, obtained from the Food and Agriculture Organization of the United Nations. Both the primary energy data (for the period 1990-2014) and the food consumption data (for the period 1990-2011) were converted to energy rate densities as a quantification of complexity. These values were related to urban population figures to establish different behaviors of national energy and food consumption. Indeed, while some countries show excessive energy consumption, others are moderate or even efficient in their urbanization growth. We conclude that the urban population of a society is an adequate indicator for describing the corresponding values of total primary energy supply and food. The results presented here provide clues for the construction of a model that will allow identifying key factors in the search for sustainable energy consumption.

Reference Chaisson, E. J. (2011). Energy rate density as a complexity metric and evolutionary driver. Complexity, 16(3), 27-40.
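The energy rate density used above can be illustrated with a back-of-the-envelope example; the round numbers below are assumed for the sketch, not taken from the study's datasets.

```python
# Back-of-the-envelope energy rate density, in Chaisson's customary
# units of erg s^-1 g^-1, for a human body (assumed round numbers).
kcal_per_day = 2500.0            # assumed daily metabolic intake
mass_g = 70_000.0                # assumed 70 kg body mass, in grams

erg_per_kcal = 4.184e10          # 1 kcal = 4184 J; 1 J = 1e7 erg
seconds_per_day = 86_400.0

phi = kcal_per_day * erg_per_kcal / (seconds_per_day * mass_g)
print(phi, "erg/s/g")            # order 10^4, the range Chaisson reports for humans
```

The national figures in the abstract follow the same arithmetic: total primary energy supply (or food energy) per unit time, divided by the relevant mass.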

Approximation to the quantum planar rotor coupled to a finite temperature bath

ABSTRACT. An approximation to the dynamics of a quantum planar rotor coupled to a finite-temperature bath is derived by considering a microscopic model of interaction based on angular momentum exchange with two different environments coupled independently to the positive and negative angular momentum spectra. A non-Lindblad master equation is derived for this microscopic model by using the Born–Markov approximation in the weak coupling limit. We show that under this approximation the rotor dynamics presents the correct damping behavior of the motion, and that the thermal state reached by the rotor has the form of a Boltzmann distribution. The cases of the quantum rotor in an external uniform field and the quantum kicked rotor are briefly discussed as examples.
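The approach to a Boltzmann thermal state can be illustrated with a hedged classical sketch: a detailed-balance jump process on a truncated ladder of rotor-like levels, not the authors' quantum master equation, with all parameters chosen for illustration.

```python
import math

# Classical toy (not the non-Lindblad model above): nearest-neighbor
# jumps on levels E_m = m**2 (arbitrary units) with detailed-balance
# rates, integrated to its steady state.
T, G = 2.0, 1.0                  # bath temperature, base jump rate
ms = list(range(-6, 7))          # truncated angular momentum ladder
E = {m: m * m for m in ms}

def rate(a, b):                  # jump rate a -> b for neighboring levels
    dE = E[b] - E[a]
    return G if dE <= 0 else G * math.exp(-dE / T)

p = {m: 1.0 / len(ms) for m in ms}       # start from a flat distribution
dt = 0.01
for _ in range(10_000):                  # Euler-integrate dp/dt = W p
    dp = {m: 0.0 for m in ms}
    for a in ms[:-1]:
        b = a + 1
        f_up = rate(a, b) * p[a]
        f_dn = rate(b, a) * p[b]
        dp[a] += f_dn - f_up
        dp[b] += f_up - f_dn
    for m in ms:
        p[m] += dt * dp[m]

# the steady state should be close to Boltzmann: p[m] proportional to exp(-E_m/T)
print(p[3] / p[0], math.exp(-(E[3] - E[0]) / T))
```

Because the rates obey detailed balance with respect to exp(-E/T), the two printed numbers agree closely, mirroring the thermalization result of the abstract in a classical setting.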

Short time dynamics determine glass forming ability in a glass transition two-level model: A stochastic approach using Kramers’ escape formula

ABSTRACT. The relationship between short- and long-time relaxation dynamics is obtained for a simple solvable two-level energy-landscape model of a glass. This is done by means of Kramers' transition theory, which arises in a very natural manner for calculating transition rates between wells. The corresponding stochastic master equation is then solved analytically to find the populations of the metastable states. A relation between the cooling rate, the characteristic relaxation time, and the population of metastable states is found from the solution of this equation. From it, a relationship is obtained between the relaxation times and the frequency of oscillation at the metastable states, i.e., the short-time dynamics. Since the model is able to capture either a glass transition or crystallization depending on the cooling rate, it provides a conceptual framework in which to discuss, for example, some aspects of rigidity theory.
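A minimal numerical sketch of the ingredients named above, Kramers' overdamped escape rate feeding a two-level master equation; all parameter values are illustrative, not fitted to any glass former.

```python
import math

# Kramers' overdamped escape rate for a double-well landscape:
#   k = (w_well * w_barrier / (2 * pi * gamma)) * exp(-dE / kT)
gamma = 1.0                      # friction coefficient (toy value)
w_a, w_b = 2.0, 1.5              # curvatures at well bottom and barrier top
kT = 0.5                         # thermal energy
dE_1to2, dE_2to1 = 1.0, 2.0      # barrier heights seen from each well

def kramers(dE):
    return w_a * w_b / (2.0 * math.pi * gamma) * math.exp(-dE / kT)

k12, k21 = kramers(dE_1to2), kramers(dE_2to1)

# Two-level master equation dp1/dt = -k12*p1 + k21*p2 relaxes with a
# single characteristic time toward the equilibrium occupations:
tau = 1.0 / (k12 + k21)          # relaxation time
p1_eq = k21 / (k12 + k21)        # equilibrium population of well 1

print(tau, p1_eq)
```

The ratio k21/k12 equals exp(-(dE_2to1 - dE_1to2)/kT), so the deeper well ends up more populated; comparing tau with the inverse cooling rate is what separates the glassy from the crystallizing regime in a model of this type.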

Emergence of Criticality in Infant Brain: An EEG study
SPEAKER: Marzieh Zare

ABSTRACT. Background: Self-Organized Criticality (SOC) is a characteristic feature of complex dynamical systems. The study of SOC in the human brain has attracted widespread attention among physicists, and there is extensive empirical evidence that the brain works near criticality. At criticality, avalanche size and lifetime distributions display power-law exponents limited to a specific range. Invasive studies have investigated criticality in brain activity by examining Local Field Potential (LFP) and ECoG data; however, noninvasive methods also demonstrate criticality in human brain activity. Recent studies report power-law behavior (PLB) in EEG, MEG, and fMRI data. Some studies propose that the human brain is a non-linear self-organized system, and that power laws therefore represent a signature of a healthy, well-functioning brain. On the other hand, neuronal avalanche dynamics represent a particular type of synchrony, and large-scale synchronization has been shown to emerge in higher frequency bands. Here, we explore whether PLB is a signature of SOC that emerges in infancy and whether that signature also tracks developmental trajectories. We also examine the evidence for the origin of criticality and PLB specific to different frequency bands.

Methods: EEG from 62 scalp electrodes was recorded from 12 healthy infants at 6 and 12 months of age (N=24) who were awake and comfortably seated on their parents' laps. Each child's EEG data were examined for changes in evoked activity. EEG signals were sampled at 250 Hz and band-pass filtered online at 0.1-100 Hz. We examined evoked data from an oddball paradigm that presented two blocks of paired complex tones, one separated by a 70 ms ISI and a second by a 300 ms ISI. The average duration of the recording was ~10 minutes per child per ISI condition. Using automatic channel rejection (EEGLAB), noisy channels were identified, removed, and interpolated. Time series were decomposed into six frequency bands: delta, theta, alpha, beta, gamma, and high gamma. To detect avalanches, the data were Z-scored. Neural avalanche analysis was performed separately for each frequency band, for each subject, at each ISI. Activity was defined by whether the negative peaks of the EEG exceeded a threshold computed as a factor of the SD. If the separation between two activity periods was less than a specific window length, regardless of electrode location, we counted them as the same avalanche. Thus the “size” of an avalanche is its number of activities, and its “duration” equals the time difference between the first and last activity within the avalanche.
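The avalanche definition in the methods can be sketched as follows; the threshold factor, window length, and synthetic signals are invented for illustration, and this is not the authors' analysis code.

```python
# Toy avalanche extraction: z-score each channel, call threshold
# crossings "activities", and merge activities separated by less than
# a window into one avalanche, regardless of channel.
def avalanches(signals, k=3.0, window=4):
    events = []
    for x in signals:
        n = len(x)
        mu = sum(x) / n
        sd = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
        events += [t for t, v in enumerate(x) if sd > 0 and abs(v - mu) / sd > k]
    events.sort()
    out, start, last, size = [], None, None, 0   # collects (size, duration)
    for t in events:
        if start is not None and t - last < window:
            size += 1                # same avalanche: one more activity
        else:
            if start is not None:
                out.append((size, last - start))
            start, size = t, 1       # open a new avalanche
        last = t
    if start is not None:
        out.append((size, last - start))
    return out

# synthetic recording: a quiet channel plus one with two brief bursts
quiet = [0.0, 1.0, -1.0] * 40
sig = list(quiet)
sig[30] = sig[32] = 25.0             # burst of two activities, 2 samples apart
sig[90] = 25.0                       # isolated single activity
print(avalanches([sig, quiet]))      # → [(2, 2), (1, 0)]
```

The size and duration pairs collected this way are what feed the power-law fits discussed in the results.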

Results: These results demonstrate that, even in infancy, the human brain is a SOC system. Power-law behavior was present in 6-month-olds. In a subset of subjects, however, this pattern only emerged at 12 months, suggesting an extended developmental trajectory for the appearance of PLB. Our results also revealed that, at least at these ages, only the higher frequency bands exhibited PLB. As higher-frequency brain activity has been associated with conscious attention, cognition, and critical thinking, the significance of this finding needs further exploration. We propose that local brain synchrony can propagate via a neuronal avalanche mechanism and that SOC follows a variable developmental progression.

Correlation of pollution and traffic flow in Mexico City. Predictive analysis for the impact in possible scenarios of chaotic events in the future.

ABSTRACT. The Mexico City metropolitan area is one of the biggest and most polluted urban areas in the world, yet no specific data exist indicating the impact and correlation of traffic flow on air pollutants, or how the increase in motor vehicles and changes in infrastructure will lead to critical contamination conditions and a possible collapse of vehicle circulation. The aim of this study is to begin acquiring data during the critical high-pollution season of spring, which in 2016 reached a critical pollution index that made the government change vehicle verification regulations, but not traffic flow. We begin to develop a model for the correlation of traffic flow with pollutant and climatic variables, and propose scenarios of change.

Routing in temporal networks

ABSTRACT. A temporal network (TN) can be described as a set of devices capable of computation and communication, interacting with each other for varying time periods; since nodes are mobile, new nodes may join while others leave the network. Establishing a route between a pair of nodes of such a temporal network is not trivial: classical algorithms choose the route as a large and computationally expensive static trajectory, but in a temporal network a chosen node may not be available for communication during a given period. One strategy is that, for a specific instant, the route may be represented as a graph describing the configuration that the local network adopts at the current time. It is therefore not suitable to use traditional routing algorithms for TNs, since they are designed for static topologies and require knowledge of the whole network in order to determine an optimal route. Traditional routing algorithms run a path-discovery algorithm each time the topology changes, which requires continuously generating a global state of the TN.

In this research, we propose an algorithm for route discovery whose objective is to decrease network flooding. The algorithm is named the Consensus Routing Algorithm (CRA). CRA accounts for characteristics of the TN such as idle time in the scheduler, network availability, mobility, and the distance between nodes. Idle time in a node's scheduler is a significant feature because a path is considered broken if there is not enough time in the node's scheduler to transmit a message.

The algorithm obtains a route through a dynamic topology by establishing successive consensus among nodes. Inside a local group, each node gets routing information from its neighbors while an active connection exists. This provides resilience to local changes of the dynamic topology: any change is kept at a local scale.

When a change occurs (a mobile node entering, moving, or leaving), it does not affect the overall state of the TN; it only involves the local group of the implicated nodes. It is then not necessary to recalculate the entire route, but only those groups of nodes affected by the change. The union of the local states of the nodes forms the network status at a given time T. To determine the cost of sending a message between two nodes, a metric is defined in terms of how long it takes to deliver a message over a given route.

The route obtained by the CRA is represented as a sequence of the nodes best suited to act as routers, thus improving the communication efficiency of the route. By using a limited neighborhood, the number of active connections is significantly reduced, particularly when compared with flooding. The flooding algorithm obtains a route in less time, but that route is more likely to be inefficient compared with the route calculated by the CRA.
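The delivery-time cost metric described above can be sketched on a single snapshot of the network. This is plain shortest-path search over a local weighted graph, not the consensus procedure of the CRA, and the topology below is hypothetical.

```python
import heapq

# Dijkstra over one snapshot: edge weights are per-link transmission
# times, so the cost of a route is its total delivery time.
def delivery_time(links, src, dst):
    # links: {node: [(neighbor, transmission_time), ...]}, directed
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in links.get(u, []):
            nt = t + w
            if nt < best.get(v, float("inf")):
                best[v] = nt
                heapq.heappush(heap, (nt, v))
    return float("inf")                  # no route in this snapshot

snapshot = {                             # hypothetical local group
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 1.5)],
    "c": [("d", 1.0)],
}
print(delivery_time(snapshot, "a", "d"))  # → 3.5, via a-b-c-d
```

In the CRA the graph is assembled locally by consensus and links exist only while a node's scheduler has idle time, but the cost of a candidate route is evaluated in the same delivery-time terms.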

The Proposal of a Methodology for Applying Social Network Analysis Metrics in the Biodiversity Informatics Domain

ABSTRACT. In the last decade, several researchers have used interaction networks to analyze the role of species in network structure, focusing on the factors that have contributed to and influenced biodiversity maintenance. The concepts, algorithms, metrics, and computational resources commonly used in this field are the same as those of Social Network Analysis (SNA), which uses graph-theory concepts, computing techniques, and resources to analyze the interdependencies among nodes in a network. We therefore propose a methodology to guide researchers in applying SNA metrics to biological interaction networks in the Biodiversity Informatics domain. The methodology is formalized by means of Business Process Modeling Notation (BPMN) and structured in four steps: (i) mapping the available data types and interactions; (ii) defining the key questions to be answered and the analysis variables; (iii) choosing the SNA metrics appropriate to the context of the research; and (iv) performing the biological analysis with the support of SNA. As material resources, a set of computational tools (such as R packages and the Dieta, Pajek, and Ucinet software) and statistical analysis tools (exploratory and multivariate data analysis) were used, as well as the SNA metrics. This proposal was born at the Research Center on Biodiversity and Computing at the University of São Paulo (BioComp-USP), through collaboration among researchers from different areas (Ecology, Genetics, Microbiology, Social Network Analysis, Statistics, and Data Analysis). To assess the suitability of this methodology, it was applied to pollinator-plant and microbiological interaction network case studies. The results show the benefits that a systematic method for guiding the steps of a research project can bring to a researcher, whether through the support of the recommended resources or through the organization of the research activities.
Furthermore, when a researcher has interaction data organized in a bipartite matrix, it is possible to apply SNA resources to identify clustering patterns and to discover new knowledge in the data. As an example, by means of the w-clique metric it was possible to discover new knowledge in a simple interaction database (the frequency of phylogenetic subgroups in water bodies): the water bodies were clustered into polluted and unpolluted sites, a pattern that had not been revealed by classical grouping methods. Finally, as future work, we consider applying this methodology as a complementary resource in underexplored knowledge areas (such as Agrobiodiversity and Molecular Genetics), in the identification of patterns to support decision makers.
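As a toy version of step (iv), a bipartite interaction matrix can be projected into the weighted one-mode network on which SNA clustering metrics such as w-clique operate; the species and interactions below are invented for illustration.

```python
# Invented pollinator-plant incidence matrix: rows = plants,
# columns = pollinators, 1 = an observed interaction.
plants      = ["P1", "P2", "P3"]
pollinators = ["bee", "fly", "moth", "beetle"]
visits = [
    [1, 1, 0, 0],     # P1 is visited by bee and fly
    [1, 1, 0, 0],     # P2 is visited by bee and fly
    [0, 0, 1, 1],     # P3 is visited by moth and beetle
]

# One-mode projection over plants: edge weight = number of shared pollinators.
weights = {}
for i in range(len(plants)):
    for j in range(i + 1, len(plants)):
        shared = sum(a and b for a, b in zip(visits[i], visits[j]))
        if shared:
            weights[(plants[i], plants[j])] = shared

print(weights)        # → {('P1', 'P2'): 2}: P1 and P2 form a cluster
```

Weighted projections like this are the input on which grouping patterns (such as the polluted versus unpolluted water bodies mentioned above) can be detected.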

Discovering high-dimensional functional directed networks of the human brain using the Fast Greedy Equivalence Search algorithm for up to a million variables.

ABSTRACT. Effective connectivity inference algorithms applied to brain data from non-invasive imaging techniques allow us to characterize the functioning brain as a dynamic and complex causal network of interacting regions that support motor, sensory, and cognitive activity. New functional magnetic resonance imaging (fMRI) acquisition protocols, such as multi-band techniques, are producing data with increasingly higher temporal and spatial resolution. These new data demand inference methods that scale well to high-dimensional complex systems, in order to achieve novel and detailed mechanistic characterizations of functional brain networks in terms of causes and effects. Here, we describe two modifications, based on parallelization and caching reorganization, that massively scale up the well-known score-based Greedy Equivalence Search (GES) algorithm for discovering directed acyclic graphs on random variables. The first modification, the Fast Greedy Equivalence Search (fGES) algorithm, is capable of recovering, with high precision and good recall, high-dimensional directed networks with up to one million variables. The second modification rapidly finds the Markov blanket of any node (i.e., the minimum set of nodes needed to fully predict the behavior of a node in a network) in a high-dimensional system. These two modifications are tools for obtaining global and local detailed mechanistic descriptions of high-dimensional complex systems such as the human brain, but they are general enough to be applied to other domains, such as genome data for the discovery of cancer drivers, or macroeconomic policy. We illustrate the fGES algorithm with a high-resolution human resting-state functional magnetic resonance imaging (rs-fMRI) dataset in which the brain cortex was parceled into 51,000 voxels recording blood oxygenation level-dependent (BOLD) time series of 10 minutes. We describe properties of the resulting rs-fMRI voxelwise network.
Finally, we show how, from the high-resolution brain network produced by the fGES algorithm, we can reconstruct brain networks at coarser spatial scales, such as the popular region-of-interest (ROI) mesoscale networks.
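The Markov blanket definition quoted above can be made concrete for a DAG: it is the node's parents, its children, and the children's other parents (spouses). The small graph below is invented for illustration and is unrelated to the fGES implementation.

```python
# Markov blanket in a DAG: parents + children + children's other parents.
def markov_blanket(parents_of, node):
    # parents_of: {child: set_of_parents} describing the DAG's edges
    children = {c for c, ps in parents_of.items() if node in ps}
    spouses = set()
    for c in children:
        spouses |= parents_of[c] - {node}
    return parents_of.get(node, set()) | children | spouses

dag = {            # edges point parent -> child: A->B, B->D, C->D, D->E
    "B": {"A"},
    "D": {"B", "C"},
    "E": {"D"},
}
print(sorted(markov_blanket(dag, "B")))   # → ['A', 'C', 'D']
```

Conditioned on its Markov blanket, a node is independent of the rest of the network, which is why recovering blankets locally is enough for prediction without learning the full million-variable graph.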

Interplay between cooperative and competitive effects in multi-pathogen interaction systems

ABSTRACT. Pathogens do not spread alone: they share their host with other pathogens, often interacting with each other in non-trivial ways. Both cooperative and competitive interactions have been observed. These two mechanisms have mainly been studied separately, and non-trivial dynamical effects such as hysteresis and bistability have been shown to arise from them [1-3]. Here we consider two pathogens competing with each other for hosts in the presence of a third pathogen cooperating with both of them, mimicking ecological mechanisms observed in bacterial infections [4], see Fig. (1). We address the impact of cooperation on the outcome of the two-pathogen competition, defined in terms of the dominance of one competing pathogen or the co-circulation of both. Stability theory within the mean-field approximation is combined with computer simulations assuming different contact networks among hosts.


[1] W. Cai, L. Chen, F. Ghanbarnejad and P. Grassberger, Nature Physics 11, 936 (2015). [2] L. Chen, F. Ghanbarnejad and D. Brockmann, arXiv preprint arXiv:1603.09082 (2016). [3] C. Poletto, S. Meloni, V. Colizza, Y. Moreno, A. Vespignani, PLoS Comp Biol 9(8): e1003169 (2013). [4] S. Cobey, M. Lipsitch, The American Naturalist 181, no. 1: 12-24 (2013).

Analysis of social epidemic phenomena on SNS using social physics approach
SPEAKER: Akira Ishii

ABSTRACT. In the present age, in which consumer behavior remains on record through the Internet, purchase and action records for huge numbers of consumers are available. In this paper, we propose a method based on social physics for analyzing and forecasting social phenomena, with possible applications to marketing, using the voices of society's people recorded in blogs and on Twitter as data. Social physics is a new frontier of physics alongside econophysics: given a huge amount of data, the methodology that physics applies to experimental data on natural phenomena can also be applied to social science. The approach used in this article is a significant one for computational social science. We focus on social epidemic phenomena and consider how outbreak and convergence can be measured within the theory of social physics. The social outbreaks mainly treated in this research are Pikotaro and hydrogen water [1]. Both became prevalent rapidly and, in the case of hydrogen water in Japan, converged abruptly. The theory used for the analysis is a mathematical model of hit phenomena, proposed by Ishii et al. in 2012 [2], which supposes that humans become interested in a certain topic through advertising, reviews from friends, and rumors heard in town. It is a social-physics theory that quantitatively treats the outbreak and convergence of people's interest, and it can analyze the rise and decay of the reputation of movies, music concerts, and social incidents. The analysis shows that the outbreak and convergence of fashions correspond to the strength of rumor, which can be measured from the theory using social media data.

[1] Anne Bauso (2016). We Tried the Miracle Water People in Japan Are Obsessed With, [online] 7 March. Available at: [Accessed 31 January 2017]. [2] Ishii A, Arakaki H, Matsuda N, Umemura S, Urushidani T, Yamagata N and Yoshida N, "The 'hit' phenomenon: a mathematical model of human dynamics interactions as a stochastic process", New Journal of Physics 14 (2012) 063018.

Synchronization in Chemical Systems

ABSTRACT. This work presents synchronization between chemical systems of different natures, through the construction of mathematical models of the reaction mechanisms of those systems. A symmetric unidirectional coupling is employed. We show that as the coupling parameter grows, synchronization appears; moreover, for a certain value of the coupling parameter a state of intermittent chaos emerges, which may be a route to synchronization in chemical systems. The evolution of the synchronization dynamics was quantified by computing the fractal dimension of the time series obtained from the numerical solution of the system of nonlinear differential equations. The resulting mathematical models could help us understand the complex dynamics of hormonal regulation in humans.
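The abstract does not name the specific reaction mechanisms, so the sketch below uses two Brusselator oscillators as a stand-in to illustrate the reported effect of the coupling parameter: a unidirectional diffusive coupling on one species, with the synchronization error shrinking as the coupling strength k grows. All parameter and initial values are illustrative assumptions, and the intermittent route mentioned in the abstract is not reproduced by this toy.

```python
def coupled_brusselators(k, a=1.0, b=3.0, t_end=100.0, dt=0.002):
    """Drive (x1, y1) and response (x2, y2) Brusselators; the response
    receives the drive's x-variable through a unidirectional diffusive
    coupling of strength k. Returns the mean |x1 - x2| over the second
    half of the run (after transients)."""
    x1, y1 = 1.0, 1.0    # drive starts here
    x2, y2 = 2.0, 0.5    # response starts elsewhere on purpose
    err, n = 0.0, 0
    steps = int(t_end / dt)
    for i in range(steps):
        dx1 = a - (b + 1.0) * x1 + x1 * x1 * y1
        dy1 = b * x1 - x1 * x1 * y1
        dx2 = a - (b + 1.0) * x2 + x2 * x2 * y2 + k * (x1 - x2)
        dy2 = b * x2 - x2 * x2 * y2
        x1 += dt * dx1; y1 += dt * dy1
        x2 += dt * dx2; y2 += dt * dy2
        if i >= steps // 2:
            err += abs(x1 - x2); n += 1
    return err / n

err_uncoupled = coupled_brusselators(0.0)
err_coupled = coupled_brusselators(5.0)
```

With k = 0 the two oscillators settle on the same limit cycle but keep a phase offset, so the error stays of order one; a sufficiently strong coupling pulls the response onto the drive's trajectory.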

Escort distribution of order q and its implications in multifractals

ABSTRACT. A multiplicative cascade can be characterized by p, a probability vector defined on the unit interval. This is possible using Halsey's measure H(M,q,τ), which considers both the geometric and the probabilistic properties of an optimal cover of the unit interval, the first associated with τ and the second with q. We use the fact that the condition for obtaining a non-degenerate value of the measure H implies the existence of a function relating the parameters τ and q, i.e. τ = τ(q). We show the relationship between τ(q) and the Halsey partition function, and prove that H(M,q,τ(q)) generates a multiplicative cascade with the escort probability introduced by Beck and Schlögl. Statistical multifractals are characterized by their dimension spectrum D(α) in terms of the Hölder exponent α. We analyze how the measure generated by the escort distribution of order q is distributed over the different sets J(∆), and show that this measure is concentrated on a particular set with Hölder exponent α*(q), which defines the condensation set of the measure.
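The escort construction itself is compact enough to sketch. The snippet below computes the escort distribution of order q and, for the analytically tractable binomial cascade (an assumed test case with illustrative weights, not the measures studied in the abstract), the exponent function τ(q) and the condensation exponent α*(q) = dτ/dq.

```python
import math

def escort(p, q):
    """Escort distribution of order q: P_i = p_i**q / sum_j p_j**q."""
    z = sum(pi ** q for pi in p)
    return [pi ** q / z for pi in p]

def tau(q, m0=0.3):
    """tau(q) for the binomial cascade with weights (m0, 1 - m0) on
    half-length intervals: the non-degeneracy condition for Halsey's
    measure gives tau(q) = -log2(m0**q + m1**q)."""
    m1 = 1.0 - m0
    return -math.log2(m0 ** q + m1 ** q)

def alpha_star(q, m0=0.3, h=1e-6):
    """Condensation exponent alpha*(q) = d tau / d q (central difference)."""
    return (tau(q + h, m0) - tau(q - h, m0)) / (2.0 * h)

P2 = escort([0.3, 0.7], 2.0)
a_star = alpha_star(2.0)
expected = -(P2[0] * math.log2(0.3) + P2[1] * math.log2(0.7))
```

A consistency check: α*(q) coincides with the escort-weighted average of the local exponents -log2(m_i), which is the sense in which the order-q escort measure condenses on the set with Hölder exponent α*(q).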

Compartmental analysis of infrastructures

ABSTRACT. Networked engineering systems are often referred to as complex engineering systems (CES), much like human beings and social communities, and some CES have a large social component. In an electric network, for example, the load is defined by individual demand, which is highly dependent on individual routines. Cascade failures are a common problem for CES, mirroring the spread of epidemics in socio-biological systems; but while biological systems can usually count on immune systems to avoid infection or to heal, no such mechanism exists in CES. Compartmental models are mature tools used in epidemiology to track the spread of diseases across populations (see for example [Longini 1988]), yet their application to technological fields has only recently been attempted, and only for validation purposes, using unmodified epidemiological models as in [Mehrpouyan et al. 2015]. This study assesses the feasibility of applying specifically designed compartmental models to complex engineering systems, with a particular focus on infrastructures, in order to anticipate cascade failure risk and, most importantly, to define immunization strategies that minimize its likelihood. In CES, different components have different probabilities of failure and different failure modes, and these are often strongly dependent on the age of the component and on its load over time. We shall hence define a compartmental model for complex systems in which node susceptibility to infection is based on load and age. Such a model has to take into account non-deterministic variables such as user load and its shifts due to external factors (e.g. traffic congestion following a public transportation strike: what part of a road network will jam first, and how will the congestion propagate? Will this have long-term consequences such as road degradation?).
First, the literature is surveyed for the compartmental-model features best suited to translation from immunology to complex engineering systems, with attention to network systems with non-uniform nodes. We shall introduce an explicit time dependency, a novel aspect in compartmental models [Huang et al. 2016] that is well suited to the case of CES, mirroring the concept of "births and deaths" from epidemiological models [Liu 2004]. These features are included in a mathematical model whose effects are explored through numerical simulations; wherever possible, an analytical steady-state solution is offered. The model seeks to provide quantitative indicators of network resilience and to allow stability analysis.
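As a minimal illustration of the direction described, the sketch below runs an SIR-type compartmental model in which the infection (failure-propagation) rate is modulated by a time-dependent user load. The 24-hour sinusoidal load profile and all rate constants are illustrative assumptions, not part of the authors' model.

```python
import math

def simulate_cascade_risk(beta0=0.4, gamma=0.1, t_end=120.0, dt=0.01):
    """SIR-style compartments read as component states: Susceptible
    (working), Infected (failing and propagating stress), Removed
    (failed, off-line). The infection rate beta is modulated by an
    assumed daily user-load cycle."""
    s, i, r = 0.99, 0.01, 0.0
    peak_i = i
    for k in range(int(t_end / dt)):
        t = k * dt
        load = 1.0 + 0.5 * math.sin(2.0 * math.pi * t / 24.0)  # daily cycle
        beta = beta0 * load
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s += dt * ds; i += dt * di; r += dt * dr
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

s_end, i_end, r_end, peak = simulate_cascade_risk()
```

The load modulation makes the effective reproduction number time-dependent, which is the kind of explicit time dependency the abstract argues is needed for CES.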

-Longini, I. M. (1988). A mathematical model for predicting the geographic spread of new infectious agents. Mathematical Biosciences, 90(1–2), 367–383. -Mehrpouyan, H., Haley, B., Dong, A., Tumer, I. Y., & Hoyle, C. (2015). Resiliency analysis for complex engineered system design. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 29(1), 93–108. -Huang, Y., Ding, L., & Feng, Y. (2016). A novel epidemic spreading model with decreasing infection rate based on infection times. Physica A: Statistical Mechanics and Its Applications, 444, 1041–1048. -Liu, J. (2004). The spread of disease with birth and death on networks. Journal of Statistical Mechanics: Theory and Experiment, 2004(8), 8.

Mapping the carbon curse with system dynamics modeling for Mexico, 1965-2000

ABSTRACT. Climate change mitigation is for many a chimera, primarily because of the unsuccessful attempts at emissions reduction over the long term, evident already by the beginning of the twenty-first century, when the first compliance periods of international regulations were expected. There are many reasons why emissions have stubbornly refused to curve downward, but fossil fuel producing countries are an excellent unit of analysis with which to begin the inquiry. The carbon curse has emerged from the natural-resource economics literature to underscore the possible structural drivers of emissions in countries whose economic, social and political machinery has historically been built around, or parallel to, oil production. This article maps a system dynamics model of the carbon curse, in which CO2 emissions stem not only from the direct consumption of fossil fuels but also from indirect structural forces, placing government contracts with the most energy-intensive sectors of the economy at the center of the problem. With the resulting systems of differential equations, we compare three periods of Mexican history: one in which oil was used mainly for internal production (1965-1975); a second in which Cantarell was discovered and the country entered the intensively extractive period that continues today (1975-1985); and a third in which oil became Mexico's primary export commodity (1985-1995) and the most energy-intensive sectors of the economy (cement and steel) accounted for an important part of the highest carbon-intensity levels in Mexican history. The resulting scenario implies a relationship between government investment and contracts, which do appear to be causally related to carbon intensity. The importance of such findings lies in the fact that these indirect forces are not taken into account in emissions accounting, much less in climate mitigation policy.
A further comparison across oil producing countries, seeking to replicate the results and to find additional drivers behind the carbon curse, can shed light on the structural drivers of an oil producing economy. Only when these barriers are taken into account will societies be able to follow a consistent and permanent low-carbon development path.

Determining the micro-effects of dimensionality on agent mobility in Polycentric City Regions using fractal scaling
SPEAKER: Herman Geyer

ABSTRACT. The metaphor of fractals is the repetition of patterns at different scales, both as the repetition of a form in a subsystem nested within a system and between autonomous subsystems of different scales. However, a very real variance is expected between the discrete structure of the real city and the predicted effects. Fractals serve to illustrate that, within the perceived randomness and unpredictability of systems, structure emerges. Applied to the problem of complexity theory and polycentrism, the main research question in fractal scaling is this: if change occurs in the city region as a discrete reality, should it correspond to changes in the fractal simulacrum, the idealised representation of city regions in terms of a static, maximum-equilibrium economic model? The paper attempts to analyse the self-similarities in fractal geometry by creating a model of change in South African cities.

Fractal and multifractal analysis in time series of ECG

ABSTRACT. Fractal and multifractal analysis represent a mathematical theory and a set of methods to analyze and identify a wide variety of natural phenomena. Such analyses have recently been used to study time series from physiological and other systems.

A standard technique for diagnosing heart disease is the electrocardiogram (ECG). The heartbeat time series obtained from ECG are not stationary, and according to many researchers, multifractal characteristics are present in healthy individuals, whereas monofractal behavior appears in patients with chronic heart disease.

Fractal dimension provides information about changes in the internal dynamics of the time series. Various "multifractal formalisms" have recently been developed to describe the statistical properties of these measures in terms of their singularity spectrum f(α); this spectrum provides a mathematically precise and naturally intuitive multifractal description, with singularity strength α whose fractal dimension is f(α) for each value of α. In this work the method of Chhabra and Jensen is used; they developed a simple and accurate method for the direct calculation of the singularity spectrum. To characterize the multifractal spectrum, the symmetry parameters and the degree of multifractality ∆α, which depend on the values of αmax, αmin and α0, give us a description of the time series.

In this work we used multifractal formalism for the analysis of time series of heartbeat obtained from the databases of the Physionet website, and time series of 24 continuous hours (Holter) of patients with metabolic syndrome.

The time series were previously treated to eliminate artifacts and two segments of six hours were obtained: one while the subjects were asleep and the other while they were awake. The three parameters that describe the multifractal spectrum were calculated for each one. However, the degree of multifractality and the asymmetry of the multifractal spectrum alone do not provide sufficient information about the health status of the study populations. For this reason, we introduce additional variables to provide more information on the subjects' health state. The proposal of this work is to measure the curvature of all multifractal spectra and to introduce another parameter, r, called the symmetry parameter, which identifies the preferential inclination of the multifractal spectrum. We found that the curvature parameters and the symmetry parameter of the multifractal spectra provide a correct assessment of the health status of the individuals.
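The Chhabra and Jensen construction is straightforward to sketch for a measure whose spectrum is known in closed form. The snippet below applies the direct method to a binomial cascade (an assumed test case with illustrative weights, not the heartbeat data of the abstract), recovering α(q) and f(q) from the normalized measures μ_i(q).

```python
import math

def binomial_measure(m0=0.3, level=12):
    """Box probabilities of a binomial multiplicative cascade at the
    given level (2**level boxes of size 2**-level)."""
    m1 = 1.0 - m0
    probs = [1.0]
    for _ in range(level):
        probs = [p * m for p in probs for m in (m0, m1)]
    return probs

def chhabra_jensen(probs, q):
    """Direct estimates of (alpha(q), f(q)) from box probabilities,
    following the Chhabra-Jensen normalized-measure construction:
    mu_i = p_i**q / sum_j p_j**q, then alpha and f are log-eps slopes."""
    eps = 1.0 / len(probs)                      # box size
    z = sum(p ** q for p in probs)
    mu = [p ** q / z for p in probs]
    log_eps = math.log(eps)
    alpha = sum(m * math.log(p) for m, p in zip(mu, probs)) / log_eps
    f = sum(m * math.log(m) for m in mu) / log_eps
    return alpha, f

probs = binomial_measure()
alpha1, f1 = chhabra_jensen(probs, 1.0)
alpha2, f2 = chhabra_jensen(probs, 2.0)
```

For this exactly self-similar cascade the finite-level estimates match the analytic values; at q = 1 the spectrum touches the line f = α, as it should.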

Confirmatory bias as incommensurability. Micro-grounding the contrarian effect

ABSTRACT. The possibility of working with increasingly complex models has led to the detection of emergent patterns that are difficult to explain and even to interpret, demonstrating that simple models still have significant explanatory potential. One of the emergent patterns with the greatest impact in the social sciences has perhaps been the revelation that polarization seems to play a greater role in our world than consensus does. The contrarian effect, as a macroscopic phenomenon, is nothing more than the presence of agents who go "against the flow", that is, who base their personal assessments on the antipodes of society's hegemonic constructions, making explanations of these generalized phenomena possible. From presidential elections (United States, Spain, Austria, Argentina, France) and Brexit to public opinion formation in our contemporary societies, many examples have recurred and gained strength during the last ten years. The present work seeks another possible interpretation of the microscopic behavior behind the contrarian effect, starting from the incommensurability of paradigms as a concrete manifestation of confirmation bias. There is then no longer a fixed proportion p of agents behaving as contrarians; instead, there is a probability p that any agent experiences a breakdown in the language mechanism and behaves, in that particular case, as a contrarian. Our model consists of N agents, which interact in groups of two during T consecutive periods. Each agent can subscribe to paradigm A or B with an intensity that we call a and b respectively, such that a = -b (that is, the paradigms are symmetrically opposed). Agents send the signal α = a when they subscribe to paradigm A, and β = b when they subscribe to paradigm B.
Although there are no problems in the emission of the message, we allow a certain tendency toward confirmation bias at the moment of reception, which makes polarization possible within the system without it being a deterministic phenomenon. Using these agent-based model simulations, whether we treat p as an endogenous or an exogenous probability, we can see how the system generates language problems that make it impossible for agents to reach consensus scenarios, or allow them to do so only with greater difficulty. In fact, agents with a neutral position ultimately prove key to achieving consensus scenarios in some cases.
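A minimal version of the microscopic mechanism can be sketched as follows: opinions are transmitted pairwise, and with probability p the receiver's confirmation bias inverts the incoming signal, so the receiver acts as a contrarian in that particular encounter. Population size, step counts and seeds are illustrative assumptions, not the authors' settings.

```python
import random

def run_contrarian(n=100, p=0.1, steps=20000, seed=1, init=None):
    """Pairwise opinion dynamics: a random sender transmits its opinion
    (+1 for paradigm A, -1 for B); with probability p the receiver's
    confirmation bias inverts the message. Returns the final
    magnetization (mean opinion)."""
    rng = random.Random(seed)
    ops = list(init) if init is not None else [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.randrange(n)
        if i == j:
            continue
        signal = ops[i]
        ops[j] = -signal if rng.random() < p else signal
    return sum(ops) / n

consensus = run_contrarian(p=0.0, init=[1] * 100)
mixed = run_contrarian(p=0.5, init=[1] * 100, seed=2)
```

With p = 0 consensus is absorbing, while even a modest bias probability destroys full consensus, which is the macroscopic contrarian effect the abstract describes.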

Comparing Adults’ and Children’s Lexicon Structure with Graph Modeling from WAN Corpora

ABSTRACT. Semantic relations allow us to explore which traits (functional, categorical, or phonological) determine the distance between words in the lexical space. An interconnected lexicon allows for rapid and efficient word processing, as well as reference and meaning anticipation. A lexical space with large distances between words would imply slow linguistic processing. Research has shown that stronger semantic connections correlate with greater language proficiency (Kotz & Elston-Guttler, 2004).

NLP needs an efficient method for capturing semantic relations, commonly represented in Psycholinguistics with graph structures. Distributional Space Models (DSM) have been adopted in the latter field as a way to represent word meaning in linear space (Baroni et al., 2014).

DSM representations are generally useful for analyzing the semantics of natural language. Recently, Biemann (2016) proposed an understanding of the DSM structures as sparse graph-based representations of word relations. With this vision in mind, we seek to connect the tools of Natural Language Processing with the analysis of semantic relations in the lexical space.

In this paper, we expand the ideas of Biemann (2016) such that the tools developed for DSMs can be applied to the analysis of graphs and used to study lexical relations in the cognition of adults and children. We present a study of Word Association Norms (WAN) graphs obtained from psychological experiments with 6 to 11-year-old children (Arias-Trejo & Barrón-Martínez, 2014) and young adults (Arias-Trejo et al., 2015). For the analysis and comparison of these networks, we propose an architecture based on spreading activation and the use of a linear transformation between graphs to predict unseen words (Ishiwatari et al., 2016). This is performed in order to observe the behaviour of words in a corpus where they are not present.

The adoption of the graph-structure perspective allows us to understand DSMs in new ways; at the same time, we can view semantic relation graphs with new eyes and combine the methods of graph theory and DSMs to better understand language behaviour. Moreover, this architecture is useful for analyzing the semantic correlation between words in the mental lexicon.


Arias-Trejo, N. & Barrón-Martínez, J. B. (2014). Base de Datos: Normas de Asociación de Palabras para el Español de México en Escolares.

Arias-Trejo, N. et al. (2015). Corpus de Normas de Asociación de Palabras para el Español de México. México: UNAM.

Kotz, S. A. & Elston-Guttler, K. (2004). The role of proficiency on processing categorical and associative information in the L2 as revealed by RTs and event-related brain potentials. JNL, 17(1): 215–235.

Baroni, M. et al. (2014). Frege in space: A program of compositional distributional semantics. LiLT: 9.

Biemann, C. (2016). Vectors or graphs? On differences of representations for distributional semantic models. COLING 2016: 1.

Levy, O. & Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. NIPS 2014: 2177–2185.

Mikolov, T. et al. (2013). Distributed representations of words and phrases and their compositionality. NIPS 2013: 3111–3119.

Ishiwatari, S. et al. (2016). Instant translation model adaptation by translating unseen words in continuous vector space. CICLing 2016.

Analysis of heartbeat interval time series during stress tests using the DFA method

ABSTRACT. Heartbeat interval time series can be analyzed to look for long-term correlations in the functioning of the heart and to identify pathology. Detrended Fluctuation Analysis (DFA) is a method that eliminates local trends in a time series, providing information about long-term fluctuations and the scaling relations present in it; it is also suitable for the analysis of non-stationary signals. From ECG records obtained with a Holter monitor, which registers the electrical activity of the heart, heartbeat interval time series were determined while the subjects performed stress tests. These tests are low cost and non-invasive, and they are of great importance for detecting changes in cardiovascular function after an exercise program; the objective is for the stress test to reveal electrocardiographic changes that are not evident in a patient at rest. In this work, we analyzed R-R series of healthy young subjects performing stress tests on a commercial treadmill in which the speed, duration, and inclination were varied. With the DFA method, correlations were found between the DFA exponent and the parameters of the stress test, so that it is possible to determine, at least qualitatively, the stress to which the heart is submitted during the different tests.
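The DFA procedure itself can be sketched compactly. The implementation below is a plain DFA-1 (linear detrending) with illustrative window sizes; it is a generic sketch, not the authors' analysis pipeline, and is sanity-checked on synthetic white noise (expected exponent near 0.5) and its running sum (near 1.5).

```python
import math
import random

def dfa_exponent(series, scales=(4, 8, 16, 32, 64, 128)):
    """DFA-1: integrate the mean-centered series into a profile, split
    it into non-overlapping boxes, remove a linear trend from each box,
    and fit the scaling of the RMS fluctuation F(n) ~ n**alpha."""
    n = len(series)
    mean = sum(series) / n
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)
    log_n, log_f = [], []
    for win in scales:
        sq, count = 0.0, 0
        for b in range(n // win):
            seg = profile[b * win:(b + 1) * win]
            m = len(seg)
            sx = sum(range(m))
            sxx = sum(k * k for k in range(m))
            sy = sum(seg)
            sxy = sum(k * y for k, y in enumerate(seg))
            slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
            inter = (sy - slope * sx) / m
            for k, y in enumerate(seg):
                d = y - (inter + slope * k)
                sq += d * d
                count += 1
        log_n.append(math.log(win))
        log_f.append(math.log(math.sqrt(sq / count)))
    # scaling exponent = least-squares slope of log F(n) vs log n
    k = len(log_n)
    sx, sy = sum(log_n), sum(log_f)
    sxx = sum(a * a for a in log_n)
    sxy = sum(a * b for a, b in zip(log_n, log_f))
    return (k * sxy - sx * sy) / (k * sxx - sx * sx)

rng = random.Random(7)
white_noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
random_walk, acc = [], 0.0
for x in white_noise:
    acc += x
    random_walk.append(acc)
alpha_white = dfa_exponent(white_noise)
alpha_walk = dfa_exponent(random_walk)
```

For real R-R series the exponent typically falls between these two reference cases, which is what makes it usable as a qualitative marker of cardiac stress.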

Network analysis on government R&D projects performing inter-firm in Korea

ABSTRACT. The Korean government is expanding the portion of its R&D support centered on SMEs in order to strengthen companies' R&D capabilities (the government R&D support portion grew from 10.7% in 2008 to 18.0% in 2015). This support is expected to contribute to productivity improvement and job creation in SMEs. Lacking internal resources such as human resources, funds, and R&D capability, SMEs can increase their productivity by enhancing synergy through the acquisition of external resources. It can therefore be expected that inter-firm cooperation tasks within government R&D projects lead to synergy creation more readily than individual tasks do. The purpose of this paper is to identify the level of strategic networking for mutual cooperation and to find out whether cooperation is concentrated on specific companies. To do this, we extracted the subjects with relatively strong relationships among the companies participating simultaneously in R&D tasks, and confirmed that these relationships have significant associations with the growth and productivity indicators of the companies. In addition, although the Korean government's R&D projects are led by SMEs, we confirmed that in terms of cooperation there is a strong relationship with large companies and with specific industry sectors. In Korea, SMEs account for 99% of all companies, but at the level of global competitiveness a small number of large companies lead. It is difficult to confirm a directly visible effect of government R&D projects that support SMEs lacking competitiveness, but it can be confirmed that these firms actively pursue cooperation with external partners.

Cascaded failures in complex networks: what’s the role of centrality measures in initial seeds?
SPEAKER: Mahdi Jalili

ABSTRACT. In complex networks, different nodes have distinct impacts on the overall functionality and resiliency of the network against failures. Hence, identifying vital nodes is crucial to limiting the size of the damage during a cascade of breakdowns triggered by a single component failure. This information enables us to identify the most vulnerable nodes and take solid protection measures to deter them from failure. In this work, we study the correlation between cascaded failures and centrality measures in complex networks. The failure starts from a seed node and propagates through the network. This study investigates how the centrality of a node (in terms of different centrality measures) affects the outcome of the failures started from that node. For each node, we obtain its cascade depth, which is the number of removed nodes when that node fails; the larger the cascade depth of a node, the higher its centrality from the point of view of cascaded failures. The cascade depth is correlated with a number of centrality measures including degree, betweenness, closeness, clustering coefficient, local rank, eigenvector centrality, lobby index and information index. Networks behave dissimilarly against cascading failures due to their different structures. Interestingly, we find that node degree is negatively correlated with cascade depth, meaning that the failure of a high-degree node has a less severe effect than the failure of lower-degree nodes. Betweenness centrality and local rank show positive correlations with cascade depth, indicating that the higher the betweenness centrality or local rank of a node, the larger the number of nodes removed when that node fails.
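The cascade-depth measurement can be illustrated with a toy load-redistribution rule (an assumption for illustration; the abstract does not specify its failure model): when a node fails, its load is shared equally among its surviving neighbours, and any neighbour pushed past capacity fails in turn.

```python
def cascade_depth(adj, loads, capacity, seed_node):
    """Fail seed_node, redistribute each failed node's load equally to
    its surviving neighbours, and propagate overload failures.
    Returns the number of nodes removed beyond the seed."""
    load = dict(loads)
    failed = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            alive = [v for v in adj[u] if v not in failed]
            if not alive:
                continue
            share = load[u] / len(alive)
            for v in alive:
                load[v] += share
                if load[v] > capacity[v] and v not in failed:
                    failed.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(failed) - 1

# A 5-node star: hub 0 linked to leaves 1-4, unit loads, capacity 1.5.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
loads = {v: 1.0 for v in star}
cap = {v: 1.5 for v in star}
depth_hub = cascade_depth(star, loads, cap, 0)
depth_leaf = cascade_depth(star, loads, cap, 1)
```

Even this toy reproduces the counterintuitive degree effect reported: failing the hub spreads its load thinly over many neighbours (cascade depth 0), while failing a single leaf dumps all of its load on the hub and brings the whole network down.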

Evaluation of Partial Discharge Using Artificial Intelligence

ABSTRACT. In electrical engineering, partial discharge (PD) is a common phenomenon in high-voltage insulation. PD can occur in a gaseous, liquid or solid insulating medium, and results from local stress in the insulation or on its surface. In PD diagnostic tests it is very important to classify the PD measurements, since PD is a stochastic process: its occurrence depends on many factors, such as temperature, pressure, applied voltage and test duration, and PD signals contain noise and interference. This paper presents an approach to diagnosis that selects features to classify measured PD activity into the underlying insulation defects or sources that generate PD. The Self-Organizing Map (SOM) is a type of artificial neural network (ANN) trained with unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map; it is therefore a dimensionality reduction method, and it has been used here for nonlinear feature extraction from PD data. The results show different patterns obtained with a hybrid method combining the SOM and hierarchical clustering; this combination constitutes an excellent tool for exploratory analysis of massive data such as partial discharges on underground power cables for CFE. In the cases analyzed, the original dataset contains one million items; a U-matrix of 20×20 cells was used to extract features and detect patterns. We tested 63 datasets from diagnostic tests on power cables, obtaining a very fast data representation and 95% confidence in the discrimination of partial discharge sources, considering noise and combined sources. This new approach has proven fast, robust, and visually efficient.
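A minimal SOM can be sketched without any neural-network library. The snippet below trains a small map on two synthetic clusters standing in for PD feature vectors (the data, map size and training schedules are illustrative assumptions, not the CFE measurements); after training, the two sources map to different best-matching units, which is the discrimination the hybrid SOM-plus-clustering pipeline builds on.

```python
import math
import random

def train_som(data, rows=4, cols=4, iters=3000, seed=3):
    """Online SOM training: pick a random sample, find its best-matching
    unit (BMU), then pull the BMU and its grid neighbours toward the
    sample with a shrinking learning rate and neighbourhood."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    sigma0, lr0 = max(rows, cols) / 2.0, 0.5
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        br, bc = bmu(w, x)
        frac = t / iters
        sigma = sigma0 * math.exp(-3.0 * frac)   # shrinking neighbourhood
        lr = lr0 * math.exp(-3.0 * frac)         # decaying learning rate
        for r in range(rows):
            for c in range(cols):
                g = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                             / (2.0 * sigma * sigma))
                for k in range(dim):
                    w[r][c][k] += lr * g * (x[k] - w[r][c][k])
    return w

def bmu(w, x):
    """Grid coordinates of the unit whose weights are closest to x."""
    best, pos = float("inf"), (0, 0)
    for r in range(len(w)):
        for c in range(len(w[0])):
            d = sum((wi - xi) ** 2 for wi, xi in zip(w[r][c], x))
            if d < best:
                best, pos = d, (r, c)
    return pos

# Two synthetic "PD source" clusters: illustrative stand-ins, not real data.
rng = random.Random(0)
cluster_a = [(rng.gauss(0.0, 0.05), rng.gauss(0.0, 0.05)) for _ in range(50)]
cluster_b = [(rng.gauss(1.0, 0.05), rng.gauss(1.0, 0.05)) for _ in range(50)]
som = train_som(cluster_a + cluster_b)
pos_a = bmu(som, (0.0, 0.0))
pos_b = bmu(som, (1.0, 1.0))
```

In the full pipeline, the trained units (or the U-matrix built from them) would then be fed to hierarchical clustering to group the map into discharge-source regions.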


ABSTRACT. Exposure to some complex aesthetic expressions (classical music) can improve cognitive abilities (Rauscher, Shaw, & Ky, 1993; Rideout & Taylor, 1997). Moreover, works of art lacking complexity do not achieve the same effect (Rauscher, Shaw, & Ky, 1995).

Since music and visual art share physical dynamics, such as the universality of rank-ordering distributions (Martínez-Mekler, 2009), this raises the question: could an acute exposure to complex visual art improve cognitive abilities as music does?

We hypothesize that complexity in visual art can produce an effect on cognitive abilities similar to that produced by classical music.

Goals: Evaluate the cognitive effect of exposure to complex computer-generated paintings.

Method: Within the framework of dynamical systems, we have created computer-generated paintings with a stochastic model based on the fact that complexity appears at a phase transition of the dynamic elements of a given phenomenon (Solé, Manrubia, Luque, Delgado, & Bascompte, 1996). We will test participants with a Paper Folding and Cutting task from the Stanford-Binet Test.

Results: Behavioral data to be obtained.

Second-Order Complexity - An Example
SPEAKER: Eric Sanchis

ABSTRACT. Although there is no definitive consensus on a precise definition of a complex system, it is generally considered that a system is complex by nature. The work presented here illustrates a different point of view: a system becomes complex only with regard to the question posed to it, i.e. with regard to the problem that has to be solved. Depending on the question asked, the same system may be considered simple, complicated or … complex. Because the number of questions that can be posed to a given system is potentially substantial, complexity does not present a uniform face. Two levels of complexity are clearly identified: (1) a first-order complexity centred on measurement or on the system dynamics, (2) a second-order complexity related to the system's composition. First-order complex systems are well known because they have been studied by the scientific community for a long time; they benefit from specialized institutes and take advantage of universal tools essentially provided by physics and mathematics. In second-order complex systems, complexity results from the system's composition and articulation, which are partially unknown. Vagueness is the key word characterizing this kind of system. The purpose of modelling is therefore to circumscribe the inherent complexity of the system in one or more precise places and to identify the components and relations that can be captured. The tools used to study second-order complex systems are no longer universal but are specific to a particular complex system. The human cognitive system will be the starting point for illustrating the aspects of a second-order complex system mentioned above. Depending on the objective of the modelling, questions asked of the human cognitive system can be addressed at different levels: (1) the physical level (brain), (2) the functional level (cognitive architecture), (3) the level of its productions (concepts, ideas and mental states).
The modelling of one of these productions, the property of autonomy, is a typical example of a second-order complex system when this property has to be implemented in a software agent (indistinct components, fuzzy relationships between components). The described method makes it possible to model properties of the same type as autonomy, such as free will, and also to categorize them (autonomy and free will as complex properties, mobility or replication as simple properties). The final outcome is an implementable computational object that distinguishes the solid aspects of the model from those that are uncertain. It is also an invaluable tool for the critical analysis of models produced by the method itself, as well as of models found in the specialized literature. The weakness of the tool is that it depends strongly on the nature of the studied system, or more precisely on the nature of the question asked of the system.

A complex systems approach to modelling multicellular self-organization in the plant stem cell niche

ABSTRACT. Individual cells within multicellular tissues communicate in order to self-organise into complex organs. In plants, development is continuous and modular: new tissues arise from groups of self-organising stem cells in the meristem. Plant cells cannot move, so physical interactions with their neighbours are fixed. To model the role of these associations, a complex systems approach to capturing, modelling and predicting the self-organising outputs of multicellular organisation was used. Using live 3D imaging and image analysis, cellular connectivity networks were extracted, describing all cell-cell interactions in the meristem. Network analysis was used to identify features in the cell connectivity network, and revealed a counter-optimised global topological feedback across the multicellular system, regulated through cell division. A classical study of cell division by Errera in 1888 shows that the shortest wall that bisects the cell equally is often observed. In comparison with this rule, predictions of division plane orientation were made using topology-based division rules, and simulated topological cell divisions mostly conformed to, or outperformed, Errera's rule. The local geometric property of individual cell divisions in fact encodes a global topological property of the multicellular system.

Agent-based Models to Comprehend the General Data Protection Regulation

ABSTRACT. In the EU, differences in personal-data protection levels were an early concern, since such differences would induce a "race to the bottom." In 1980, the OECD established harmonizing principles. In 1995, Directive 95/46/EC set a uniform minimal protection level for EU member states. Recently the General Data Protection Regulation (GDPR) was adopted in the European Commission, succeeding the Directive after 22 years of service. The GDPR runs to 88 pages; it has 173 considerations that explain the why and how of its 99 Articles. This document is, and contains, our data.

By establishing the GDPR, the EU shows sensitivity to a complexity concern: the scope of personal-data use is widening, and currently almost everyone is connected and forms networked communities. Zhang (2014) coined the term PDC for the community that nurtures itself on personal data (the personal-data user community, a complex adaptive system that includes consumers, social-media providers and digital government services as its agents). The GDPR aims to domesticate the PDC. We focus our research on one new element brought to the fore by the GDPR: the right to data portability. We are concerned that the concept will be understood differently by IT scholars, economists and legal scholars. This leads to our research question: "can the right to data portability (as formulated in GDPR art. 20) be understood, coherently and concurrently, by professionals from law, economics and information technology?"

Agent-based models create toy worlds. Our point of departure is this: we cannot reasonably discuss the capabilities of the law to help a complex social system survive in the real world without having a model of how a toy complex system will react to internal and external adaptations in technology, economics and law. No single discipline will find a 'best solution.' Working examples of adequate mechanisms are the next best thing.

Our approach is a new one. We use the 88-page GDPR as source material for our pilot study. We harvest the requirements for two agent-based models through two different disciplinary filters: alpha and beta (for arts and sciences), in a manner that takes de Marchi [2005] seriously. Running these models leads to repeatable stochastic encounters between agents. The encounters work towards selecting the best strategy sequence, conditional on the "political season" they evolve in. Inspired by Alexander [2007], the games that can dynamically form in this manner are prisoner's dilemmas, stag hunts and bargaining games. "Political seasons" reflect stable political periods as presented in the considerations. The two (alpha, beta) collections of available strategy-payoff combinations are also harvested from the document.

Our results show that we can use the GDPR to design working toy versions of mechanisms that realise the right to personal-data portability as seen through both perspectives, and that the evolutionary-game-theoretic simulation approach allows for blending these perspectives' expectations in a rational manner.

References: S. De Marchi, Computational and Mathematical Modeling in the Social Sciences, Cambridge University Press, 2005. Regulation (EU) 2016/679, Official Journal of the European Union, 4/5/2016.

HIV gene regulatory network dynamics
SPEAKER: José Díaz

ABSTRACT. In the present work, we built the gene regulatory network (GRN) of HIV-1 from data reported in the specialized literature. We propose a Boolean and a continuous mathematical model of this GRN to analyze the dynamics of the molecular interactions that regulate gene expression of latent proviruses in resting CD4+ T cells. Both models reproduce several in vitro and ex vivo observations of provirus gene expression and indicate that the GRN operates in a critical regime. We found that the network architecture restricts the dynamics of HIV-1 to just two states: latency and activation. The ODE model shows that the GRN exhibits bistability, which restricts the conditions needed to switch on the provirus and favors latency over activation. Virus activation occurs through a transcritical bifurcation in which NF-kB availability is the bifurcation parameter. The results obtained from the models can be used to design new latency-reversing agents (LRAs) to reactivate latent viruses in infected cells. The analysis of the network with perturbation methods revealed unexplored LRA synergies that can maximize viral reactivation in resting CD4+ T cells.
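
As an illustration of how a Boolean GRN model can restrict provirus dynamics to exactly two attractors (latency and activation), consider a two-node toy circuit with NF-kB availability as an external input. The node set (LTR promoter activity, Tat) and the update rules below are an illustrative reduction, not the authors' reconstructed network:

```python
# Minimal synchronous Boolean-network sketch of a Tat positive-feedback
# loop with NF-kB as the external switching parameter. Nodes and rules
# are illustrative assumptions, not the HIV-1 GRN from the abstract.

def step(state, nfkb_available):
    """One synchronous update of the toy network state."""
    return {
        # the LTR promoter fires if NF-kB is available or Tat is present
        "LTR": nfkb_available or state["Tat"],
        # Tat is produced when the LTR was active on the previous step
        "Tat": state["LTR"],
    }

def run(nfkb_available, steps=10):
    state = {"LTR": False, "Tat": False}
    for _ in range(steps):
        state = step(state, nfkb_available)
    return state

# Without NF-kB the provirus stays latent; with NF-kB it locks into
# activation via the Tat feedback loop -- two attractors, as the
# abstract reports for the full network.
print(run(nfkb_available=False))
print(run(nfkb_available=True))
```

Even this reduced circuit shows the qualitative role of NF-kB as the parameter that moves the system between the latent and activated attractors.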

Imperialist Competitive Algorithm to determine energy needs over a planning horizon for a high-marginality zone, using a reactive model with high uncertainty
SPEAKER: Alberto Ochoa

ABSTRACT. The Imperialist Competitive Algorithm (ICA) uses a basic system of knowledge sources to determine the best actions under uncertainty using a model of countries, each related to knowledge observed in several aspects of social behavior. These knowledge sources are combined to direct the decisions of individual agents when solving optimization problems or distributing resources among different communities. In the present research, we simulated a reactive model under uncertainty to distribute energy resources in southwest Chihuahua, integrating these diverse sources of knowledge to direct the population of agents. The different phases of the solution emerge from combining these knowledge sources, and these phases give rise to individual roles within the population, in terms of leaders and followers for each country (group of agents). These roles in turn produce organized groups at the population level and knowledge groups in the social belief space. This application optimizes a real-valued function in the design of social-modeling problems, illustrating an improved reactive model under uncertainty.
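
As a rough illustration of how an ICA search proceeds, the sketch below runs the imperialist/colony dynamics on a toy one-dimensional cost function. The population sizes, assimilation coefficient and objective are illustrative assumptions, not the resource-distribution model used in the study:

```python
import random

# Minimal Imperialist Competitive Algorithm (ICA) sketch: the strongest
# countries become imperialists, colonies drift towards their imperialist
# (assimilation), and a colony that beats its imperialist takes its place.

def cost(x):
    return (x - 3.0) ** 2   # toy objective: minimum at x = 3

random.seed(42)
countries = [random.uniform(-10, 10) for _ in range(30)]
countries.sort(key=cost)
imperialists = countries[:3]   # strongest countries become imperialists
colonies = countries[3:]

for _ in range(200):
    new_colonies = []
    for i, col in enumerate(colonies):
        imp = imperialists[i % len(imperialists)]   # simple fixed empires
        # assimilation: move the colony a random fraction towards its imperialist
        col = col + 2.0 * random.random() * (imp - col)
        new_colonies.append(col)
        # position exchange: a colony that outperforms its imperialist replaces it
        if cost(col) < cost(imp):
            imperialists[i % len(imperialists)] = col
    colonies = new_colonies

best = min(imperialists, key=cost)
print(round(best, 2))   # converges near the optimum x = 3
```

A full ICA also models imperialistic competition (weak empires losing colonies to strong ones); the sketch keeps only the assimilation and position-exchange moves that drive the basic search.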

Rational Moral Intuitions

ABSTRACT. The human cognitive architecture should contain many domains of human social interaction (consider, e.g., exchange, kin altruism, mating, cooperative foraging, warfare). These different types of social interaction require different concepts, inferences, sentiments, and judgments to regulate behavior adaptively. Therefore, the different domains require specialized subsystems of moral cognition that take into account many moral considerations that are often contradictory (e.g., incompatible duties). If selection produced adaptations designed to weight conflicting moral sentiments to produce judgments, then subjects choosing which option they "feel is morally right" will produce judgments that are internally consistent. We experimentally explored the design of the integrative psychological process that weighs the different moral considerations to produce all-things-considered moral judgements. Specifically, we wanted to know whether subjects produced rational moral judgments in the sense of GARP (the generalized axiom of revealed preference), and whether they responded to relevant moral categories (such as motivations) in a consistent way. Using three moral dilemmas involving warfare, we quantitatively varied morally relevant parameters: each dilemma presented 21 scenarios in which sacrificing C civilians would save S soldiers (0 ≤ C < S), varying S, C, and S/C (soldiers saved per civilian sacrificed). Judgments were highly consistent. Bootstrapped random choices would violate GARP approximately 50 times, yet there were no GARP violations for 49% and 64% of subjects (unwilling conscripts vs. willing warriors). Of the >250 subjects who sacrificed some, but not all, civilians, 55% and 62% made 3 or fewer GARP violations. Fewer civilians were sacrificed when soldiers had volunteered.

Self-organized traffic lights: A comparison of spatial arrangement

ABSTRACT. Cities present several problems that we try to solve. Most limitations to solving them are due to the high degree of interaction between their components and to the sheer number of those components, so we can only provide some degree of estimation, not prediction, of the phenomena they present. From a systemic point of view, current approaches address problems from perspectives involving as many components as possible in the study and treatment of a given problem. Among these approaches we find self-organization, a property of complex systems which can be used to build adaptable and robust systems, so that the system builds the solution to a problem at the particular time it is required. The objective of the present work is to show that interaction under a different spatial configuration, such as a hexagonal one, improves the efficiency of the self-organizing traffic-light algorithm in comparison with the classic square scheme. Although studies have been carried out on the best use of space through different tessellations, there are currently no studies of the dynamics of flows that can develop on this kind of hexagonal tessellation, and more specifically of vehicular traffic. In addition, agent-based modeling gives the proposed model a better approach to the reality of the problems of cities, because the entities of social systems can be modeled as autonomous agents that interact with each other and with their environment. The methodology is implemented in NetLogo, an ABM tool that allows us to run computer simulations of the self-organizing traffic-light algorithm at the intersections of the different types of proposed city models.
Through ABM we use the self-organization property of complex systems, which allows efficient and highly adaptable solutions to specific problems to develop in time; in the case of vehicular traffic, the solution may be an adaptation to changing traffic flow. To improve flow efficiency, it has been proposed that spatial arrangements should allow efficient flows; one of these spatial arrangements is hexagonal. In this research, a modification is proposed to the implementation of the self-organizing traffic-light algorithm on abstract city patterns: it takes into account a hexagonal spatial arrangement in contrast with the classical quadrangular one. The results show that the difference in efficiency is significant.
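
The core switching rule in self-organizing traffic-light models of this kind (cf. Gershenson's method) can be sketched very compactly: each intersection integrates the demand on the red approach and switches when it crosses a threshold. The threshold and the arrival counts below are illustrative assumptions, independent of the city geometry being compared:

```python
# Minimal sketch of a demand-integration switching rule for one
# intersection: waiting cars on the red approach accumulate over time,
# and the light flips when the accumulated demand reaches a threshold.

THRESHOLD = 10   # illustrative demand threshold

def update_light(green_axis, counter, waiting_on_red):
    """One tick of a single intersection's self-organizing controller."""
    counter += waiting_on_red          # integrate demand on the red approach
    if counter >= THRESHOLD:           # enough accumulated demand:
        other = "NS" if green_axis == "EW" else "EW"
        return other, 0                # switch the green axis and reset
    return green_axis, counter

# Demand builds up on the red (NS) approach until the light flips to it.
axis, counter = "EW", 0
history = []
for cars_waiting in [1, 2, 3, 4, 4, 1]:
    axis, counter = update_light(axis, counter, cars_waiting)
    history.append(axis)
print(history)   # ['EW', 'EW', 'EW', 'NS', 'NS', 'NS']
```

Because each intersection reacts only to local demand, the same rule applies unchanged whether intersections have four neighbours (square grid) or three (hexagonal tessellation); only the flow geometry differs.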

A (Data Driven) Network Approach to Schizophrenia

ABSTRACT. In recent years, network models in the field of public health (e.g., psychopathology, psychiatry) have gained considerable attention and recognition. In such network models, psychological processes are conceptualized as complex systems in which observable psychological behavior, such as the critical transition to a psychotic episode, is assumed to arise from interactions between symptoms and other psychological, biological, and sociological agents, rather than to reflect an unobserved disorder. Access to large datasets investigating symptoms of mental disorders has led to the advancement of the network methodology, allowing scientists to disentangle potential causal pathways towards a disorder directly from the data. Network models can therefore provide key insights into the complex system of mental disorders. To provide an example of how network models can be used in the field of public health, we investigated the association between schizophrenia and environmental exposure. We constructed a network model of data from a prospective study investigating vulnerability and risk factors for the onset and progression of psychopathological syndromes. We analyzed the relation between three environmental risk factors (cannabis use, developmental trauma, and urban environment), dimensional measures of psychopathology (anxiety, depression, interpersonal sensitivity, obsessive compulsive disorder, phobic anxiety, somatizations, and hostility), and a composite measure of psychosis expression. Results indicate the existence of specific paths between environmental factors and symptoms, most often involving cannabis use. In addition, the analysis suggests that symptom networks are more strongly connected for people exposed to environmental risk factors, indicating that environmental exposure may lead to less resilient symptom networks.


ABSTRACT. The aim of this work is to present an interesting connection between the behavior of economic agents and long memory features, as an emergent phenomenon that generally occurs in a wide set of time series found in economic and financial problems.

Why is it relevant? Because the incorrect specification of stochastic processes can lead to misleading conclusions. Whether or not the stochastic process exhibits long memory directly affects the description of the autocorrelation structure of a wide range of problems, such as asset pricing, macroeconomic modeling and other time series phenomena.

Hence, misspecifying such features may induce very different results in the long term, affecting the way optimal policy making is conducted, since these effects last longer than they would under short memory.

It is shown that heterogeneity between agents, large deviations from the equilibrium points (in conjunction with the laws of motion) and spatial complexity are very important in the emergence of long memory features; this is demonstrated by means of extensive use of computational multi-agent-based models, stochastic analysis and Monte Carlo simulations.

Keeping that in mind, three different computational models are presented and simulated in this work, showing that long-range dependence may arise simply from the interactions between the agents, establishing what can be called "long memory emergence".

On the other hand, none of these models was developed for this work: their respective authors made them separately, for specific purposes, which is why the present author decided on the strategy of picking models made by third parties. Instead of building models (which usually takes a considerable amount of time to make work properly) that might contain biases towards finding such long memory properties as a consequence of the present work's idea, existing models were chosen, simulated (in their respective platforms) and analyzed using the R statistical package.

Although heterogeneity is a widely known characteristic affecting the rise of long memory in complex systems, the other two factors are not. It is also important to state that there may be several other factors present in a complex system that can potentially lead to the emergence of this phenomenon.

Moreover, when a long memory filter is applied to time series with such properties, interesting information can be retrieved.
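
A standard first diagnostic for long memory of the kind discussed here is the rescaled-range (R/S) estimate of the Hurst exponent: H near 0.5 indicates short memory, H > 0.5 long-range dependence. The sketch below uses plain Python rather than the R packages mentioned in the text, and the window sizes are illustrative:

```python
import math
import random

# Rescaled-range (R/S) estimate of the Hurst exponent: compute the
# average R/S statistic over windows of several sizes, then take the
# slope of log(R/S) against log(window size).

def rs(series):
    """R/S statistic of one window: range of cumulative deviations over std dev."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    cum, z = [], 0.0
    for d in dev:
        z += d
        cum.append(z)
    r = max(cum) - min(cum)                      # range of cumulative deviations
    s = math.sqrt(sum(d * d for d in dev) / n)   # standard deviation
    return r / s if s > 0 else 0.0

def hurst(series, windows=(16, 32, 64, 128, 256)):
    xs, ys = [], []
    for w in windows:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        avg_rs = sum(rs(c) for c in chunks) / len(chunks)
        xs.append(math.log(w))
        ys.append(math.log(avg_rs))
    # least-squares slope of log(R/S) vs log(window) estimates H
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
white_noise = [random.gauss(0, 1) for _ in range(4096)]
# Around 0.5-0.6 for a short-memory series (small-sample R/S is biased
# slightly upward); a persistent long-memory series would score higher.
print(round(hurst(white_noise), 2))
```

In practice one would apply such an estimator to the simulated agent-model output to confirm that the long-range dependence is genuinely emergent rather than an artifact of the analysis.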

Study of seismicity as a self-organized critical system.

ABSTRACT. In this work a study of the Gutenberg-Richter relationship with synthetic seismicity is carried out. This relation links the frequency of seismic events with their magnitudes. For this, we use a cellular automaton that follows the Olami-Feder-Christensen model, based on the spring-block model. From the generation of synthetic events we analyze the properties and conditions of the parameters of this model that seem most closely related to real seismicity. Furthermore, based on a study of the relation between the frequency of events (the y-intercept) and the slopes generated when plotting the Gutenberg-Richter relationship, an analysis of this approach was performed with the cellular automaton mentioned, for synthetic seismicity. In particular, we are interested in the examination of seismicity as a self-organized nonlinear system.
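
The Olami-Feder-Christensen automaton admits a compact sketch: cells accumulate stress, topple when they exceed a threshold, and pass a fraction alpha of their stress to each neighbour (alpha < 0.25 makes the model non-conservative). Lattice size, alpha and the driving scheme below are illustrative choices, not the parameter study from the abstract:

```python
import random

# Minimal Olami-Feder-Christensen spring-block cellular automaton.
# Each driving step loads the lattice uniformly until one cell reaches
# the threshold; the resulting avalanche size plays the role of an
# earthquake magnitude in the synthetic catalogue.

L, ALPHA, F_C = 16, 0.2, 1.0
random.seed(1)
stress = [[random.uniform(0, F_C) for _ in range(L)] for _ in range(L)]

def relax():
    """Topple unstable cells until the lattice is stable; return avalanche size."""
    size = 0
    unstable = [(i, j) for i in range(L) for j in range(L) if stress[i][j] >= F_C]
    while unstable:
        i, j = unstable.pop()
        if stress[i][j] < F_C:
            continue
        size += 1
        give = ALPHA * stress[i][j]   # each of up to 4 neighbours receives this
        stress[i][j] = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:   # open (dissipative) boundaries
                stress[ni][nj] += give
                if stress[ni][nj] >= F_C:
                    unstable.append((ni, nj))
    return size

def drive():
    """Uniformly load the lattice until one cell reaches the threshold."""
    gap = F_C - max(max(row) for row in stress)
    for row in stress:
        for j in range(L):
            row[j] += gap + 1e-9   # tiny epsilon guards against rounding

sizes = []
for _ in range(500):
    drive()
    sizes.append(relax())
print("largest avalanche:", max(sizes))
```

Collecting the avalanche-size histogram from such a run is what produces the synthetic Gutenberg-Richter plot whose slope and intercept the study analyzes.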

Rhythms, collectivity and interpersonal synchronization of brain dynamics

ABSTRACT. Hyperscanning is the simultaneous registration of the electrical brain activity of two or more subjects. In the present study we investigate possible interpersonal synchronization of male and female couples performing a cooperative task within a particular acoustic environment. In a first pilot study we found a pronounced gender difference in interpersonal synchronization, extending from 0 to 25 Hz. Furthermore, different tempos of the rhythmic acoustic stimuli imprint slightly different characteristics on the interpersonal synchronization pattern. Most surprisingly, the synchronization between monozygotic male twins is more pronounced than that between other male couples.

A Complexity Perspective for Quality of Experience (QoE) estimation: The Case of e-Health in rural contexts

ABSTRACT. Information and Communication Technologies (ICT) have become an important tool with the potential to improve living conditions. ICT are immersed in practically all human affairs; under these circumstances, technology and society are inseparable. By the same token, technology developers must understand the conditions and requirements of the users, as well as the nature of the context. They must understand that technology development also involves human aspects, i.e., people must be considered the central element and purpose of technology design. In this regard, Quality of Experience (QoE) has been used as an important tool for assessing the usability and user acceptance of a particular device, service or technology application. International standard-setting agencies, technology manufacturers, and academic groups have relied on QoE estimations to understand the interactions of technology and human behavior. In this contribution, we draw on complexity science to incorporate into the estimation of QoE the interplay of the ecosystem, the behavior, and the interactions among the agents. We provide a platform for estimating QoE to assess the relationship between technology and the human factors involved in e-Health projects. Our proposal is focused on a rural environment, given that e-Health interventions have been useful in responding to critical sustainable-development needs in these contexts. In this paper, we apply a heuristic procedure to incorporate complexity principles into the estimation of QoE using fuzzy-logic simulations, in order to understand the influence of human factors in a gynecology intervention intended for a rural setting.
The results of our simulation show that the application of complexity principles in our e-Health intervention may contribute to developing integrated design strategies for devices and systems, thus providing a balance of technology performance and human behavior, i.e., a balance between QoS, Quality of Service (a technology performance metric), and QoE (a human and technology performance metric). In other words, QoE offers valuable information for developing and designing devices and equipment taking into account emotions and other variables of human nature associated with a particular context. Likewise, it is relevant to stress that our ecosystemic approach was also key to identifying and analyzing complexity traits in the resulting simulation scenarios. Furthermore, complexity was an important enabler in refining the estimation of QoE in the e-Health intervention studied. Since complexity is immersed in QoE as well as in e-Health, we argue that both inherently behave as dynamic complex systems. This realization is fundamental for: a) refining the problem statement; b) avoiding doing more of the same; and c) developing a holistic vision with solutions tailored to society as a whole when planning and implementing e-Health interventions.

Spatial and temporal patterns in bike sharing systems: Modelling using a gravity law

ABSTRACT. With a high proportion of the world's population living in cities, the study of mobility and of the coupling between different transportation systems in urban areas is of utmost importance. Most studies on public transportation systems have considered taxis, subways, buses and trams. The case of bike sharing systems, although they facilitate mobility for an important fraction of inhabitants, has been less explored.

In this work, we analyze spatiotemporal patterns in the bike sharing systems of Chicago and New York. In the first part we characterize the temporal dynamics of users, finding similarities in the time-of-day use of public bicycles in both cities. In addition, we identify an inverse power-law relation for the probability distribution of the time users keep the bicycles. Then, by using an origin-destination matrix, we characterize the spatial structure of trips in the whole bike sharing system. We found the same inverse power-law relation for the probability distribution of traveled distances in both cities.

An important fact in the study of bike sharing systems is that the locations of stations are fixed, so we can describe the statistics of trips by means of an origin-destination matrix. To explain the results obtained, we introduce a gravity-law model to describe the spatial dynamics. In this case, the mass of each station is associated with its importance in the whole system, and we also use the geographical distance between stations. Then, via Monte Carlo simulations, we recreate the system assuming Markovian dynamics and the introduced gravity model; the results fit real data very well, revealing that this model captures important aspects of the global dynamics of bike sharing systems.
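
The gravity-law step can be sketched as follows: the flow between two stations is proportional to the product of their "masses" divided by a power of the distance, and the normalised flows give the Markov transition probabilities a Monte Carlo simulation would sample from. The station masses, coordinates and distance-decay exponent below are illustrative assumptions, not the fitted values from the study:

```python
import math

# Toy gravity-law model over three hypothetical stations. Mass is a
# proxy for a station's importance (e.g., total trips handled).

stations = {
    "A": {"mass": 120.0, "xy": (0.0, 0.0)},
    "B": {"mass": 80.0,  "xy": (2.0, 0.0)},
    "C": {"mass": 40.0,  "xy": (0.0, 4.0)},
}
GAMMA = 2.0   # assumed distance-decay exponent

def gravity_flow(a, b):
    """Unnormalised gravity-law flow between stations a and b."""
    sa, sb = stations[a], stations[b]
    d = math.dist(sa["xy"], sb["xy"])
    return sa["mass"] * sb["mass"] / d ** GAMMA

def transition_probs(a):
    """Markov transition probabilities out of station a."""
    flows = {b: gravity_flow(a, b) for b in stations if b != a}
    total = sum(flows.values())
    return {b: f / total for b, f in flows.items()}

# Nearby, high-mass B captures most of the probability mass from A.
print(transition_probs("A"))
```

Repeatedly sampling destinations from these transition probabilities is the Markovian Monte Carlo recreation of the system described in the abstract.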

Our analysis and results introduce new ways to process the data available for bike sharing systems. This approach can be implemented for different existing bicycle sharing systems to identify temporal and spatial patterns associated with human mobility in urban areas.

A model of social interaction from complexity: Dissemination of Culture, ENCUP 2012 in CDMX with an Agent-Based Model (ABM)

ABSTRACT. Studying opinions about citizen participation and democracy in Mexico City (Spanish acronym CDMX) gives us variables with which to learn how the socialization of perceptions, attitudes and behaviors spreads across a territory, and how this process of interaction shapes the political culture, understood as a set of features composed of traits; this finally gives us a panorama of what politics is for the agents interacting in the most populated entity of Mexico. The objective of our research is to implement the agent-based model (ABM) of "The Dissemination of Culture: A Model with Local Convergence and Global Polarization" using nominal variables extracted from the National Survey on Political Culture and Citizen Practices 2012 (Spanish acronym ENCUP 2012), spread over a map of CDMX. The purpose is to visualize how political culture propagates across a territory and to analyze the probability that a certain culture becomes dominant. The three variables we use are 1) the perception of democracy, 2) the level of dialogue, and 3) the level of citizen participation in civil and social organizations; each has 7 options, which represent the traits, and together the 3 variables make up the features of the model. The model was implemented in NetLogo to perform 300 repetitions with 8000 agents. The purpose is to find the culture that dominates the dynamics. Extending Robert Axelrod's model gives us a panorama of the political culture in CDMX: when interacting, agents consider their degree of similarity and the boundary between neighborhoods, and the result shows the aggregation of traits into features without any central direction.
The probability that a culture dominates presents a hierarchical structure in the frequencies of the surviving sets at the end of the experiment, which approaches a Beta-like function on a semilogarithmic scale. We also noticed three moments, a) increase, b) decrease and c) stagnation, in the diversity of the sets during the dynamics, due to the aggregation of traits. We conclude that diversity in the model tends to socialize pessimism about political and citizen participation practices in CDMX, because the political culture most likely to dominate is the set 1) dissatisfied with democracy, 2) silent when someone says something that goes against one's thinking, in other words no dialogue, and 3) finding it difficult to organize with other citizens to work for a common cause, i.e., participation is low.
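
The Axelrod dynamics being extended here can be sketched minimally: an agent interacts with a random neighbour with probability equal to their cultural similarity, then copies one differing trait. The 3 features with 7 traits each match the abstract's ENCUP variables, while the grid size, torus topology and step count below are an illustrative reduction of the 8000-agent NetLogo runs:

```python
import random

# Minimal Axelrod culture-dissemination model on a small torus grid.
# Each cell holds a culture: a vector of FEATURES features, each taking
# one of TRAITS traits.

FEATURES, TRAITS, N = 3, 7, 20
random.seed(7)
grid = [[[random.randrange(TRAITS) for _ in range(FEATURES)]
         for _ in range(N)] for _ in range(N)]

def similarity(a, b):
    """Fraction of features on which the two cultures agree."""
    return sum(x == y for x, y in zip(a, b)) / FEATURES

def step():
    i, j = random.randrange(N), random.randrange(N)
    di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    ni, nj = (i + di) % N, (j + dj) % N          # torus neighbourhood
    a, b = grid[i][j], grid[ni][nj]
    if random.random() < similarity(a, b):        # interact if similar enough
        diff = [k for k in range(FEATURES) if a[k] != b[k]]
        if diff:
            k = random.choice(diff)
            a[k] = b[k]                           # copy one differing trait

for _ in range(200_000):
    step()

cultures = {tuple(c) for row in grid for c in row}
# Local convergence leaves far fewer surviving cultures than the
# 7**3 = 343 possible combinations.
print(len(cultures))
```

Counting the surviving cultures over many repetitions is what yields the frequency hierarchy of surviving sets described above.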

Fractal Behavior in the Withdrawal of Program Students as a Potential Early Warning Signal
SPEAKER: Sami Houry

ABSTRACT. Education research has examined the program student withdrawal problem from multiple perspectives. Our research proposes a novel approach by examining the problem through the lens of fractal analysis. This approach is supported by a review of the literature which suggests that one property of organizational emergence is its potential fractal-like form, i.e. self-similarity in patterns at multiple levels of observation. The patterns are termed fractals or fractal-like after Mandelbrot, who coined the original term and developed a mathematical theory of roughness to describe the natural world, known as fractal geometry. After all, according to Mandelbrot, clouds are not perfect spheres and mountains are not perfect cones. He suggested that fractal geometry is a better model for the natural world than conventional geometric notions that assume smoothness of the shapes under study. In the business and organizational world, Thietart and Forgues's (1995) proposition 5 suggested that similar organizational structure patterns, as well as process patterns, could be identified at different organizational levels, such as the organizational, unit, group and individual levels. To this end, our research addresses the program student withdrawal problem at the individual level by investigating whether the emergence of the student's program withdrawal status is self-similar at multiple levels of observation, specifically the program level and the course level; in other words, whether the act of withdrawal at the program level is also present at the course level. Our research findings, based on analyzing data from a test group and a control group, moderately supported the presence of self-similarity or fractals.
The significance of the finding is that this self-similarity of emergence at the program level and the course level could be an early warning signal in and of itself, as withdrawal at the course level might be a predictor of what is to come at the higher program level. The self-similarity could potentially add a new generic early warning signal to the existing set identified in the literature, which includes rises in variance, skewness, autocorrelation, flickering and critical slowing down, pending further research and application in other case studies.

CO2 emission reduction, complex networks and world commerce network

ABSTRACT. In 2009 Ostrom offered a conjecture on how to reduce world CO2 emissions based on a polycentric approach. Her paper describes an intuitive process in which a simple economic policy restricts commerce based on knowledge of the origin of goods and services (knowledge of their CO2 footprint), with game theory applied to the world commerce market. This work applies these ideas to the world commerce network (WCN), which we build and whose topology we compare with other articles, and shows how this conjecture could be right, through a computer simulation that mixes complex networks, game theory and ecological economics (theoretical knowledge of economic indexes and the ecological calculation of CO2). Results from a series of simulation scenarios, using several coordination games and comparisons with ecological scenarios (IPCC scenarios), verified with statistical analysis, are the basic blocks of this work (formerly a Master of Science thesis), which helps us understand how a policy can work on the WCN and how Ostrom's polycentric approach can be implemented.

Online Organized Political Participation of the Civil Society in Mexico

ABSTRACT. We used survey data and data collected from the online social network (OSN) Twitter between October 5th and November 9th, 2016 (our time window) to provide an overview of political participation in Mexico. With the survey data we provide a qualitative assessment of political participation in Mexico by examining interest in politics and sources of political information. With our collected data, we describe the intensity of political participation in this OSN, identify locations of high Twitter activity and identify political movements, including the agencies behind them. With this information, we compare and contrast political participation in Mexico with its counterpart on Twitter. We show that political participation in Mexico seems to be decreasing; however, according to our preliminary results, political participation in Mexico through Twitter seems to be increasing. Moreover, we study the case of three online protests and how different actors of civil society organized within the OSN to debate. In this regard, our research points towards the emergence of Twitter as a significant platform for political participation in Mexico. Our study analyses how different agencies related to social movements can enhance political participation through Twitter. We show that emergent topics related to political participation in Mexico are important because they could help to explore how politics becomes of public interest. The study also offers some important insights into the type of political content that users are more likely to tweet.

Given that our aim is to describe political participation in Mexico, we turn to survey data and data collected from Twitter to: firstly, explore and examine trends in traditional forms of political participation; secondly, spot shifts in sources of political information; and lastly, assess and investigate trends in non-traditional forms of political participation. It is important to recall that we refer to political participation as any activity through which individuals express their own opinion with the goal of exerting influence on political decision-making.

This study examines online and offline political participation in Mexico. Through the use of survey data, our research underscores the low level of interest Mexicans have in politics, a level of interest reflected in the low level of political participation. In particular, we note that Mexicans receive political information mainly from television, with other sources of information such as newspapers, radio, the internet and online social networks well behind. In terms of political participation, we see that as the level of personal interaction needed to take part in political action increases, participation seems to decrease.

On the other hand, the emergence of new technologies such as Twitter facilitates social interaction at levels never seen before. Therefore, we considered it important to examine how political participation on Twitter compares with levels of political participation offline. In our sample of tweets, we found that the general level of online political participation seemed to increase. The analysis of these data allows us to study how civil society uses Twitter to organize online protests.

Beyond Contact Tracing: Community-Based Early Detection for Ebola Response
SPEAKER: Vincent Wong

ABSTRACT. The 2014 Ebola outbreak in West Africa raised many questions about the control of infectious disease in an increasingly connected global society. Once the disease spread to dense urban areas, limited availability of contact information made contact tracing difficult or impractical in combating the outbreak. To address this, we consider the development of multi-scale public health strategies that specifically target epidemics in a highly connected and physically proximal society. We simulate policies for community-level response aimed at early screening of communities rather than individuals, as well as travel restrictions to prevent community cross-contamination.

Our analysis shows the policies to be effective even at a relatively low level of compliance. In our simulations, 40% of individuals conforming to these policies is enough to stop the outbreak. Simulations with a 50% compliance rate are consistent with the case counts in Liberia during the period of rapid decline after mid-September 2014. We also find the travel restriction to be effective at reducing the risks associated with compliance substantially below the 40% level, shortening the outbreak and enabling efforts to be focused on affected areas. Our results suggest that the multi-scale approach can be used to evolve public health strategies for defeating emerging epidemics.
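
The intuition behind a compliance threshold can be illustrated with a minimal discrete-time SIR sketch in which a fraction of infectious individuals is screened out of circulation. The rates below (chosen so the basic reproduction number is 1.5, in the range often quoted for Ebola) and the way compliance enters are illustrative assumptions, not a reproduction of the paper's multi-scale simulations:

```python
# Toy discrete-time SIR model with community screening: a fraction
# `compliance` of infectious individuals is detected early and no
# longer transmits. With R0 = beta/gamma = 1.5, 40% compliance pushes
# the effective reproduction number below one and the outbreak dies out.

def epidemic_size(compliance, beta=0.15, gamma=0.1, days=365):
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infectious, recovered fractions
    for _ in range(days):
        effective_i = i * (1.0 - compliance)   # screened cases don't transmit
        new_inf = beta * s * effective_i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r   # final fraction ever infected

low = epidemic_size(compliance=0.0)    # large epidemic
high = epidemic_size(compliance=0.4)   # outbreak fizzles out
print(round(low, 3), round(high, 3))
```

The threshold behaviour, rather than the exact 40% figure, is the point: once screening cuts effective transmission below the recovery rate, the final epidemic size collapses.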

Meso-scale dynamics of the semantic space

ABSTRACT. Culturomics, the analysis of frequency changes of discrete semantic units of human language over historical time, has shed light on many intriguing social and cultural phenomena in recent years. Here we propose a step towards a more comprehensive theory of semantic dynamics by defining different types of semantic interactions between these units (i.e., words, or preferably, lemmas) based on their co-occurrence statistics obtained from large text corpora. In particular, we focus on two types of interactions: one of a cooperative nature, analogous to syntagmatic relations between words, and a competitive one similar to paradigmatic relations. In order to understand the global mechanisms governing semantic changes, we regard densely interacting clusters of units as the basic building blocks of the semantic space. This allows us to focus on structural properties of these clusters, such as their size, coherence, or susceptibility to external disturbances. Furthermore, by using clustering algorithms that permit the death, birth, merging and splitting of clusters, we are able to determine the types of interactions that are actually realized between clusters as a function of their inner structure. This modeling scheme also has the practical advantage of being insensitive to actual word forms, making cross-linguistic comparisons and universal, language-free modeling possible.

Spatial structure of the distribution of land in urban areas

ABSTRACT. A combination of rapid population growth and an accelerated demographic shift from rural to urban areas has resulted in a high proportion of the world's population living in cities, making the interdisciplinary study of cities from a complex-systems perspective of utmost importance. In particular, cities are organized structures that evolve with dynamics influenced, at different temporal scales, by economic, political, historical, and topographical phenomena, among others. From this intricate set of interacting elements and situations that shape their structure, it has been shown in many cases that cities present similar emergent patterns in, for example, their spatial morphology, size, and number of inhabitants.

In this work, we explore the spatial structure of built zones and green areas in diverse western cities by analyzing the probability distribution of areas and a coefficient that characterizes their respective shapes. From the analysis of diverse datasets describing land lots in urban areas, we found that the distributions of built-up areas and natural zones in cities obey inverse power laws with a similar scaling across the cities explored. On the other hand, by studying the distribution of lot shapes in urban regions, we are able to detect global differences in the spatial structure of the distribution of land.

Our findings introduce new information about the spatial patterns that emerge in the structure of urban areas. This knowledge is useful for understanding urban growth, for improving existing models of cities, in the context of sustainability, and in studies of human mobility in urban areas, among other applications.

Attention & Tempo: Stroop Effect Modulation by Auditory Stimulus

ABSTRACT. Music is a human creation that raises many questions about how it relates to our cognitive processes. Music theory shows that music may be conceived as mathematical structures, a complex organization of sounds and silences that can be translated into rhythms, scales, modes, and other arrangements, able to generate emotions and promote the appearance of different “mental states”. But in what way does music influence brain dynamics? And which features of musical rhythm, such as tempo or pitch, play a major role in this interaction?

It is well known that the auditory cortex is widely connected to other brain areas, besides its strong link to the motor cortex. Therefore, musical stimulation might improve attention, visual-spatial imagination, working memory, and verbal cognition. On the other hand, considering that the brain is organized in a hierarchy of feedback loops with the ability to synchronize neural activity, it is conceivable that such synchronized oscillations have a preferred frequency. The present study was conducted to provide evidence of a possible influence of musical tempo on high-level cognitive functions. We follow a resonance hypothesis as a potential explanation of our empirical results.

The Complexity of Child Labor
SPEAKER: Josue Sauri

ABSTRACT. Current child labor policies treat the issue as a linear problem, addressing it as the main cause of school dropout and therefore of harm to children's development, since low education levels are highly related to poverty and deprivation of rights. However, in Mexico, data from the Child Work Module (CWM), collected by the National Institute of Statistics and Geography since 2007, suggest otherwise. According to the 2013 CWM, at age 14, the legal working age established by the Federal Labor Law, the occupation rate stands at 13.6% while the school dropout rate stands at 8.6%; by age 17, the occupation rate stands at 27.5% while the school dropout rate stands at 31.4%. Furthermore, before age 14, 59.3% of respondents report working for pleasure or to learn a trade, while work out of economic need stands at 32.5%; after age 14, work for pleasure or learning drops to 30% while work for economic reasons grows to 60%. On the other hand, just 5.2% of children below age 14 are identified as dropping out of school because of work, while the figure is 13.2% for children above 14. The main reason for dropping out of school is lack of interest or ability, which stands at 37% for children above 14 and 13.7% for those below working age, followed by lack of economic income, at 20.1% for children above 14 and 24.1% for those below working age. These results on the dynamics of child work indicate that there is no real correlation between child work and school dropout. Starting from this hypothesis, this paper presents a summary of the third chapter of the thesis submitted for the Master's Degree in Complexity Science at the Autonomous University of Mexico City, which explores the dynamics of the CWM variables using complexity tools to model relationships in data networks, analyzing these interactions on a Boolean network.
The model extracts thirteen variables from the CWM database through a correlation analysis and configures those variables into the Boolean network, establishing the interaction relationships with a linear regression model between variables and other statistical results from the data. This configuration results in eight attractors of the Boolean network, which are then analyzed and categorized to match different dynamics of child work, identifying the properties that result in child labor and comparing them to those that suggest a healthier dynamic of child work. With the results of the Boolean network, the population measured in the CWM is classified into the network's attractors to test the hypothesis of the research, suggesting that policies that treat child work as a linear problem are doomed to fail.
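The attractor search described above can be sketched for a synchronous Boolean network by exhaustively following every state until its trajectory cycles; the three-node update rules below are hypothetical stand-ins, not the thirteen CWM variables of the thesis.

```python
from itertools import product

# Hypothetical 3-node synchronous Boolean network; these update rules are
# illustrative stand-ins, NOT the thirteen CWM variables of the thesis.
def step(state):
    a, b, c = state
    return (b and c, a or c, not a)

def find_attractors(update, n_nodes):
    """Follow every initial state until its trajectory revisits a state."""
    attractors = set()
    for state in product([False, True], repeat=n_nodes):
        seen = []
        while state not in seen:
            seen.append(state)
            state = update(state)
        cycle = seen[seen.index(state):]       # the periodic part
        k = cycle.index(min(cycle))            # canonical rotation
        attractors.add(tuple(cycle[k:] + cycle[:k]))
    return attractors

if __name__ == "__main__":
    atts = find_attractors(step, 3)
    print(len(atts), [len(c) for c in atts])   # one attractor, a 5-cycle
```

Exhaustive enumeration works only for small networks (2^n states); for thirteen variables one would sample initial states or use a dedicated tool, and then classify the CWM population by which basin of attraction each record falls into.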

Wealth of the world’s richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power law as universal distributions?

ABSTRACT. Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets, and market value. Each of these wealth metrics yields particular information on the corporate or wealth size of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power law in the upper part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms within the high-class Pareto zone is about 49% for sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of the firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.
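The Pareto power-law part of such a two-class distribution is commonly fitted with the maximum-likelihood (Hill) estimator; the sketch below applies it to a synthetic Pareto sample with a known exponent, not to the Forbes data.

```python
import math
import random

def hill_exponent(samples, x_min):
    """Maximum-likelihood (Hill) estimate of alpha in P(X > x) ~ x^(-alpha)."""
    tail = [x for x in samples if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

if __name__ == "__main__":
    random.seed(42)
    alpha, x_min = 1.5, 1.0          # known exponent for the synthetic sample
    # inverse-transform sampling of a Pareto distribution
    data = [x_min * random.random() ** (-1.0 / alpha) for _ in range(50_000)]
    print(round(hill_exponent(data, x_min), 2))  # close to the true 1.5
```

For the real two-class case, x_min marks the crossover between the quasi-exponential body and the Pareto tail, and choosing it is itself a statistical problem (e.g., minimizing the Kolmogorov-Smirnov distance between tail and fit).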

Solar potential scaling and the urban road network topology
SPEAKER: Sara Najem

ABSTRACT. Cities appear to obey allometric laws which, when revealed and integrated into state policies, could help achieve or maintain sustainable growth. In this work we study the solar potential of multiple cities in relation to the lengths and length distributions of their road networks and find them to be governed by power laws, which we show to be valid down to the scale of a block. This is based on a simple observation: in an urban setting, rooftop solar potential depends on the number of erected buildings, which in turn is linked to road length. In the process we identified a measure of social stratification according to which resources could be allocated based on common needs.

This is a first attempt at exploring the relation between a city's solar potential and its road network topology by drawing parallels with living systems' allometry and its dependence on the topology of the vascular network. More precisely, our findings raise the question of the existence of universal laws typifying rural and suburban areas' solar potential, and serve as a tool for estimating metrics relevant to sustainability science and to cities' economic development.

A programming language for the implementation of Ad-hoc networks through social-inspiration techniques

ABSTRACT. Ad-hoc networks can be considered open systems [1], given operating characteristics such as decentralized control, dynamic topology, limited physical resources, and the absence of a defined infrastructure. Considering these properties, it can be said that the behavior of this type of network tends to be complex [2]. To deal with this complexity, techniques must be available that allow systems to adapt to environmental conditions [3]. In this research, social-inspiration techniques are proposed as a means of adaptation in computer systems. The idea of social inspiration is that the solutions of social dilemmas can serve as a basis for creating techniques and algorithms that solve computational problems [4]. This perspective is important because conventional computing solutions can thus be configured to generate the adaptability required in ad-hoc networks.

Taking these arguments into account, we propose to develop a computational tool for managing the elements that make up an ad-hoc network. Specifically, we propose the design of a programming language containing native functions that effectively allow the implementation of an ad-hoc network. The language is designed so that its first three layers, the lexical, syntactic, and semantic, are in harmony with the needs of ad-hoc networks. The language would manage services through a multi-agent system that takes into account the requirements of different applications. In conceiving the language, the nature of open systems must be considered, so that it becomes possible to create adaptation functions allowing the ad-hoc network to face the changing conditions of the environment in which it is located. One expected result is the definition of a multi-paradigm language combining the object-oriented, functional, and agent-oriented programming paradigms. Notably, the language is part of the conception of a complete computational model [5], which considers the construction of a computer system on top of a dynamic system such as an ad-hoc network.

How complexity, cliodynamics, and network science can offer a new narrative in historical research

ABSTRACT. The analytical approach of complexity, and in particular Social Network Analysis (SNA), as a theoretical-methodological perspective applied to the study of historical processes, offers a wider and more articulated reading than the methodologies used in traditional historical research. In this sense, this approach allows us to understand the structure, articulation, and different configurations of a concrete social system over time.

What at first glance could be read as philias or phobias between political elite factions at any given time can be explained by analyzing how the structure of kinship and socio-economic networks has been configured and reconfigured over time. That is to say, this approach allows us to identify "evolutionary" processes, as well as the dynamics of social structures, for example the emergence and decay of elites.

In this work we present, through social network analysis, a reconstruction of the emergence of an elite group in the Mexican state of San Luis Potosí from 1787 to 1855. This elite comprised people from commerce, mining, agriculture, the army, and local politics. San Luis Potosí occupies a privileged geographic position, since it is a point of interconnection for the trade and transfer of goods with Mexico City, the north of the republic, the United States, and, through the port of Tampico on the Gulf of Mexico, the Atlantic.

For this analysis, we reviewed the composition of key social spaces such as the city council, the local congress, the state government, the army, and commercial companies. To this end, data were recovered from primary sources, systematized in databases, and subjected to network analysis.

The network analysis was done using Gephi and Cytoscape. It allowed us to understand the emergence of groups within the elite, formed through connections of kinship, friendship, and mutual collaboration in different institutions. Moreover, it revealed the presence of figures with great centrality in the network. Although the importance of these figures was already recognized in Mexican historiography, their weight within the local elite had not been appreciated until their whole network was considered, which leads us to a new reading of the historical processes. The analysis also revealed figures who served as links between clearly defined sectors, such as the army and the city council, and others who had a simultaneous presence over long periods, connecting different generations. It also allowed us to visualize the "evolution" of the network and to understand how the groups or networks within the local elite managed to survive and maintain political influence over time. Finally, it raises the question of whether armed uprisings and changes in the political system represented an alteration of, or continuity within, the studied network.

This paper seeks to contribute to the proposal of Turchin, who holds that it is time for history to become an analytical and even predictive science. We propose this type of analysis for the study of the formation and transformation of elites in different societies.

Agency Network Analysis (ANA) method for Social-Ecological Systems participatory modeling

ABSTRACT. Problems in social-ecological systems (SES) have been described as wicked because of their complex nature. Some of this wickedness is due to the intractable dependencies between the multiple interacting scales of the ecological and the social. Although the ecological component has been addressed abundantly, the social, cultural, and subjective part of an SES is still seen as a black box. Moreover, there is a need for a better understanding of SES that incorporates the knowledge of the diverse actors that are part of them. Addressing these gaps requires an active search for methods (e.g., Multicriteria Decision Analysis and Multicriteria Mapping) designed to capture stakeholders' preferences and possible actions. Here we present a methodological approach developed in the context of our Transformation Laboratory (T-lab) project, which intends to foster collective agency towards the transformation of Xochimilco, the last remaining wetland of Mexico City. The method has been designed to capture and model actors' agency related to environmental management, as subjectively perceived by the actors themselves. Our approach is based on the idea of participatory modeling, in which actors are involved in the process of constructing and validating models. The method comprises and articulates three different techniques: a) ego networks, which identify an actor's (ego's) networks of collaborators (social capital) around certain topics (e.g., farming), their perceived closeness, how positive or negative the relationship between ego and each collaborator is, and the sectors to which collaborators belong (e.g., government, NGOs, academia); b) action networks, which are bipartite networks connecting ego's collaborators to a set of practices, where collaborators may converge in a practice (e.g., collaborators 1 and 3 both participate in farming capacity-building programs); and c) Fuzzy Cognitive Maps, which capture ego's perceptions of a situation, problem, or SES.
The aim of integrating these diverse techniques is threefold: 1) to generate a comprehensive profile of each actor from their own subjective perspective; this is a relevant way of describing an actor's profile because it identifies the actor's social capital, their place in the SES, their ecological and social roles, and the activities they perform in the SES; 2) since our method integrates three forms of networks, ANA makes it possible to identify pathways of action through the networks, from the activation of collaborators, through practices, to an impact somewhere in the system; and 3) to assign probabilities to the different actionable paths. Future and ongoing developments of ANA include the implementation of a simulation strategy with stakeholders. This simulation has two objectives: 1) to collaboratively generate alternative scenarios about the SES by mobilizing social acquaintances (social capital) in order to open up and assess possible pathways for sustainable transformations, and 2) to identify the route from perceptions to actions, to be integrated into agent-based models of individual actions in response to social or environmental perturbations. Finally, a central motivation in designing this approach is to develop a method that translates qualitative and subjective information into quantitative data for a better understanding of SES dynamics.
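The bipartite action networks described above (collaborators linked to practices) can be sketched with plain dictionaries; the collaborator names and practices below are hypothetical, not data from the Xochimilco T-lab.

```python
# Hypothetical bipartite action network (collaborators -> practices);
# names and practices are illustrative, not Xochimilco T-lab data.
action_net = {
    "collaborator_1": {"farming", "capacity_building"},
    "collaborator_2": {"water_management"},
    "collaborator_3": {"farming", "water_management"},
}

def converging_collaborators(network, practice):
    """Collaborators that converge on a given practice."""
    return sorted(c for c, ps in network.items() if practice in ps)

def projection(network):
    """One-mode projection: collaborator pairs sharing at least one practice."""
    people = sorted(network)
    return {(a, b)
            for i, a in enumerate(people)
            for b in people[i + 1:]
            if network[a] & network[b]}

if __name__ == "__main__":
    print(converging_collaborators(action_net, "farming"))
    print(sorted(projection(action_net)))
```

The one-mode projection is one simple way to read "pathways of action" off the bipartite structure: two collaborators are linked when a shared practice could channel joint action.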

Power relations in the evolution of a socio-ecological system

ABSTRACT. Power relations have been a recurring subject for the social sciences in general and for anthropology in particular. The present paper addresses this issue, specifically the different ways in which a regional elite duplicates, reproduces, and diversifies its power, analyzing the elite as a complex system from an anthropological perspective. Different insights on power within the social sciences share one particular characteristic: most of them perceive power as a capacity, as a set of conditions, qualities, and skills. This prevents us from understanding how power emerges, reproduces, and diversifies. For most scholars, power is the relative inequality that exists between two or more actors and/or groups immersed in a particular social relation; such inequality results from the control that certain participants in the relationship have over symbolic or material resources of interest to the other participants. In this respect, capacity means control, and power is an aspect of all social relations, without exception. This shift in the perception of power is the main difference between the classic analyses of the social sciences and the one we present. This paper develops and proposes a network-based model that allows us to describe, analyze, and understand the evolution of power in a regional elite in Mexico, specifically in the state of Guerrero on Mexico's Pacific coast. The model is built on the kinship relations of elite members and their interaction with three boundary constraints concerning the evolution of networks of power, namely World Bank recommendations, an amendment to Article 27 of the Political Constitution of the United Mexican States, and the establishment of agribusiness in the region.
The information that fed the model was obtained through anthropological fieldwork: participant observation, in-depth interviews, genealogical construction, and documentary research (literature, archives, and newspaper collections). The analysis shows that the networks of power have evolved from a state shaped solely by ties of kinship to a structure that bases its links on economic and political activity at local, regional, and even national scales.

The Role of Online Published Scientific Research on Online News Services

ABSTRACT. Traditionally, medical coauthorship networks have been studied by many researchers using online medical literature. However, there has been increasing interest in understanding how the mass media obtain information to disseminate news and how "hot topics" propagate through the population. The purpose of this research is to assess to what extent online news services covering hot health topics are supported by scientific research published online. We examine the case of the Zika outbreak.

We selected the Zika disease as a case study due to the impact of the Zika virus since its arrival on the American continent in 2014, causing a worldwide health crisis (more than one million people were affected in Brazil). In addition, malformations such as microcephaly were found in the children of some women exposed to the Zika virus during pregnancy. The Zika virus is considered a global threat and requires prompt intervention for its prevention; the World Health Organization (WHO) declared it a new health emergency.

The first phase of this study involves mining scientific research sites such as the Web of Science (WoS) to obtain a significant number of publications related to Zika. We also collect data from online news services such as Yahoo News and Google News. Moreover, to complement this study, we use a World Data Atlas dataset of confirmed Zika cases worldwide in 2015 and 2016. The final phase involves a correlational statistical analysis to find relations between these sources of information.

From our perspective, Zika makes an interesting case study because before the 2016 outbreak there were approximately 200 scientific publications on the subject (out of a total of nearly 2,000 publications to date) since 1947, when the virus was first isolated from the serum of a pyrexial rhesus monkey. As mentioned in the news media, 2016 was the year the world came to know about Zika, but it was also the year the Zika epidemic came to an end. This makes the Zika epidemic especially interesting, since it displayed a tractable cycle from outbreak to end, during which more than 90% of the scientific publications were produced.

This research provides valuable information regarding online news services that address epidemic affairs, as well as an assessment of the role of online medical research as a source for such services. Our findings indicate that news stories on hot topics such as epidemic outbreaks are seldom based on online scientific and medical literature. This indicates a preference of news services for other sources, such as official statements by medical authorities, which may have greater impact on readers. Future and ongoing work is expected to find out more about how the mass media obtain medical information to promote popular topics and attract more readers to their online news.

Synchronization of a variation of the Kuramoto model on complex networks

ABSTRACT. In this work we present a Kuramoto-inspired model designed to capture group formation. Our model runs on a network where each individual is an oscillator whose frequency represents its individual behavior, which, when shared with other oscillators, can become a group behavior. Our proposed model differs from the Kuramoto model in that we consider two types of dynamics affecting synchronization: 1) a local mean field (closest neighbors) that updates the oscillator positions, and 2) a positive feedback process that couples, to a certain degree, an oscillator's frequency to that of its closest-position neighbor. Every link between two oscillators in this network has a value commonly referred to as a weight. For oscillator i, a constant Ki_avg is defined as the average of the weights of the connections between oscillator i and its neighbors; Ki_avg is the local coupling coefficient through which the position of oscillator i (θi) is updated. Each oscillator changes its own frequency according to the frequency of its best-coupled neighbor (BCN) and also according to a parameter A, defined as the closeness of the position of oscillator i to the positions of its neighbors. We implemented our model on a random Erdős-Rényi network, a Watts-Strogatz small-world network, and a ring grid network. To characterize the dynamics, we used the position of each oscillator and also the space of frequencies. The results of our model were the following: in the Erdős-Rényi network, the oscillators did not synchronize into clusters, and the frequency of each oscillator remained stable. In the ring grid network, we see the emergence of stable groups of coupled oscillators, and in the space of frequencies there is a clear difference among the different regions (each with a different frequency) due to the topology of the network. The small-world network, instead, presents the formation of groups of coupled oscillators.
Each group oscillates with a different frequency. More precisely, in the space of frequencies two different dynamics can be identified: there are regions that converge to a fixed stable value, while others are unstable, continually flipping from one region to another, separating the stable regions. Moreover, in the small-world networks we detected a multiple-cascading effect in which the frequency of a cluster spreads over other parts of the network. We suggest that these results are the expected outcome when considering that, in terms of clustering coefficient and long-distance edges, the topology of small-world networks lies between random and grid network topologies. Therefore, we suggest that high clustering coefficients lead to the emergence of stable regions in both the grid and small-world networks. On the other hand, the flipping regions in the small-world network must be the effect of the long-distance edges connecting different clusters. We also believe that these long-distance edges may be causing the multiple-cascading effect through the network.
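For reference, the classic mean-field Kuramoto model that this variant modifies can be sketched in a few lines; the baseline below uses all-to-all coupling and fixed natural frequencies, not the authors' local mean field or frequency-adaptation rules, and all parameter values are illustrative.

```python
import cmath
import math
import random

# Baseline sketch of the CLASSIC mean-field Kuramoto model, NOT the authors'
# variant (no local mean field, no frequency adaptation). Illustrative only.

def mean_field(theta):
    """Complex order parameter r * exp(i*psi) of the phase list."""
    return sum(cmath.exp(1j * t) for t in theta) / len(theta)

def simulate(n=30, k=2.0, dt=0.05, steps=1500, seed=7):
    """Euler-integrate dtheta_i/dt = omega_i + k*r*sin(psi - theta_i)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.1) for _ in range(n)]   # narrow frequency spread
    for _ in range(steps):
        z = mean_field(theta)
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + k * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return theta

if __name__ == "__main__":
    r = abs(mean_field(simulate()))
    print(f"order parameter r = {r:.2f}")   # well above the incoherent value
```

In the authors' variant the global mean field would be replaced by a weighted average over network neighbors (the Ki_avg coupling), and an extra update would pull each oscillator's frequency toward that of its best-coupled neighbor, which is what allows distinct frequency clusters to survive.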

Crossover determination in detrended fluctuation analysis (DFA) graphs of heartbeat interval time series using the curvature concept

ABSTRACT. Detrended fluctuation analysis (DFA) is a very effective methodology for studying the time series of complex biological systems, such as heartbeat interval time series. This analysis helps assess a patient's health status: in healthy, young subjects the DFA graphs are usually linear, whereas in patients with cardiac problems the graphs present crossovers, i.e., points where the slope changes. Determining the position of these crossovers and the associated slope change is relevant to support a possible diagnosis. The present work proposes using the geometric concept of curvature to characterize these crossovers in heartbeat interval time series obtained from patients with heart failure. A comparison was made with subjects from a control group of healthy people, and the method was validated by manually quantifying the positions and slope changes.
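A minimal sketch of DFA-1 combined with a discrete (Menger) curvature detector over the log-log fluctuation curve, applied to synthetic white noise (for which the expected scaling exponent is about 0.5); this illustrates the general technique, not the authors' implementation or their heartbeat data.

```python
import math
import random

def dfa_fluctuations(x, scales):
    """DFA-1: RMS fluctuation F(s) of the linearly detrended profile."""
    n = len(x)
    mean = sum(x) / n
    profile, acc = [], 0.0
    for v in x:                                # integrated (cumsum) profile
        acc += v - mean
        profile.append(acc)
    fluct = []
    for s in scales:
        resid2, count = 0.0, 0
        for start in range(0, n - s + 1, s):   # non-overlapping windows
            seg = profile[start:start + s]
            tm, ym = (s - 1) / 2.0, sum(seg) / s
            sxx = sum((ti - tm) ** 2 for ti in range(s))
            beta = sum((ti - tm) * (yi - ym)
                       for ti, yi in enumerate(seg)) / sxx
            for ti, yi in enumerate(seg):      # residuals of the local fit
                resid2 += (yi - (ym + beta * (ti - tm))) ** 2
                count += 1
        fluct.append(math.sqrt(resid2 / count))
    return fluct

def menger_curvature(p, q, r):
    """Discrete curvature of three consecutive (log s, log F) points."""
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return 2.0 * area2 / (dist(p, q) * dist(q, r) * dist(p, r))

if __name__ == "__main__":
    rng = random.Random(3)
    noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]   # alpha should be ~0.5
    scales = [8, 16, 32, 64, 128, 256]
    logs = [(math.log(s), math.log(f))
            for s, f in zip(scales, dfa_fluctuations(noise, scales))]
    curv = [menger_curvature(logs[i - 1], logs[i], logs[i + 1])
            for i in range(1, len(logs) - 1)]
    # for a series with a genuine crossover, the scale of maximum curvature
    # marks its position; white noise has no crossover, so curvature stays low
    print(scales[1 + curv.index(max(curv))])
```

On a pathological record the log-log curve would bend at the crossover scale, and the maximum of the discrete curvature gives a quantitative estimate of its position, which the abstract's manual quantification could then validate.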