In Prigoginian terms, all systems contain subsystems which are continually ‘fluctuating’. At times, a single fluctuation or a combination of them may become so powerful, as a result of positive feedback, that it shatters the preexisting organization. At this revolutionary moment […] it is inherently impossible to determine in advance which direction change will take: whether the system will disintegrate into ‘chaos’ or leap to a new, more differentiated, higher level of order or organization, which they call a ‘dissipative structure’. One of the key controversies surrounding this concept has to do with Prigogine’s insistence that order and organization can actually arise ‘spontaneously’ out of disorder and chaos through the process of ‘self-organization.’ (Prigogine & Stengers, 1984, p. xv)
Apart from structural inhomogeneities in networks, the impact of individual neurons on the dynamics of neural networks may also differ substantially. A number of recent in vitro studies of 1D and 2D dissociated developing cortical and hippocampal cultures have shown that such networks typically express spontaneous neural activity characterized by network bursts, and ongoing repetitions of distinctive firing patterns within those bursts [15–19]. Furthermore, several studies have shown that the activity of certain neurons reliably precedes population spikes [16–19]. These early-to-fire neurons have been termed leader neurons and have been found to form functionally connected networks, the activity of which collectively precedes most of the observed population bursts. In the 1D case, population bursts have been found to be triggered by “burst initiation zones”, and in the 2D case recent studies [18, 20] have shown that leader neurons not only precede but are also able to initiate population bursts. Nevertheless, the underlying network structure and the specific topological properties of leader neurons and subnetworks of such cells remain to be discovered. Experimental studies in constrained or chemically modulated cultures give reason to believe that a complicated process of self-organization underlies their emergence. However, from the modeling point of view, little is understood about how strong inhomogeneities such as these could evolve in a self-organized manner by means of activity-dependent synaptic plasticity.
In this paper we analyze the ubiquitous coordination dynamics of populations in which interactions involve multiple sectors of a society. These are of particular importance where the current overall status quo must be shifted in order to move into another overall equilibrium state that, e.g., may bring long-term benefits to society as a whole. Traditionally, coordination games incorporate mostly interactions between individuals of a single class. Some are especially relevant for understanding the role of information transmission in human societies [1–4,8,24,25]. However, the intrinsic complexity of socio-political relations calls for modeling techniques that account for typical properties of complex systems, such as self-organization and historical contingency. Evolutionary game theory (EGT) provides the formalism needed to model such coordination dynamics, and in this context the inclusion of multiple sectors is a requirement for coping with the intricate and co-evolving nature of actual socio-economic systems. Here, we extend the toolbox of EGT to incorporate a set of multi-sector coordination dynamics.
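As a minimal sketch of the kind of multi-sector coordination dynamics described above, the following toy model couples two populations ("sectors") through a discrete replicator update. The payoff values (a for coordinating on the new strategy, b for the status quo) and all function names are hypothetical illustrations, not the authors' specific model.

```python
def replicator_step(x, y, a=2.0, b=1.0):
    """One discrete replicator update for two coupled sectors.

    x, y: fractions of each sector playing the new strategy A.
    Sector-1 players earn a when coordinating on A with sector 2,
    and b when coordinating on the status quo B; miscoordination pays 0.
    """
    fA, fB = a * y, b * (1.0 - y)   # sector-1 payoffs against sector-2 mix
    gA, gB = a * x, b * (1.0 - x)   # sector-2 payoffs against sector-1 mix
    x_next = x * fA / (x * fA + (1 - x) * fB)
    y_next = y * gA / (y * gA + (1 - y) * gB)
    return x_next, y_next

def run(x0, y0, steps=200):
    """Iterate the dynamics from initial fractions (x0, y0)."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = replicator_step(x, y)
    return x, y
```

Even in this caricature the "status-quo trap" is visible: both sectors reach the better equilibrium only if enough of each already plays A (here the interior unstable point lies at b/(a+b) = 1/3), which is exactly why coordinated multi-sector shifts are hard.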
The degree of efficiency of the self-organization capability depends strongly on how the learning mechanisms are implemented. In the design of self-organized systems, the key issue is to define powerful intelligence mechanisms, including not only static intelligence mechanisms but also learning capabilities, that enable the system to improve its future behaviour as a result of its experience. For this purpose, the concept of embodied intelligence, associated with the artificial life field, assumes a crucial role. This concept suggests that intelligence requires a body to interact with the environment; in this case, intelligent behaviour emerges from the interaction of brain, body and environment. This illustrates the existence of new fields of computer science, such as artificial life and evolutionary computing, that try to mimic biological concepts. In particular, artificial life is a discipline that studies natural life in artificial environments, e.g. through simulations using computer models, in order to understand such complex systems. Note that artificial life is neither equivalent to nor a subset of artificial intelligence: the latter is mostly concerned with perception, cognition and the generation of actions, while the former focuses on processes of evolution, reproduction, morphogenesis and metabolism.
This invaluable book is the first of its kind on "selforganizology", the science of self-organization. It covers a wide range of topics: the theory, principles and methodology of selforganizology, agent-based modelling, the basis of intelligence, ant colony optimization, fish/particle swarm optimization, cellular automata, spatial diffusion models, evolutionary algorithms, self-adaptation and control systems, self-organizing neural networks, catastrophe theory and methods, and the self-organization of biological communities. Readers will gain an in-depth and comprehensive understanding of selforganizology, with detailed background information provided for those who wish to delve deeper into the subject and explore the research literature. This book is a valuable reference for research scientists, university teachers, graduate students and advanced undergraduates in the areas of computational science, artificial intelligence, applied mathematics, engineering science, social science and the life sciences.
based organizations: the less capable individuals tend to follow those who are better at solving the problems they all face. We find that relatively simple rules lead to hierarchical self-organization, and the specific structures we obtain possess two of the most important features of complex systems: the simultaneous presence of adaptability and stability. In addition, the performance (success score) of the emerging networks is significantly higher than the average score the individuals would be expected to achieve without being allowed to copy the decisions of others. The results of our calculations are in agreement with a related experiment and can be useful both for designing the optimal conditions for constructing a given complex social structure and for understanding the hierarchical organization of biological structures of major importance, such as regulatory pathways or the dynamics of neural networks.
ADACOR innovates by introducing a dynamic adaptive control approach that draws on the principles of self-organization, a powerful concept found in several domains, such as biology (e.g., ant foraging and bird flocking), chemistry (e.g., the Belousov–Zhabotinsky reaction), physics (e.g., the second law of thermodynamics) and social organization (e.g., traffic and pedestrian movement in crowded environments). Basically, self-organization is a process of evolution in which the development of emergent, novel and complex structures takes place primarily through the system itself, normally triggered by internal forces. These forces require the integration of autonomy and learning capabilities within entities to reach, by emergence, a behavior that is not programmed or defined a priori (Massotte, 1995), providing the ability of an entity/system to adapt itself to the prevailing conditions of its environment (Thamarajah, 1998). With this in mind, the ADACOR dynamic adaptive control approach uses a self-organization model to combine the best features of hierarchical and heterarchical control approaches, i.e. using a hierarchical approach in the presence of stable operating conditions, and a more heterarchical approach in the presence of unexpected events and modifications. For this purpose, the adaptive control behaviour balances between the stationary and transient states, as illustrated in Fig. 1.
The self-organization phenomenon exists in a wide range of disciplines, extending from physics to biology [9, 10]. It has also attracted attention from computer science and is now a very active research area. Some nature-inspired case studies of self-organization in computer science have been presented in the literature. This reflects the vision of autonomic computing coined by IBM. They envisioned autonomic computing as a grand challenge and outlined four aspects at its core: self-configuration, self-optimization, self-healing, and self-protection. The requirement for these self-* capabilities in massively distributed systems has been discussed further in the literature. The definition of self-organization varies across disciplines, befitting their respective goals and criteria. In general, self-organization can be considered a process by which a system organizes itself automatically, i.e. without any intervention from outside sources [9, 12]. However, keeping outside sources completely out of the loop is still a research challenge. Even the vision of autonomic computing states: “a system should organize itself according to high-level objectives, and will collect and aggregate information to support decisions by human administrators”. This has been further outlined as: “Put simply, the autonomic paradigm seeks to reduce the requirement for human intervention in the management process through the use of one or more control loops that continuously reconfigure the system to keep its behavior within desired bounds”.
Self-organization is a universal mechanism in nature. Over the past thirty years, numerous phenomena, theories and methods concerning self-organization have been put forward around the world. The book Self-organization: Theories and Methods is published to present recent achievements in the theories and methods of self-organization. It covers such theories and methods as ant algorithms, particle swarm algorithms, artificial neural networks, motion and migration algorithms, the self-adaptive Kalman filter, finite state approximation, etc. Chapters are contributed by more than 20 scientists from China, Italy, Spain, Japan, Russia, Serbia, India and Turkey, working in mathematics, computational science, artificial intelligence, aeronautics and astronautics, automation and control, and the life sciences. It provides researchers with various aspects of the latest advances in self-organization and is a valuable reference for scientists, university teachers and graduate students in mathematics, the natural sciences, engineering science and social science.
Abstract. X-ray structural analysis (small-angle X-ray scattering, SAXS) is used to show that the structures obtained by self-organization of lead sulfide (PbS) quantum dots on a substrate are ordered arrays. Self-organization of the quantum dots occurs during slow evaporation of the solvent from a cuvette, which consists of a thin layer of mica with a Teflon ring on it. The positions of the peaks in the SAXS pattern are used to calculate the crystal lattice of the ordered structures obtained. These structures have a primitive orthorhombic crystal lattice with calculated lattice parameters a = 21.1 nm, b = 36.2 nm and c = 62.5 nm, and their dimensions are tens of micrometers. The spectral properties of the PbS QD superstructures and the kinetic parameters of their luminescence are investigated. The absorption band of the superstructures is broadened compared to that of the quantum dots in solution; the luminescence band is slightly shifted to the red region of the spectrum, while its bandwidth does not change much. The luminescence lifetime of the obtained structures is significantly decreased in comparison with isolated quantum dots in solution, but remains the same as for close-packed ensembles of lead sulfide quantum dots. Such superstructures can be used to produce solar cells with improved characteristics.
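The relation between the reported lattice parameters and the SAXS peak positions can be sketched with the standard formula for a primitive orthorhombic lattice, 1/d² = (h/a)² + (k/b)² + (l/c)² and q = 2π/d. This is a generic illustration using the abstract's stated parameters, not the authors' fitting procedure.

```python
import math

# Lattice parameters reported in the abstract, in nm
A, B, C = 21.1, 36.2, 62.5

def d_spacing(h, k, l):
    """Interplanar spacing d_hkl (nm) for a primitive orthorhombic lattice."""
    return 1.0 / math.sqrt((h / A) ** 2 + (k / B) ** 2 + (l / C) ** 2)

def q_peak(h, k, l):
    """Expected SAXS peak position q = 2*pi/d, in nm^-1."""
    return 2.0 * math.pi / d_spacing(h, k, l)

# The lowest-order reflections, e.g. (0,0,1), (0,1,0), (1,0,0),
# give the largest d-spacings and hence the smallest-angle peaks.
```

For instance, d(1,0,0) = a = 21.1 nm and d(0,0,1) = c = 62.5 nm, so the (0,0,1) reflection appears at the smallest scattering angle.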
In order to show the place of the bifurcations being investigated (those corresponding to the second, third, and fifth transitions in Table 4.1) in the general pattern of self-organization in dc glow microdischarges, we will briefly introduce the latter, referring to the literature for details. In the framework of the basic model, the problem admits a 1D solution describing a mode in which all the variables depend only on the axial variable. This mode exists at all values of the discharge current and may be termed the fundamental mode. There are also multidimensional modes which bifurcate from, and rejoin, the fundamental mode; the so-called second-generation modes. Figure 4.2 depicts the current–voltage characteristics (CVC) of the fundamental mode and the first five second-generation modes. ⟨j⟩ in this figure is the average current density evaluated over a cross section of the discharge vessel (which is proportional to the discharge current). The schematics illustrate the distributions of current density on the cathode surface associated with each mode. The points a_i and b_i designate bifurcation points.
How does order emerge from noise? How does higher complexity arise from lower complexity? Why do a number of open systems start interacting in a coherent way, producing new structures and building up cohesion and new structural boundaries? To answer these questions we need to make precise the concepts we use to describe open and complex systems and the basic driving forces of self-organization. We assume that self-organization processes are related to the flow and throughput of Energy and Matter and to the production of system-specific Information. These two processes are intimately linked: Energy and Material flows are the fundamental carriers of signs, which are processed by the internal structure of the system to produce system-specific structural Information (Is). So far, the present theoretical reflections focus on the emergence of open systems and on the role of Energy Flows and Information in a self-organizing process. Based on the assumption that Energy, Mass and Information are intrinsically linked and are fundamental aspects of the Universe, we discuss how they might be related to each other and how they are able to produce the emergence of new structures and systems. KEYWORDS: Complex Systems. Self-organization. Matter. Energy. Information.
The changeover to an electronic element base with transistor elements of nano-domain size (~10 nm) [1, 2] indicates that traditional technological approaches face a problem arising from fundamental physical limits, making progress in nanoelectronics critically difficult. This is dictated in many ways by the fact that the changeover to the nano-domain is accompanied by the manifestation of new physico-chemical effects, in particular self-organization and self-assembly, on whose practical use development engineers of both electronics and new nanostructured materials place great hopes.
The exact nature of α-catenin binding to the N-cad/β-catenin complex and the F-actin cytoskeleton has been a much debated topic. This study clearly shows that α-catenin spatial localization is sensitive to changes in contractility and to the boundary conditions dictated by N-cad topography. The stoichiometry of α-catenin's dynamic binding to the N-cad/β-catenin complex can be regulated by many factors. The simplest explanation of our results could be that one of the factors enhancing the dynamic binding of α-catenin to the N-cad complex is the degree of internal stress at the cell–cell junction. The spatial organization of adhesion complexes is sensitive to geometric cues and boundary conditions (corners) and may be responsible for the observed anisotropy of the myofibrillar structures. Cytoplasmic pools of α-catenin can assume a homodimeric complex, regulating F-actin bundle assembly and lamellipodial activity at the cell–cell junction [29,30,31]. It can be postulated that the enhanced α-catenin localization to the apices could be a result of increased lamellipodial activity in response to high stresses at the apices through N-cad adhesions, by analogy with cells cultured on Fn-coated micropatterns, which show enhanced lamellipodial dynamics at the apices. One can infer from the results presented in this paper that these α-catenin pools form as a direct result of force gradients generated at cell–cell junctions and allow further F-actin bundling locally. However, further studies will be required to show the magnitude of these forces and the extent to which α-catenin in these areas is homodimeric or in a heterodimeric complex with β-catenin. These results provide supporting evidence that the α-catenin/cadherin complex may serve as a key mechanosensory regulator of cardiac myocyte cytoskeletal structure. This single-cell standardized model system can also be used to identify other potentially interesting mechanosensory candidates, such as p120.
Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. The famous exploration–exploitation trade-off has been extensively studied in the area of reinforcement learning. In a Bayesian formulation this trade-off can be solved optimally, but the solution is computationally intractable. A more conceptual solution is to provide the agent with an intrinsic motivation [4,5] for focusing on certain things and thus constraining the exploration to a smaller space. To approach this problem in a more fundamental way, we consider mechanisms for goal-free exploration of the dynamical properties of a physical system, e.g. a robot. If the exploration is rooted in the agent in a self-determined way, i.e. as a deterministic function of internal state variables and not via a pseudo-random generator, it has a chance to escape the curse of dimensionality. Why? Because specific features of the system, such as constraints and other embodiment effects, can be exploited to reduce the search space. Thus an exploration strategy that takes the particular body and environment into account is vital for building efficient learning algorithms for high-dimensional robotic systems. But how can goal-free exploration be useful for actually pursuing goals? We show that a variety of coordinated
Our results come from simulations with a fixed rate of environmental change and a fixed value of the parameter x that measures selection strength. We show that population fitness, measured by population size, reaches a broad maximum, while the average genetic load reaches a minimum, for some intermediate range of the mutation rate at birth (model A1). Thus, nature appears to have self-organized its cellular error-correction machinery to ensure a mutation rate within some such range.
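The qualitative claim of an intermediate optimal mutation rate can be illustrated with a deliberately simple toy model, assumed here for illustration only (it is not the authors' model A1): a deterministic two-allele population in an environment whose favoured allele switches periodically. With no mutation the population cannot track the switches; with maximal mutation it is random; an intermediate rate tracks the environment while paying only a small mutational load.

```python
def avg_fitness(u, s=0.5, epoch=50, epochs=20, p0=1.0):
    """Time-averaged mean fitness under mutation rate u.

    p is the frequency of allele A; the favoured allele alternates every
    `epoch` generations, the disfavoured allele has fitness 1 - s, and
    mutation between alleles is symmetric at rate u per birth.
    """
    p, total, n = p0, 0.0, 0
    for e in range(epochs):
        a_favoured = (e % 2 == 0)
        for _ in range(epoch):
            wA = 1.0 if a_favoured else 1.0 - s
            wB = 1.0 - s if a_favoured else 1.0
            wbar = p * wA + (1 - p) * wB   # mean fitness this generation
            total += wbar
            n += 1
            p = p * wA / wbar              # selection step
            p = p * (1 - u) + (1 - p) * u  # symmetric mutation step
    return total / n
```

In this sketch avg_fitness(0.0) and avg_fitness(0.5) both come out near 1 − s/2 (stuck and random, respectively), while a small positive rate such as u = 0.02 does markedly better, mirroring the broad interior maximum reported above.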
A new method for image clustering with density maps derived from self-organizing maps (SOM) is proposed, together with a clarification of the learning processes during the construction of clusters. Simulation studies and experiments with remote-sensing satellite imagery data are conducted. The proposed SOM-based image clustering method shows much better clustering results for both the simulated and the real satellite imagery data, and the separability among clusters of the proposed method is 16% greater than that of the existing k-means clustering. In the experiments with a Landsat-5 TM image, more than 20,000 iterations are required for convergence of the SOM learning process.
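For readers unfamiliar with the underlying machinery, a minimal 1-D SOM can be sketched as below. This is a generic textbook-style implementation with arbitrarily chosen hyperparameters (unit count, decay schedules), not the density-map method of the paper.

```python
import numpy as np

def train_som(data, n_units=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D self-organizing map on `data` (n_samples x n_features).

    Each iteration finds the best-matching unit (BMU) for a sample and
    pulls the BMU and its index-neighbours toward the sample, with both
    the learning rate and the neighbourhood width decaying over time.
    """
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))        # weight vectors
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                     # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5         # decaying neighbourhood
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            d = np.arange(n_units) - bmu            # index distance to BMU
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))  # Gaussian neighbourhood
            w += lr * h[:, None] * (x - w)
    return w
```

After training, the density of data points mapped to each unit can be read off from BMU counts, which is the kind of information a density-map clustering step would then operate on.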
Another important issue to consider is the average amount of time typically required to reach a global behavioral consensus and to complete the process of structural self-optimization. The time taken for structural convergence will be briefly described in the discussion section below. Here we focus on the time scale of individual behaviors. We calculated the number of behavior updates taken to reach the final utility value for each of the ten runs shown in Figure 4.

Figure 4. Ten examples of self-organized constraint satisfaction before self-optimization. During each behavior update a representative in the network is randomly selected and allowed to adjust its behavior (to +1 or −1) if the new choice satisfies more of the constraints posed by its connections with all the others. Because a representative's received utility is calculated as the weighted sum of its satisfied constraints, priority is given to satisfying more important (more heavily weighted) connections. As representatives repeatedly optimize their choices, the network's sum of utilities U increases. However, as shown by these 10 independent trajectories starting from arbitrary initial conditions, this weak form of self-organization typically becomes trapped in one of several suboptimal behavioral configurations (U = 173.58 for the global optimum). Only the first 400 behavior updates are shown; after that, U stays the same until the end of the run in all cases.
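The behavior-update rule just described can be sketched as follows. The small four-agent network, its weights, and all helper names here are hypothetical illustrations, not the network studied in the paper; the sketch only shows why the sum of utilities U increases monotonically and can get stuck.

```python
import random

def utility(i, state, weights):
    """Weighted sum of agent i's satisfied constraints: a positive weight
    is satisfied by agreement (s_i * s_j = +1), a negative one by disagreement."""
    return sum(w * state[i] * state[j] for j, w in weights[i].items())

def run_updates(state, weights, n_updates=400, seed=1):
    """Asynchronous updates: a random agent flips its behavior (+1/-1)
    only if the flip satisfies more (weighted) constraints."""
    rng = random.Random(seed)
    agents = list(state)
    history = []
    for _ in range(n_updates):
        i = rng.choice(agents)
        flipped = dict(state)
        flipped[i] = -state[i]
        if utility(i, flipped, weights) > utility(i, state, weights):
            state[i] = flipped[i]
        history.append(sum(utility(k, state, weights) for k in state))
    return state, history

def make_weights(edges):
    """Symmetric weighted constraint network from (i, j, weight) triples."""
    w = {}
    for i, j, wt in edges:
        w.setdefault(i, {})[j] = wt
        w.setdefault(j, {})[i] = wt
    return w
```

Because each accepted flip raises the flipping agent's own utility, and each pairwise term appears in both endpoints' utilities, every accepted flip raises U by twice the agent's gain; hence U is non-decreasing but can halt at a local optimum, exactly the trapping behavior the trajectories in Figure 4 display.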