Abstract—Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
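A minimal sketch of the growth-plus-preferential-attachment mechanism described above (a simplified process in the spirit of the Barabási–Albert model, one edge per new vertex; the function name and parameters are ours, for illustration only):

```python
import random

def preferential_attachment(n, seed=None):
    """Grow a graph one vertex at a time; each new vertex attaches to one
    existing vertex chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}            # start from a single edge 0--1
    edges = [(0, 1)]
    attachment = [0, 1]              # each vertex repeated once per unit of degree
    for new in range(2, n):
        target = rng.choice(attachment)   # preferential attachment step
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
        attachment.extend([new, target])
    return degree, edges

degree, edges = preferential_attachment(10_000, seed=42)
# The degree distribution is heavy-tailed: a few hubs accumulate many edges.
print(max(degree.values()))
```

Sampling a vertex from the `attachment` list (one entry per unit of degree) is what makes the choice probability proportional to degree; this is what produces the stationary power-law tail.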

Lymphocyte-mediated immunity is a remarkable characteristic of jawed vertebrate organisms. Immunology has made spectacular advances in the last decades in terms of defining the genetic, molecular, and cellular components involved in these phenomena. In this work we propose a minimal model for immune regulation in a lymphocyte network. Immunological activity is based on the activation of T and B lymphocytes generated by special ("combinatorial") somatic processes of gene rearrangement in which each lymphocyte acquires a unique membrane receptor. If activated, a lymphocyte may multiply and form cell clones with a variable number of cells; if it is not activated, it will likely die by apoptosis without ever dividing. Binding of the specific receptor (BCR or TCR) is important for cell activation and in determining whether or not the lymphocyte will survive and expand. B lymphocytes may further turn into plasma cells, which are short-lived cells that secrete soluble forms of the BCR to the extracellular space, where they are known as immunoglobulins (Ig).

GRNsight is a web application and service for visualizing models of small- to medium-scale gene regulatory networks (GRNs). A gene regulatory network (GRN) consists of genes, transcription factors, and the regulatory connections between them, which govern the level of expression of mRNA and protein from genes. Our group has developed a MATLAB program to perform parameter estimation and forward simulation of the dynamics of an ordinary differential equation model of a medium-scale GRN with 21 nodes and 31 edges (Dahlquist et al., 2015; http://kdahlquist.github.io/GRNmap/). GRNmap accepts a Microsoft Excel workbook as input, with multiple worksheets specifying the different types of data needed to run the model. For compactness, the GRN itself is specified by a worksheet that contains an adjacency matrix where regulators are named in the columns and target genes in the rows. Each cell in the matrix contains a "0" if there is no regulatory relationship between the regulator and target, or a "1" if there is a regulatory relationship between them. The GRNmap program then outputs the estimated weight parameters in a new worksheet containing an adjacency matrix where each "1" is replaced with a real number, the weight parameter, representing the direction (positive for activation, negative for repression) and magnitude of the influence of the transcription factor on its target gene (Dahlquist et al., 2015). Although MATLAB has graph layout capabilities, we wanted a way for novice and experienced biologists alike to quickly and easily view the unweighted and weighted network graphs corresponding to the matrix without having to create or modify MATLAB code. Given that our user base included students in courses using university computer labs where the installation and maintenance of software is subject to logistical considerations sometimes beyond our control, we enumerated the following requirements for a potential visualization tool.
The tool should:
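The adjacency-matrix convention described above can be sketched as plain data structures; the gene names, matrix entries, and weight values below are hypothetical, not taken from the GRNmap model:

```python
# Regulators label the columns, target genes label the rows (as in the
# GRNmap worksheet convention). All names and values are illustrative.
regulators = ["CIN5", "GLN3", "ZAP1"]    # column headers (hypothetical)
targets = ["CIN5", "HMO1", "ZAP1"]       # row headers (hypothetical)

# Unweighted matrix: 1 = regulatory relationship, 0 = none.
adjacency = [
    [0, 1, 0],    # CIN5 is regulated by GLN3
    [1, 0, 1],    # HMO1 is regulated by CIN5 and ZAP1
    [0, 1, 0],    # ZAP1 is regulated by GLN3
]

# After parameter estimation each 1 is replaced by a signed weight:
# positive = activation, negative = repression.
weights = [
    [0.0,  1.7, 0.0],
    [-0.9, 0.0, 0.4],
    [0.0,  2.1, 0.0],
]

def edge_list(matrix, rows, cols):
    """Convert an adjacency matrix into (regulator, target, value) edges."""
    return [(cols[j], rows[i], matrix[i][j])
            for i in range(len(rows))
            for j in range(len(cols))
            if matrix[i][j] != 0]

print(edge_list(weights, targets, regulators))
```

Converting the matrix to an edge list is the natural first step for drawing either the unweighted or the weighted network graph.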

Centrality in graphs is widely used to measure the importance of a node in a graph, especially in SNA (Le Merrer and Trédan, 2009). Our recommender engine implements these centralities to measure the relevance of people in the social network. Some centrality measures, like closeness and betweenness, are based on the calculation of the shortest distance to reach all other nodes in the graph. Our algorithms to calculate centralities are applied to the network of persons so we can infer the most popular nodes (degree), the capacity of a node to reach any other in the network (closeness), and the leaders interconnected within a neighborhood in the graph (betweenness) (Newman, 2005). Degree centrality counts the direct relationships a node has, and thus the nodes that are in direct contact with it. Closeness of a node is defined as the inverse of the sum of its shortest-path distances to all other nodes, and betweenness as the number of shortest paths between all pairs of vertices that pass through that node. Centrality measures are calculated over the network at a topological level, given a scale-free graph of persons. Thus, these measures do not exploit our weighted graph; they are applied only at a social-network level. Terms and objects of interest can be seen as sub-graphs of the global network that can be exploited by using flow-based centrality measures.
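Two of the three measures can be sketched in a few lines over an unweighted adjacency dict (betweenness, usually computed with Brandes' algorithm, is omitted for brevity; the toy network and names are ours):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path (hop) distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def degree_centrality(adj, node):
    """Number of direct relationships of a node."""
    return len(adj[node])

def closeness_centrality(adj, node):
    """Inverse of the sum of shortest-path distances to all other nodes
    (assumes a connected graph)."""
    dist = bfs_distances(adj, node)
    return 1.0 / sum(d for d in dist.values() if d > 0)

# Hypothetical 5-person network: "a" is the hub.
adj = {
    "a": ["b", "c", "d"],
    "b": ["a"],
    "c": ["a", "e"],
    "d": ["a"],
    "e": ["c"],
}
print(degree_centrality(adj, "a"), closeness_centrality(adj, "a"))
```

In this toy network "a" has the highest degree and the shortest total distance to everyone else, so it would rank first on both measures, matching the intuition of "most popular" and "best placed to reach others".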

For this strategy to be effective in flies, the animals must not only exhibit consistent saccade direction in order to spiral, but must also exhibit consistent saccade amplitude. Mean saccade angle in Drosophila is 90 degrees [1]. Our own numerical simulations (Fig 7a) of the searching efficiencies of fruit flies show that the saccade angle should be constant at −90 and +90 degrees. This is in stark contrast with the original model, in which the predicted angles between successive flight segments are randomly and uniformly distributed between 0° and 360° [4]. The previous report considered an idealised model in which a searcher moves on a straight line towards the nearest target if the target site lies within a 'direct vision' radius, r; otherwise the searcher chooses a direction at random, and a travel distance, l, is drawn from a Lévy distribution. The simulation moves incrementally towards the new location whilst seeking targets within detection radius r. If no target is sighted, the searcher stops after traversing the distance l and chooses a new direction and a new distance; otherwise it proceeds to the target [4]. Here we modified the model so that the angle between successive straight-line segments is constant. Searching efficiencies were determined from the simulated trajectories of 10^5 searchers that began their searches in the immediate vicinity of a target. When target density is low (L/r ≥ 100), the mean distance travelled, and so the energy expenditure, is minimal when μ ≈ 2 and when the turning angle is equal to or larger than 90° (Fig. 7). This new result complements that of Viswanathan et al. [4] and may account for the 90° saccade amplitudes exhibited by Drosophila [1,23].
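A minimal sketch of the modified search trajectory under stated assumptions: fixed ±90° turns, and a Pareto-tail step-length distribution P(l) ∝ l^(−μ) as a stand-in for the Lévy distribution. Parameter names and defaults are ours:

```python
import math
import random

def search_path(n_steps, mu=2.0, l_min=1.0, turn_deg=90.0, seed=0):
    """2D trajectory with power-law step lengths (inverse-transform
    sampling of P(l) ~ l^-mu, l >= l_min) and a fixed saccade amplitude:
    each turn is exactly +turn_deg or -turn_deg."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        u = rng.random()
        step = l_min * u ** (-1.0 / (mu - 1.0))   # Pareto-tail sample
        heading += math.radians(turn_deg) * rng.choice([-1, 1])
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

path = search_path(1_000)
print(len(path))
```

Target detection within radius r is left out here; a full efficiency simulation would truncate each step as soon as a target enters the detection disc, as in the model of Ref. [4].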

Genome-scale models of metabolism have so far been analyzed only with the constraint-based modelling philosophy, and there have been several genome-scale gene-protein-reaction models. Research on modelling the energy metabolism of organisms began only in recent years, however, and studies of metabolic weighted complex networks are rare in the literature. We have carried out three investigations based on the complete model of E. coli's energy metabolism. We first constructed a metabolic weighted network using the rates of free energy consumption within metabolic reactions as the weights. We then analyzed some structural characteristics of the metabolic weighted network that we constructed. We found that the distribution of the weight values was uneven: most of the weight values were zero, reactions with very large weight values were rare, and the relationship between w (weight values) and v (flux values) was not linearly correlated. Finally, we investigated the equilibrium of free energy for the energy metabolism system of E. coli. We found that E out (free
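The weight-distribution analysis described above can be sketched as follows; the reaction list, weights, and flux values are hypothetical, not taken from the E. coli model:

```python
# Hypothetical weighted reaction list: (reaction id, weight w = free-energy
# consumption rate, flux v). Values are illustrative only.
reactions = [
    ("R1", 0.0, 1.2), ("R2", 0.0, 0.4), ("R3", 0.0, 2.1),
    ("R4", 0.0, 0.9), ("R5", 0.3, 1.1), ("R6", 0.7, 0.2),
    ("R7", 14.5, 0.8),   # a rare reaction with a very large weight
]

weights = [w for _, w, _ in reactions]
fluxes = [v for _, _, v in reactions]

# Unevenness of the weight distribution: fraction of zero weights.
zero_fraction = sum(1 for w in weights if w == 0.0) / len(weights)

def pearson(xs, ys):
    """Sample Pearson correlation, to probe whether w and v are
    linearly related."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(zero_fraction, pearson(weights, fluxes))
```

A Pearson coefficient near zero on the real data would be one way to substantiate the claim that w and v are not linearly correlated.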

robustness of a scale-free network when links are irreversibly removed after failing. Due to inherent characteristics of the fuse network model, the sequence of link removals is deterministic and conditioned on fuse tolerance and the connectivity of its ends. This is a different situation from the classical robustness analysis of complex networks, where they are usually tested under random failures and deliberate attacks on nodes. The use of this system to study the fracture of elastic material brought some interesting results.

Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear, but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same-size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons, analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
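The reported connection density can be checked with a quick back-of-the-envelope computation, assuming directed connections with self-connections excluded (our assumption; small differences from the reported 2.45% depend on exactly which pairs count as possible):

```python
# Connectome size as reported in the abstract above.
n_neurons = 2_616
n_connections = 167_114

# All possible directed connections, excluding self-connections (assumption).
possible = n_neurons * (n_neurons - 1)
density = n_connections / possible
print(f"{density:.2%}")
```

This lands within rounding distance of the stated 2.45%, confirming the arithmetic behind the "low overall connection density" claim.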

In this chapter, we investigate a random graph model in a class known as GLP in the literature of Computer Science and Physics. Informally, it is a modification of the model proposed by Barabási–Albert in [2], in which links among existing vertices are allowed. This alteration has positive consequences in the sense that the model outperforms other popular models when the task is predicting or mimicking real-world complex networks (in [30], the authors give a quantitative version of this statement).

1) Selection of project with network-growth model: When the agent selects a project, some strategy for a network-growth model is required. In this paper, the Barabási–Albert (BA) model with dynamic fitness [8] is adopted to decide the behaviors of agents. The BA model is an algorithm for generating random scale-free networks based on the preferential attachment mechanism. Scale-free networks are widespread in natural and human-made systems such as social communities. In the BA model, an edge is connected with high probability to a node with a high degree [9]. The BA model with dynamic fitness allows the fitness of a node to change dynamically [8]. The degree of a software project is the number of agents who have worked on its development, and a minor agent selects a software project based on this model. In this model, the linkage to another agent is implemented using the BA model, but no new node is added.
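A sketch of fitness-weighted preferential attachment over a fixed node set (no node growth, as in the variant described above). How fitness enters the attachment probability and how it decays are our assumptions for this sketch, not the rule from Ref. [8]:

```python
import random

def pick_target(degree, fitness, rng):
    """Choose a node with probability proportional to degree * fitness,
    a common way of combining preferential attachment with fitness
    (assumption for this sketch)."""
    nodes = list(degree)
    scores = [degree[n] * fitness[n] for n in nodes]
    return rng.choices(nodes, weights=scores, k=1)[0]

rng = random.Random(7)
# Fixed set of agents with hypothetical initial degrees and fitness values.
degree = {"a": 1, "b": 1, "c": 1, "d": 1}
fitness = {"a": 1.0, "b": 0.5, "c": 2.0, "d": 1.0}

edges = []
for _ in range(50):
    u = pick_target(degree, fitness, rng)
    v = pick_target(degree, fitness, rng)
    if u != v:
        edges.append((u, v))
        degree[u] += 1
        degree[v] += 1
        # Dynamic fitness: each attachment slightly depletes fitness
        # (illustrative decay rule, our assumption).
        fitness[u] *= 0.95
        fitness[v] *= 0.95

print(len(edges), degree)
```

Note that the node set never grows; only the edges and the (dynamic) fitness values change, mirroring the "linkage via BA, but no new node is added" behavior.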

The existence of higher order structures like modules and communities is a signature of a non-random system and provides insights into their functional organization [18]. A module represents a densely connected group of nodes that is, however, weakly connected to the remaining network. The presence of modular structure may drastically change dynamical processes that occur in networks. Spreading processes like virus epidemics and synchronization strongly depend on network modularity [19]. Moreover, modularity itself is heterogeneous, and modules may vary in edge density, size and structural features in general [20]. This large variety of features and patterns makes the detection of modularity an important albeit challenging problem. In this work, we focus on a network that combines protein functionality information with drug interactions, in an attempt to unveil new strategies and structural features to combat complex diseases. Proteins interact with each other and define protein complexes. Protein complexes have a rich variety of functions in cells and play a key role in many human disorders. However, in spite of their importance, network studies based on human protein complexes are still lacking. Moreover, an investigation of the interactions between all available drugs and all discovered protein complexes has not, to our knowledge, been attempted. The existence of modularity in this bipartite graph may lead to new strategies for dealing with key diseases from a systemic point of view, shifting the focus from targeting individual genes or proteins to disrupting protein complex formation. It is worth noting, however, that several works have investigated protein complex networks in model organisms such as yeast [21,22]. The authors used an integrative approach in the context of gene association studies.
In contrast, here we focus on human protein complex and drug associations and our methodology relies on module identification via maximization of a modularity objective function.
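Module identification via modularity maximization rests on Newman's modularity score Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ). A minimal sketch evaluating Q for a given partition of a small undirected graph (the graph and partition are toy examples, not our protein-drug network):

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as an adjacency
    dict and a partition given as a list of node sets."""
    degree = {u: len(vs) for u, vs in adj.items()}
    two_m = sum(degree.values())          # 2m = sum of degrees
    label = {u: c for c, group in enumerate(communities) for u in group}
    q = 0.0
    for u in adj:
        for v in adj:
            if label[u] != label[v]:
                continue                   # delta(c_i, c_j) term
            a_uv = 1.0 if v in adj[u] else 0.0
            q += a_uv - degree[u] * degree[v] / two_m
    return q / two_m

# Toy graph: two triangles joined by a single bridge edge.
adj = {
    1: [2, 3], 2: [1, 3], 3: [1, 2, 4],
    4: [3, 5, 6], 5: [4, 6], 6: [4, 5],
}
good_split = [{1, 2, 3}, {4, 5, 6}]
print(modularity(adj, good_split))
```

Splitting at the bridge scores higher than treating the whole graph as one community, which is exactly what a modularity-maximizing detector exploits.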

The results showed that the parameter most affected by the acquisition parameters is the pseudo-diffusion (D*). It was also shown that the larger the number of b-values used, the better the estimates of the IVIM-DWI parameters. However, beyond a certain number of b-values, and for low signal-to-noise ratio (SNR), the effect of noise in the extra b-values counteracts the benefit of using more b-values. It was also shown that the sequence of b-values used for sampling strongly influences the IVIM-DWI estimates. We conclude that the conventionally used b-value sequence does not provide optimal IVIM-DWI estimates. In addition, the results show that different weights should be assigned to each IVIM-DWI parameter to obtain a better estimate. It was also observed that the influence of T2 relaxation should be taken into account in the Intravoxel Incoherent Motion – Diffusion Weighted Imaging (IVIM-DWI) model. Finally, our study showed that in the presence of steatosis the D* value decreases significantly, while D decreases only slightly. However, the differences between steatosis patients and healthy subjects are strongly influenced by the number of b-values used, leading to different diagnoses depending on that number.
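The IVIM-DWI parameters discussed above enter the standard biexponential signal model S(b)/S₀ = f·exp(−b·D*) + (1−f)·exp(−b·D), where f is the perfusion fraction. A sketch with illustrative parameter values (not taken from this study):

```python
import math

def ivim_signal(b, s0, f, d_star, d):
    """Biexponential IVIM model: the perfusion fraction f decays with the
    pseudo-diffusion coefficient D*, the remainder with true diffusion D."""
    return s0 * (f * math.exp(-b * d_star) + (1 - f) * math.exp(-b * d))

# Illustrative liver-like values (D, D* in mm^2/s; b in s/mm^2).
s0, f, d_star, d = 1.0, 0.25, 0.05, 0.001
b_values = [0, 10, 20, 50, 100, 200, 400, 800]
signals = [ivim_signal(b, s0, f, d_star, d) for b in b_values]
print(signals)
```

Because D* is much larger than D, the perfusion term has already decayed at moderate b-values; this is why the choice and spacing of low b-values dominates the quality of the D* estimate.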

Abstract. The first author has associated in a natural way a profinite group to each irreducible subshift. The group in question was initially obtained as a maximal subgroup of a free profinite semigroup. In the case of minimal subshifts, the same group is shown in the present paper to also arise from geometric considerations involving the Rauzy graphs of the subshift. Indeed, the group is shown to be isomorphic to the inverse limit of the profinite completions of the fundamental groups of the Rauzy graphs of the subshift. A further result involving geometric arguments on Rauzy graphs is a criterion for freeness of the profinite group of a minimal subshift based on the Return Theorem of Berthé et al.

In summary, we have constructed a condensed matter analog model for fluctuations of the geometry of spacetime due to quantum gravity effects. The model leads to the interpretation that such classical background fluctuations induce effective interactions on free fields. We expect that such sound cone fluctuations can be tested in suspensions like colloids, since excitations of acoustic modes in these environments are described by random wave equations such as the one we discussed in this Letter [19]. A possible course of action is to recall Ref. [14], where it is shown that a nonlinear dispersion relation generated by a random wave equation induces peaks in the density of states. Another direction is the analysis of localization of acoustic waves. Such an issue is related to the problem of how sound cone fluctuations in our analog model can change the variation in the flight time of pulses. The study of wave localization using diagrammatic perturbation methods was presented in Refs. [20–23]. It is known that in one dimension any disorder is strong enough to induce exponential localization of eigenmodes. On the other hand, a random fluid with a supersonic acoustic flow is also an analog model that opens the possibility of discussing the effect of the fluctuation of the geometry on the Hawking radiation. Specific examples studying Anderson localization in the analog model of the present Letter, as well as the implementation of randomness in an acoustic black hole, will be reported elsewhere.

while the estimated relative median error is typically about 1/5 of the interval width. These values can also be distinguished in Fig. 13 for X = 1 km, as this error bar represents the averaged errors of all OFs with this X value. The rising error bar values with the WT distance X arise from the decreasing TSGB width at higher distances between DVOR and WT, caused by the free space loss. According to Eqs. (3) and (4), this leads to the rising relative error values.

(TINEO et al., 2016; LOPES et al., 2016), and egg production (CRUZ et al., 2016). These models allow adjusting the random production curve of each animal, expressed as a deviation from the mean curve of the population or group of animals (BONAFÉ et al., 2011). The regressions are adjusted as functions of the production period using ordinary polynomials or other linear functions, and model trajectories for the population mean (fixed regressions) and for each animal (random regressions). Additionally, they provide estimates of genetic parameters at any point along the egg production curve within the interval in which the measurements were obtained. The prediction of these values for egg production is of great economic importance and should be considered when selecting laying hens (VENTURINI, 2012).

In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.

For each input 2D image, typical pre-processing steps such as background subtraction, noise removal and silhouette extraction are performed. We then normalize the silhouette centered inside a 128x128 bounding box while preserving the aspect ratio of the original silhouette. A normalized silhouette is translation and scale invariant. Most systems in the literature extract shape features from binary silhouettes. However, because binary silhouettes contain a lot of redundant inter-pixel information, silhouette-based features generally tend to be more prone to 'over-fitting' in the recognition phase. Furthermore, most binary silhouettes can "losslessly" be represented by their contours. Hence, the proposed system extracts features from the contour of the silhouette. A contour is a sequence of vertices of the silhouette found through a contour extraction algorithm. We resample the contour to have exactly 230 points. We then use a chord distribution to represent 2D poses [13]. We do this by first calculating all the pair-wise distances between points on the contour and then building a distance histogram as shown in Fig 2. We use 50 bins of length 4 units. We then normalize the histogram by dividing the length of each bar by the length of the highest bar. The normalized chord histogram is rotation, scale and translation invariant. Each pose is essentially represented by a feature vector
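The chord-distribution step can be sketched as follows; the point count (230), bin count (50), and bin length (4) follow the description above, while the sample contour itself is hypothetical:

```python
import math

def chord_histogram(contour, n_bins=50, bin_len=4.0):
    """Histogram of all pairwise distances between contour points,
    normalized by the tallest bar (rotation/scale/translation invariant
    after the normalization described in the text)."""
    bins = [0] * n_bins
    for i in range(len(contour)):
        for j in range(i + 1, len(contour)):
            d = math.dist(contour[i], contour[j])
            k = min(int(d // bin_len), n_bins - 1)   # clamp overflow
            bins[k] += 1
    peak = max(bins) or 1
    return [b / peak for b in bins]

# Hypothetical resampled contour: 230 points on a circle of radius 40.
contour = [(40 * math.cos(2 * math.pi * t / 230),
            40 * math.sin(2 * math.pi * t / 230)) for t in range(230)]
hist = chord_histogram(contour)
print(len(hist), max(hist))
```

The resulting 50-dimensional vector is the per-pose feature vector referred to in the text.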

Generally, valuation models can be divided into two main categories: Absolute Valuation and Relative Valuation. Under each category there is a wide range of models. In order to determine the fair value of Apple's stock, several models are applied, including the FCFF model, the FCFE model, the DDM, the Residual Income Model and the Multiples Valuation Model.
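As an illustration of the simplest of these, a constant-growth (Gordon) form of the DDM can be sketched as follows; the dividend, required return, and growth figures are hypothetical, not Apple's:

```python
def gordon_ddm(next_dividend, required_return, growth):
    """Constant-growth DDM: fair value = D1 / (r - g), valid only for r > g."""
    if required_return <= growth:
        raise ValueError("Gordon growth model requires r > g")
    return next_dividend / (required_return - growth)

# Hypothetical inputs: D1 = 1.00 per share, r = 9%, g = 4%.
print(gordon_ddm(1.00, 0.09, 0.04))
```

The FCFF and FCFE models discount firm-level and equity-level free cash flows with the same present-value logic, substituting the relevant cash flow for the dividend.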

Following the initialization of many variables (including but not limited to the matrices and vectors that store the X matrix, Y matrix and e, the number of variables (p), and the significance matrix) and the setup of data from the input file, the main part of the simulation program begins. Since the first experiment has no data on which to base a design, this pilot experiment is randomly generated. The simulation program is designed to handle a variety of data. If the number of variables is over 50, the first experiment is selected to have 10 observations; otherwise, the first experiment has between five and nine observations. The number of pilot observations varies between 0 and 10 as determined by the number of variables. Each value in the pilot is created by generating a random number between 0 and 10 and then dividing it by the p value. The resulting set is neither normalized nor orthogonal. After the pilot experiment is generated, the random number generator is changed depending on which method is used, in order to allow for the same pilots while the rest of the generated numbers differ. From here the first set of the dependent Y vector is calculated along with the associated error vector (ε). With the initial data generated, the real calculations can begin.
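The pilot-generation rule described above can be sketched as follows; the function and variable names are ours, and the observation-count rule follows the description in the text:

```python
import random

def generate_pilot(p, seed=None):
    """Randomly generated pilot experiment: the observation count depends
    on the number of variables p, and each value is a uniform draw in
    [0, 10] divided by p (as described in the text)."""
    rng = random.Random(seed)
    n_obs = 10 if p > 50 else rng.randint(5, 9)
    return [[rng.uniform(0, 10) / p for _ in range(p)]
            for _ in range(n_obs)]

pilot = generate_pilot(60, seed=1)
print(len(pilot), len(pilot[0]))
```

Seeding the generator separately per method, as the text describes, keeps the pilot identical across methods while letting every subsequent draw differ.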
