Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

As of January 2006, the TIGR Comprehensive Microbial Resource [1] contained 259 bacterial and 23 archaeal sequenced genomes, and the GOLD database [2] listed 987 ongoing prokaryotic sequencing efforts. The picture emerging from metagenomic [3] and environmental sequencing [4] efforts is that the number of sequenced genomes will surge in the near future. In order to further our understanding of these organisms, it will be necessary to reconstruct their various biochemical reaction networks. First will be metabolism, which is arguably the most basic function that a cell performs. After metabolic reconstruction [5], the second most feasible reconstruction will be that of transcriptional regulatory networks (TRNs). These regulatory reconstructions will require methods to systematically, comprehensively, and efficiently reconstruct TRNs for which little data exist. Initial work [6–12] on systematic TRN reconstruction has been performed. These pioneering efforts span the range from the theoretical to combined computational and experimental iterative methods, and they address many of the important issues in TRN reconstruction. No single method is available, though, that iterates between computational and experimental phases, utilizes a dynamic modeling framework, has a mechanism for incorporating probabilistic data derived from any source, and explores all of the ways that a network can be activated by different growth environments. All of these aspects are relevant to the "open question" of "whether automated experimental design can be useful in a large and poorly characterized biological system with noisy data" [7]. Since the functional state of a TRN is a direct consequence of its environment, an experi-

Metabolic constraint-based refinement of transcriptional regulatory networks.

There is a strong need for computational frameworks that integrate different biological processes and data types to unravel cellular regulation. Current efforts to reconstruct transcriptional regulatory networks (TRNs) focus primarily on proximal data such as gene co-expression and transcription factor (TF) binding. While such approaches enable rapid reconstruction of TRNs, the overwhelming combinatorics of possible networks limits identification of mechanistic regulatory interactions. Utilizing growth phenotypes and systems-level constraints to inform regulatory network reconstruction is an unmet challenge. We present our approach Gene Expression and Metabolism Integrated for Network Inference (GEMINI) that links a compendium of candidate regulatory interactions with the metabolic network to predict their systems-level effect on growth phenotypes. We then compare predictions with experimental phenotype data to select phenotype-consistent regulatory interactions. GEMINI makes use of the observation that only a small fraction of regulatory network states are compatible with a viable metabolic network, and outputs a regulatory network that is simultaneously consistent with the input genome-scale metabolic network model, gene expression data, and TF knockout phenotypes. GEMINI preferentially recalls gold-standard interactions (p-value = 10^−172), significantly better than using gene expression alone. We applied

The transcriptional regulatory network of Mycobacterium tuberculosis.

Using an approach similar to that introduced by Alon and coworkers [15,16], we first registered the number of appearances of each of the subgraphs of size 3 and 4 in the M.tb network (see Materials and Methods). These numbers, even normalized by the total number of registered graphs, do not say much about the relevance of the corresponding motifs, since they are strongly biased by the macro-scale features of the network. For example, the fat-tailed connectivity distribution typical of scale-free networks [13] makes single input modules appear many more times than any other motif. Therefore, to get a better descriptor of motif significance (i.e., whether or not motifs are more or less present than usual), we have to compare motif appearance with a null model, namely with the frequency of motifs that comes out in an ensemble of suitably randomized networks. We used the approach first suggested in [20]. It consists of generating, from the initial system, networks that preserve the connectivity sequence of the original one. To this end, we implement a switching algorithm that preserves not only the number of incoming and outgoing links of each node, but also the number of mutual links where these occur in the original TR network (see Materials and Methods). This kind of randomizing procedure has been the subject of intense research in recent years, and besides the method used in this paper, there are other alternative randomization schemes [16,20,21].
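
The switching step described above can be sketched in a few lines of Python (a minimal illustration, not the paper's implementation; function and variable names are hypothetical). Two directed links are picked at random and their endpoints exchanged, so every node keeps its in-degree and out-degree:

```python
import random

def switch_randomize(edges, n_swaps, rng=random):
    """Degree-preserving randomization of a directed network.

    Repeatedly pick two edges (a, b) and (c, d) and rewire them to
    (a, d) and (c, b); each node keeps its in- and out-degree.
    """
    edges = list(edges)
    edge_set = set(edges)
    done = 0
    while done < n_swaps:
        (a, b), (c, d) = rng.sample(edges, 2)
        # Reject swaps that would create self-loops or duplicate edges.
        if a == d or c == b or (a, d) in edge_set or (c, b) in edge_set:
            continue
        i, j = edges.index((a, b)), edges.index((c, d))
        edges[i], edges[j] = (a, d), (c, b)
        edge_set.discard((a, b)); edge_set.discard((c, d))
        edge_set.add((a, d)); edge_set.add((c, b))
        done += 1
    return edges
```

Motif counts on an ensemble of such randomized networks then serve as the null model against which the observed counts are compared.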

Acceptance testing of a CT scanner with a knowledge-based iterative reconstruction algorithm

During the last decade, the range of computed tomography (CT) clinical applications has been growing constantly, as has the number of CT procedures carried out annually. This has raised concerns about the increasing radiation dose delivered to patients. Currently, the medical community seeks to employ doses that are as low as possible while simultaneously maintaining image quality (the ALARA principle). For this reason, manufacturers have been developing technological improvements that allow the doses delivered to the patient to be optimized. Examples of these technological advances include the introduction of more efficient detection systems with higher signal-to-noise ratios (e.g., Gemstone Clarity from General Electric, PureVision from Toshiba, Stellar from Siemens, NanoPanel Prism from Philips [1]), automatic exposure control systems, and iterative reconstruction algorithms such as ASIR from General Electric, ADIR from Toshiba, IRIS from Siemens, and iDose4 [2] and IMR from Philips [3].

Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.

"one" LOR or the DRF to compute its response to the uniform activity distribution in the whole imaging space. As a result, to construct the SRM, we only need to compute the detector response functions for 60 p-planes, whose index values are within the range [0, 59], and 3 voxels in each p-plane [7]. The detector head has 104 × 72 pixels and was extended to 208 × 208 so that the symmetries are applicable to all elements in the DHAPET system matrix. We exploited the symmetry property and computed the system response matrices, which were stored as sparse matrices to facilitate access during reconstruction. Note that subject scattering, positron range, and photon non-collinearity are not included. Also, random events were excluded and the scanner dead-time was ignored.
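
As a generic illustration of why sparse storage of the system response matrix pays off (a pure-Python sketch, not the authors' GPU code; names are hypothetical), each row can be kept as (voxel index, value) pairs so that the forward projection touches only nonzero elements:

```python
def to_sparse(rows):
    """Store a system response matrix as per-row (index, value) pairs,
    keeping only the nonzero entries of each row (one row per LOR)."""
    return [[(j, v) for j, v in enumerate(row) if v != 0.0] for row in rows]

def forward_project(srm, activity):
    """Forward-project an activity image: each LOR's expected count is a
    dot product that visits only the stored nonzero elements."""
    return [sum(v * activity[j] for j, v in row) for row in srm]

# Tiny stand-in for one p-plane's response: 2 LORs over 3 voxels.
dense = [[0.0, 2.0, 0.0],
         [1.0, 0.0, 3.0]]
srm = to_sparse(dense)
projection = forward_project(srm, [1.0, 1.0, 1.0])
```

Symmetry then means only one representative slab of such rows has to be computed and stored; the rest are index permutations of it.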

Automated Identification of Core Regulatory Genes in Human Gene Regulatory Networks.

While our knowledge of bacterial and yeast GRNs is fairly comprehensive, human GRNs have only been partially reconstructed. Recently, a core human GRN interconnecting 475 TFs was reported separately for 41 different tissue types [58] by combining in silico predicted TF binding sites and in vivo DNaseI footprints. A human GRN was also reconstructed from ENCODE data [13]. In the present study we reconstructed a network of human genes and microRNAs using experimental TF target and in silico predicted miRNA target information. The TF target information obtained from ENCODE and HTRIdB covered only 329 out of 1374 TF nodes in our network and hence target information for more than two-thirds of the TFs was missing. Our analysis based on an incomplete network is prone to errors. Furthermore, TF target information obtained from ENCODE and HTRIdB is not tissue specific and hence it is only approximately correct when used for a specific tissue or cell line. Another source of error in our network reconstruction is that biochemical binding of a TF in the promoter region of a gene does not necessarily translate to a functional interaction between the TF and the target gene. Functional interaction requires additional criteria to be fulfilled such as the position of TF binding relative to the transcription start site and the context of other TF binding sites in its vicinity. Some proprietary databases contain information of functional interaction between TFs and their target genes compiled from published literature. This information can in fact be more accurate for delineating true interactions between TFs and their target genes. Despite all of these limitations, our network was able to explain gene expression with up to 70% accuracy (MCC = 0.4), which was significant against randomization tests. We are optimistic that prediction accuracy can be substantially improved as more extensive and accurate data of TF targets becomes available in the near future.

New evidence for balancing selection at the HLA-G locus in South Amerindians

The occurrence of balancing selection at the HLA-G promoter region has previously been described (Tan et al., 2005). Considering that HLA-G expression and levels were already related to different situations (either in physiological or pathological conditions) and that this molecule, depending on the context, may be deleterious or advantageous, balancing selection at this locus seems to be a plausible possibility. Recently, Castelli et al. (2011) made a comprehensive review of the HLA-G gene polymorphism and haplotypes in a Brazilian urban cohort, evidencing a high linkage disequilibrium along the whole length of the gene. In this same work, the authors revealed evidence for balancing selection acting on the regulatory regions only (5' and 3' UTRs) and on the HLA-G locus as a whole. We cannot rule out that the evidence of balancing selection observed in our data could be due to a hitchhiking effect caused by a linkage disequilibrium between the 14 bp locus and the HLA-G promoter region. Nevertheless, the compelling evidence for the functionality of the 14 bp insertion in alternative splicing and its potential role in post-transcriptional regulation by microRNA binding make us believe that the 14 bp INDEL might also be an adaptive factor, influencing HLA-G expression patterns, and probably is related to survival of heterozygous fetuses due to resistance to pathogens (Mendes-Junior et al., 2007). In conclusion,

Deciphering the transcriptional-regulatory network of flocculation in Schizosaccharomyces pombe.

The cell suspension was transferred to two 2 ml bead beating vials containing 800 µl of 0.5 mm Zirconia/Silica beads (BioSpec Products, Bartlesville, OK) and subjected to 3 cycles of alternating 2 min beating and 2 min incubation on ice with a Mini Beadbeater 16 (BioSpec Products, Bartlesville, OK). The lysed cells were collected by puncturing the bottom of the bead-beating vial with a flame-heated inoculating needle and placing the vial on a sonication tube nested in 10 ml disposable culture tubes prior to centrifugation (800 × g, 3 min, 4 °C). The cell pellet was resuspended, transferred to chilled microcentrifuge tubes, centrifuged (16,000 × g, 15 min, 4 °C) to remove unbound soluble proteins, and the resulting pellet resuspended in 800 µl of fresh lysis buffer in a sonication tube. Total cell lysate volume was adjusted to 2.2 ml with lysis buffer and subjected to 4 cycles of sonication and 1 min on-ice incubation at 30% amplitude, 30 sec setting, using a Sonic Dismembrator with a 1/8 tapered microtip probe (Thermo Scientific, Waltham, MA). The sonicated cell lysate was centrifuged (4600 × g, 2 min, 4 °C) and the supernatant stored at −80 °C. The supernatant was tested to ensure that greater than 90% of the sonicated DNA was in the size range of 100 bp–1 kb by subjecting a sample (~50 µl) of the supernatant to overnight reverse-crosslinking at 65 °C and phenol-chloroform extraction, followed by gel electrophoresis of 3–5 µg of DNA. To immunoprecipitate the chromatin-bound transcription factor, 100–200 µl of Dynabeads conjugated with sheep anti-mouse IgG (Invitrogen Life Technologies, Carlsbad, CA) were washed twice in 800 µl ice-cold 1× PBS-BSA (5 mg/ml BSA, 1× PBS), resuspended in 800 µl cold 1× PBS-BSA with 5 µg of anti-HA F-7 antibody (Santa Cruz Biotechnology, Santa Cruz, CA) and shaken gently for 2 hr at 4 °C on a Labquake Tube Shaker (Thermo Scientific, Waltham, MA).
The beads were washed twice in 1 ml cold deoxycholate buffer (100 mM Tris-HCl pH 8, 1 mM EDTA, 0.5% (w/v) sodium deoxycholate, 0.5% (v/v) NP-40, 250 mM LiCl) and twice in 1 ml cold lysis buffer. The beads were resuspended in 200 µl 1× PBS-BSA, combined with 400 µl of

Transcriptome dynamics of a broad host-range cyanophage and its hosts

In this study we investigated the infection process and transcriptional program of Syn9, a broad host-range T4-like cyanophage (Waterbury and Valois, 1993), during infection of three different Synechococcus hosts and assessed their transcriptional response to infection using both RNA-seq and microarray analyses. The three Synechococcus hosts, WH7803, WH8102 and WH8109, are phylogenetically distinct and occupy different oceanic environments (Zwirglmaier et al., 2008; Scanlan et al., 2009). We found that the transcriptional program of the phage is nearly identical irrespective of the host infected and revealed a regulatory program that significantly deviates from the current paradigm for T4-like phages. This transcriptional and regulatory program appears to be common among T4-like cyanophages, as a similar infection process was observed for an additional phage, P-TIM40, during infection of Prochlorococcus NATL2A, and similar promoter motifs were identified in numerous sequenced cyanophage genomes and in metagenomes. Our results indicate that the well-known mode of regulation in T4 cannot be taken as the rule among the broader family of T4-like phages. In contrast to the near-identical transcriptional program of the Syn9 phage, transcriptional responses of the three Synechococcus hosts showed considerable heterogeneity, with a large number of responsive genes located in hypervariable genomic islands. This likely reflects different strategies in the hosts' attempts to cope with infection.

Classification and structure-based inference of transcriptional regulatory proteins

TFs that are part of two-component transduction systems have been shown to be either activators or duals, and they have similar and consistent architectures, since they are involved in the same type of molecular interactions. Domains present in the structure of Global TFs show high variety compared to the full sample used in the present work. The number of domain architectures is also high, reflecting their distinguishing features compared to the remaining TFs, as they are capable of regulating a large number of genes in different pathways and cellular functions and of responding to more stimuli than other TFs. There is an evident relation between functional domains and the TFs' regulatory function; however, the sample was not large enough to allow further statistical tests, since only about 22% of domains were used for the inference of this relationship.

AN ALGORITHMIC AND SOFTWARE ENGINEERING BASED APPROACH TO ROBUST VIDEO GAME DESIGN

The work by Bidarra et al. in [10] describes a course on game projects, taught in the second year at Delft University of Technology, with a focus on game development in large teams using students from different disciplines. The proposed course's learning outcomes include demonstrating proficiency in applying media and programming techniques within the context of computer games, striving for a balance between the effectiveness of a programming technique and the desired quality of a game effect; describing the main modules of a game engine and purposefully using their functionality; deepening object-oriented programming skills while building a complex and large software system; and developing and contrasting teamwork skills within the context of a realistic interdisciplinary team. The survey results reported in the paper indicate that the students were highly motivated upon completing the course and were largely happy with the projects.

Analysis of the transcriptional regulatory network underlying heart development

Heart development is a highly complex process with a series of precisely spatially and temporally ordered events on the molecular level. To understand how these events are controlled and coordinated, it is necessary to study the underlying gene expression and its regulation. While many studies have examined single genes and their expression patterns, comprehensive analyses of genome-wide expression profiles associated with cardiomyogenesis (i.e. the differentiation of stem cells into cardiomyocytes) are still rare. In fact, no study exists to date which compares and consolidates the publicly available genome-wide measurements for cardiomyogenesis. Such an endeavour, however, is important, as it is well known that individual microarray studies can be seriously compromised by artefacts. In contrast, the combination of various expression studies, which was performed in my study, can lead to more reliable results and help elucidate the different aspects of heart development and repair. Furthermore, a brief study was performed regarding the potential risk of originating cancer or teratomas from stem cell therapy. Finally, I carried out a network-based analysis, to identify regulatory actions between genes, based on published interaction data. This type of analysis can also help to identify novel genes with a role in heart development and provide valuable new targets for future experimental laboratory analysis. The combination of multiple datasets is thus an important approach to gain better insights into the different heart development processes as well as regenerative medicine applied to the heart.

An Advance Approach to Evaluate the Performance of the TCP Networks

Active queue management (AQM) and RED variants: Congestion is a problem that occurs in shared networks when multiple users contend for access to the same resources (bandwidth, buffers, and queues). Networks resort to packet dropping to handle congestion. Dropping packets only once congestion occurs is inefficient: if a host is bursting when congestion sets in, many packets will be lost. Therefore, it is useful to detect impending congestion conditions and actively manage congestion before it gets out of hand. Active queue management is a technique in which routers actively drop packets from queues as a signal to senders to slow down their transmission rates. Random Early Detection (RED) is an active queue management scheme that provides a mechanism for congestion avoidance. Unlike traditional congestion control schemes that drop packets at the end of full queues, RED uses statistical methods to drop packets probabilistically before queues overflow. Dropping packets in this way slows a source down enough to keep the queue steady, and it reduces the number of packets that would be lost when a queue overflows while a host is transmitting at a high rate.
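
The RED mechanism just described can be sketched in a few lines (an illustrative model, not a router implementation; parameter names follow the usual RED conventions, and the threshold values below are arbitrary):

```python
import random

def ewma_avg(avg, q_len, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue
    length; RED tracks this smoothed average, not the raw queue."""
    return (1 - w) * avg + w * q_len

def red_drop(avg, min_th, max_th, max_p, rng=random):
    """RED decision: never drop below min_th, always drop at or above
    max_th, and drop with linearly increasing probability in between."""
    if avg < min_th:
        return False
    if avg >= max_th:
        return True
    p = max_p * (avg - min_th) / (max_th - min_th)
    return rng.random() < p
```

At the midpoint between the thresholds, packets are dropped with probability max_p/2, which is what nudges sources to back off before the queue actually fills.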

J. Coloproctol. (Rio J.) vol. 33 no. 1

urological, and gynecological symptoms involving multiple structures, which require the surgeon, or even gynecologist, to act in different areas of his specialty in order to provide the patient with a real solution to his problem. Most of these patients are evaluated and treated individually, sometimes undergoing two or more surgical procedures, which significantly increases the risk of complications and morbidity, as it is the elderly population that is usually affected by these diseases.

Multiobjective optimization of MPLS-IP networks with a variable neighborhood genetic algorithm

The proposed VN-MGA is based on the classical NSGA-II (Deb et al., 2002) and has, as a distinctive feature, crossover and mutation operators inspired by the variable-neighborhood concept of VNS techniques. Two different encodings are employed: a low-level encoding, which explicitly encodes the routes followed by each service requirement, and a high-level encoding, which encodes permutations of the several service requirements, defining the order in which they will be included in the solution. Crossover and mutation operators acting on the two levels are able to explore and exploit the decision variable space with enhanced efficiency, leading to solutions that dominate those found by algorithm versions using only one level. It should be noticed that the proposed operators are problem-specific. In problems of a combinatorial nature, it has been established that algorithms employing specific crossover and mutation operators can be much more efficient than general-purpose GAs (Carrano et al., 2006).
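
As a generic illustration of the kind of operator a permutation encoding requires (order crossover, OX — not the paper's problem-specific operator), the following keeps every child a valid permutation of the service requirements:

```python
import random

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX) for permutation chromosomes: copy a random
    slice from parent 1, then fill the remaining positions with the
    missing genes in the order they appear in parent 2."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]                      # inherited slice from p1
    kept = set(child[i:j])
    fill = [g for g in p2 if g not in kept]   # p2's relative order
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child
```

A plain one-point crossover would duplicate and lose requirements; OX-style operators are the standard way to recombine orderings without repairing the chromosome afterwards.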

RMOD: a tool for regulatory motif detection in signaling network.

Regulatory motifs are patterns of activation and inhibition that appear repeatedly in various signaling networks and that show specific regulatory properties. However, the network structures of regulatory motifs are highly diverse and complex, rendering their identification difficult. Here, we present RMOD, a web-based system for the identification of regulatory motifs and their properties in signaling networks. RMOD finds various network structures of regulatory motifs by compressing the signaling network and detecting the compressed forms of regulatory motifs. To apply it to large-scale signaling networks, it adopts a new subgraph search algorithm using a novel data structure called a path-tree, which is a tree structure composed of isomorphic graphs of query regulatory motifs. This algorithm was evaluated using various sizes of signaling networks generated from the integration of various human signaling pathways, and the evaluation showed that its speed and scalability outperform those of other algorithms. RMOD includes interactive analysis and auxiliary tools that make it possible to manage the whole process, from building the signaling network and querying regulatory motifs to analyzing regulatory motifs with graphical illustrations and summarized descriptions. As a result, RMOD provides an integrated view of the regulatory motifs and the mechanisms underlying their activities within the signaling network. RMOD is freely accessible online at the following URL: http://pks.kaist.ac.kr/rmod.

Jerarca: efficient analysis of complex networks using hierarchical clustering.

Benchmark A was specifically created for testing the quality of the optimal partitions computed by the algorithms implemented in Jerarca. We generated networks with progressive percentages of degradation. In this context, a percentage of degradation of, say, 10%, means that first, 10% of links were eliminated and, from the rest, 10% shuffled among units. The shuffling process involves the random removal of an edge of the graph and the later addition of a new edge between two nodes, chosen also randomly. We previously suggested using a number of iterations equal to 10 times the number of units [12]. Thus, for each of those networks, we ran 5000 iterations of Jerarca with the parameter all for both the iterative and the tree algorithms. This means that 12 analyses (= 3 iterative algorithms × 2 tree algorithms × 2 partition criteria) were performed for each network. With 0–30% degradation, all algorithms recovered the original community structure of the network without errors. However, starting at 40% degradation, slight errors in recovering the original community structure of the graph began to emerge, so we focused on this case. For each of the six dendrograms constructed by using the three iterative and the two tree algorithms, the optimal partitions given by the two evaluation indexes implemented in Jerarca (Q and H) were exactly the same. In all cases but one, a single unit of the network, different for each combination of programs, was misclassified. Only the combination of SCluster and UPGMA recovered the exact community structure of the original network. Significantly, this particular combination also obtained the highest Q and H values. This example shows that all the programs efficiently recover the original structure, even when it is quite cryptic (40% degradation means that just about a third of the original links remain). On the other hand, it also shows the advantage of using
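
The degradation procedure can be sketched as follows (an illustrative reading of the benchmark, treating links as directed pairs for simplicity; function and variable names are hypothetical):

```python
import random

def degrade(edges, nodes, pct, rng=random):
    """Benchmark-style degradation: delete a fraction pct of the links,
    then rewire the same fraction of the survivors to randomly chosen
    node pairs (avoiding self-loops and duplicate links)."""
    edges = list(edges)
    rng.shuffle(edges)
    n_del = round(len(edges) * pct)
    kept = edges[n_del:]                  # step 1: eliminate pct of links
    n_shuf = round(len(kept) * pct)
    existing = set(kept)
    for i in range(n_shuf):               # step 2: shuffle pct of the rest
        existing.discard(kept[i])
        while True:
            e = (rng.choice(nodes), rng.choice(nodes))
            if e[0] != e[1] and e not in existing:
                break
        kept[i] = e
        existing.add(e)
    return kept
```

At 40% degradation this leaves roughly a third of the original links in place, which is why recovering the planted community structure there is a demanding test.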

AN APPROACH TO MITIGATE DENIAL OF SERVICE ATTACKS IN IEEE 802.11 NETWORKS

Problems related to wireless network security have received great attention from the R&D community, mainly issues regarding denial of service. Malekzadeh et al. (2011) show the effects of a denial of service attack on a wireless network through simulations using the OMNeT++ network simulator. Furthermore, a comparison between simulated and actual attack data is developed, intending to show that the simulator validates the data and presents consistent results. In the simulated scenarios, the authors conducted several tests verifying the throughput and delay of generated network traffic using TCP and UDP segments. The results show a sudden fall to 0 bps throughput and a marked increase in delay, from 0 seconds to about 6 seconds, over the time during which the attack is performed. The study shows that the proportion of lost packets in the simulations was 37.90% while the attack was in effect. Since it is feasible to compare the simulated model with the real model, it can be argued that the results obtained from attack mitigation, the main event discussed in this work, are also consistent with what goes on in an actual attack.

Explicit and implicit approach of sensitivity analysis in numerical modelling of solidification

Keywords: Application of Information Technology to the Foundry Industry; Solidification Process; Numerical Techniques; Sensitivity Analysis; Boundary Element Method.

Functional alignment of regulatory networks: a study of temperate phages.

alignment and the contribution of each pair to the signaling difference. The two alignments show good matches for late lytic genes as well as for the regulators CI, CII, and B from 186 aligned with CI, CII, and Q in λ. Thus, in general, functions of proteins in one network teach us about protein properties in the other network. The lack of a good match between Apl (in 186) and Cro (in λ) is due to the weak links from Cro and reflects a different functional role of Cro and Apl in the late lytic development of phages. Insisting on alignment of Cro
Figure 1. The Genetic Regulatory Networks for Phage 186, Phage λ, and Phage P22, All of Which Are Temperate and Infect E. coli
