Recently, the use of positioning systems such as GPS, which can determine the locations of points, has been increasing rapidly. Large volumes of spatial data can be obtained with these systems in a short time and at low cost (Song et al., 2010; Wang, 2011). The extraction of network paths from the low-accuracy data produced by such systems is of particular interest (Kasemsuppakorn et al., 2013). The data obtained from such systems are represented as discrete points which, if connected, can be used in several applications, such as determining vehicle speeds in traffic studies and deriving network paths. Measurement errors in these systems cause deviations from road centerlines, and the geometry of road networks accumulated from centerline data means that points do not fall exactly on the network lines. Thus, the extraction of networks from point-set data is very important. Many studies have addressed this problem using different geometric, mathematical, and statistical methods (Scott, 1994; Newson et al., 2009).
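When consecutive fixes of a track are connected in temporal order, they already support simple analyses such as the vehicle-speed estimation mentioned above. A minimal sketch, assuming a `(time_seconds, lat, lon)` track format and a mean-Earth-radius haversine distance (both illustrative choices, not from any cited method):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres (assumption)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds(track):
    """Speed (m/s) between consecutive (t_seconds, lat, lon) fixes."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out

track = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.001)]  # two fixes 10 s apart
v = speeds(track)  # roughly 111 m covered in 10 s
```

The same connected representation is the starting point for snapping points back onto extracted network lines.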
Abstract: This article reports the testing of a fast-track, large-group intervention method designed to initiate a change process in a Portuguese SME in the IT sector, aiming to increase the proactivity of its employees. Based on previous work mixing third-generation large-group organizational change methods, classical Organizational Development (OD) approaches, and an adapted version of the Creative Problem-Solving (CPS) protocol, the presentation of the case includes an extended diagnosis, the preparation and execution of the company meeting, and the beginning of the implementation of innovation projects. The company meeting was designed to last just four hours, instead of the two to four days required by existing methods. The diagnosis, made in close collaboration with management, includes the results of more than 30 interviews conducted with internal and external stakeholders, and a small-world analysis technique to determine the existing communication networks, together with possible clusters and brokers. Furthermore, using content analysis, success stories were collected in order to clarify the strong points for a future organizational culture. The results support the effectiveness of the selected methodology in establishing innovation projects involving the entire organization, and clarified desirable characteristics for the improvement of the present intervention method, adapted to Portuguese companies. The analysis of the success stories helped to determine the strengths of a future organizational culture, while measures of small-world networks made it possible to analyze the existing informal organization and the way knowledge flows out of the tension between clustering and bridging that is necessary for creative benefits.
Although this study does not cover the full completion of the projects, due to unforeseen company emergencies, it provides a solid basis for future interventions and for initiating another line of investigation, related to the preparation of team leaders as group facilitators.
A large portion of depth map fusion methods build on volumetric range integration (VRIP) (Curless and Levoy, 1996). Typically, a signed distance field is computed on a (multi-level) octree structure by projecting depth estimates, from which a triangulation can then be derived, for example using the Marching Cubes algorithm (Lorensen and Cline, 1987). Using the same base concept, (Zach et al., 2007) reconstruct a signed distance field in voxel space. A surface is then extracted by minimizing a global energy functional based on TV-L1 regularization, enforcing smoothness and small differences to the zero level set. Employing the L1 norm yields favorable results in the presence of outliers. However, depth samples with different scales across views are challenging for VRIP approaches. One example addressing this issue is the scale-space representation presented in (Fuhrmann and Goesele, 2011). They build a multi-level octree holding vertices at different scales; vertices are sorted into the octree structure according to their pixel footprint. In this way a hierarchical signed distance field is generated. For iso-surface extraction, the most detailed surface representation is preferred. Similarly, (Kuhn et al., 2014) proposed a method employing variable voxel sizes defined by observation-wise precision estimates. These precision measures are computed for each disparity based on the TV in the disparity maps; the local TV is associated with a quality-defining error class that is learned beforehand from ground truth.
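The core of VRIP-style integration is a per-voxel running weighted average of truncated signed distances. The following is a minimal 1-D sketch of that idea; the grid, truncation distance, weighting and masking rule are illustrative assumptions, not the published implementations:

```python
import numpy as np

TRUNC = 0.1  # truncation distance for the signed distance field (assumption)

def integrate(tsdf, weight, voxel_depths, observed_depth, obs_weight=1.0):
    """Fuse one depth observation into a 1-D signed distance field as a
    running weighted average of truncated signed distances."""
    d = np.clip(observed_depth - voxel_depths, -TRUNC, TRUNC)
    mask = (observed_depth - voxel_depths) > -TRUNC  # skip voxels far behind the surface
    new_w = weight.copy()
    new_w[mask] += obs_weight
    tsdf = tsdf.copy()
    tsdf[mask] = (tsdf[mask] * weight[mask] + d[mask] * obs_weight) / new_w[mask]
    return tsdf, new_w

# Two noisy depth observations of a surface near depth 0.5.
voxel_depths = np.linspace(0.0, 1.0, 11)
tsdf = np.zeros_like(voxel_depths)
weight = np.zeros_like(voxel_depths)
for obs in (0.48, 0.52):
    tsdf, weight = integrate(tsdf, weight, voxel_depths, obs)
# The zero crossing of tsdf (between positive and negative voxels)
# marks the fused surface; in 3-D this is where Marching Cubes runs.
```

The multi-scale variants cited above replace the single fixed grid with octree levels chosen per observation, but the averaging step is the same in spirit.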
Drug resistance is one of the principal obstacles blocking worldwide malaria control. In Colombia, malaria remains a major public health concern and drug-resistant parasites have been reported. In vitro drug susceptibility assays are a useful tool for monitoring the emergence and spread of drug-resistant Plasmodium falciparum. The present study was conducted as a proof of concept for an antimalarial drug resistance surveillance network based on in vitro susceptibility testing in Colombia. Sentinel laboratories were set up in three malaria-endemic areas. The enzyme-linked immunosorbent assay-histidine-rich protein 2 and schizont maturation methods were used to assess the susceptibility of fresh P. falciparum isolates to six antimalarial drugs. This study demonstrates that an antimalarial drug resistance surveillance network based on in vitro methods is feasible in the field with the participation of a research institute, local health institutions and universities. It could also serve as a model for a regional surveillance network. Preliminary susceptibility results showed widespread chloroquine resistance, which was consistent with previous reports for the Pacific region. However, high susceptibility to dihydroartemisinin and lumefantrine compounds, currently used for treatment in the country, was also reported. The implementation process identified critical points and opportunities for the improvement of network sustainability strategies.
their purchasing behavior. The work on CSR and consumer choice could be a new growth opportunity for marketing. CSR initiatives with well-designed targets and high consumer awareness achieved through communication could play an important role in successful marketing. Becker-Olsen et al. (2006) questioned the assumption that consumers will always and indiscriminately reward firms for their socially responsible initiatives. They designed two studies to explore how consumers react to different CSR activities. In addition, they investigated the impact of the motivations behind, and the timing of, CSR initiatives. CSR activities that do not fit a firm's expertise have a negative impact on consumers' attitudes toward the firm and on the firm's credibility. Firms can be perceived as "doing good" only by addressing selected CSR initiatives. CSR activities with low fit to the firm are perceived by consumers as "doing CSR business" and lead to non-positive consumer evaluations. Consumers' perceived motivations affect their evaluation of a firm and of the firm's CSR initiatives. If consumers believe CSR initiatives are profit-driven rather than socially driven, they will assess the firm and its credibility negatively. This leads to a low likelihood of purchase intention. The timing of CSR activities also matters to consumers' assessments: proactive CSR activities help firms earn positive evaluations from consumers, whereas consumers regard reactive CSR activities as "doing CSR business". Reactive CSR makes a non-positive contribution to a firm's image (Becker-Olsen et al., 2006).
The large plasmid pAsa4 from A. salmonicida subsp. salmonicida carries genes that provide resistance against chloramphenicol, spectinomycin, streptomycin, sulfonamides, tetracycline, mercury, and quaternary ammonium compounds (Reith et al., 2008). Except for tetracycline resistance, these genes are located in Tn21, a non-composite transposon. Tn21 is a widespread replicative transposon that also carries another mobile element, the integron In2 (Liebert, Hall & Summers, 1999). The complete sequence of pAsa4 was first described in reference strain A449, which originated from France (Reith et al., 2008). Genotyping done in a previous study has shown that some A. salmonicida subsp. salmonicida isolates likely bear pAsa4 but do not display the expected antibiotic resistance profile (Vincent et al., 2014b). This suggests that pAsa4 variants may have evolved from a common replicon backbone, but do not share the same antibiotic resistance genes.
Image digitalization, cutting the fruit and removing the pulp took 60% of the total time, cutting the peel and correctly positioning it in the scanner took 30%, while the remaining 10% was spent on image editing and automatic area calculation. After peel preparation, the 90 fruits were digitalized and their respective areas calculated in a period of four hours by a single person (2.66 minutes/fruit, using a Pentium 200 MHz personal computer with 32 Mbytes of RAM and a flatbed scanner set to 300 dpi black-and-white images). For the gravimetric method, 50% of the time was spent on removing the pulp, 20% on peel cutting, and the remaining 30% on cutting and weighing the disks. After peel preparation, the 90 fruits were processed and their respective areas calculated in a period of about eight hours by a single person (5.35 minutes/fruit).
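The per-fruit figures follow directly from the totals; a quick arithmetic check (treating "four hours" and "about eight hours" as exact, so the small differences from the reported 2.66 and 5.35 min/fruit reflect rounding in the original totals):

```python
fruits = 90
digital_min = 4 * 60        # digital method: four hours total
gravimetric_min = 8 * 60    # gravimetric method: about eight hours total

per_fruit_digital = digital_min / fruits         # ~2.67 minutes per fruit
per_fruit_gravimetric = gravimetric_min / fruits  # ~5.33 minutes per fruit
```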
In hot working processes, prediction of the material constitutive relationship can improve the optimization of the design process. Recently, artificial neural network models have come to be regarded as a powerful tool for describing the elevated-temperature deformation behavior of materials. Based on experimental data from isothermal compression of 42CrMo high-strength steel, an artificial neural network (ANN) was trained with the standard back-propagation learning algorithm to predict the elevated-temperature deformation behavior of 42CrMo steel. The inputs of the ANN model are strain, strain rate and temperature, whereas flow stress is the output. Comparison of the predicted and experimental results indicates that the developed ANN model captures the complex hot deformation behavior well and accurately tracks the experimental data over wide ranges of temperature and strain rate. In addition, predictions outside the experimental conditions were obtained, indicating good extrapolation potential of the developed ANN model. The stress-strain curves outside the experimental conditions exhibit the typical dynamic recrystallization softening characteristic of high-temperature deformation behavior. Through coupling of the ANN model with a finite element model, hot compression simulations at a temperature of 1273 K and strain rates of 0.01 to 10 s-1 were conducted. The results show that the predicted constitutive data outside
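A minimal sketch of the kind of model described: a one-hidden-layer network mapping (strain, strain rate, temperature) to flow stress, trained by plain back-propagation. The synthetic data, architecture and hyper-parameters are illustrative assumptions, not those of the 42CrMo study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, pre-scaled training set: a smooth stand-in for a flow-stress surface.
X = rng.random((200, 3))                                 # strain, strain rate, temperature
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2])[:, None]  # stand-in flow stress

W1 = rng.standard_normal((3, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.2

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)       # hidden layer activations
    pred = h @ W2 + b2             # predicted (scaled) flow stress
    err = pred - y
    # back-propagate the mean squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

A trained network of this form can then be queried at conditions between (or, cautiously, beyond) the experimental points, which is what enables the coupling with a finite element solver.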
Paths produced mostly by non-oriented search may best be described by SI, but how can a specific process behind the non-oriented search of individuals be tested? For instance, how can one test whether an individual is moving according to a random walk or to a Lévy-walk process? A diagnostic index such as MSD could be used in such a case, while still assuming a non-oriented search mechanism. The strong dependency of MSD on scale (Tab. II) reduces its value as a descriptive measure or index of path tortuosity, but it is precisely this property that makes it an excellent diagnostic tool. Whereas MSD scales linearly with time or path length for purely diffusive paths, super-diffusive movements have an MSD increasing with a power exponent between 1 and 2 (Codling et al. 2008). Super-diffusion would imply a Lévy walk
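The diagnostic use of MSD can be sketched by fitting the exponent alpha in MSD(t) ~ t**alpha over simulated 2-D paths: alpha near 1 indicates normal diffusion, values between 1 and 2 super-diffusion. The step distribution and ensemble size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def msd_exponent(steps, lags):
    """Fit log MSD vs log lag over an ensemble of 2-D paths.

    steps: array (n_paths, n_steps, 2) of per-step displacements.
    """
    paths = np.cumsum(steps, axis=1)
    msd = [np.mean(np.sum((paths[:, l:] - paths[:, :-l]) ** 2, axis=2)) for l in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

n, t = 500, 1000
lags = np.arange(1, 50)
brownian = rng.standard_normal((n, t, 2))  # Gaussian steps: expect alpha ~ 1
alpha = msd_exponent(brownian, lags)
# Heavy-tailed flight lengths (a Lévy-type walk) would push alpha toward 2.
```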
interval CORRA, the 30-day interval PERSIANN-CCS, and the three-month interval CMORPH-KF interpolation calibration steps are necessarily trailing for the Early and Late runs, whereas a centered approach is used for the Final run. The ancillary products required on a routine basis by the IMERG algorithm also change according to the version considered (Early, Late, Final). The required surface temperature, relative humidity and surface pressure data are provided by the Japanese Meteorological Agency (JMA), the GANAL gridded assimilation, and the European Centre for Medium-range Weather Forecasting (ECMWF) for the Early, Late, and Final runs, respectively. Finally, the post-processing adjustment step involves climatology-based coefficients for the Early and Late run products, while coefficients based on the Global Precipitation Climatology Centre (GPCC) monthly precipitation gauge analysis are used to produce the Final run product. For more information on IMERG processing, please refer to . From 2014 to 2018, three successive IMERG versions were made available: IMERG-v03, released in 2014, followed by IMERG-v04 in 2017 and IMERG-v05 in early 2018. The main change among IMERG-v03, -v04, and -v05 is the use of successively different GPROF algorithm versions (GPROF-v03, -v04, and -v05). Unlike GPROF-v03, GPROF-v04 and -v05 use a precipitation-rate threshold to adjust the fractional coverage.
Other changes from IMERG-v03 to -v04 (and -v05) are: (1) the use of the Global Precipitation Climatology Project (GPCP) v2.3 to compensate for the bias of the GPM combined instrument dataset (2BCMB) over non-tropical oceans and land; (2) the inclusion of the Advanced Technology Microwave Sounder (ATMS) dataset; (3) dynamic calibration of PERSIANN-CCS by PMW-derived precipitation estimates; (4) an increase in the spatial coverage of the HQ precipitation field from 60° N-S to 90° N-S; and (5) the removal of the GPCC grid-box volume adjustment to eliminate the blocky gauge adjustment (Final run only). The benefit of this removal is clearly observable in Figure 2, with blocky effects completely eliminated from IMERG-v03 to IMERG-v04 and -v05. The main changes in IMERG-v05 compared with the previous -v04 and -v03 versions include restrictions of the Microwave Humidity Sounder (MHS) and ATMS swaths (to five and eight footprints, respectively) and the exclusion of the TRMM Microwave Imager (TMI). For the present study, we used the IMERG-v03, -v04, and -v05 Final runs.
Nash strategies for the rest of the players; then, in positional form, the obtained game will represent a c-game for p − 1 players, since the positions of the first player can be considered as positions of any other player (we treat them as positions of the second player).
The sequencing of learning objectives (LOs) by the system is based on a "network of prerequisites" proposed by the author of the teaching module. A prerequisite link between two objectives LO1 and LO2 (from LO1 to LO2) defines, on the one hand, a precedence desired by the author between the two objectives, stipulating that learning of the second objective LO2 cannot be completed until the first objective LO1 has been achieved (or passed); on the other hand, it serves as an indicative link for progression or for potential remediation. The latter feature means that the system can choose an LO that is a prerequisite of an LO on which the learner has failed, in order to offer the learner knowledge related to the prerequisite LOs.
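The remediation rule described above can be sketched with a plain prerequisite graph: on a failure, the system falls back to prerequisites the learner has not yet mastered. The graph and data below are invented for illustration:

```python
# LO -> list of its prerequisite LOs (illustrative example graph)
prerequisites = {
    "LO3": ["LO1", "LO2"],
    "LO2": ["LO1"],
    "LO1": [],
}

def remediation_candidates(failed_lo, mastered):
    """Prerequisite LOs worth revisiting after a failure on failed_lo."""
    return [lo for lo in prerequisites.get(failed_lo, []) if lo not in mastered]

# A learner who failed LO3 but already mastered LO1 is sent back to LO2.
```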
The development of the fisheries sector was expected to keep economic growth stable, to absorb more of the labor force, to generate substantial foreign exchange and, most importantly, to increase income per capita and produce multiplier effects for society. The economic effect of an economic activity is usually assessed with input-output (I-O) analysis and Computable General Equilibrium (CGE) approaches to determine the direct, indirect and induced impacts . The economic impact of a productive activity such as fisheries can be grouped into three categories, i.e. direct benefits, indirect benefits, and induced benefits . Direct benefits are triggered by the fishing activity itself, which requires inputs such as labor/ship's crew, fuel, ice, clean water, supplies/rations, etc. These inputs can be obtained from other sectors, which gives rise to indirect benefits. If the ship's crew comes from the local area, the crew's spending can create induced benefits in that area. Not all of the benefits or impacts are felt by local society: if an input comes from another area or is imported, the circulation of money generates the indirect benefit elsewhere. This is a leakage of benefits. The flow of money from the fishing activity to local society ultimately determines the economic impact and the economic leakage. Although few, the empirical studies that attempt to quantify the downstream and upstream linkages of small-scale fishing in developing countries tend to show that a significant amount of additional work is created through these linkages . Downstream and upstream linkages are stated in Table 1. Generally, international value chains are important for economically significant commodities such as tuna, salmon, skipjack tuna, shrimp and tilapia, and consist of several nodes through which a product passes over longer distances to reach consumers.
Meanwhile, some species that are not economically important are nevertheless important for the sustainability of local food supplies and form part of shorter value chains . Small-scale fishing is very important as a source of livelihood, income, production and world fish supply. In addition, small-scale fishing provides fish that contributes directly to improving food and nutrition security .
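The input-output logic behind the direct and indirect impacts mentioned above is the Leontief relation x = (I - A)^-1 d, where A is the technical-coefficient matrix and d the final demand. The two-sector economy (fisheries plus a supplying sector for fuel, ice and gear) and its coefficients are invented for illustration:

```python
import numpy as np

# Technical coefficients: input from row sector per unit output of column sector.
A = np.array([[0.10, 0.05],   # fisheries inputs
              [0.30, 0.20]])  # supplies (fuel, ice, gear) inputs
d = np.array([100.0, 20.0])   # final demand for each sector's output

# Total (direct + indirect) output needed to satisfy final demand.
x = np.linalg.solve(np.eye(2) - A, d)

# Output multipliers: total output generated per unit of final demand.
multipliers = np.linalg.inv(np.eye(2) - A).sum(axis=0)
```

Because each sector buys from the other, total output exceeds final demand; the gap is exactly the indirect effect, and a multiplier above 1 quantifies it per sector.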
During the review of the medical records, data were noted on age, sex, length of hospital stay, cultures from lesions, antimicrobials prescribed, bacterial resistance and surgery conducted. In all cases, the samples for culturing were collected during the surgical procedure by means of biopsy of the infected area after prior cleaning, with preference given to bone and tendon tissues. Material was collected into sterile flasks containing saline solution and immediately transported to the laboratory. Manual methods were used both for culturing and for constructing antibiograms, employing the following culture media: MacConkey agar, chocolate agar, blood agar and CLED agar. Culture media for anaerobic organisms were not available. Statistical analysis of the data consisted of calculating means and standard deviations.
Schöyer et al. (1990) investigated many formulations, directed in particular at solid propellant combinations, in the search for high-performance propellants that could be stored for a considerable time and used not only to change the position of a spacecraft already in space, but also to launch one into space. According to their research, such a propellant could consist of a combination of GAP or BAMO with boron, aluminum or aluminum hydride, and a compound selected from the group of HNF, nitronium perchlorate (NO2ClO4) or AP. The proportions of the components, oxidizer and fuel, in the propellant combinations were not critical. The components should be mixed prior to the reaction in proportions such that the mixing ratios deviate from the stoichiometric ratio by no more than 20%, calculated on the total mixture with the energetic binder (BAMO or GAP). According to them, the preferred propellant combinations with HNF are: HNF, 70 to 80%; B, 10%; and GAP or BAMO, 10 to 20%; or HNF, 59 to 69%; Al, 21%; and GAP or BAMO, 10 to 20%. These were based on HNF, aluminum and an energetic binder such as GAP or BAMO, exhibiting an improvement in specific impulse relative to conventional AP propellants of 214 m/s, with much cleaner combustion gases, because HNF does not contain chlorine and the environment is therefore not burdened with hydrogen chloride gas.
We use Flowcharts as a standardized approach for describing research methodologies and scientific analyses. The main advantage of Flowcharts is that they allow moving from the conceptual to the operational level of data analysis. The principle of a Flowchart is to connect inputs, processes and outputs in a graphical execution pipeline. Inputs and outputs can be, for example, datasets or text files in several formats. Processes represent the execution of a task that transforms an input into a desired output; a process can range from a single command line to scripts, Masterscripts and third-party software. An example of a Flowchart available in the EPIGEN-Brazil repository is the Ancestry analysis (Figure 3.4, Flowcharts). This Flowchart describes the steps to estimate both individual and chromosomal local ancestry in admixed individuals. The Ancestry Flowchart summarizes the steps to: (1) join different genetic datasets, (2) perform individual ancestry analysis with the model-based population genetics method implemented in the software Admixture , (3) analyze population structure by Principal Component Analysis (PCA) , and (4) perform local ancestry analysis using methods implemented in software such as PCAdmix  or RFmix . A visitor who wants to perform their own individual ancestry analysis with Admixture will note that the Ancestry Flowchart indicates the need for a format conversion step beforehand (the process "--recode12 function of PLINK"). Similarly, for a PCA the visitor will note the need to run an intermediate task (the process "EIGENSTRAT PCA Smart Eigenstrat.pl") that prepares the inputs for the smartpca software of the EIGENSTRAT package. These small details, provided by the Flowchart's big picture, may save the visitor a few hours' work in understanding the specific requirements of the analysis of interest.
Abstract: This paper presents an adaptive-network-based fuzzy inference system (ANFIS) for long-term electricity consumption prediction. Six models are proposed to forecast annual electricity demand. A total of 104 ANFIS models were constructed and tested in order to find the best ANFIS for electricity consumption. Two parameters were considered in the construction and examination of plausible ANFIS models: the type of membership function and the number of linguistic variables. Six different membership functions are considered in building the ANFIS: the built-in membership function composed of the difference between two sigmoidal membership functions (dsig), the Gaussian combination membership function (gauss2), the Gaussian curve built-in membership function (gauss), the generalized bell-shaped built-in membership function (gbell), the Π-shaped built-in membership function (pi), and the product of two sigmoidal membership functions (psig). The number of linguistic variables was varied between 2 and 20. The proposed models use input variables such as Gross Domestic Product (GDP) and Population (POP); six distinct models based on different inputs are defined. All of the trained ANFIS models are then compared with respect to the mean absolute percentage error (MAPE). To obtain the best performance from the intelligence-based approaches, the data are pre-processed (scaled) and the outputs are post-processed (returned to their original scale). The ANFIS model is capable of dealing with both complexity and uncertainty in the data set. To show the applicability and superiority of ANFIS, the actual electricity consumption of industrialized nations including the Netherlands, Luxembourg, Ireland, and Italy from 1980 to 2007 is considered. With the aid of an autoregressive model, GDP and population up to 2015 are projected, and then, using the projected values and the best ANFIS model, electricity consumption up to 2015 is predicted.
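The model-selection criterion and the pre/post-processing mentioned above can be sketched as follows: min-max scaling before training, its inverse afterwards, and MAPE for comparing forecasts. The helper names and example values are illustrative, not from the paper:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def scale(x, lo, hi):
    """Min-max scale to [0, 1] (pre-processing)."""
    return (np.asarray(x, dtype=float) - lo) / (hi - lo)

def unscale(x, lo, hi):
    """Return scaled values to the original range (post-processing)."""
    return np.asarray(x, dtype=float) * (hi - lo) + lo
```

Each of the 104 candidate models would be trained on scaled inputs, its forecasts unscaled, and the model with the lowest MAPE on the actual consumption series retained.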
reprogramming events have been largely completed. Interestingly, we observed an increase in centromeric major repeat transcript expression in Dicer1 knockout testes (Fig. 7). Similar defects in centromeric silencing have been reported in mouse embryonic stem cells lacking functional Dicer [35,36]. Most of the known Dicer-dependent functions in mammals involve post-transcriptional regulation, but evidence of small RNA-mediated transcriptional gene silencing and regulation of heterochromatin formation and maintenance is emerging, even though the mechanistic aspects remain unclear . Dicer has been localized in the nucleus, and specifically on certain chromosomal domains, suggesting that Dicer has a role in transcriptional regulation at the chromatin level [42,43]. However, further studies will be required to reveal the possible mechanistic connection of Dicer and Dicer-dependent small RNAs with heterochromatin formation, regulation of repeat-derived transcripts, and control of chromatin organization in differentiating male germ cells.
Coastal eutrophication, commonly defined as "the enrichment of nutrients stimulating algal growth" in coastal waters, has become one of the main threats to Chinese coastal areas over the last two decades. The huge nutrient loads from human activities have modified the natural background water quality in estuaries, bays and other coastal zones. As a result of elevated eutrophic status, coastal systems are subject to a series of negative and undesirable consequences, such as fish kills and the interdiction of shellfish aquaculture. While much attention is focused on managing this issue, there is a need to assess the eutrophic level of coastal systems and to identify the extent of the danger. In this thesis, a variety of traditional Chinese assessment methods are discussed and compared with Western approaches, such as OSPAR COMPP and ASSETS. ASSETS was then chosen for two case studies (Changjiang Estuary and Jiaozhou Bay) due to its solid theoretical basis and successful applications. As a process-based method, it sets up a pressure-state-response model based on three main indices, i.e., Overall Human Influence, Overall Eutrophic Condition and Future Outlook. Despite limited data, the results from applying ASSETS to the Changjiang Estuary and Jiaozhou Bay are "Bad" and "Low" respectively, while the traditional methods only yield more ambiguous results. Comparison of the rationales behind the methodologies and of the results suggests that ASSETS could be a more reasonable and applicable method for assessing Chinese coastal systems.
Abstract—A MANET is a collection of mobile nodes communicating and cooperating with each other to route packets from sources to their destinations. A MANET is used to support dynamic routing strategies in the absence of wired infrastructure and centralized administration. In this paper, we propose a routing algorithm for mobile ad hoc networks based on fuzzy logic to discover an optimal route for transmitting data packets to the destination. This protocol helps every node in a MANET to choose the next efficient successor node on the basis of channel parameters such as environmental noise and signal strength. The protocol improves route performance by increasing network lifetime, reducing link failures and selecting the best node for forwarding the data packet to the next node.
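The fuzzy next-hop choice described above can be sketched with simple linear memberships for "strong signal" and "low noise", combined with a fuzzy AND (min). The dBm ranges, membership shapes and neighbour data are illustrative assumptions, not the paper's rule base:

```python
def strong_signal(rssi_dbm, worst=-90.0, best=-40.0):
    """Membership in 'strong signal': 0 at the worst dBm, 1 at the best."""
    return min(1.0, max(0.0, (rssi_dbm - worst) / (best - worst)))

def low_noise(noise_dbm, best=-100.0, worst=-60.0):
    """Membership in 'low noise': 1 at the quietest level, 0 at the worst."""
    return min(1.0, max(0.0, (worst - noise_dbm) / (worst - best)))

def best_next_hop(neighbours):
    """neighbours: {node: (rssi_dbm, noise_dbm)} -> node with the best score."""
    score = lambda s, n: min(strong_signal(s), low_noise(n))  # fuzzy AND
    return max(neighbours, key=lambda k: score(*neighbours[k]))

# A node with strong signal and low noise wins over a weak, noisy one.
choice = best_next_hop({"A": (-50.0, -95.0), "B": (-80.0, -70.0)})
```

A full implementation would add more linguistic variables (e.g. residual energy for network lifetime) and a defuzzification step over a rule base.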