capacity as the cornerstone for a good approximation. The strong points of ANNs are their appealing capacity to learn from examples, their distributed computation (which helps them tolerate partial failures to a certain extent) and the possibility, so often exploited, of using them as black-box models. This last characteristic is paradoxically one of their major weaknesses, given that in practice this autonomy of functioning involves no transfer of knowledge from or to the designer. With the exception of very specific architectures, the networks are forced to learn from scratch most of the time. The network works (in the sense that it solves a problem to a certain satisfaction), but the weights convey information as high-dimensional real vectors whose meaning with respect to the solution can be intricate. A significant part of the task is devoted to finding structure in the data and transforming it into a new hidden space (the space spanned by the hidden units) in such a way that the problem becomes easier (almost linearly separable in the case of the output layer). The internal workings of a neuron are thus obscure, because the weights have not been set to shape a previously defined (and considered adequate) similarity measure, but rather to adapt a general-purpose measure (such as the scalar product or the Euclidean distance) to the problem at hand.
Although with different formulations, we find in the literature some authors adopting a regression solution for the 3D/2D registration problem (e.g. Chou & Pizer, 2013; Hoff, Komistek, Stefan, & Walker, 1998). However, very few works with such a regression approach use real images in a clinically relevant context. To the best of our knowledge, only one work used a similar approach to partially solve the 3D/2D coronary registration problem specifically (Aksoy, Unal, Demirci, Navab, & Degertekin, 2013). This method decouples rotation and translation estimation into the frequency and spatial domains, respectively. In a prior step, the authors built a library of DRRs obtained by generating different rotational poses of CTA vessels. These templates are compared with the segmented X-ray vessels in the Fourier domain, and the closest DRR found is used to compute the similarity measure that estimates the translation component. As for the majority of 3D/2D coronary registration methods, the similarity measure (iteratively optimized) depends on the distance between 3D and 2D vessel centerlines (Ruijters et al., 2009).
Computational investigation of gene functions in the context of complex biological systems has been greatly promoted by the accumulation of high-throughput data. Among these, protein-protein interaction data have been exploited to identify disease-causing genes, based on the observation that genes implicated in the same or similar diseases tend to be located in a specific neighbourhood of the protein-protein interaction network [1,2,3]. The identification of genes involved in a specific disease has long been a challenge in the study of human genetics. In addition to traditional gene-mapping approaches, many computational methods based on gene functions have appeared, which were reviewed by Oti and Brunner in . Recently, a few computational approaches for candidate gene prioritization have been developed which exploit both protein-protein interactions and disease phenotypic similarities. Lage et al.  scored a candidate protein based on the involvement of its direct network neighbours in a similar disease; they also gave a new disease similarity measure and applied it to prioritize both the protein complex and the candidate disease gene within the complex. Kohler et al.  presented a method for prioritization of
In the algorithm based on the technique for order preference by similarity to ideal solution (TOPSIS), with |M| alternatives evaluated against |N| criteria, the decision problem is viewed as a geometric system with |M| points in an |N|-dimensional space . Here, the chosen candidate network is the one which has the shortest distance to the ideal solution and the longest distance to the negative ideal (worst-case) solution. The ideal solution is a “hypothetical solution” with the best value for each criterion, while the negative ideal solution is the opposite. TOPSIS was initially proposed for the vertical handoff decision in , but it has also been considered in other recent work such as , , and . In general, to compute the final ranking list, TOPSIS requires the following steps:
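As an illustrative sketch of these steps (vector normalisation, weighting, computation of the ideal and negative ideal solutions, Euclidean distances, and relative closeness), with hypothetical candidate-network scores, weights and function names that are assumptions for illustration rather than values from the cited works:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_criteria):
    """Rank |M| alternatives evaluated on |N| criteria with TOPSIS.

    decision_matrix : (M, N) array of raw scores
    weights         : length-N criterion weights
    benefit_criteria: length-N booleans; True means larger is better
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)

    # 1. Vector-normalise each column, then apply the criterion weights.
    V = w * X / np.linalg.norm(X, axis=0)

    # 2. Ideal and negative ideal ("hypothetical") solutions per criterion.
    ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))

    # 3. Euclidean distances of each alternative to both solutions.
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)

    # 4. Relative closeness: 1 = ideal solution, 0 = negative ideal.
    return d_worst / (d_best + d_worst)

# Toy example: three candidate networks scored on bandwidth (benefit)
# and delay (cost); the highest closeness wins the ranking.
scores = topsis([[100, 50], [80, 20], [60, 10]],
                weights=[0.5, 0.5],
                benefit_criteria=[True, False])
best = int(np.argmax(scores))
```

In this toy example the third candidate, with modest bandwidth but the lowest delay, ends up closest to the ideal solution.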
The artificial neural network A, based on wood basic density and days of drying, showed greater accuracy in estimating moisture content. Furthermore, the use of basic density as a parameter in the neural network architecture enables its application to a larger number of materials, broadening its utilization. Conversely, the artificial neural network B, based on clone and days of drying, can be used only for the clones employed in the present study. The errors in the estimates produced by both neural networks are due to the heterogeneous behavior of moisture loss during drying and to factors not assessed, such as wood anatomy. Artificial neural networks are efficient in predicting wood moisture content, and this methodology can be applied to monitor and control moisture in logs and lumber.
Overall, the K-Means algorithm offers the most appropriate characteristics in the given multi-casting use case. Not only its good results in terms of quality and performance but also its capability to handle nominal data make it the default choice for classification requests. Neural Gas is the preferred algorithm for large entity and group counts; however, it should not be used for clustering observations with nominal data. In case the number of groups is not yet known, Growing Neural Gas, despite its difficult parametrization, is the best choice. Fuzzy-C-Means is especially recommended if the number of groups remains small and the structure of the observation set is rather complex. Finally, the K-Fixed-Means algorithm should be chosen if cluster centers, i.e. group characteristics, are pre-determined and should only be marginally changed during execution.
In fiducial-based ECG biometrics research, the most common feature extraction methods are the extraction of characteristic points, such as intervals, amplitudes, angles, areas, linear combinations of features, Euclidean distances and slopes [5, 44]. Wavelet transforms , cosine transforms  or Fourier transforms  are also utilized for this purpose, as they can accurately capture frequencies and waveforms in the cardiac cycle. Fiducial features may be submitted to a dimensionality reduction technique, such as PCA  or Linear Discriminant Analysis (LDA) , or to dynamic time warping , which measures the similarity between sequences by aligning them through time. These methods may also be used in the classification module [50, 51]; however, NNs  and decision trees  are also deployed for these tasks. Figure 2.1 contains an illustration of commonly used fiducial features.
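The dynamic time warping step mentioned above can be sketched minimally as follows; the function name and the toy heartbeat signals are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    A cumulative-cost matrix D is filled so that D[i, j] holds the cost
    of the best alignment of a[:i] with b[:j]; warping lets one sequence
    be locally stretched or compressed relative to the other.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # stretch b
                                 D[i, j - 1],      # stretch a
                                 D[i - 1, j - 1])  # one-to-one match
    return D[n, m]

# A toy cardiac-cycle bump compared with a slightly time-shifted copy:
# the warping path absorbs most of the shift, so the distance stays
# small compared with a rigid point-by-point comparison.
t = np.linspace(0, 1, 100)
beat = np.exp(-((t - 0.50) ** 2) / 0.005)
shifted = np.exp(-((t - 0.55) ** 2) / 0.005)
d = dtw_distance(beat, shifted)
```

This tolerance to timing variation is exactly why DTW is useful for comparing heartbeats recorded at slightly different heart rates.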
However, most current approaches to automated quality control are based on rigid inspection strategies, i.e., fixed supervision checklists are invariably followed along the entire process. The checklists specify routine descriptions containing items to verify, test criteria, robot orientation/motion sequences, and image acquisition configurations (lens aperture, exposure time, electronic gain, etc.). The routine is executed without considering the actual process state and its possible changes; consequently, these supervision schemas may be significantly affected by process variations.
personalized services while ensuring a fair degree of privacy / non-intrusiveness. The goal of pervasive computing is to create ambient intelligence, reliable connectivity, and secure and ubiquitous services in order to adapt to the associated context and activity. To make this vision a reality, various interconnected sensor networks have to be set up to collect context information, providing context-aware pervasive computing with the capacity to adapt to dynamically changing environments. Wireless sensor networks (WSNs) can make detailed and reliable information available anytime and anywhere by monitoring, sensing, collecting and processing information from various environments and scattered objects . The flexibility, fault tolerance, high sensing fidelity, self-organization, low cost and rapid deployment characteristics of sensor networks make them ideal for many new and exciting application areas such as the military, environment monitoring, intelligent control, traffic management, medical treatment, the manufacturing industry, antiterrorism and so on [18,23]. Therefore, recent years have witnessed the rapid development of WSNs. In this paper, we address the issue of cross-layer networking for pervasive networks , where physical and MAC layer knowledge of the wireless medium is shared with the network layer in order to provide an efficient routing scheme that prolongs the network lifetime.
Text pre-processing: Before the words enter the neural network, a series of preliminary processing steps has to be performed. First, the punctuation marks are removed; then the numbers are identified and the abbreviations are expanded into full words. The next step is to fully diacritise the retrieved text to eliminate any ambiguity about each word’s pronunciation. The final step is to prepare the words as input vectors for the neural network. Since neural networks only recognize numerical inputs, the ASCII code of each character is taken and replaced with its corresponding binary representation. Next, the 0s are replaced with (-1)s to discriminate them from the trailing zeros that will be added later. The text is then ready to be processed and classified by the neural network.
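The encoding step described above can be sketched as follows; the function name, the fixed word length and the 8-bit character width are illustrative assumptions rather than details from the original system:

```python
def encode_word(word, bits_per_char=8, max_chars=10):
    """Turn a word into a fixed-length input vector for a neural network.

    Each character's ASCII code becomes `bits_per_char` binary digits,
    with every 0 bit mapped to -1 so that genuine data bits remain
    distinguishable from the trailing zeros used to pad short words.
    """
    vec = []
    for ch in word[:max_chars]:
        bits = format(ord(ch), f"0{bits_per_char}b")
        vec.extend(1 if b == "1" else -1 for b in bits)
    # Pad with plain zeros: they cannot be confused with the -1/+1 data bits.
    vec.extend([0] * (max_chars * bits_per_char - len(vec)))
    return vec

v = encode_word("ab")
# 'a' = 97 = 01100001 and 'b' = 98 = 01100010, followed by zero padding.
```

The -1/+1 mapping is exactly the trick the text describes: without it, a data bit of 0 and a padding 0 would be indistinguishable to the network.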
Dynamic link inference in heterogeneous networks requires more accurate initial clustering. High clustering quality is necessary for network analysis, but low computation speed is intolerable because of the large network scales involved. The accuracy of the LDCC algorithm is improved when both the heterogeneous and homogeneous data relations are explored. The CESC algorithm  is very effective for clustering homogeneous data using an approximate commute time embedding. A heterogeneous information network with a star network schema can be transformed into multiple compatible bipartite graphs from the compatibility point of view. When the relation between any two nodes of a bipartite graph is represented by the commute time, the relations of both heterogeneous and homogeneous data objects can be explored, and the clustering accuracy can also be improved. Heterogeneous information networks are large but very sparse; therefore, the approximate commute time embedding of each bipartite graph can be quickly computed using random mapping and a linear time solver. All of the indicator subsets in each embedding indicate the target dataset, and subsequently, a general model for clustering heterogeneous information networks is formulated based on all indicator subsets. All weighted distances between the indicators and the cluster centers in the respective indicator subsets are computed, so that all indicator subsets can be simultaneously clustered according to the sum of the weighted distances of all indicators for an identical target object. Based on the above discussion, an effective clustering algorithm, FctClus, based on the approximate commute time embedding for heterogeneous information networks, is proposed in this paper. Both the computation speed and the clustering accuracy of FctClus are high.
Abstract —In recent years, there have been major developments in, and deployment of, diverse mobile technology. Security issues in mobile computing now present significant challenges. The ability to move from one network to another, and from one provider to another, thus creating vertical and horizontal handoffs, has increased the complexity of mobile security. Many research groups, such as Hokey and Y-Comm, are working on the design of security architectures for 4G networks. Heterogeneous networks are the convergence of wired and wireless networks, diverse end user devices and other communication technologies which provide very high speed connections. Major security challenges in 4G heterogeneous networks are inherent in current internet security threats and IP security vulnerabilities. These challenges include IP address spoofing, user ID theft, theft of service, denial of service, and intrusion attacks. Therefore, it is necessary to design security solutions which are independent of the network, provider, and end user devices. Existing techniques in 4G heterogeneous network security have not achieved major mobile security requirements such as protecting the mobile equipment and the integrity of its hardware and software. They do not prevent access to the mobile data, and the mobile equipment can be used as an attack tool. In addition, current research on 4G heterogeneous network security does not consider a security management system based on the ITU-T M.3400 TMN management functions or any other related standards. In this paper, we propose a management system which is responsible for enforcing security policies and ensuring that security policies continue to be followed. The objective of this security management system is to prevent the mobile equipment from being abused or used as a malicious attack tool.
The proposed security management system is consistent with the security specifications defined by ITU-T recommendation M.3400 on TMN management functions. Finally, this paper presents a policy-based architecture for the security management system of 4G heterogeneous networks, focusing on the detection and prevention of malicious attacks. This architecture consists of an intelligent agent, a security engine, a security policies database, and a security administrator.
With the development of new networking paradigms and wireless protocols, nodes with different capabilities are used to form a heterogeneous network. The performance of this kind of network is seriously deteriorated by the bottlenecks inside it. In addition, because of application requirements, different routing schemes are needed for each particular application. This calls for a tool for designing protocols that avoid the bottlenecked nodes and adapt to application requirements. Polychromatic sets theory has the ability to do so. This paper demonstrates the application of polychromatic sets theory to route discovery and protocol design for heterogeneous networks. Extensive simulations show that nodes with high priority are selected for routing, which greatly increases the performance of the network. This demonstrates that a new type of graph theory can be applied to solve problems of complex networks.
Some networks, including biological networks, consist of hierarchical sub-networks / modules. Building on my previous study, in the present study a method for both identifying hierarchical sub-networks / modules and weighting network links is proposed. It is based on cluster analysis in which between-node similarity in sets of adjacent nodes is used. Two matrices, linkWeightMat and linkClusterIDs, are produced by the algorithm. Two links with both the same weight in linkWeightMat and the same cluster ID in linkClusterIDs belong to the same sub-network / module. Two links with the same weight in linkWeightMat but different cluster IDs in linkClusterIDs belong to two sub-networks / modules at the same hierarchical level. However, a link with a unique cluster ID in linkClusterIDs does not belong to any sub-network / module. A sub-network / module with a greater weight is a more densely connected sub-network / module. Matlab code for the algorithm is presented.
Neural synchrony has been linked to consciousness. Gap junctions, or electrical synapses, are open windows between adjacent neurons and have been shown to mediate gamma EEG/coherent 40 Hz and other synchronous activity (Dermietzel, 1998; Dicsi, 1989; Hormuzdi et al., 2004; Bennett and Zukin, 2004; LeBeau et al., 2003; Friedman and Strowbridge, 2003; Buhl et al., 2003; Rozental et al., 2000; Perez-Velazquez and Carlen, 2000; Galarreta and Hestrin, 1999; Gibson et al., 1999), which has in turn been related to consciousness (Hameroff, 2006). Gap junctions occur between axons and dendrites, between glia, between neurons and glia, between axons and axons, and between dendrites. Gap junctions bypass chemical synapses (Traub et al., 2002; Froes and Menzes, 2002; Bezzi and Volterra, 2001). Gap-junction-connected neurons have both continuous membrane surfaces and continuous cytoplasmic interiors, since ions, nutrients and other material pass through the gap junctions. Neurons connected by gap junctions are electrically coupled and depolarize synchronously (Kandel et al., 2000). A single neuron may have many gap junctions, and only some of these are open at a given time, with openings and closings regulated by cytoskeletal microtubules and/or phosphorylation via metabotropic receptor activity (Hatton, 1998). An increase in intracellular pH increases gap junctional coupling, and a decrease in pH decreases it. An increase in the cytoplasmic concentration of calcium can decrease gap junction coupling. In the ensuing discussion, neurons with a greater number of open junctions are taken as being more fit than those with fewer open junctions. Thus the fitness function η_i is defined as
To date, the scientific community has firmly rejected the Theory of Inheritance of Acquired Characteristics, a theory mostly associated with the name of Jean-Baptiste Lamarck (1744-1829). Though largely dismissed when applied to biological organisms, this theory found its place in a young discipline called Artificial Life. Based on two abstract models of the Darwinian and Lamarckian evolutionary theories built using neural networks and genetic algorithms, this research aims to give a notion of the potential impact of implementing Lamarckian knowledge inheritance across disciplines. To obtain our results, we conducted a focus group discussion between experts in biology, computer science and philosophy, and used their opinions as qualitative data in our research. After completing the above procedure, we found several implications of such an implementation in each of these disciplines. In synthetic biology, it means that we could engineer organisms precisely to our specific needs. At the moment, we can think of better drugs, greener fuels and dramatic changes in the chemical industry. In computer science, Lamarckian evolutionary algorithms have been used for quite some years, and quite successfully. However, their application in strong ALife can only be approximated based on the existing roadmaps of futurists. In philosophy, creating artificial life seems consistent with nature and even God, if there is one. At the same time, this implementation may contradict the concept of free will, which is defined as the capacity of an agent to make choices whose outcome has not been determined by past events. This study has certain limitations: a larger focus group and better-prepared participants would provide more precise results.
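The contrast between the two evolutionary schemes can be sketched minimally as follows, with a toy numeric "lifetime learning" step standing in for the neural networks of the actual study; all names, parameters and the toy fitness function are illustrative assumptions:

```python
import random

TARGET = [1.0, -2.0, 0.5]  # toy optimum the population should discover

def fitness(genome):
    """Negative squared distance to the target: 0 is perfect."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def local_learning(genome):
    """A 'lifetime' of learning: each gene moves 10% toward the target."""
    return [g + 0.1 * (t - g) for g, t in zip(genome, TARGET)]

def evolve(lamarckian, generations=60, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-3, 3) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        learned = [local_learning(g) for g in pop]
        # Selection is always on post-learning fitness; the Lamarckian
        # variant additionally writes the acquired traits back into the
        # genome that is passed on to the next generation.
        ranked = sorted(zip(learned, pop),
                        key=lambda p: fitness(p[0]), reverse=True)
        parents = [l if lamarckian else g for l, g in ranked[:pop_size // 2]]
        pop = [[g + rng.gauss(0, 0.05) for g in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(fitness(local_learning(g)) for g in pop)

lamarck_best = evolve(lamarckian=True)
darwin_best = evolve(lamarckian=False)
```

In the Lamarckian variant the traits acquired during the lifetime are inherited directly; in the Darwinian variant learning only influences selection (the Baldwin effect) while the genome itself is passed on unchanged.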
The paper addresses the problem of controlling a Heating Ventilation and Air Conditioning (HVAC) system with the purpose of achieving a desired thermal comfort level and energy savings. The formulation uses the thermal comfort, assessed using the predicted mean vote (PMV) index, as a restriction and minimises the energy spent to comply with it. This results in the maintenance of thermal comfort and in the minimisation of energy, which in most conditions are conflicting goals requiring an optimisation method to find appropriate solutions over time. A discrete model-based predictive control methodology is applied, consisting of three major components: the predictive models, implemented by radial basis function neural networks identified by means of a multi-objective genetic algorithm; the cost function that will be optimised to minimise energy consumption and maintain thermal comfort; and the optimisation method, a discrete branch and bound approach. Each component will be described, with special emphasis on a fast and accurate computation of the PMV indices. Experimental results obtained within different rooms in a building of the University of Algarve will be presented, both in summer and winter conditions, demonstrating the feasibility and performance of the approach. Energy savings resulting from the application of the method are estimated to be greater than 50%.
However, it can be argued that cyclic processes, such as gait, can be better represented by cyclic diagrams (Grieve, 1968). In this specific case, angle-angle diagrams known as cyclograms are used. These graphs are objective, reliable, and appropriate for statistical studies (Goswami, 2003). As this technique is based on geometrical figures, its values can easily be recognized. Hip-knee cyclograms represent most of the movement of the body (torso, hip, and legs), and therefore can provide useful parameters to differentiate several kinds of gait (Barton and Lees, 1997). Such parameters can be used in different ways for rehabilitation therapies, including those that make use of artificial intelligence.
believe it will suffice to point out the social cost of false positives, as the commitment to this mode of operation runs too deep. Unlike Athena, algorithmic rationality did not spring fully formed from the head of Zeus. It is built on the foundations of science and of computation and called forth by a society whose value system is ultimately calculative. Big data algorithms are a way of looking at the world, and as Tibetan Buddhist Sogyal Rinpoche points out: ‘how you look is how you see’ (Rinpoche, 1996). To expect that the consequences will be abated by regulating certain actions based on certain forms of pattern finding is like trying to hold back the tide. The challenge is to find a different vision, a different centre of gravity, that will produce an alternative unfolding of these new modes of perception. We are seeking to recover difference from the descendants of the Difference Engine. While we should be unsettled by the emerging problems of algorithmic apophenia, we could pivot our perspective to say, like Pasquinelli, that creativity and paranoia share a perception of a surplus of meaning, so apophenia could equally be about the invention of a future out of a meaningless present (transmediale, 2015). Mackenzie acknowledges the dark side, saying ‘Almost everything we know about the historical experience of action, freedom, collective becomings or transformations points in a different direction to the technologies themselves’ (Mackenzie, 2015), but he also sees the possibility of slippage in the unstable performativity of machine learning and asks ‘Could the production of prediction also increase the diversity of social production or inform new collectives? Since it is pre-individual in focus, the ‘‘unknown function’’ that generates the data might also diagram different forms of association’. How can we recover something truly different from the paranoid predictions of algorithmic governance?
I will outline two steps towards a position that has some traction for change. The first is to deflate the scientific and empirical hubris that, through the mechanism of prediction, is channelling the myth of progress into pre-emptive interventions in the present. The second is an inversion of tools like machine learning to serve a purpose that Ivan Illich described as ‘convivial’.