In this work, we assume a homogeneous WSN consisting of N sensor nodes that are uniformly and randomly distributed within a square area (measured in meters). The sink node is located at a fixed, known position. The problem considered in this paper is to gather the data generated by the sensor nodes at a fixed period (in seconds). Cluster heads are defined to receive data from all member nodes of their clusters and to transmit the aggregated data to the sink node, either directly or through other cluster heads. The network is homogeneous in that all sensor nodes are required to sense the same type of information. The goal of this algorithm is to present a strategy for defining intermediate cluster heads that minimizes the distance between the cluster heads and their member nodes, so that the total energy consumed in the WSN is reduced.
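A minimal sketch of this setup, with hypothetical values for the field size and the cluster-head positions: nodes are deployed uniformly at random, and each member is assigned to its nearest cluster head, which is the distance criterion the algorithm seeks to minimize.

```python
import math
import random

def deploy_nodes(n, side, seed=0):
    """Uniformly deploy n sensor nodes in a side x side square (illustrative setup)."""
    rng = random.Random(seed)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

def assign_to_nearest_head(nodes, heads):
    """Assign each node the index of its closest cluster head,
    minimizing the member-to-head distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [min(range(len(heads)), key=lambda h: dist(node, heads[h]))
            for node in nodes]

# Hypothetical deployment: 20 nodes in a 100 m x 100 m field, two cluster heads.
nodes = deploy_nodes(20, 100.0)
heads = [(25.0, 25.0), (75.0, 75.0)]
membership = assign_to_nearest_head(nodes, heads)
```

The head positions here are placeholders; in the paper's algorithm they would be the elected intermediate cluster heads.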
During the simulation, the ratio Th/Wl has been evaluated, and its behavior is depicted in Figs. 8 and 9. As can be seen, with the fuzzy-based approach proposed in this work, Th/Wl fluctuates between 30% and 70% with Gaussian membership functions in the star topology, and between 45% and 80% in the clustered topology. With the approach proposed by Sabitha et al., Th/Wl fluctuates between 37% and 64% in the star topology and between 38% and 71% in the clustered topology. In contrast, the values obtained without the FLC fluctuate from 30% to 75% and from 42% to 84% in the star and clustered topologies respectively, representing the best results in terms of the Th/Wl ratio, but at the expense of battery consumption. It should be noted, however, that the values obtained with the proposed FLC are acceptable, especially in application fields with a moderate variation of data (e.g., temperature, humidity, or light detection), where the most important thing is to prolong the battery lifetime as much as possible, rather than to ensure high Th/Wl performance. Conversely, the Th/Wl behavior obtained with the proposed FLC would not be appropriate in a context characterized by real-time constraints, in which the sleeping_time of the network nodes should not be increased too much, so as to ensure high values of the Th/Wl ratio.
The work presented in this paper lays the foundation for devising an efficient query-driven routing protocol for subsurface exploration, using the deployment strategy suggested in our previous work. It has also been able to exploit the mobility of sensors to a great extent. The current work shows how sensor mobility characteristics can be used to displace normal sensors as well as GCSs to form a new network layout for data aggregation and dissemination. GCSs that travel parallel to the r-strips accumulate data on their way and pass it on to the next-level r-strip, which leads to a better route for packets headed towards the base station. The proposed work is still in its initial phase of implementation, and we expect that in the future we will be able to respond to queries generated from the source.
The idea is to build clusters in such a way that we have only one cluster for each event being detected, with the detecting nodes as cluster members. Different strategies may be used to select the node with the coordinator role (cluster head). For instance, we can choose the node with the smallest id, the greatest degree, the largest residual energy, the shortest distance to the sink, or other metrics such as those suggested by Kochhal et al. Such information, used in the election process, is included in the node state that composes the network and neighborhood states defined in Section 3.1. For the sake of simplicity, we choose the node with the smallest id, because this strategy leads to a smaller communication cost during the election phase. Thus, our node state will be x_i = {id(v_i)} for every v_i ∈ V.
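With the smallest-id strategy, the election reduces to taking the minimum over the ids of the detecting nodes. A minimal sketch (the event names and ids below are illustrative):

```python
def elect_cluster_head(detecting_ids):
    """Elect the coordinator: the smallest id among the nodes detecting the event."""
    return min(detecting_ids)

# One cluster per detected event; its members are the detecting nodes.
events = {"event-A": [7, 3, 12], "event-B": [9, 15]}
heads = {e: elect_cluster_head(ids) for e, ids in events.items()}
```

Since every node knows only the ids exchanged during the election phase, this minimum can be computed with a single round of neighbor messages, which is why the communication cost stays small.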
The N-to-1 multipath discovery protocol was proposed by Wenjing Lou et al. In that paper, a hybrid multipath scheme is mainly used for secure and reliable data collection from a source node to the destination node. The N-to-1 multipath discovery protocol is used to find multiple node-disjoint paths from every sensor node to the base station; its main technique is a hybrid data collection scheme. The multipath discovery protocol has two phases: (i) branch-aware flooding and (ii) multipath extension of flooding. Branch-aware flooding improves the ability to find extra paths by limiting the messages sent between a sensor node and its neighbor nodes. The multipath extension of flooding finds more node-disjoint paths for each sensor node, at the cost of some extra message exchange.
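The protocol itself works by flooding, but its end result, a set of node-disjoint paths per sensor, can be illustrated with a simple greedy sketch: repeatedly search for a path and then block its intermediate nodes so that later paths cannot reuse them. The graph and node names below are hypothetical, and this is not the branch-aware flooding procedure itself.

```python
from collections import deque

def node_disjoint_paths(adj, src, dst):
    """Greedily collect node-disjoint paths from src to dst via repeated BFS.

    After each path is found, its intermediate nodes are blocked so that
    subsequent searches cannot reuse them, guaranteeing node-disjointness.
    """
    blocked = set()
    paths = []
    while True:
        parent = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                break
            for v in adj.get(u, []):
                if v not in parent and v not in blocked:
                    parent[v] = u
                    queue.append(v)
        if dst not in parent:           # no further disjoint path exists
            return paths
        path, u = [], dst
        while u is not None:            # walk back from dst to src
            path.append(u)
            u = parent[u]
        path.reverse()
        paths.append(path)
        blocked.update(path[1:-1])      # block intermediate nodes only
```

A greedy search like this may find fewer paths than a max-flow formulation, but it conveys the node-disjointness constraint the protocol enforces.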
Using Wireless Sensor Networks (WSNs) directly in the garden to collect real-time data on the current conditions of the garden, and cross-referencing this information with weather forecasts, evapotranspiration, and the garden's specifications, our proposed solution is able, through artificial intelligence algorithms, to predict/calculate the real water needs of that particular garden and to adjust the irrigation controller without human interaction. It is also able to detect how weather conditions, such as rain or strong winds, may affect the irrigation process, scheduling the best time to start. What distinguishes us from other solutions is that we perform this analysis per irrigation zone and not for the garden as a whole, since gardens can have different characteristics in their constitution and, therefore, different hydric needs.
The goal of anchor-based schemes is to provide partial information about the entire network to a localization algorithm. This way, the algorithm can extrapolate the coordinates of non-anchors based on this information. With anchor-based schemes, the network's coordinate system can be aligned with a global coordinate system, making it possible to obtain absolute coordinates such as latitude, longitude, and altitude for each target. Still, GPS receivers are expensive, and equipping anchors with them in large-scale networks might not be viable. Moreover, GPS requires LoS communication, which typically does not work indoors, as the signal cannot penetrate walls reliably and can be obstructed by environmental obstacles. The alternative of pre-programming nodes with their locations is impractical for the same scalability reasons, or even impossible, depending on the accessibility of the area of interest or the method of deployment. Another downside is that accuracy relies heavily on the number of anchor nodes and their geographic placement in the network. It has been found that localization accuracy improves if anchors are deployed in a convex hull around the network, but placing additional ones at its centre is also beneficial. Yet, the big advantage of using anchor nodes is that they greatly simplify the task of obtaining coordinates for the nodes with unknown locations.
Wireless sensor networks have been used in various applications for the monitoring and collection of environmental data. A wireless sensor network is inexpensive and consists of a large number of sensor nodes. Access to these sensor nodes is organized via a special gateway called the base station, which sends queries into the wireless sensor network and retrieves the required data.
We consider a hexagonal mesh constituting a triangular network of sensor nodes. An object is tracked only inside this network. As soon as an object's movement is detected, the four sensors closest to it are identified; their node numbers are recorded, their spatial coordinates are generated, and their distances from the object are calculated using the distance formula. The object is then tracked using the three closest sensors, which calculate the spatial coordinates of the object using the trilateration technique described above. In this technique, all points in the vicinity (x ± 5, y ± 5) are considered, resulting in an inherent 5% error. This has been done to take into account the deterioration of signal strength with distance, which is shown below.
In Fig. 5(a), the application deadline is met in almost all cases. The complex estimation presents more scalable behavior with respect to the percentage of nodes generating data, because this estimation better infers the data traffic behavior during routing. Fig. 5(b) shows the simple and complex estimations using the random and central sample reduction algorithms. In all cases, the distribution approximation is ≤ 40%. The central sampling algorithm with the complex estimation has a smaller error, because the complex estimation performs the maximum reduction sooner (the central algorithm is executed once or twice). This result shows that because fewer successive reductions are performed, more representativeness is kept in the reduced data, i.e., data degradation is mitigated.
Within the scope of this thesis, we are particularly interested in the security and reliability concerns of large-scale networks (on the order of hundreds to thousands of sensor nodes), as two complementary dimensions of dependability solutions for WSNs. In this context, the main objective is to design, implement, and assess, in an experimental environment, a secure intrusion-tolerant routing service for large-scale WSNs. The proposed solution combines multiple disjoint routes, selected and established in an ad hoc way over multiple Base Stations, with data consensus mechanisms performed by those Base Stations, as a mechanism to support intrusion tolerance properties. Thus, the design of the routing protocol follows a resilient approach, using disjoint multi-path routes established from each sensor to each different Base Station (BS) as a preventive intrusion-tolerant measure, constructing forwarding tables at each node to facilitate communication between the sensor nodes and the multiple base stations. These routes are formed while the WSN is in its self-organization process, following an ad hoc model. Data received by the multiple base stations are subjected to a data-consensus verification mechanism implemented by a pro-active intrusion-tolerant consensus protocol. After this verification, the data can be used safely by the final applications.
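The intuition behind the data-consensus verification can be illustrated with a simple quorum check over the values delivered to the different base stations: a value forwarded over a compromised route is outvoted by the copies arriving over the disjoint routes. This is only a sketch with an assumed quorum parameter; the actual mechanism is a pro-active intrusion-tolerant consensus protocol.

```python
from collections import Counter

def data_consensus(readings, quorum):
    """Accept a reading only if at least `quorum` base stations report the same value.

    readings: values delivered to the base stations over the disjoint routes.
    Returns the agreed value, or None if no value reaches the quorum.
    """
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= quorum else None

# Four base stations; one disjoint route may deliver a tampered value.
readings = [21.5, 21.5, 21.5, 99.0]
accepted = data_consensus(readings, quorum=3)
```

With f potentially compromised routes, a quorum of f + 1 matching copies over node-disjoint paths suffices for this majority-style check, which is what motivates establishing a route to each of the multiple base stations.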
Many routing algorithms have been proposed for sensor networks. In some of these algorithms, each node may have more than one route to the sink node, one of which is selected on the basis of a series of criteria; among these, the level of energy consumption along the route can be a proper criterion. Energy saving can be taken into account in two ways: (1) the energy consumption is calculated for each separate route, and then the route with minimal energy consumption is chosen; and (2) data aggregation is based on learning automata, which prevents extra packets from being sent in the network by identifying sensors generating identical data and by activating sensor nodes periodically, saving a large amount of energy while increasing the network lifetime. A solution has also been provided for data aggregation and routing with in-network aggregation in wireless sensor networks, in order to maximize network lifetime by using in-network processing techniques and data aggregation. The relationship between security and the data aggregation process within wireless sensor networks has been investigated as well.
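Criterion (1) amounts to a minimum over the candidate routes of their total per-link energy cost. A minimal sketch, where the node positions and the cost model (energy growing with squared link distance) are illustrative assumptions, not taken from the cited works:

```python
def select_route(routes, link_cost):
    """Pick the candidate route with the minimal total energy consumption.

    routes: list of routes, each a sequence of node ids from source to sink.
    link_cost(u, v): energy needed to transmit over the link (u, v).
    """
    def total_energy(route):
        return sum(link_cost(u, v) for u, v in zip(route, route[1:]))
    return min(routes, key=total_energy)

# Illustrative topology: node id -> position on a line.
positions = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (5, 0)}

def link_cost(u, v):
    (xu, yu), (xv, yv) = positions[u], positions[v]
    return (xu - xv) ** 2 + (yu - yv) ** 2  # energy grows with squared distance

best = select_route([[1, 2, 3], [1, 4, 3]], link_cost)
```

The short-hop route wins here because radio energy typically grows super-linearly with distance, which is exactly why per-route energy accounting is a sensible selection criterion.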
Wireless Sensor Networks (WSNs) are quite useful in many applications, since they provide a cost-effective solution to many real-life problems. However, they appear to be more prone to attacks than wired networks. They are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, prompting a range of fundamental research challenges (Perrig et al., 2004); an attacker can easily eavesdrop on, inject, or alter the data transmitted between sensor nodes. Security allows WSNs to be used with confidence and maintains the integrity of data. Providing security in wireless sensor networks is pivotal, because sensor nodes are inherently limited in resources such as power, bandwidth, computation, and storage. A survey of security issues in ad hoc and sensor networks and related work can be found in Djenouri and Khelladi (2005), Gaubatz et al. (2004), and Perrig et al. (2002). All approaches to security analysis in WSNs are scenario dependent, e.g., an agricultural application, habitat monitoring, or a remote operations and control domain (Zurina, 2009; Sundararajan and Shanmugam, 2010; Mbaitiga, 2009). In all of the above, the operations are sensitive to possible attacks, and these works have not concentrated on the key
CodeBlue (developed by Harvard University) is a wireless infrastructure intended to provide a common protocol and software structure for disaster-scenario response, allowing wireless monitoring of patients and rescuers. CodeBlue is a self-organized platform that is easy to connect to, thanks to its ad hoc architecture, and it integrates different kinds of wireless sensor device nodes. The system integrates low-power wireless sensors and offers services for the establishment of credentials, handoff, location tracking, and in-network filtering and aggregation of the data produced by the sensors. Its simple interface allows emergency medical technicians to request data from a group of patients. CodeBlue is designed to operate across a wide range of network densities and on a range of wireless devices, from resource-limited nodes to PDAs and more powerful PCs. CodeBlue includes several types of sensors (pulse oximetry, ECG, and motion sensors) and is used together with commercial ZigBee platforms such as Mica2, MICAz, and Telos. Researchers consider that such platforms respond well in research settings but have many failings in actual scenarios, due to the dimensions of the modules and batteries; a support platform has also been developed for lighter sensors, to be used in accident-monitoring modules in a non-invasive way.
In a cumulative histogram, the bins count the occurrences (or relative frequency) of values that fall in the range of those bins or the preceding ones. It thus becomes easier to see what percentage of the sample values is equal to or smaller than some other value. For instance, we can see that about 80% of the sample values are equal to or smaller than 15. If this were a sample of latencies in the system, then we could use this non-parametric analysis as statistical evidence to drive the adaptation. For instance, we could make sensor nodes wait for some event only up to 15 time units, and in that way know that timing failures would occur for only approximately 20% of the events. That is, we would be using the empirical distribution to guide the process of adaptation.
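Reading the 15-time-unit threshold off the empirical distribution can be sketched directly; the latency sample below is made up so that exactly 80% of it is ≤ 15.

```python
import math

def empirical_quantile(sample, q):
    """Smallest value v such that at least a fraction q of the sample is <= v."""
    s = sorted(sample)
    return s[max(0, math.ceil(q * len(s)) - 1)]

def fraction_leq(sample, v):
    """Fraction of the sample equal to or smaller than v
    (the cumulative histogram read at v)."""
    return sum(1 for x in sample if x <= v) / len(sample)

# Illustrative latency sample: 8 of 10 values are <= 15.
latencies = [5, 7, 9, 11, 13, 15, 15, 15, 20, 30]
timeout = empirical_quantile(latencies, 0.8)   # wait only up to this long
```

Setting the wait time to this empirical 80th percentile is precisely the adaptation described above: timing failures are then expected for roughly the remaining 20% of events.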
sensor’s battery life). In such settings, only some targets can communicate directly with anchors; therefore, cooperation between any two sensors within communication range is required in order to acquire a sufficient amount of information to perform localization. We design novel distributed hybrid localization algorithms based on the SOCP relaxation and the GTRS framework that take advantage of combined RSS/AoA measurements with known transmit power to estimate the locations of all targets in a WSN. The proposed algorithms are distributed in the sense that no central sensor coordinates the network, all communication occurs exclusively between two incident sensors, and the data associated with each sensor are processed locally. First, the non-convex and computationally complex ML estimation problem is broken down into smaller sub-problems, i.e., the local ML estimation problem for each target is posed. By using the RSS propagation model and simple geometry, we derive a novel local non-convex estimator based on the LS criterion, which tightly approximates the local ML one for small noise levels. Then, we show that the derived non-convex estimator can be transformed into a convex SOCP estimator that can be solved efficiently by interior-point algorithms. Furthermore, following the SR approach, we propose a suboptimal SR-WLS estimator based on the GTRS framework, which can be solved exactly by a bisection procedure. We then generalize the proposed SOCP estimator for known transmit powers to the case where the target transmit powers are different and not known.
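The bisection procedure for the GTRS exploits the fact that its optimality condition reduces to finding the root of a strictly decreasing function of the Lagrange multiplier. A generic sketch of such a bisection follows; the function phi in the usage is a stand-in for illustration, not the actual GTRS condition.

```python
def bisect_root(phi, lo, hi, tol=1e-9, max_iter=200):
    """Root of a monotonically decreasing phi on [lo, hi],
    assuming phi(lo) > 0 > phi(hi)."""
    assert phi(lo) > 0.0 > phi(hi)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if hi - lo < tol:
            break
        if phi(mid) > 0.0:
            lo = mid          # root lies to the right of mid
        else:
            hi = mid          # root lies at or to the left of mid
    return 0.5 * (lo + hi)
```

Monotonicity is what makes the GTRS solvable *exactly* by this scheme: each halving of the interval provably brackets the unique root, so the cost is logarithmic in the desired precision.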
Wireless body area networks (WBANs) have received a lot of attention from both academia and industry, due to the increasing need for ubiquitous computing in eHealth applications, the continuous advances in the miniaturization of electronic devices, and ultra-low-power wireless technologies. In these networks, various sensors are attached to clothes or to the human body, or even implanted under the skin, for real-time health monitoring of patients, in order to improve their independent daily lives. The energy constraints of the sensors and the vital, large amount of data collected by WBAN nodes require powerful and secure storage, and a query processing mechanism that takes into account both real-time and energy constraints. This paper addresses these challenges and proposes a new architecture that combines cloud-based WBANs with statistical modeling techniques, in order to provide a secure storage infrastructure and to optimize real-time user query processing in terms of energy minimization and query latency. Such a statistical model provides good approximate answers to queries with a given probabilistic confidence. Furthermore, the combination of the model with the cloud-based WBAN allows a query processing algorithm that uses the error tolerance and the probabilistic confidence interval as query execution criteria. The performance analysis and the experiments, based on both real and synthetic data sets, demonstrate that the new architecture and its underlying proposed algorithm optimize real-time query processing to achieve minimal energy consumption and query latency, and provide a secure and powerful storage infrastructure.
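The model-based side of such query processing can be sketched as follows: answer from the statistical model only when its confidence interval is within the user's error tolerance, and otherwise fall back to querying the WBAN node itself. The Gaussian-style model, function names, and thresholds below are illustrative assumptions, not the paper's exact formulation.

```python
import statistics

def answer_query(history, error_tol, z=1.96):
    """Answer from the model when its confidence interval is tight enough.

    history: recent readings kept in the cloud for this sensor.
    Returns (answer, from_model): the model mean if the z-confidence
    half-width fits within error_tol, else (None, False), meaning the
    (more energy-costly) sensor node must be queried directly.
    """
    mean = statistics.fmean(history)
    half_width = z * statistics.stdev(history) / len(history) ** 0.5
    if half_width <= error_tol:
        return mean, True
    return None, False
```

With a stable signal (e.g., body temperature) the interval is narrow, so most queries are absorbed by the model, which is precisely where the energy and latency savings come from.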
We ran the simulation on a sensor network with 16 nodes and ant agents. The work can be extended by collecting all the data sets from the sensor nodes and suggesting a plan for how to recover from a particular jamming attack. We can also anticipate the solution by predicting the jamming attack itself, such as ELINT. This automatically increases network performance and decreases packet loss. The formulations of DoS attacks at each layer can be combined, and defenses against them optimized, using a simple optimization algorithm. A sensor network with a predictive nature could be applied to many applications where decisions play an important role, such as medical controllers, military applications, traffic monitoring, and others.
In this paper, we proposed the Decentralized Lifetime Maximizing Tree with a clustering construction algorithm. Clustering on the basis of distance ensures close proximity of the nodes and thus reduces data transfer delays. Energy conservation by waking nodes once instead of twice leads to a further reduction in data transfer delays, thus utilizing the available energy effectively. Efficient utilization of bandwidth is achieved by allotting frequencies from the free pool, or by withdrawing half of the frequencies from a cluster with low data transfer and assigning them to another. Efficient utilization of energy is achieved by using the HyMAC technique. Simulation results show an enhancement of the network lifetime, a reduction in energy consumption, and a minimization of the average delay.
As future work, we intend to combine the proposed application-level solution with lower-level ones, for example by considering a real-time-enabled signal processing method. In this case, not only is the data from a single source reduced, but similar data from different sources is also reduced, resulting in a more efficient solution. Another direction is to use feedback information to enable the source nodes to perform the reduction sooner. In addition, we intend to improve the complexity of the central sampling algorithm to O(n) by considering rank selection algorithms.
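Rank selection in expected O(n) time is exactly what quickselect provides; a minimal recursive sketch of the idea:

```python
def quickselect(seq, k):
    """Return the k-th smallest element (0-based) of seq in expected O(n) time."""
    pivot = seq[len(seq) // 2]
    less = [x for x in seq if x < pivot]
    equal = [x for x in seq if x == pivot]
    if k < len(less):
        return quickselect(less, k)       # answer lies among the smaller values
    if k < len(less) + len(equal):
        return pivot                      # the pivot itself has rank k
    return quickselect([x for x in seq if x > pivot],
                       k - len(less) - len(equal))
```

Each recursive call discards a constant fraction of the data on average, giving the expected-linear bound; a randomized pivot (or median-of-medians for a worst-case guarantee) is the usual refinement.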