Supraliminal channel – A supraliminal channel encodes information in the semantic content of the cover data, generating innocent-looking communication in a manner similar to mimic functions. These low-bit-rate channels are robust against active wardens and can be combined with subliminal channels to achieve steganographic public-key exchange.

Padding (up to 31 bits/packet) – This is the most common field for placing covert data. Since the padding field has no protocol significance and is normally stuffed with dummy zero bits, the receiver only needs to extract the covert data from the padding before the legitimate TCP handler processes the packet; this field does not even require the covert users to share a common information-encoding scheme. The length of the padding depends on the presence of the options field in the TCP header: in its absence it is up to 31 bits, otherwise 8 bits per packet.

Initial sequence number (ISN, 32 bits/connection) – TCP employs a "three-way handshake" for protocol negotiation, and the ISN serves as a convenient medium because it ensures reliable delivery in case of packet loss under various circumstances. In this method the sender generates an ISN that encodes the actual covert data. The covert receiver extracts this field and withholds the ACK, causing the covert sender to retransmit the packet with different covert data embedded in the ISN. This is the simplest way of using this field for the placement of covert data.
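The ISN placement described above can be sketched as follows. This is an illustrative assumption, not the paper's exact encoding: one covert byte rides in the low-order 8 bits of the 32-bit ISN while the remaining bits are randomized so the ISN still looks ordinary; the function names are hypothetical.

```python
import os

def embed_isn(covert_byte: int) -> int:
    """Build a 32-bit ISN whose low 8 bits carry the covert byte;
    the remaining bits are random so the ISN still looks normal.
    (Illustrative placement, assumed for this sketch.)"""
    assert 0 <= covert_byte < 256
    high = int.from_bytes(os.urandom(4), "big") & ~0xFF  # clear low 8 bits
    return (high | covert_byte) & 0xFFFFFFFF

def extract_isn(isn: int) -> int:
    """Covert receiver: recover the covert byte from the observed ISN."""
    return isn & 0xFF

isn = embed_isn(0x5A)
recovered = extract_isn(isn)
```

Withholding the ACK, as the text describes, would then make the sender retransmit with a fresh ISN carrying the next covert byte.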
In the past decade, wireless sensor networks (WSNs) have been widely studied and developed. A wireless sensor network is composed of a distributed collection of sensor nodes with limited capabilities. By having the nodes operate cooperatively, different applications such as environmental monitoring, military surveillance, search-and-rescue operations, medical care and so on can be realized. Among the different physical standards for sensor networks, Zigbee, which is based on the IEEE 802.15.4 standard, is one of the most promising technologies. Zigbee refers to a suite of communication protocols. As shown in Fig. 1, its PHY and MAC layers, which are responsible for radio transmission and PAN (Personal Area Network) association/disassociation respectively, are defined by the IEEE 802.15.4 standard. On the other hand, the Zigbee Alliance defines the AP layer and the NWK layer, which control/manage application objects and decide the network topology, respectively. Zigbee aims to form a low data rate network that
Abstract: It is important to increase the Wireless Sensor Network (WSN) lifetime, due to its limited energy resources, while meeting the constraints of applications. Recent advances in In-Network Processing (INP) motivate many WSN applications that are based on multirate and distributed signal processing, and therefore require the support of rate-based routing as well as MAC and link layer designs to maximize the WSN lifetime. We propose a new scheme called "Rate Distribution (RateD)", in which the application rate constraints are distributed in the WSN based on an optimized routing scheme. An optimal RateD is achieved by forming optimal data flows under rate constraints, which is an NP-complete problem. To reduce the complexity, a near-optimal solution is formed and analyzed, and a practical rate-based routing selection based on rate assignment is also proposed to achieve effective rate distributions. Simulations show that this scheme significantly extends the WSN lifetime for INP applications.
The block diagram of the soft sensor strategy based on the process mathematical model with multi-rate control is shown in Figure 2. Unlike the strategies described in [28,38], in which the model data was used to compensate for packet losses and the model is updated directly, in this paper an online network update scheme is developed and the model data is used mainly to reduce energy consumption. In this network update scheme, the soft sensor running in parallel with the physical process is updated in real time with the same control signal data transmitted on the network to the actuators (as opposed to the case in which the control signal update is done directly, without considering the network transmission). The advantage is that the effects of the network delays (in the transmission of the control signal) and of the system discretization (the sensor sampling period) on the real process are considered in the mathematical model, providing more reliable virtual data.
With the ever-increasing volume of wireless data applications, considerable effort has recently been focused on the design of distributed explicit rate schemes based on Network Utility Maximization (NUM) for wireless multi-hop mesh networks. This paper describes a novel wireless multi-hop multicast flow control scheme for wireless mesh networks over 802.11, based on a distributed self-tuning Optimal Proportional plus Second-order Differential (OPSD) controller. The control scheme, which is located at the sources in the wireless multicast network, ensures short convergence times by regulating the transmission rate. We further analyze the theoretical aspects of the proposed algorithm. Simulation results demonstrate the efficiency of the proposed scheme in terms of fast response time, low packet loss and low error ratio.
a Liquid Crystal Display (LCD) (128x64 px, monochrome) to view information (limited by the screen resolution) through a graphical view, plus some added functionalities such as a calendar. Communication is supported via Bluetooth, providing a link to a cellular phone or a Personal Computer (PC). A diverse range of sensors can be used, such as light, motion, temperature and audio, and eWatch provides visual, tactile and audio notification. Despite being a wrist device, it provides ample processing capability with multi-day battery life, enabling continuous data sensing and user studies. Reeves et al. present an approach with a more specific purpose. Remote monitoring as an active element of care provision packages for older adults has the potential to significantly augment traditional social care. Such monitoring is performed through integrated ambient and body sensing in order to capture specific patient data. The aim of the project is to achieve an environment that can adapt to the patient's lifestyle and meet all the needs of the individual concerned. It is important to underline that each person has different threshold values (minimum measurement values), so each patient's system must be configured for those measurements. Threshold values provide flexibility and support a broad range of individual scenarios. Data can be viewed on a mobile device or on a PC located at the medical base.
The neighbor manager provides the decision-making levels with the information required for routing. It runs the HELLO protocol and manages the neighbor table. The neighbor table assigns an entry to each neighbor node, which includes all information related to that node, such as its position, residual energy, estimated hop count to the sink, neighboring nodes in the direction of the sink, the transmission energy required to reach it, estimated packet reception ratio, etc. The HELLO protocol consists of the periodic broadcast of HELLO packets. These packets are used to update existing entries and to delete entries when neighboring nodes break down, which is detected when no HELLO packet is received within a defined period of time (timeout). The neighbor manager is the first module that receives a packet from the higher layers. It provides the routing module with all the information it needs, such as the set of nodes ensuring positive progress and the current values of its required parameters.
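A minimal sketch of the neighbor-table bookkeeping described above. The field set is taken from the text; the timeout value, units and class names are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

HELLO_TIMEOUT = 3.0  # seconds without a HELLO before an entry is purged (assumption)

@dataclass
class NeighborEntry:
    position: tuple          # (x, y) coordinates of the neighbor
    residual_energy: float   # remaining energy reported by the neighbor
    hops_to_sink: int        # estimated hop count to the sink
    last_hello: float = field(default_factory=time.time)  # last HELLO timestamp

class NeighborManager:
    def __init__(self):
        self.table = {}  # node_id -> NeighborEntry

    def on_hello(self, node_id, position, energy, hops):
        """Create or refresh an entry when a HELLO packet arrives."""
        self.table[node_id] = NeighborEntry(position, energy, hops)

    def purge(self, now=None):
        """Delete neighbors whose HELLO has not been heard within the timeout."""
        now = time.time() if now is None else now
        self.table = {nid: e for nid, e in self.table.items()
                      if now - e.last_hello <= HELLO_TIMEOUT}
```

The periodic HELLO broadcast itself would simply call `on_hello` on each receiving node, with `purge` run on a timer.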
To this end, it is crucial to strike a balance between the pros and cons of using such greedy protocols, so as to guarantee the best security service while minimizing the overall energy consumption. Our work sheds light on the cloning attack, which consists in cloning the target sensors and transmitting faulty data to the destination. Such an attack can affect the performance of IEEE 802.15.6-2012, as the cloned sensor will always gain access to the channel because it possesses a high priority. In fact, a biosensor with a high priority value is considered to be transmitting emergent data; therefore its access to the channel has to be immediate compared with regular data. This differentiation is highlighted when the CSMA/CA mechanism is employed to regulate access to the channel. A backoff counter is selected within a contention window interval whose maximum and minimum values depend on the measured data. The backoff counter is decremented for each idle channel detection performed by Clear Channel Assessment (CCA), when
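The priority-dependent backoff behaviour described above can be sketched as follows. The contention-window bounds per priority are hypothetical placeholders for illustration, not the actual 802.15.6 table.

```python
import random

# Hypothetical (CWmin, CWmax) per user priority; higher priority gets a
# smaller window and thus, on average, earlier channel access.
CW_BOUNDS = {0: (16, 64), 3: (8, 32), 7: (1, 4)}

def pick_backoff(priority: int) -> int:
    """Draw a backoff counter uniformly from the contention window."""
    cw_min, cw_max = CW_BOUNDS[priority]
    return random.randint(cw_min, cw_max)

def contend(priority: int, channel_idle) -> int:
    """Decrement the backoff counter once per idle CCA slot; return the
    number of slots waited before transmission may start."""
    backoff = pick_backoff(priority)
    slots = 0
    while backoff > 0:
        if channel_idle():   # one Clear Channel Assessment
            backoff -= 1
        slots += 1
    return slots
```

A cloned sensor reporting the highest priority would repeatedly draw from the smallest window, which is exactly the unfair channel capture the text warns about.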
In this work, we show how we can design a routing protocol for wireless sensor networks (WSNs) to support an information-fusion application. Regarding the application, we consider that WSNs apply information fusion techniques to detect events in the sensor field. Particularly, in event-driven scenarios there might be long intervals of inactivity. However, at a given instant, multiple sensor nodes might detect one or more events, resulting in high traffic. To save energy, the network should be able to remain in a latent state until an event occurs; then the network should organize itself to properly detect and notify the event. Based on the premise that we have an information-fusion application for event detection, we propose a role assignment algorithm, called Information-Fusion-based Role Assignment (InFRA), to organize the network by assigning roles to nodes only when events are detected. The InFRA algorithm is a distributed heuristic for the minimal Steiner tree, and it is suitable for networks with severe resource constraints, such as WSNs. Theoretical analysis shows that, in some cases, our algorithm has an O(1)-approximation ratio. Simulation results show that the InFRA algorithm can use only 70% of the communication resources spent by a reactive version of the Centered-at-Nearest-Source algorithm.
higher data aggregation rates. Also, Figure 3.5(b) shows that DAARP needs only 50% of the control messages used by InFRA in the occurrence of 6 events and, on average, only 29% of the control messages used by InFRA to build the routing structure. Thus, for more than one event, DAARP is more efficient than SPT and InFRA, as shown
The point whose distance from all three sensors is the least is taken as the estimated location of the object. These sensors measure the object's location at one-second intervals. The present technique employs trilateration to estimate the location of the object with three sensors at the end of every second. It also employs the same approximation algorithm as before, in which the difference function is minimized. The difference function is calculated as
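The minimization described above can be sketched as follows. The difference function sums the squared mismatches between the computed and measured distances to the three sensors, and a simple gradient descent finds the point minimizing it; the sensor coordinates, learning rate and iteration count are illustrative assumptions.

```python
import math

def difference(p, sensors, ranges):
    """Sum of squared differences between computed and measured distances."""
    return sum((math.dist(p, s) - r) ** 2 for s, r in zip(sensors, ranges))

def trilaterate(sensors, ranges, steps=2000, lr=0.1):
    """Minimize the difference function by gradient descent, starting
    from the centroid of the three sensors."""
    x = sum(s[0] for s in sensors) / len(sensors)
    y = sum(s[1] for s in sensors) / len(sensors)
    for _ in range(steps):
        gx = gy = 0.0
        for (sx, sy), r in zip(sensors, ranges):
            d = math.dist((x, y), (sx, sy)) or 1e-9  # avoid division by zero
            k = 2.0 * (d - r) / d                    # d/dx of (d - r)^2 factor
            gx += k * (x - sx)
            gy += k * (y - sy)
        x, y = x - lr * gx, y - lr * gy
    return x, y

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_p = (3.0, 4.0)
ranges = [math.dist(true_p, s) for s in sensors]  # noise-free measurements
est = trilaterate(sensors, ranges)
```

With noise-free ranges the estimate converges to the true position; with noisy ranges the same minimization yields the least-squares point the text refers to.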
From the three strategies, considering the application restrictions and the advantages of online data processing, we focus on sensor data stream reduction. Regarding the WSN application restrictions, its use is motivated by three main factors. First, data transmission requires more energy than data measurement; hence, reducing the data transmitted reduces the energy spent. Second, in order to reduce the data, the sensor node needs to perform constant and fast local processing of large amounts of data, which calls for simple and smart reduction strategies. Reduction strategies based on data streams are suitable here, since they process the data locally and independently of previous data, avoiding complete data storage and/or preprocessing. Finally, as bandwidth is reduced, sending large amounts of data can be problematic, causing excessive delay in response time and invalidating the data.
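As an illustration of the kind of simple, local reduction strategy the passage calls for, a dead-band filter forwards a reading only when it deviates from the last transmitted value by more than a threshold; the threshold value is an assumed parameter, not one from the text.

```python
def reduce_stream(samples, threshold=0.5):
    """Online dead-band filter: transmit a reading only when it differs
    from the last transmitted value by more than the threshold. Each
    sample is processed once and discarded, so no stream storage or
    preprocessing is needed."""
    sent = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            sent.append(s)   # would be transmitted over the radio
            last = s
    return sent

readings = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]
reduced = reduce_stream(readings)
```

Here only three of the six readings would be transmitted, trading a bounded approximation error for radio energy.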
In Fig. 7 we add a new layer to the wireless sensor architecture. The sensor network transfers the information to the base station, and the base station is directly connected to the user or the middleware. In the previous architecture no security layer is present, so there is a clear demand for a security layer in the wireless sensor network. In this paper we therefore present a Kerberos authentication scheme to protect the wireless sensor network from unauthorized users. By adding the security layer to the wireless sensor network we can protect the network from different security threats. The base station cannot be accessed directly: the user has to authenticate himself/herself with the Kerberos server and then obtain a ticket to access the base station and retrieve the information provided by the sensors.
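The ticket flow can be sketched as follows. This is a minimal HMAC-based illustration of the issue-then-verify idea, not the paper's actual Kerberos protocol; the key, lifetime and field names are assumptions.

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret shared between the authentication server and the
# base station (in real Kerberos this role is played by the service key).
BASE_STATION_KEY = b"base-station-secret"

def issue_ticket(user: str, lifetime: float = 300.0) -> dict:
    """Authentication server: issue a time-limited ticket the base
    station can verify without contacting the server again."""
    body = {"user": user, "expires": time.time() + lifetime}
    mac = hmac.new(BASE_STATION_KEY,
                   json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def verify_ticket(ticket: dict) -> bool:
    """Base station: accept only unexpired tickets with a valid MAC."""
    expected = hmac.new(BASE_STATION_KEY,
                        json.dumps(ticket["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["mac"]) and \
        ticket["body"]["expires"] > time.time()
```

A user without a valid ticket, or one who tampers with the ticket body, is rejected before any sensor data is served.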
The hierarchical structure of TPSN provides scalability. The synchronization of a node depends on its parent in the hierarchical structure. Therefore, even if the number of nodes in the network increases, high synchronization accuracy can still be achieved. Since the hierarchical structure covers the entire network based on a root node, the whole network can be synchronized to the same time reference. As a result, network-wide synchronization is possible. In addition, TPSN requires each node to exchange timing information with its parent in the hierarchical structure. Consequently, in TPSN, each node has to exchange synchronization information with only a single node, and the protocol ensures that it is synchronized with all the remaining nodes in its neighborhood. Moreover, the synchronization cost is relatively low compared to NTP.

TPSN requires a hierarchical structure to exist for synchronization. The maintenance of this structure in the case of failed nodes increases the energy consumption. The hierarchical structure also prevents accurate synchronization of mobile nodes: since the connectivity of nodes changes as they move, the hierarchical structure needs to be re-formed accordingly. Moreover, the synchronization procedure of TPSN is based on adjusting the clocks according to the parent nodes in the hierarchy. This increases the cost of synchronization compared to other methods, in which the relative offsets of the neighbor nodes are stored and the time is translated without adjusting the physical clock. When multiple root nodes are used in large networks, each cluster can be synchronized to a different reference time. As a result, the protocol forms islands of time. To prevent this, the root nodes should be synchronized in advance; of course, this increases the overall synchronization cost if the root nodes are located far from each other. Furthermore, multi-hop synchronization is not supported, since nodes synchronize only to their parent node.
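The pair-wise exchange TPSN performs between a node and its parent can be written down directly. In the classic two-way message exchange, the node sends at t1 (its own clock), the parent receives at t2 and replies at t3 (parent clock), and the node receives the reply at t4; variable names follow this convention.

```python
def tpsn_offset_delay(t1, t2, t3, t4):
    """Two-way timestamp exchange between a node and its parent.
    Returns (offset, delay), where offset is the parent clock minus the
    node clock and delay is the one-way propagation time, assuming the
    forward and return paths are symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: parent clock is 0.5 s ahead, one-way delay is 0.1 s.
offset, delay = tpsn_offset_delay(0.0, 0.6, 1.0, 0.6)
```

Adjusting the node's physical clock by this offset is exactly the step the text contrasts with offset-table approaches that translate timestamps instead.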
This thesis addresses the subject of performance assessment of real-time data management on wireless sensor networks. Nowadays, systems based on sensor networks are increasingly used in many areas of knowledge, giving rise to several flavours of applications, such as financial markets, human motion tracking, monitoring of urban or environmental phenomena, patient monitoring in hospitals, automated production, military and aircraft control, etc. Some of these applications, called real-time applications, have the particularity of having to comply not only with the logical and consistency constraints imposed by the system, but also with temporal constraints related to the speed of execution of operations and the respect of their deadlines. In addition, these applications must be able to handle the large amounts of data, coming from sensors, necessary for their correct functioning. Thus, the use of databases is necessary and indispensable for this type of system. However, unlike traditional databases, real-time databases must also be able to meet the temporal constraints introduced by real-time systems, while ensuring the integrity and consistency constraints, the ability to share data, recovery after failures, etc., provided by traditional database management systems (DBMS). Thus, real-time databases are essential for real-time systems with non-negotiable temporal constraints, such as automotive and aircraft applications, where deadlines on temporal data and transactions cannot be missed without the risk of causing a disaster. Similarly, real-time databases are useful for real-time systems running in unpredictable environments, such as financial markets and human motion tracking, where meeting most of the temporal constraints constitutes the best system performance.
Fibonacci heap (Cormen et al., 2001), its running time is bounded by O(|V| log |V| + |E|), where |E| is the number of edges. Meyer and Sanders (2013) introduced ∆-stepping, which orders nodes using a bucket representation with bucket width ∆, so that each bucket may be processed in parallel. They also reviewed other formulations of parallel SSSP in detail. Madduri et al. (2007) present an efficient implementation of this algorithm for a multi-threaded parallel computer (Cray MTA-2). With the improvement of hardware for parallel processing, the General Purpose Graphics Processing Unit (GPGPU) has become a popular topic, providing solutions not only in graphics but also in applications across different areas. It offers high parallel processing power with many threads at a low price. Moreover, an SDK was introduced that makes programming easy for developers. CUDA, introduced by NVIDIA, is one of the most famous platforms for general-purpose programming of graphics cards (we explained the architecture in Section 2.5). Several researchers have worked on implementing SSSP on CUDA. Harish and Narayanan (2007) used a compact adjacency list to represent the graph and presented a fast implementation of graph search algorithms such as breadth-first search, SSSP and All-Pairs Shortest Path (APSP) in CUDA. Martin et al. (2009) proposed several solutions for SSSP based on Dijkstra. They compared the Dijkstra algorithm implemented on CUDA with an adjacency list against a CPU implementation based on the Fibonacci heap, on different random graphs. They also claim that their algorithm overcomes the problem in (Harish and Narayanan, 2007) of not using the atomic function.
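For reference, the sequential baseline these GPU implementations are compared against, Dijkstra's algorithm with a priority queue, can be sketched with a binary heap. This gives O((|V| + |E|) log |V|), slightly above the Fibonacci-heap bound O(|V| log |V| + |E|) cited above, since decrease-key is emulated by inserting duplicates and skipping stale entries.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a binary heap. `adj` maps each
    node to a list of (neighbor, weight) pairs with non-negative weights;
    returns a dict of shortest distances from `source`."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left over from an earlier relaxation
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # lazy decrease-key
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
distances = dijkstra(graph, "a")
```

The CUDA formulations above parallelize the relaxation step over edges instead of extracting one minimum at a time.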
On the other hand, some works target particular applications, but the central idea is still related to the coverage issue. For example, sensors' on-duty time should be properly scheduled to conserve energy. Since sensors are arbitrarily distributed, if some nodes share a common sensing region and task, then we can turn some of them off to conserve energy and thus extend the lifetime of the network. This is feasible if turning off some nodes still provides the same "coverage" (i.e., the provided coverage is not affected). The author in  proposes a heuristic to select mutually exclusive sets of sensor nodes such that each set can provide complete coverage of the monitored area. The author in  proposes a probe-based density control algorithm that puts some nodes in a sensor-dense area into a doze mode to ensure long-lived, robust sensing coverage. A coverage-preserving node scheduling scheme is presented in  to determine when a node can be turned off and when it should be rescheduled to become active again.
Comparing Figure 8b,c, one can see that, in general, making an additional step of length δ_i^(k) in the first iteration pushes the estimates much closer to the true target positions. As the number of iterations increases, the step size, i.e., w, decreases, giving more importance to the solution obtained with the proposed SOCP approach. Additionally, from Figure 8 we can see that the target node which has no anchor nodes as its neighbors (upper right corner) exhibits the lowest estimation accuracy in these few iterations, while the target node closest to x̂_i^(0) experiences the highest. Although we cannot guarantee that our approach will converge under all conditions, our simulation results show that it is a good heuristic.
Global Navigation Satellite System. A very popular way of determining a target's location nowadays is through a global navigation satellite system (GNSS). It can deliver latitude and longitude to its user in real time. GNSS utilizes satellites orbiting the Earth, which broadcast signals using very precise frequencies and highly accurate atomic clocks for time measurements. Any receiver on the ground can pick up a GNSS signal, as long as it is coded to read that signal. As GNSS signals travel through the Earth's atmosphere, they can become distorted, reducing the positional accuracy delivered to the receiver. Also, GNSS signals that are low on the horizon, i.e., those with a large zenith angle, are more likely to introduce error because they travel through more atmosphere. GNSSs use groups of satellites, called constellations, for their systems; see Fig. 1.2. For a receiver to establish its location, it must be able to pick up a signal from at least four of the satellites. Currently, there are two globally operational GNSSs: the American GPS (a constellation of 32 satellites, fully operational since 1995) and the Russian GLONASS (a constellation of 24 satellites, restored in 2011). The European Union's Galileo GNSS, as well as China's BeiDou-2 GNSS, are scheduled to be fully operational by 2020. These systems can be used for providing location and navigation, or for tracking the location of a receiver. The signals also allow the electronic receiver to calculate the current local time to high precision, which enables time synchronization. Although technologies such as telephone or internet reception could be used to further enhance the localization performance of GNSSs, they usually operate independently of any of them.
Also, even though these systems represent a standard solution for outdoor localization today, they have very limited or no functionality in harsh propagation environments, such as dense urban, underground, underwater and indoor settings, to name a few.
Wireless Sensor Networks (WSNs) are defined as a subclass of ad hoc networks, aiming to monitor some phenomenon. This kind of network is mainly deployed in locations of hard access or in dangerous areas, for military purposes or otherwise [1-3]. The main component of this kind of network is the sensor node, which is responsible for detecting a signal of interest. Inappropriate positioning of a node can cause the WSN to fail to monitor the desired event [1-5]. Studies involving WSNs are developed to guarantee appropriate distributions of the sensor nodes over different areas of interest. Among the motivations, we emphasize obtaining the largest sensing coverage area, with the best possible quality and with the smallest number of sensors.