Cellular wireless technology has become the prevalent technology for wireless networking. Not only mobile phones but also other types of devices, such as laptops and personal digital assistants (PDAs), can connect to the Internet via the cellular infrastructure. These mobile devices are often capable of running multimedia applications (e.g., video, images). Therefore, cellular networks need to provide quality of service (QoS) guarantees to different types of data traffic in a mobile environment. A call admission control (CAC) scheme aims at maintaining the QoS delivered to the different calls (or users) at the target level by limiting the number of ongoing calls in the system. One major challenge in designing a CAC arises from the fact that a cellular network has to serve two major types of calls: new calls and handoff calls. The QoS performance for these two types of calls is generally measured by the new-call blocking probability and the handoff-call dropping probability. In general, users are more sensitive to the dropping of an ongoing, handed-over call than to the blocking of a new call. Therefore, a CAC scheme needs to prioritize handoff calls over new calls by minimizing the handoff-dropping probability.
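The guard-channel policy is a classic way to realize this prioritization: a few channels are reserved so that only handoff calls may use them. A minimal sketch of the admission decision (function and parameter names are illustrative, not taken from the text):

```python
def admit_call(call_type, busy, total, guard):
    """Guard-channel CAC: a handoff call may take any free channel, while a
    new call is admitted only while occupancy stays below (total - guard),
    keeping spare capacity for incoming handoffs."""
    if call_type == "handoff":
        return busy < total
    return busy < total - guard
```

With `total=10` and `guard=2`, a cell already holding 9 calls still admits a handoff but blocks a new call, trading slightly higher blocking for lower dropping.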
A WSN consists of a group of small, lightweight wireless nodes, called sensor nodes, whose number ranges from a few to several hundred or even thousands. A sensor node varies in size from that of a backpack to that of a grain of dust, and its cost ranges from a few dollars to hundreds of dollars, depending on the complexity of the individual node. The size and cost constraints on sensor nodes translate into constraints on resources such as energy, memory, computational speed, and bandwidth. The topology of a WSN can vary from a simple star network to a multi-hop mesh network, and the propagation technique between the hops of the network can be either routing or flooding. Limited energy, computation, memory, and communication capabilities are the resource constraints of wireless sensor networks. All sensor nodes in the network interact with each other, either directly or through intermediate sensor nodes. Nodes send their reports toward a sink node, and transmitting data typically consumes far more energy than processing it. Protocols designed for such a network should therefore prolong the lifetime of the network.
Wireless sensor networks (WSNs) have recently emerged with wide-ranging applications, from health, home, and environmental monitoring to military, space, and commercial uses. They are a special case of ad-hoc wireless networks in which the constraints on resources are especially tight: WSNs are composed of nodes typically powered by batteries, for which replacement or recharging is very difficult. With finite energy, only a finite amount of information can be transmitted. Therefore, minimizing the energy consumption of data transmission becomes one of the most important design considerations for WSNs in most application scenarios. Moreover, channel fading also has a great effect on the reliability of data transmission and on energy consumption in WSNs. As a result, the design of energy-efficient strategies to prolong lifetime or minimize energy consumption remains a critical issue in WSN design. Wireless sensor networks also require simple error control schemes because of the low-complexity requirements of sensor nodes.
This class of networks is motivated by the mathematical representation of cellular wireless networks. Such a network is a group of base stations covering some geographical area. The area where mobile users communicate with a base station is referred to as a cell. A base station is responsible for the bandwidth management concerning the mobiles in its cell. New calls are initiated in cells, and calls are handed over (transferred) to the corresponding neighboring cell when mobiles move through the network. A new or handoff call is accepted if there is available bandwidth in the cell; otherwise, it is rejected.
Abstract— Time synchronization is an important issue in wireless sensor networks. Many applications based on WSNs assume local clocks at each sensor node that need to be synchronized to a common notion of time. Some intrinsic properties of sensor networks, such as limited energy, storage, computation, and bandwidth, combined with a potentially high density of nodes, make traditional synchronization methods unsuitable for these networks. Hence there has been an increasing research focus on designing synchronization schemes. This paper contains a survey, comparative study, and analysis of existing clock synchronization protocols for wireless sensor networks, based on various factors that include precision, accuracy, cost, and complexity. The design considerations presented in this paper will help the designer in structuring a successful clock synchronization system. Specifically, the comparisons based on these factors provide basic guidelines for integrating various solution features to create an efficient clock synchronization scheme for a given application.
The design of a call admission control algorithm must take into consideration packet-level QoS parameters, such as delay and jitter, as well as session-level QoS parameters, such as the call blocking probability (CBP) and the call dropping probability (CDP). The CBP is the probability that a new call is denied admission; the CDP is the probability that an ongoing call is dropped by the new access network because network resources have declined to an unacceptable level, in other words, the network has exhausted its available resources and drops the handover call. In mobile networks, admission control is the traffic management mechanism needed to keep the call blocking probability at a minimal level, while another RRM strategy, vertical handover, plays a crucial role in reducing the call dropping probability in heterogeneous wireless networks.
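For a single cell modeled as an M/M/n/n loss system, the CBP can be evaluated with the classical Erlang-B formula. The iterative form below avoids large factorials; this is a standard textbook recurrence, not a method from this work:

```python
def erlang_b(offered_load_erlangs, channels):
    """Erlang-B blocking probability via the numerically stable recurrence
    B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1)), with B(A, 0) = 1."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load_erlangs * b / (n + offered_load_erlangs * b)
    return b
```

As expected, blocking falls as channels are added for a fixed offered load, which is the dimensioning trade-off an admission controller operates within.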
Because real-time applications must process very large amounts of data under temporal constraints, many transactions may miss their deadlines, degrading performance. In these applications, it is also important to access fresh data that effectively reflect the current status of the targeted environment. Considering these problems, Kang et al. (2004) proposed a real-time main-memory database architecture, named QMF (QoS management architecture for deadline Miss ratio and data Freshness), to improve the quality of service (QoS). This proposal attempts to balance the deadline miss ratio against data freshness, taking the application's requirements into account. Indeed, this model allows the desired miss ratio and data freshness to be specified for a particular application. To deal with the miss ratio, QMF uses a feedback controller: it periodically measures the miss ratio, computes the miss-ratio error, i.e., the difference between the desired miss ratio and the measured value, and reacts to correct the error. QMF also includes a freshness manager that updates sensor data more or less often, on demand, according to the miss-ratio control messages and the workload. Moreover, QMF performs admission control on incoming transactions to mitigate overload situations. In order to balance the potentially conflicting miss-ratio and freshness requirements, it uses a flexible method: a range of quality of data (QoD) against which sensor data are compared to assess their freshness. When necessary, this allows the update periods of sensor data to be relaxed, decreasing the update workload in case of overload. The freshness of sensor data is thus maintained according to flexible validity intervals.
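The feedback loop can be illustrated with a simple proportional controller: when the measured miss ratio exceeds the target, sensor update periods are relaxed (within the QoD range) to shed update workload, and tightened again otherwise. This is only a sketch of the idea; the gain `kp`, the clamping bounds, and the function name are assumptions, not QMF's actual controller:

```python
def adjust_update_period(period, measured_miss, target_miss, kp=2.0,
                         min_period=1.0, max_period=10.0):
    """One proportional feedback step: a positive miss-ratio error means the
    system is overloaded, so lengthen the sensor update period (relax QoD);
    a negative error lets the period shrink back toward full freshness."""
    error = measured_miss - target_miss          # positive => overloaded
    new_period = period * (1.0 + kp * error)
    return max(min_period, min(max_period, new_period))
```

The clamping mirrors QMF's flexible validity intervals: the period may only move within a range that still keeps the data acceptably fresh.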
With today’s technologies, the constant growth of major cities' populations, and the permanent concern with our natural resources, such as energy, it is inevitable that we look for ways to improve our urban transportation networks. One way of improving transportation networks is efficient vehicle routing. In this thesis, vehicle routing with backup provisioning, using wireless sensor technologies, is proposed. We start by collecting the entries and exits of people in urban transports, which gives us a view of fleet load over time, using wireless sensor technologies. For this purpose, a monitoring software tool was developed. Such a data gathering and monitoring tool allows data analysis in real time and, according to the information extracted, the proposal of solutions for service improvements. In this thesis, the possibility of using such data to plan vehicle routes with backup provisioning is discussed. That is, a variant of the open vehicle routing problem is proposed, called vehicle routing with backup provisioning, where the possibility of reacting to overloading/overcrowding of vehicles at certain stops is considered. After mathematically formalizing the problem, a heuristic algorithm to plan routes is proposed. Results show that vehicle routing with backup provisioning can be a way of providing sustainable urban mobility with efficient use of resources, while increasing the quality of service perceived by users. We expect this tool to be useful for the improvement of urban transportation networks.
Multi-hop wireless networks experience frequent link failures caused by channel interference, dynamic obstacles, and/or applications' bandwidth demands. These failures cause severe performance degradation in wireless networks or require expensive manual network management for their real-time recovery. This paper presents an autonomous network reconfiguration system (ARS), combined with the destination-sequenced distance vector (DSDV) protocol, that enables a multi-radio wireless network to autonomously recover from local link failures and preserve network performance. By using channel and radio diversity in wireless networks, ARS generates the necessary changes in local radio and channel assignments in order to recover from failures. Next, based on the generated configuration changes, the system cooperatively reconfigures network settings among the local mesh routers. In this scheme, if a link between nodes fails during data transmission, the previous node acts as the header node: it probes a loop around the neighboring nodes to find an energy-efficient path and, once the path is found, forwards the data along it toward the destination, so that no data are lost. ARS has been implemented and evaluated extensively through ns-2-based simulation. Our evaluation results show that ARS outperforms existing failure-recovery schemes in improving channel efficiency.
was derived and applied to a TDMA-based network. The performance of single-source single-relay cooperative ARQ was studied through an analytical model, and the performance of two cooperative ARQ protocols was compared against two non-cooperative ARQ protocols, i.e., type-I and type-II hybrid ARQ. However, their analysis and algorithm are only suitable for the stop-and-wait ARQ (SW-ARQ) protocol, not for the continuous transmission of the Go-Back-N ARQ (GBN-ARQ) or Selective Repeat ARQ (SR-ARQ) protocols. One work analyzed the performance of a cooperative ARQ protocol under Poisson arrivals and time-correlated Rayleigh fading; the average frame latency and the probability generating function of the frame service time were compared against a non-cooperative ARQ protocol. Another work analyzed the performance of a cooperative ARQ protocol that adopts equal-gain combining over Nakagami-m channels, using an approximation strategy. By approximating the product of two independent Nakagami-m random variables by the sum of two independent gamma random variables, the performance of protocol I is derived at high signal-to-noise ratio (SNR); the authors further develop the approximation for the product of two maximum Nakagami-m random variables, which is employed to obtain the performance of protocol II at high SNR.
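The throughput gap between these ARQ variants is visible already in the standard textbook efficiency formulas, where p is the frame-error probability, a the propagation delay normalized to the frame time, and w the window size. These are generic expressions (as found in, e.g., Leon-Garcia's textbook), not the cooperative-ARQ results discussed above:

```python
def sw_efficiency(p, a):
    """Stop-and-wait: one frame per round trip, so the channel idles
    for 2a frame-times per frame even without errors."""
    return (1.0 - p) / (1.0 + 2.0 * a)

def gbn_efficiency(p, w, a):
    """Go-Back-N with window w: each loss forces retransmission of up
    to w outstanding frames."""
    if w >= 1 + 2 * a:  # window large enough to fill the pipe
        return (1.0 - p) / (1.0 + (w - 1) * p)
    return w * (1.0 - p) / ((1.0 + 2.0 * a) * (1.0 + (w - 1) * p))

def sr_efficiency(p, w, a):
    """Selective repeat: only the lost frames are resent."""
    if w >= 1 + 2 * a:
        return 1.0 - p
    return w * (1.0 - p) / (1.0 + 2.0 * a)
```

For any p > 0 and a pipe-filling window, SR dominates GBN, which dominates SW, which is why continuous-transmission protocols need their own analysis.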
Current movement-based predictions usually use the RSS, the moving speed and direction of a mobile user, and/or variations of those values as input parameters of the algorithms. In the proposed method, we basically use moving-direction estimation to predict the target handover cell. Even if we derive the handover target by prediction, it may not be adequate to directly perform a handover, as a wrong target-cell prediction causes a worthless ping-pong handover and performance degradation. To cope with this problem, we check the necessity of a handover to the predicted cell at that time. Fig. 2 briefly shows the concept of our handover optimization process. When a handover to a certain cell is requested by the conventional RSS-based handover decision process, that target cell is verified by the target-cell prediction check. If that target cell is not the predicted target, the handover process is delayed. If the target cell is verified, the assurance of that prediction is checked; if the assurance does not satisfy a certain level, the handover process is also delayed. In this approach, the algorithms for target-cell prediction and its assurance check should be carefully designed for optimal performance.
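The gating logic described above can be sketched as follows (the names and the assurance threshold are illustrative, not values from the text):

```python
def handover_decision(requested_cell, predicted_cell, assurance,
                      assurance_threshold=0.8):
    """Gate an RSS-triggered handover request: execute it only if the
    requested target matches the predicted target cell AND the prediction
    assurance reaches the required level; otherwise delay the handover."""
    if requested_cell != predicted_cell:
        return "delay"        # wrong target prediction: avoid ping-pong
    if assurance < assurance_threshold:
        return "delay"        # prediction not trustworthy enough yet
    return "handover"
```

A delayed request is simply re-evaluated at the next RSS-based decision round, so a persistent, confidently predicted target still hands over quickly.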
The performance of mobile and wireless networks depends on the time-varying nature of the wireless channel and on the different protocols working at the physical and Medium Access Control (MAC) layers to compensate for, or even exploit, that time-varying nature. Among the most prominent of these techniques are fast power control, adaptive modulation and coding (AMC), hybrid automatic repeat request (H-ARQ), channel-aware scheduling, and many more. Common to all these techniques is that they have a considerable impact on system performance and take place at very small timescales, on the order of a few microseconds or even less. On the other hand, obtaining statistically significant results about performance at the system or flow level, such as blocking probabilities, average throughput, or transfer times, requires considering large networks for a long time, making detailed simulation very time-consuming or inapplicable. A convenient technique for obtaining results in acceptable timescales is to use analytic models or abstract flow-level simulation models that use an intelligent algorithm for determining the system behavior between flow-level events such as arrivals, departures, or activity changes. This section follows the development of UMTS during the past years by first looking at the Release 99 downlink, then at the HSDPA introduced in Release 5, and finally at the Enhanced Uplink introduced in Release 6. The work on UMTS is followed by two contributions without a close technological relationship, presenting models for flow-level performance analysis of multi-access and multihop networks. The section is concluded by a contribution on the evaluation of service-level architectures in mobile networks.
The objective of this thesis was to validate, through simulation, the results obtained in the previous theoretical study of network inaccessibility in IEEE 802.15.4 wireless communications, by providing tools capable of measuring network inaccessibility in a simulation environment. To this end, significant improvements and modifications to the IEEE 802.15.4 module of the NS-2 simulator were presented, as well as a new module allowing the corruption of specific frames, which is not possible with the current error model and without which we could not perform the simulation and evaluation of all network inaccessibility scenarios. The current IEEE 802.15.4 module implemented in NS-2 is modified and extended to include the GTS mechanisms defined in the standard; the operations of GTS allocation, use, and deallocation are implemented. The addition of previously unimplemented MAC operations enhanced the simulation module so that it is in accordance with the standard. Based on NS-2 simulations, we evaluate the performance of various features of the IEEE 802.15.4 MAC. We find that data transmission during the CAP reduces the energy cost due to idle listening in the backoff period, but increases collisions at higher rates and larger numbers of sources. While the use of GTS in the CFP can give a device dedicated bandwidth to ensure low latency, the device needs to track the beacon frames in this mode, which increases the energy cost. The addition of available channels to scan during association revealed an increase in association time and energy cost, but made NS-2 more compliant with the standard.
Abstract: Problem statement: Extending the lifetime of battery-operated wireless sensor nodes through the design of low-power medium access control protocols is dealt with in this study. Approach: In this study, an energy-efficient Optimal Power Control MAC with Overhearing Avoidance (OPC-OA) was proposed. The transmission power of every node was dynamically changed to achieve optimal connectivity between nodes. The optimal transmission power for a link was estimated in the OPC-OA algorithm by measuring the link quality using RSSI. The energy consumption analysis of the proposed MAC was done in a MATLAB-based discrete event simulation. Results: The energy consumption of the proposed MAC was compared with that of On-Demand Transmission Power Control (ODTPC). Conclusion: The results showed that the proposed transmission power control MAC with overhearing avoidance outperforms On-Demand Transmission Power Control (ODTPC) in terms of energy consumption.
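The core idea, estimating the minimum transmit power per link from measured RSSI, can be sketched as below. The path-loss inference, the 3 dB fading margin, and the discrete power levels are assumptions for illustration; the actual OPC-OA estimator may differ:

```python
def min_tx_power_dbm(measured_rssi_dbm, current_tx_dbm, target_rssi_dbm,
                     margin_db=3.0, levels=(-10, -5, 0, 5, 10)):
    """Pick the lowest discrete transmit-power level that keeps the receiver
    above the target RSSI plus a fading margin, using the path loss inferred
    from an RSSI measurement taken at the current power."""
    path_loss_db = current_tx_dbm - measured_rssi_dbm
    required_dbm = target_rssi_dbm + margin_db + path_loss_db
    for level in sorted(levels):
        if level >= required_dbm:
            return level          # lowest level that still closes the link
    return max(levels)            # fall back to maximum power on a poor link
```

Transmitting at the lowest sufficient level both saves energy at the sender and shrinks the overhearing region around it, which is the coupling OPC-OA exploits.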
Nowadays, owing to the technological evolution of e-health applications, it is possible to have sensors of all sizes and with numerous features, even sensors that can be placed inside (intra-body sensors) or outside (inter-body sensors) the human body, typically in contact with the skin. All these types of sensors must deal with many constraints on resources such as energy, memory, computational speed, and bandwidth. Sensor networks can be applied in the medical environment, helping with the gathering of data for fast diagnoses and providing monitoring services. The concept of “continuity of care” has been increasingly adopted by the health community. These kinds of applications have experienced considerable growth, which contributes to the improvement of human life conditions and helps the progress of medicine by improving disease diagnoses. In this context, a Body Sensor Network (BSN) is a sensor network for body applications. These sensor networks are applied in medical care and biofeedback, providing healthcare monitoring services. The aim of BSNs is to provide continuous monitoring of patients in their natural physiological state so that transient but life-threatening abnormalities can be detected or predicted. Such a network is composed of sensing nodes with a processing unit and a limited power supply; if the sensing nodes are provided with wireless transceivers, we are dealing with a WSN. In Body Area Sensor Networks (BASNs), the sensors relay the collected signals to a sink node, which is connected to a central computer [4-6]. Communication between sensor nodes usually employs wireless technologies such as Bluetooth and ZigBee over IEEE 802.15.4; the most widely used, and best suited for wearable health applications, is the ZigBee protocol, owing to its lower power consumption [8,9].
In the above example, node5 passes its data to node4. Node4 fuses node5's data with its own, i.e., it combines the two into a new packet, and sends this new packet to node3. Node3, in turn, fuses node4's packet with its own data and transmits the result to node2. Node2 then fuses node3's packet with its own data and transmits the final packet to node1 (the base station).
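The chain can be expressed compactly as a fold from the farthest node toward the base station; `fuse` stands for whatever combining function the application uses (chosen here for illustration, e.g. taking a maximum):

```python
def chain_fusion(readings, fuse):
    """Fuse data along a chain node5 -> node4 -> ... -> node1: each node
    combines the packet it received with its own reading and forwards the
    result, so only one fused packet reaches the base station."""
    packet = readings[0]                 # the farthest node starts the chain
    for own_reading in readings[1:]:     # each hop fuses and forwards
        packet = fuse(packet, own_reading)
    return packet
```

With `fuse=max` and readings `[21, 25, 19, 23]` along the chain, the base station receives the single value 25 instead of four separate packets, which is the energy saving that motivates in-network fusion.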
This is a set of standards for implementing WLAN (wireless local area network) communications in the 2.4, 3.6, and 5 GHz frequency bands. The first in this group is IEEE 802.11-2007, which has subsequently been amended; other members are the 802.11b and 802.11g protocols. These standards provide the basis for wireless network products using the Wi-Fi brand. IEEE 802.11b and g use the 2.4 GHz ISM band and operate on Wi-Fi channels, which are grouped into 14 overlapping channels. Each channel has a spectral bandwidth of 22 MHz (though a nominal bandwidth of 20 MHz is often quoted). This channel bandwidth is the same for all the standards, even though each standard offers different speeds: the 802.11b standard supports 1, 2, 5.5, or 11 Mbps, while 802.11g supports up to 54 Mbps. The difference in speed depends on the RF modulation scheme used. Adjacent channels are separated by 5 MHz, with the exception of channel 14, whose centre frequency is separated from channel 13 by 12 MHz. From figure 1, it is obvious that a transmitting station must move at least four or five channels away to avoid overlapping. Most often, Wi-Fi routers are set to channel 6 as the default; hence channels 1, 6, and 11 have been generally adopted as non-overlapping channels for wireless transmission in the ISM band. This band can provide up to 11 Mbps [8-10].
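These channel relationships are easy to check numerically (2.4 GHz band only; channel 1 is centred at 2412 MHz):

```python
def channel_centre_mhz(channel):
    """Centre frequency of a 2.4 GHz Wi-Fi channel: 5 MHz steps starting
    at 2412 MHz, except channel 14, which sits 12 MHz above channel 13."""
    if channel == 14:
        return 2484
    return 2412 + 5 * (channel - 1)

def channels_overlap(a, b, bandwidth_mhz=22):
    """Two 22 MHz-wide channels overlap when their centres are closer
    than one channel bandwidth apart."""
    return abs(channel_centre_mhz(a) - channel_centre_mhz(b)) < bandwidth_mhz
```

Channels 1, 6, and 11 have centres 25 MHz apart, which is why that triple is the standard non-overlapping choice.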
One of the most important issues in Wireless Sensor Networks (WSNs) is their severe energy restrictions. As the performance of a sensor network strongly depends on the network lifetime, researchers seek ways to use node energy supplies effectively and increase the network lifetime. Consequently, it is crucial to use routing algorithms that decrease energy consumption and improve bandwidth utilization. The purpose of this paper is to increase WSN lifetime using the LEACH algorithm. Before clustering, the network environment is divided into two virtual layers (based on the distance between sensor nodes and the base station), and then, depending on a sensor's position in those two layers, its residual energy and distance from the base station are used in clustering. In this article, we compare the proposed algorithm with the well-known LEACH and ELEACH algorithms in a homogeneous environment (with equal energy for all sensors) and a heterogeneous one (where the energy of half the sensors is doubled), and also for static and dynamic base station positions. Results show that our proposed algorithm delivers improved performance.
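For context, standard LEACH elects cluster heads probabilistically with a per-round threshold, and energy/distance-aware variants bias that threshold. The sketch below shows the idea; the scaling weights in `elect_cluster_head` are illustrative assumptions, not the weights of the paper's algorithm:

```python
import random

def leach_threshold(p, round_no):
    """Standard LEACH cluster-head threshold for nodes that have not yet
    served as head this epoch: T(n) = p / (1 - p * (r mod 1/p))."""
    return p / (1.0 - p * (round_no % int(round(1.0 / p))))

def elect_cluster_head(p, round_no, residual_energy, initial_energy,
                       dist_to_bs, max_dist, rng=random.random):
    """Energy/distance-aware variant (weights hypothetical): scale T(n) by
    the node's residual-energy fraction and by closeness to the base
    station, so healthy, well-placed nodes become heads more often."""
    t = leach_threshold(p, round_no)
    t *= (residual_energy / initial_energy) * (1.0 - 0.5 * dist_to_bs / max_dist)
    return rng() < t
```

Biasing elections this way spreads the cluster-head energy burden toward nodes that can best afford it, which is the mechanism behind the lifetime improvement claimed above.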
Broadcast Traffic. All the research works on BEB described so far (with one exception) only account for unicast traffic. However, in a real system, unicast and broadcast traffic co-exist in the network. There are a few research works in which broadcast traffic is analyzed. In 2007, Ma studied saturation throughput using only broadcast traffic by means of a 1-D Markov chain; it was the first research work dealing with the Consecutive Freeze Process (CFP) in broadcast. The CFP affects network performance differently depending on whether a unicast or a broadcast scheme is considered. When using a unicast scheme under saturation, after a busy period, only a user that has finished a successful transmission may immediately access the channel, if the chosen value of BC is zero. However, when using a broadcast scheme, the CFP happens more often because broadcast does not rely on receiver acknowledgements (or a retransmission mechanism). Because of this CFP problem, models that deal only with unicast cannot simply be applied to broadcast. The problem is addressed by dividing the BC into two sub-processes: the Sequential BEB Process (SBP), which describes the general BEB procedure without a zero initial BC, and a second process in which the CFP is modeled through the consecutive transmissions that result from a zero BC. Later, Ma and Chen extended this model to include the average delay time and the packet delivery ratio. Finally, the same research group combined these works to present an extended study in 2008. The novelty of this last work is the analysis of the importance of the initial CW value for network performance. They found that the CFP effect is negligible when CW ≥ M, where M is the total number of users.
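The BEB rule underlying both sub-processes can be sketched as follows (the CW values are illustrative 802.11b-style defaults):

```python
import random

def draw_backoff(retry_count, cw_min=16, cw_max=1024, rng=random.randrange):
    """Binary exponential backoff: the contention window doubles after every
    failed attempt, up to cw_max, and the backoff counter BC is drawn
    uniformly from [0, CW - 1]. A draw of BC = 0 lets the station transmit
    immediately after the busy period, which is what triggers the CFP."""
    cw = min(cw_min * (2 ** retry_count), cw_max)
    return rng(cw)
```

Since broadcast frames are never retried, a broadcast station always draws from the initial window `cw_min`, so the probability of BC = 0, and hence of a CFP, stays at 1/cw_min on every attempt, in line with the observation that CFP occurs more often under broadcast.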
Computer networks form an essential substrate for the multitude of distributed applications that are now an essential part of modern business and personal life. It is important to optimize the performance of computer networks, so that users can derive optimum utility from the network infrastructure. Most networks perform well when lightly used, but problems appear when the network load increases; this loss of network performance when a network is heavily loaded is called congestion. Wireless networks are becoming an integral part of the Internet. Unlike in wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of the transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or to random bit errors. Specifically: (a) it refines the multiplicative decrease algorithm of TCP Reno, the most widely deployed TCP version in practice, by adjusting the slow-start threshold according to the perceived network congestion level rather than by a fixed drop factor, and (b) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized, based on extensive network testbed experiments and live Internet measurements.
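Veno's loss discrimination rests on a Vegas-style backlog estimate, N = (cwnd/base_rtt - cwnd/rtt) * base_rtt: a loss seen while the backlog is small is attributed to bit errors and the window is cut less aggressively. The beta = 3 threshold and the 4/5 factor follow the published Veno design, but this is an illustrative sketch, not the kernel implementation:

```python
def veno_ssthresh_on_loss(cwnd, base_rtt, rtt, beta=3):
    """Set the slow-start threshold after a loss, Veno-style: a backlog
    below beta packets suggests a random (bit-error) loss, so reduce cwnd
    by only 1/5 instead of Reno's fixed 1/2 drop factor."""
    backlog = (cwnd / base_rtt - cwnd / rtt) * base_rtt
    if backlog < beta:                  # little queueing: likely random loss
        return max(2, int(cwnd * 4 / 5))
    return max(2, cwnd // 2)            # queue built up: likely congestive loss
```

With cwnd = 20 segments, a loss with no RTT inflation yields ssthresh = 16, while the same loss with the RTT doubled (backlog 10) yields Reno's halving to 10; this per-loss discrimination is what recovers throughput on lossy wireless paths.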