Chapter 11. ENERGY MANAGEMENT IN AD HOC WIRELESS NETWORKS

11.1 INTRODUCTION

The nodes in an ad hoc wireless network are constrained by limited battery power for their operation. Hence, energy management is an important issue in such networks. The use of multi-hop radio relaying requires a sufficient number of relaying nodes to maintain network connectivity. Hence, battery power is a precious resource that must be used efficiently in order to avoid the premature failure of any node.

Energy management deals with the process of managing energy resources by means of controlling the battery discharge, adjusting the transmission power, and scheduling of power sources so as to increase the lifetime of the nodes of an ad hoc wireless network. Efficient battery management, transmission power management, and system power management are the three major means of increasing the life of a node. Battery management is concerned with problems that lie in the selection of battery technologies, finding the optimal capacity of the battery, and the scheduling of batteries, all of which help increase the effective battery capacity. Transmission power management techniques attempt to find an optimum power level for the nodes in the ad hoc wireless network. On the other hand, system power management deals mainly with minimizing the power required by hardware peripherals of a node (such as the CPU, DRAM, and LCD display) and incorporating low-power strategies into the protocols used in various layers of the protocol stack. This chapter concentrates on the issues involved in and the solutions for energy management in ad hoc wireless networks.

11.2 NEED FOR ENERGY MANAGEMENT IN AD HOC WIRELESS NETWORKS

The energy efficiency of a node is defined as the ratio of the amount of data delivered by the node to the total energy expended. Higher energy efficiency implies that a greater number of packets can be transmitted by the node with a given amount of energy reserve. The main reasons for energy management in ad hoc wireless networks are listed below:

Limited energy reserve: The main reason for the development of ad hoc wireless networks is to provide a communication infrastructure in environments where the setting up of a fixed infrastructure is impossible. The nodes of ad hoc wireless networks have very limited energy reserves. Advances in battery technologies have been negligible compared to the recent advances that have taken place in the field of mobile computing and communication. The increasing gap between power consumption requirements and power availability adds to the importance of energy management.

Difficulties in replacing the batteries: Sometimes it becomes very difficult to replace or recharge the batteries. In situations such as battlefields, this is almost impossible. Hence, energy conservation is essential in such scenarios.

Lack of central coordination: The lack of a central coordinator, such as the base station in cellular networks, introduces multi-hop routing and necessitates that some of the intermediate nodes act as relay nodes. If the proportion of relay traffic is large, then it may lead to a faster depletion of the power source for that node. On the other hand, if no relay traffic is allowed through a node, it may lead to partitioning of the network. Hence, unlike other networks, relay traffic plays an important role in ad hoc wireless networks.

Constraints on the battery source: Batteries tend to increase the size and weight of a mobile node. Reducing the size of the battery results in less capacity which, in turn, decreases the active lifespan of the node. Hence, in addition to reducing the size of the battery, energy management techniques are necessary to utilize the battery capacity in the best possible way.

Selection of optimal transmission power: The transmission power selected determines the reachability of the nodes. The consumption of battery charge increases with an increase in the transmission power. An optimal value for the transmission power decreases the interference among nodes, which, in turn, increases the number of simultaneous transmissions.

Channel utilization: A reduction in the transmission power increases frequency reuse, which leads to better channel utilization. Power control becomes very important for CDMA-based systems, in which the available bandwidth is shared among all the users. Hence, power control is essential to maintain the required signal to interference ratio (SIR) at the receiver and to increase channel reusability.

11.3 CLASSIFICATION OF ENERGY MANAGEMENT SCHEMES

The need for energy management in ad hoc wireless networks, discussed in the previous section, points to the fact that energy awareness needs to be adopted by the protocols at all the layers in the protocol stack, and has to be considered as one of the important design objectives for such protocols. Energy conservation can be implemented using the following techniques:

• Battery management schemes

• Transmission power management schemes

• System power management schemes

Maximizing the life of an ad hoc wireless network requires an understanding of the capabilities and limitations of the energy sources of the nodes. A greater battery capacity leads to a longer lifetime of the nodes. The capacity of the batteries can be increased either by taking into consideration the internal characteristics of the battery (battery management) or by minimizing the activities that utilize the battery capacity (power management). The system power management approach can be further divided into the following categories:

• Device management schemes

• Processor power management schemes

Figure 11.1 provides an overview of some of the techniques at different layers of the protocol stack that fall into three categories: battery management, transmission power management, and system power management schemes. Though these schemes cannot be strictly classified under the different layers of the OSI protocol stack as they reside in more than one layer, the classification provided in this section is based on the highest layer in the protocol stack used by each of these protocols.

Figure 11.1. Classification of energy management schemes.

image

11.4 BATTERY MANAGEMENT SCHEMES

Battery-driven systems are those systems which are designed taking into consideration mainly the battery and its internal characteristics. They try to maximize the amount of energy provided by the power source by exploiting the inherent property of batteries to recover their charge when kept idle. In the following sections, we discuss the ways in which the energy efficiency of mobile wireless communication can be enhanced through the use of improved battery management techniques as described in [1], [2], [3], [4], [5], [6], and [7]. Recent research results in this area have proved that, by varying the manner in which energy is drawn from the batteries, significant improvement can be obtained in the total amount of energy supplied by them. In the section that follows, we also discuss some of the battery characteristics [8] which are used throughout our discussions on battery management.

11.4.1 Overview of Battery Characteristics

The major components of batteries are illustrated in Figure 11.2. A battery mainly consists of an anode, a cathode, an electrolyte medium, and a case. The anode is often a metal and the cathode a metallic oxide. The electrolyte is a salt solution that promotes the ion flow. The porous separator is used to prevent a short circuit between anode and cathode by keeping them from touching one another. The battery is contained in a structural support (case) that provides dimensional stability and a positive and a negative electrode for discharging (or recharging) the cell. The positive ions move from the anode toward the cathode through the electrolyte medium and the electrons flow through the external circuit. A number of separate electrochemical cells can also be combined within the same case to create a battery.

Battery technologies: The most popular rechargeable battery technologies developed over the last two decades include nickel-cadmium, lithium ion, nickel metal-hydride, reusable alkaline, and lithium polymer. The main factors considered while designing a battery technology are the energy density (the amount of energy stored per unit weight of the battery), cycle life [the number of (re)charge cycles prior to battery disposal], environmental impact, safety, cost, available supply voltage, and charge/discharge characteristics.

Principles of battery discharge: A battery typically consists of an array of one or more cells. Hence, in the subsequent sections, the terms "battery" and "cell" are used interchangeably. The three main voltages that characterize a cell are: (1) the open circuit voltage (Voc), that is, the initial voltage under a no-load condition of a fully charged cell, (2) the operating voltage (Vi), that is, the voltage under loaded conditions, and (3) the cut-off voltage (Vcut) at which the cell is said to be discharged. All the cells are defined by three main capacities:

Figure 11.2. Basic structure of a lithium/thionyl chloride battery.

image

– Theoretical capacity: The amount of active materials (the materials that react chemically to produce electrical energy when the cell is discharged and restored when the cell is charged) contained in the cell refers to its theoretical capacity. A cell cannot exceed its theoretical capacity.

– Nominal (standard) capacity: This corresponds to the capacity actually available when discharged at a specific constant current. It is expressed in ampere-hours.

– Actual capacity: The energy delivered under a given load is said to be the actual capacity of the cell. The actual capacity may exceed the nominal capacity, but never the theoretical capacity.

The constant current discharge behavior of lithium-manganese dioxide (LiMnO2) cells with Voc = 3 V and Vcut = 1 V is shown in Figure 11.3 [9]. The discharge curve is flat most of the time and a gradual slope is developed as the voltage reaches the cut-off voltage. The performance of a cell's discharge is measured using the following parameters:

– Discharge time: The time that elapses before a fully charged cell reaches its cut-off voltage and has to be replaced or recharged is called the discharge time of the cell.

– Specific power (energy): This is the power (energy) delivered by a fully charged cell under a specified discharge current. It is expressed in watts per kilogram (watt-hours per kilogram).

– Discharge current: There are mainly two models of battery discharge: constant current discharge and pulsed current discharge. In pulsed current discharge, the battery switches between short discharge periods and idle periods (rest periods). Chiasserini and Rao in [4] illustrate the performance of a bipolar lead-acid battery subjected to a pulsed discharge of six current pulses. After each discharge pulse, which lasts for 3 ms, the cell is idled for 22 ms, during which no recharging is allowed to take place. Figure 11.4 shows the current density and the corresponding cell voltage. The cell is able to recover and revert to its initial open circuit voltage during the first four rest periods. After the fifth current pulse, the rest period of 22 ms turns out to be inadequate for cell recovery.

Figure 11.3. Discharge pattern of a cell when Voc = 3 V and Vcut = 1 V.

image

Figure 11.4. Performance of a bipolar lead-acid cell subjected to six current impulses with pulse length = 3 ms and rest period = 22 ms.

image

Impact of discharge characteristics on battery capacity: The important chemical processes that affect the battery characteristics are given below.

– Diffusion process: When the battery is actively discharging, that is, at a non-zero current, the active materials move from the electrolyte solution to the electrodes and are consumed there. If this current is above a threshold value called the limiting current, the active materials are consumed faster than diffusion can replenish them, and their concentration around the electrode drops. By increasing the rest periods of the battery, longer lifetimes can be achieved due to the recovery capacity effect, which is explained later in this section. In the following discussion, we will concentrate on some of the battery management techniques which increase the idle periods of batteries.

– Passivation process: The cell discharge is limited not only by the diffusion process but also by a process called passivation, in which crystals of the discharge products precipitate on the electrode as a result of the chemical reactions, impeding further reaction. This phenomenon becomes more pronounced at higher current densities.

Two important effects to be considered for understanding the battery's discharge properties are stated below.

– Rate capacity effect: As the intensity of the discharge current increases, an insoluble component develops between the inner and outer surfaces of the cathode. The inner layer becomes inaccessible as a result of this phenomenon, rendering the cell unusable even while a sizable amount of active materials still exists. This effect depends on the actual capacity of the cell and the discharge current.

– Recovery capacity effect: This effect is concerned with the recovery of charges under idle conditions. By increasing the idle time, one may be able to completely utilize the theoretical capacity of the cell.
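The interplay of these two effects can be illustrated with a toy discharge simulation. The recovery model below (a recovery probability that decays exponentially with the depth of discharge) and all numeric values are illustrative assumptions, not the chemistry of any particular cell.

```python
import math
import random

# Toy illustration of the recovery capacity effect. A cell with nominal
# charge N and theoretical capacity C is drained one unit per active
# slot. During an idle slot it recovers one unit with a probability that
# decays exponentially with the depth of discharge -- an illustrative
# model, not a specific cell's chemistry.

def drain(idle_every, N=25, C=100, g=0.08, seed=7):
    """Return total charge units delivered; one idle slot is inserted
    after every `idle_every` discharge slots (0 = constant discharge)."""
    rng = random.Random(seed)
    charge, delivered, since_idle = N, 0, 0
    while charge > 0 and delivered < C:
        if idle_every and since_idle >= idle_every:
            since_idle = 0
            if charge < N and rng.random() < math.exp(-g * (N - charge)):
                charge += 1           # idle slot: one charge unit recovered
        else:
            charge -= 1               # active slot: one unit delivered
            delivered += 1
            since_idle += 1
    return delivered

print("constant discharge :", drain(0))   # delivers only the nominal N units
print("idle every 2 slots :", drain(2))   # recovery extends the lifetime
```

Under constant discharge the cell delivers exactly its nominal charge, while interleaved rest periods let it deliver more of its theoretical capacity, which is the behavior the recovery capacity effect describes.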

Battery models: Battery models depict the characteristics of the batteries used in real life. The pros and cons of the following battery models are summarized in [8]: analytical models, stochastic models, electric circuit models, and electrochemical models. Finally, a battery-efficient system architecture is proposed and the following approaches are suggested to enable a longer life for the nodes of an ad hoc wireless network:

– Supply voltage scaling: An optimal value of supply voltage (vdd) is maintained, by means of scaling, that provides a balance between battery charge consumption and performance (number of packets transmitted per unit charge).

– Battery-aware task scheduling: In [10], Luo and Jha proposed a battery-aware static scheduling scheme that optimizes the discharge power of the batteries. According to this scheme, from the knowledge of the task graph, the discharge current of the battery is shaped in order to reduce the unwanted consumption of power. This is done as a two-step process. In the first step, the initial schedule obtained is adjusted in order to reduce peak current requirements. The second step consists of a local transformation which changes the position of the scheduled events so as to minimize the delay and also the energy drawn from the cell.

– Dynamic power management: Energy conservation can be achieved at the nodes carrying multimedia traffic by a graceful degradation of the quality of audio output when the battery is about to reach the completely discharged state. Many approaches have been suggested to achieve this. In one such policy, the audio device outputs high-quality sound when the remaining battery charge is above a certain threshold value. Once it falls below the threshold value, the audio device tries to degrade the output sound quality. These policies mainly exploit the recovery capacity effect of the batteries to attain theoretical capacity.
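The threshold policy described above can be sketched in a few lines. The 30% threshold and the two quality levels below are illustrative assumptions, not values prescribed by any of the cited schemes.

```python
# Sketch of a threshold-based dynamic power management policy for an
# audio device: output high-quality sound while the remaining battery
# charge is above a threshold, and degrade the quality below it. The
# threshold value (30%) and the quality labels are assumptions made
# for illustration.

def select_audio_quality(remaining_charge, full_charge, threshold=0.30):
    """Return the audio quality level based on remaining battery charge."""
    if remaining_charge / full_charge > threshold:
        return "high"     # enough charge: output high-quality sound
    return "degraded"     # low charge: degrade output to conserve energy

print(select_audio_quality(80, 100))  # high
print(select_audio_quality(20, 100))  # degraded
```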

Battery scheduling: The use of multiple batteries in mobile nodes has become very common. The key aspect behind this kind of an architecture is the property of charge recovery by the battery when it remains in idle state. A detailed description of the charge recovery property of the battery can be found in the next section.

Smart battery standard (SBS): This is an emerging technology toward the development of batteries that consume low power. The main aim of SBS is to create standards by which the systems become aware of the batteries and interact with them in order to provide a better performance.

11.4.2 Device-Dependent Schemes

The lifetime of a node is determined by the capacity of its energy source and the energy required by the node. Recent works in [1], [2], [4], [5], [6], and [7] show that the battery life can be improved by introducing techniques which make efficient utilization of the battery power. In this section, some of the device-dependent approaches that increase the battery lifetime by exploiting its internal characteristics are discussed.

Modeling and Shaping of Battery Discharge Patterns

The stochastic model of the discharge pattern of batteries introduced in [5] employs the following two key aspects affecting the battery life: the rate capacity effect and the recovery effect.

Effect of Battery Pulsed Discharge

Recent works such as [3], [4], and [5] show that pulsed current discharge applied for bursty stochastic transmissions improves the battery lifetime. If pulsed current discharge is applied to a cell, a significant improvement in the specific energy delivered is realized. In such an environment, higher specific power can be obtained for a constant specific energy. In [4], a model for battery pulsed discharge with the recovery effect is considered. The model proposed consists of a battery with a theoretical capacity of C charge units and an initial battery capacity of N charge units. Battery behavior is considered as a discrete-time Markov process with the initial state equal to N and the fully discharged state 0. Time is divided into slots (frames). Each packet for the node is transmitted in one time slot and the battery enters the previous state by losing a charge unit. If the battery remains idle, it recovers one charge unit and enters the next state. The results suggest that at most C (the theoretical capacity) packets can be transmitted if the battery is given enough time to recover. The passivation (surface modifications of metallic materials which cause an increase in their resistance to the corrosion process) time constant is assumed to be greater than the discharge time needed to fully drain the theoretical capacity of the cell. Thus, the passivation effects can be neglected. In [4], Chiasserini and Rao studied the battery behavior under two different modes of pulsed discharge.

Binary Pulsed Discharge

In this mode, if there are packets in the queue, transmission of a packet occurs in one time slot; one charge unit is recovered if the queue is empty. The current required for transmission is drained during the entire time frame. The Markov chain for binary pulsed discharge is shown in Figure 11.5. An additional dummy state, representing the start of the discharge, is added to the Markov chain representing the cell behavior. The cell is modeled as a transient process and the packet arrival follows a Bernoulli process. If the probability that a packet arrives in one time frame is q, then the probability of transmitting a packet in a time slot is a1 = q and the probability of recovery is a0 = (1 - q). The cell can never cross the charge state of N. The gain obtained in this scheme is given by G = mp/N, where mp is the total expected number of packets transmitted and N is the amount of charge in a fully charged battery. The gain, however, cannot exceed C/N, where C is the theoretical capacity.

Figure 11.5. Cell behavior under binary pulsed discharge represented using the Markov chain model.

image
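The binary pulsed discharge model can be simulated directly with Monte Carlo trials. Following the simplified description of the model, an idle frame here always recovers one charge unit; the parameter values (N, C, q) are illustrative assumptions.

```python
import random

def simulate_binary_pulsed(N=50, C=200, q=0.5, trials=500, seed=1):
    """Monte Carlo estimate of m_p, the expected number of packets a cell
    delivers under binary pulsed discharge.

    Each time frame: with probability q a packet arrives and is
    transmitted (one charge unit is drained); otherwise the cell idles
    and recovers one unit. The cell never exceeds its full charge N,
    never delivers more than its theoretical capacity C, and is fully
    discharged when its charge reaches state 0."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        charge, delivered = N, 0
        while charge > 0 and delivered < C:
            if rng.random() < q:      # packet frame: discharge one unit
                charge -= 1
                delivered += 1
            elif charge < N:          # idle frame: recover one unit
                charge += 1
        total += delivered
    return total / trials

m_p = simulate_binary_pulsed()
print(f"expected packets m_p = {m_p:.1f}, gain m_p/N = {m_p / 50:.2f}")
```

The estimated gain m_p/N stays bounded by C/N, in agreement with the model: with sufficient recovery opportunities the cell approaches, but cannot exceed, its theoretical capacity.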

Generalized Pulsed Discharge

In a particular time frame, either one or more packets are transmitted or the cell is allowed to recover a charge unit. The amplitude of the impulse is equal to the current required to transmit all the packets, and the duration of the impulse is equal to a fraction of the time frame. In the remaining fraction of the time frame, the cell is allowed to recover one unit. In this case, if ak is the probability that k packets arrive in one burst arrival and M is the maximum number of packets per burst, then the probability of recovery is given by a0 = 1 - (a1 + a2 + ... + aM). In each time frame, the cell can move from state z to z - k + 1, where 0 < z < N, with a probability of ak. Generalized pulsed discharge can be represented as a general Markov chain, as shown in Figure 11.6. The probability to move from one state to another is specified in Figure 11.6.

Figure 11.6. Cell behavior under generalized pulsed discharge represented using the Markov chain model.

image

An optimal discharge strategy which provides a solution to extend the lifetime of a battery by exploiting the internal battery characteristics is proposed in [1]. The total amount of charge delivered by the battery lies between N and C units. The system is assumed to be a stochastic process with N states (x0, ..., xN). Each state i is denoted by the tuple (ni, ci), where ni and ci are the remaining charge and capacity left in the cell, respectively. Thus the initial state is given by (N, C). When the battery which is in state (ni, ci) delivers q charge units, it moves to the state (ni - q, ci - q). If the battery remains idle for one charge unit, it moves to the state (ni + 1, ci). The battery expires if either ci or ni becomes 0. When the battery is in state (ni, ci), and idles for one unit of time, the probability of recovering one charge unit is given by

(11.4.1)

p(ni, ci) = e^(-g(N - ni) - φ(ci))

Here g is a constant value and φ(ci) is a piecewise constant function of the number of charge units delivered, which is specific to the cell's chemical properties. Using stochastic dynamic programming, an optimal policy for discharging the cell is proposed in [1]. The cells are then scheduled based on their recovery process. Another battery capacity model is suggested by Simunic et al. [7], with which the battery lifetime can be estimated accurately. The efficiency of the battery is given by

(11.4.2)

η = ECycle / ERated

that is, the ratio of the actual capacity (ECycle) to the rated capacity (ERated), which is derived from the battery specification. If the battery voltage is nearly constant, the efficiency is given by the ratio of the actual to the rated current. Using this model, the battery lifetime estimation can be made as follows:

• The manufacturer specifies the rated capacity, the time constant, and the discharge plot of the battery.

• The discharge current ratio, which is the ratio between the specified rated current (irated) and the calculated average current (iavg), is computed.

• Efficiency is calculated by the interpolation of points in the discharge plot as seen in Figure 11.7, which shows the variation of efficiency with the current ratio.

Figure 11.7. Variation of battery efficiency with discharge current ratio.

image

Lower efficiency corresponds to a shortened battery lifetime and vice versa.
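The estimation steps above can be sketched as follows. The discharge-plot points are invented sample data standing in for a manufacturer's plot, and the direction of the current ratio (average over rated, so that heavier loads give lower efficiency) is an assumption made here for illustration.

```python
# Sketch of the battery lifetime estimation of [7]: interpolate the
# efficiency from a discharge plot (efficiency versus discharge current
# ratio, in the spirit of Figure 11.7), then scale the rated capacity.
# The plot points below are hypothetical, not real battery data.

# (current ratio, efficiency) points from a hypothetical discharge plot:
PLOT = [(0.5, 1.00), (1.0, 0.95), (2.0, 0.80), (4.0, 0.60)]

def efficiency(current_ratio):
    """Linearly interpolate efficiency from the discharge plot,
    clamping outside the plotted range."""
    xs = [x for x, _ in PLOT]
    ys = [y for _, y in PLOT]
    if current_ratio <= xs[0]:
        return ys[0]
    if current_ratio >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(PLOT, PLOT[1:]):
        if x0 <= current_ratio <= x1:
            return y0 + (y1 - y0) * (current_ratio - x0) / (x1 - x0)

def estimated_lifetime(rated_capacity_mAh, i_avg_mA, i_rated_mA):
    """Lifetime in hours: usable capacity (rated x efficiency) divided
    by the average current draw."""
    eff = efficiency(i_avg_mA / i_rated_mA)
    return rated_capacity_mAh * eff / i_avg_mA

# A 1200 mAh cell rated at 100 mA, drained at an average of 150 mA:
print(f"{estimated_lifetime(1200, 150, 100):.1f} hours")
```

As the average draw rises relative to the rated current, the interpolated efficiency falls and the estimated lifetime shrinks faster than linearly, which is the effect the model is meant to capture.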

Battery-Scheduling Techniques

Chiasserini and Rao in [6] proposed battery-scheduling techniques that improve the battery lifetime. In a battery package of L cells, a subset of batteries can be scheduled for transmitting a given packet, leaving other cells to recover their charge. The following approaches are applied to select the subset of cells:

Delay-free approaches: In the above context, job is defined as a demand for battery discharge which can be satisfied by the subset of cells. As soon as a job arrives, the battery charge for processing the job will be provided from the cells without any delay. The scheduling scheme for batteries can be any one of the following:

– Joint technique (JN): As soon as a job arrives, the same amount of current is drawn equally from all the cells, which are connected in parallel. If there are L cells, the current discharged from each of them is 1/L times the required supply.

– Round robin technique (RR): This scheme selects the batteries in round robin fashion, the jobs being directed to the cells by switching from one cell to the next, as shown in Figure 11.8 (a). Each job from the job queue draws its energy from the cell currently selected by the transmission module.

Figure 11.8. Battery-scheduling techniques: (a) round robin technique (b) random technique.

image

– Random technique (RN): In this technique, any one of the cells is chosen at random with probability 1/L. The selected cell provides the total supply required, as shown in Figure 11.8 (b).

Non-delay-free approaches: In these approaches, the batteries coordinate among themselves based on their remaining charge. In one such technique, a threshold is defined for the remaining charge of a cell. All the cells whose remaining charge is greater than this threshold value become eligible for providing energy. Delay-free approaches such as round robin scheduling can then be applied to these eligible cells. The cells which are not eligible stay in the recovery state. This enables the cells to maximize their capacity. The general battery discharge policy employed in portable devices completely drains battery packs one after the other. Bruni et al. have shown in [2] that this kind of policy is inefficient.
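The selection rules above can be sketched together in one small battery-pack model. The cell count, charge values, and threshold below are illustrative assumptions.

```python
import random

# Sketch of the battery-scheduling techniques of [6] for a pack of L
# cells. The charge numbers and the threshold are illustrative.

class BatteryPack:
    def __init__(self, charges):
        self.charges = list(charges)   # remaining charge per cell
        self.next_rr = 0               # round robin pointer

    def joint(self, demand):
        """JN: draw demand/L from every cell in parallel."""
        share = demand / len(self.charges)
        self.charges = [c - share for c in self.charges]

    def round_robin(self, demand):
        """RR: the next cell in cyclic order supplies the whole demand."""
        i = self.next_rr
        self.charges[i] -= demand
        self.next_rr = (i + 1) % len(self.charges)

    def random_cell(self, demand, rng=random):
        """RN: a uniformly random cell supplies the whole demand."""
        self.charges[rng.randrange(len(self.charges))] -= demand

    def threshold_round_robin(self, demand, threshold):
        """Non-delay-free variant: serve round robin among the cells whose
        remaining charge exceeds the threshold; the rest keep recovering."""
        eligible = [i for i, c in enumerate(self.charges) if c > threshold]
        if not eligible:
            raise RuntimeError("no cell above threshold")
        i = eligible[self.next_rr % len(eligible)]
        self.charges[i] -= demand
        self.next_rr += 1

pack = BatteryPack([10.0, 10.0, 10.0])
pack.joint(3.0)                        # each cell supplies 1.0
pack.round_robin(2.0)                  # cell 0 supplies 2.0
pack.threshold_round_robin(2.0, threshold=8.0)
print(pack.charges)                    # [7.0, 9.0, 7.0]
```

Note how the threshold rule skips cell 0 (already below the threshold), leaving it idle to recover while an eligible cell serves the job.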

Using heterogeneous batteries: This section examines a new model suggested by Rong and Pedram in [11] for a battery-powered electronic system, which is based on the continuous-time Markovian decision process (CTMDP). It attempts to exploit the two main characteristics of rechargeable batteries, that is, the recharging capability under no-load conditions and the rate capacity effect discussed earlier. The main objective is to formulate an optimization problem which tries to minimize the charge delivered by the battery, thereby effectively utilizing the battery capacity. The problem framed is solved using the linear programming approach. The model proposed in [11] correlates the model of the batteries with that of the power-managed portable electronics. The model consists of two power sources which have different discharge characteristics and capacities. The jobs that arrive are serviced using the power from either of the two batteries, which are scheduled alternately. The model, which is depicted in Figure 11.9, uses a battery scheduler, a job queue, and a job requester. Each of the three components can be modeled using the Markovian model. In this case, the battery scheduler is a simple selector that chooses between the two batteries in an alternating fashion, or it can be a scheduler that uses the round robin scheduling technique. The formulated problem finds an optimal policy that minimizes the usage of the batteries, subject to a constraint on the number of waiting requests in the job queue.

Figure 11.9. Heterogeneous battery-scheduling technique.

image

11.4.3 Data Link Layer Solutions

The data link layer solutions take into consideration battery characteristics while designing the protocols. Designing a battery-aware scheduling technique and maximizing the number of packets being transmitted are conflicting objectives. The following schemes attempt to find a trade-off between them. Subsequent sections deal with:

• Lazy packet scheduling scheme

• Battery-aware MAC (BAMAC) protocol

Lazy Packet Scheduling Scheme

The basic principle behind the development of this scheme is that in many of the channel coding schemes for wireless transmission, the energy required to transmit a packet can be reduced significantly by lowering the transmission power and increasing the duration of transmission. But arbitrarily long transmission durations may violate the delay constraints of packets in a practical wireless environment. Hence, a transmission schedule is designed taking into account the delay constraints of the packets. The energy-optimal offline and online scheduling algorithms proposed by Prabhakar et al. in [12] consider a transmitter-receiver pair. All packets are of equal length. Let τ = (τ1, ..., τM) be the transmission durations obtained as a schedule, where M is the number of packets to be transmitted, and let w(τi) be the energy required to transmit a packet over the duration τi. Let di, {i = 1, ..., M}, denote the packet inter-arrival times and k0 = 0. Then the following parameters are defined:

(11.4.3)

image

(11.4.4)

image

For j ≥ 1,

(11.4.5)

image

(11.4.6)

image

Optimal Offline Schedule

Assuming the arrival times of all the packets (ti, {i = 1, ..., M}) are known a priori and t1 = 0, the problem is to find optimal values for τi, 1 ≤ i ≤ M, so as to minimize the total transmission energy Σi w(τi). A necessary condition for optimality is

(11.4.7)

image

The optimal offline schedule suggested in [12] is given by

(11.4.8)

image

such that the schedule τ is feasible within the given time window and satisfies the necessary condition for optimality stated above. mj denotes the maximum packet inter-arrival time among all the packets that arrive after the arrival of packet j.

Online Algorithm

Assuming an offline schedule as described above, the time at which packet j starts its transmission is given by

(11.4.9)

image

bj, the backlog when the jth packet starts its transmission, is given by

(11.4.10)

image

where Di is the inter-arrival time of the M packets. The time t < T at which a packet j starts its transmission when there is a backlog of b packets can be set equal to the expected value E(τ(b, t)) of the random variable τ(b, t), which is evaluated numerically.

(11.4.11)

image

The lazy packet scheduling scheme, combined with battery recovery properties, is found to provide energy savings of up to 50% [13].
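The premise the scheme rests on can be made concrete with a small calculation. The Shannon-style energy cost used here, w(τ) = τ(2^(k/τ) − 1), is an illustrative assumption (k lumps together packet size and bandwidth); all the argument needs is that w is decreasing and convex in the transmission duration τ.

```python
# Why lazy scheduling saves energy: for many channel codes the energy
# w(tau) needed to send a fixed-size packet falls as its transmission
# duration tau grows. The cost function below is an illustrative
# Shannon-style assumption, not the exact model of [12].

def w(tau, k=2.0):
    """Energy to send one packet over duration tau (illustrative model)."""
    return tau * (2 ** (k / tau) - 1)

# Sending 4 packets within an 8-second window: bursty (each packet in
# 1 s, then idle) versus lazily stretched (each packet over 2 s).
bursty = 4 * w(1.0)
lazy = 4 * w(2.0)
print(f"bursty: {bursty:.2f} energy units, lazy: {lazy:.2f} energy units")
```

Convexity of w is also why the optimal offline schedule tends to equalize transmission durations across packets, which is what the necessary condition for optimality expresses.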

Battery-Aware MAC Protocol

The battery-aware MAC (BAMAC) protocol [14] is an energy-efficient contention-based node scheduling protocol, which tries to increase the lifetime of the nodes by exploiting the recovery capacity effect of the battery. As explained earlier in this chapter, when a battery is subjected to constant current discharge, the battery becomes unusable even while there exists a sizable amount of active materials. This is due to the rate capacity effect of the battery. If the battery remains idle for a specified time interval, it becomes possible to extend the lifetime of the battery due to the recovery capacity effect. By increasing the idle time of the battery, its theoretical capacity can be fully utilized. Also, Equation 11.4.1 clearly shows that this effect is stronger when the battery has a higher remaining capacity and weakens as the remaining battery capacity decreases. The BAMAC protocol tries to provide enough idle time for the nodes of an ad hoc wireless network by scheduling the nodes in an appropriate manner. It tries to provide uniform discharge of the batteries of the nodes that contend for the common channel. This can be effected by using round robin scheduling (or fair-share scheduling) of these nodes.

In the BAMAC protocol, each node maintains a battery table which contains information about the remaining battery charge of each of its one-hop neighbor nodes. The entries in the table are arranged in non-increasing order of the remaining battery charges. The RTS, CTS, Data, and ACK packets carry the remaining battery charge of the node from which they originated. A node, on listening to these packets, makes a corresponding entry in its battery table. The objective of the back-off mechanism used in the BAMAC protocol is to provide a near round robin scheduling of the nodes. The back-off period is given by

back-off period = Uniform[0, (2^n × CWmin) - 1] + rank × (Tt + TSIFS + TDIFS)

where CWmin is the minimum size of the contention window and rank is the position of that entry in the battery table of the node. TSIFS and TDIFS represent the SIFS and DIFS durations. Their values are the same as those used in IEEE 802.11. Tt is the longest possible time required to transmit a packet successfully, including the RTS-CTS-Data-ACK handshake. The node follows the back-off even for the retransmission of the packets. When this back-off scheme is followed, nodes with lower rank values back off for shorter time durations compared to those with higher rank values. Uniform[0, (2^n × CWmin) - 1] returns a random number distributed uniformly in the range 0 to (2^n × CWmin) - 1, where n is the number of transmission attempts made so far for a packet. Thus the nodes are scheduled based on their remaining battery capacities. The higher the battery capacity, the lower the back-off period. This ensures near round robin scheduling of the nodes. Hence, a uniform rate of battery discharge is guaranteed across all the nodes. This guarantees alternate periods of transmission and idling for each node, which leads to alternate periods of discharge and recovery of the battery, as illustrated in Figure 11.10. In this protocol, whenever a node gains access to the channel, it is allowed to transmit only one packet, giving rise to an average idle time of (N - 1) × m, where N is the total number of nodes contending for the common channel and m is the average time taken by a node for the transmission of a packet. This improves the lifetime of the battery, as it gains more idle time to recover charge because of the recovery capacity effect.

Figure 11.10. Illustration of BAMAC.

image

BAMAC(K) Protocol

Unlike the BAMAC protocol, wherein a node is allowed to transmit only one packet on gaining access to the channel, in the BAMAC(K) protocol proposed in [14], K packets are transmitted consecutively by the node on gaining access to the channel. This provides a discharge time of K × m and an average recovery time of (N - 1) × m × K for the nodes. Though a larger value of K results in a longer recovery time, it also increases the discharge time of the battery during the transmission of the K packets, which worsens the rate capacity effect due to faster depletion of the battery charge. A smaller value of K, on the other hand, decreases the recovery time of the battery. Hence, choosing an appropriate value for K is very important for optimum performance of the protocol.

In the BAMAC(K) protocol, whenever the node attempts to gain access to the channel, it waits for DIFS time duration before transmitting the first packet. If no other neighbor transmits in this duration, the active node (the node that gains access to the channel) initiates its transmission. For transmitting each of the next K - 1 packets, it waits only for an SIFS duration; if the channel remains idle during this SIFS duration, the active node proceeds with the transmission of the packet. This ensures that none of the neighboring nodes gains access to the channel until the active node completes the transmission of K packets. This is ensured since the neighbors never find the channel idle for DIFS time duration.
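The discharge/recovery arithmetic above amounts to a one-line computation; a minimal sketch:

```python
def burst_timing(n_nodes, pkt_time, k):
    """Discharge/recovery arithmetic from the text: sending k packets
    back-to-back discharges the battery for k * pkt_time, and with
    n_nodes served in near round-robin order the node then idles
    (and recovers) for about (n_nodes - 1) * pkt_time * k."""
    discharge = k * pkt_time
    recovery = (n_nodes - 1) * pkt_time * k
    return discharge, recovery
```

Setting k = 1 recovers the plain BAMAC figures: a discharge time of m and an average idle time of (N - 1) × m.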

Both the protocols explained above ensure short-term fairness among the nodes in terms of access to the common channel, and thereby increase the lifetime of the nodes in an ad hoc wireless network. Though the protocol provides fairness to the nodes, it does not provide per-flow fairness. This is because providing per-flow fairness may lead to higher battery consumption at nodes carrying more flows than at nodes with fewer flows. Since the protocol considers improving the lifetime of the nodes as its main objective, individual flows are not taken into consideration. Another issue in this protocol lies in finding an optimal value for K, which depends on a number of parameters such as the number of neighbor nodes contending for channel access, the packet arrival rates at the nodes, packet deadlines, traffic characteristics, and battery parameters and characteristics.

11.4.4 Network Layer Solutions

The lifetime of a network is defined as the time from which the network starts operating until the time when the first node runs out of battery charge. The network layer solutions for battery management aim mainly at increasing the lifetime of the network. The major solutions provided focus primarily on developing routing protocols that use routing metrics such as low energy cost and remaining battery charge.

Traffic-Shaping Schemes

This section discusses some of the traffic-shaping schemes ([3] and [15]), which are based on the battery discharge characteristics.

The scheme proposed in [3] uses the same basic model for the battery explained in earlier sections and is based on the fact that most of the network traffic is bursty. Introducing some acceptable delays in the battery discharge requests paves the way for the battery to be idle for a few time slots. This allows charge recovery to a certain extent. A proper analysis of the traffic characteristics provides a discharge-shaping technique by introducing battery idle times that trade off energy efficiency and delay. We shall now discuss an algorithm that increases the lifetime of the battery by shaping the network traffic.

Shaping Algorithm

The main goal of the algorithm is to introduce delay slots in the battery discharge process. This is done by defining a threshold which is expressed in terms of the amount of charge. The model used in this algorithm consists of a battery with a nominal capacity of N, a charge request rate of αN, a theoretical capacity of T, and a threshold (expressed as a state of charge) of B.

Whenever the state of the battery drops below B, it is allowed to recover through idling. The requests that arrive in the meantime are queued up in a buffer L, whose size is large enough to guarantee zero loss probability. As soon as the battery recovers its charge and enters state B + 1, it starts servicing the queued-up requests. By applying this model of discharge shaping to the cell, the gain obtained is given by image. A large value for M, which is equal to N - B, is favorable, since it results in higher service rates and smaller average delays for the discharge requests; performance improves as the value of M increases. When the arrival of discharge requests is modeled as an ON-OFF process, with each request requiring one charge unit, the ON and OFF times are random variables following the Pareto distribution,
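A minimal discrete-time sketch of the threshold-based shaping idea, under the simplifying (and illustrative) assumptions that at most one request is served per slot and one charge unit is recovered per idle slot:

```python
from collections import deque

def shape_discharge(requests, nominal, threshold, recovery=1):
    """Sketch of discharge shaping: serve requests while the state of
    charge stays above `threshold` (B); once it drops to B, idle and
    recover `recovery` units per slot until the state reaches B + 1,
    queueing arrivals meanwhile in a lossless buffer. `requests` is a
    list of request counts per slot. Returns the number of idle
    (delay) slots introduced. The one-request-per-slot service and
    unit-recovery model are assumptions, not the authors' exact model."""
    charge = nominal
    queue = deque()
    delay_slots = 0
    for arriving in requests:
        queue.extend([1] * arriving)
        if charge <= threshold:          # battery must idle to recover
            charge = min(nominal, charge + recovery)
            delay_slots += 1
        elif queue:                      # serve one queued request per slot
            queue.popleft()
            charge -= 1
    return delay_slots
```

The returned delay-slot count is exactly the acceptable extra latency that is traded for recovery time in this scheme.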

(11.4.12)

image

Thus with an additional delay in the discharge requests, a significant improvement in the performance of the battery can be achieved.

Strategies for Blocking Relay Traffic

One of the main issues in ad hoc wireless networks is relay traffic. As mentioned earlier, each node deals with two kinds of traffic: relay traffic and its own traffic. A trade-off exists between the blocking probability of the relay traffic and the battery efficiency, since intermediate nodes may not wish to forward all of their neighbors' traffic. A method that calculates the optimal fraction of relay traffic, proposed by Srinivasan et al. in [15], is discussed in this section. The model has N nodes uniformly distributed in an ad hoc wireless network, with R(s, d) denoting the set of available routes between source s and destination d. If P(k, r) is the power required to transmit from node k to the next node on route r, then the energy cost associated with route r is given by

(11.4.13)

cost(r) = Σ_{k ∈ r} P(k, r)

Whenever a traffic session is generated at the source node s, the route with the minimum energy cost is selected. A relay (intermediate) node can either allow the session traffic by sending an acknowledgment to s or block it by sending a negative acknowledgment to s. In the latter case, on receiving the negative acknowledgment, the source repeats the process for the next best route on the basis of energy cost. If all the routes are blocked, the session is said to be blocked. Each node tends to behave selfishly with respect to relay traffic. The degree of selfishness is captured by a quantity called sympathy: sympathy(k, r) denotes the sympathy associated with the kth node in route r. The value of sympathy lies between 0 and 1 and reflects the willingness of the node to accept relay traffic; the value 0 reflects complete unwillingness and 1 complete willingness. It is calculated based on factors affecting transmission such as the energy constraints of the node and the node's location in the network. A relay node rejects relay traffic based on the total amount of data the source intends to send to the destination and the strategy used. The following two strategies, based on the sympathy level, are considered in order to explore the trade-off discussed above:

Random strategy: Assuming a session between source s and destination d, available routes are stored in R(s, d) in the increasing order of the sympathy level. Whenever the kth node in route r receives a session request, it accepts it with a probability sympathy(k, r).

Pay-for-it strategy: According to this strategy, each node keeps an account of the help it has received from other nodes relaying its messages, termed credit, and the amount of help it has given to others by allowing relay traffic, termed debit. The node tends to help if it has received more help in the recent past, and rejects requests if its own traffic has been rejected often; that is, the node tries to strike a balance between these two parameters.

The number of packets dropped by the relay nodes decreases as the number of selfish users decreases.
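The credit/debit bookkeeping of the pay-for-it strategy can be sketched as follows. The threshold rule used here (accept a relay request only while debit does not exceed credit) is one illustrative way to balance the two accounts, not the exact rule from [15]:

```python
class PayForItNode:
    """Illustrative pay-for-it node: tracks relay help received
    (credit) and relay help given (debit), and accepts a new relay
    request only while its debit does not exceed its credit."""

    def __init__(self):
        self.credit = 0   # packets other nodes relayed for us
        self.debit = 0    # packets we relayed for others

    def accept_relay(self):
        """Decide on an incoming relay request; relaying raises debit."""
        if self.debit <= self.credit:
            self.debit += 1
            return True
        return False

    def got_help(self, packets=1):
        """Record that a neighbor relayed `packets` of our traffic."""
        self.credit += packets
```

A fresh node accepts one request on credit, then refuses further relaying until its own traffic has been carried by others, which is the reciprocity the strategy aims for.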

Battery Energy-Efficient Routing Protocol

The battery energy-efficient (BEE) routing protocol [13] is an energy-efficient routing protocol that attempts to combine the lazy packet scheduling and the traffic-shaping schemes.

The BEE routing protocol tries to balance the energy consumption among all the nodes in the ad hoc wireless network. In this protocol, a new metric called energy cost is introduced. From the available routes, the one with the lowest energy cost is selected. Even a route with a greater number of hops may be selected, provided it has the minimum power per link. The algorithm also prefers paths whose nodes all have higher battery charge, in order to allow recovery in the batteries with less remaining charge. The network has K nodes with S source nodes and D destination nodes. Any source node s ∈ S can transmit to a destination node d ∈ D through the relay nodes. The initial and instantaneous amounts of battery charge are Bi and bi, respectively, for any node i. The transmission range of all the nodes is p. Any node i can reach any other node j only if the distance (dij) between them is less than p. The energy required to transmit from node i to node j is given by image, provided i lies within the reach of j and vice versa; otherwise, the energy required to transmit is infinity.

The energy required to transmit is discretized into a few energy levels between emin and emax. The mean energy required for node i to transmit a packet is given by image, where Ri is the set of nodes whose distance from i is less than p. Whenever the number of packets transmitted by a node decreases, the energy level of the node increases significantly. This increase in energy level can be expressed as Γ(λi, ei), a function of the transmission rate λi and the mean energy ei. The new battery status (state of charge of the battery) at any time instant is given by bi + Γ(λi, ei) if the node is idle. The energy cost function used in BEE can be defined for the kth route image as follows:

(11.4.14)

image

where s and d are the source and generic destination nodes, respectively; lij is the link between nodes i and j of route image; image is the weighting function such that image = Ai with A being a constant; otherwise, image = 1. Pij denotes the energy penalty that occurs whenever a power level higher than the mean power level is required, and is equal to max(0, eij - ei). image is the minimum value of battery status among the nodes of route image. The routing protocol can be stated as follows: whenever source s has data to send to destination d, it calculates the energy cost function of all the routes leading to d. The best route among them is the one with the minimum cost function, that is, image. One main disadvantage of this scheme is that the complexity of the algorithm depends on the number of routes for which the cost has to be computed. To reduce the complexity, an alternative has been suggested in which the source selects a set of routes c at random from the available list of routes to the destination; the BEE algorithm is then applied to this set alone to find the optimal route.
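The BEE selection rule, including the suggested complexity reduction of costing only a random subset of routes, can be sketched as follows; the function name is illustrative, and `cost` stands in for the energy cost function of Equation 11.4.14:

```python
import random

def select_bee_route(routes, cost, sample_size=None):
    """BEE route selection sketch: evaluate the energy cost function
    (any callable `cost`) on the candidate routes and return the
    minimum-cost one. Passing `sample_size` applies the suggested
    complexity reduction of costing only a random subset of routes."""
    candidates = routes if sample_size is None else random.sample(routes, sample_size)
    return min(candidates, key=cost)
```

With `cost=sum` over per-link energies, a longer route can still win if its total link energy is lowest, matching the text's observation that a route with more hops may be selected.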

Energy Conserving Routing Based on Battery Status

The main goal of battery management schemes is to exploit the discharge characteristics of the battery and allow recovery. Energy-efficient routing protocols are designed to take this battery information into account in the selection of the best route. Chang and Tassiulas proposed an energy-efficient routing protocol [16] which tries to maximize the battery lifetime. The algorithm reduces to a maximum flow problem when there is a single power level, and provides an optimal lifetime for batteries. In order to maximize the lifetime, the traffic should be routed so as to balance the energy consumption among the nodes rather than merely to reduce the power consumed. Most previous work on minimizing overall energy consumption routes packets through the path that has the minimum energy consumption per packet. But such routing remains static, which leads to a more rapid draining of battery charge along those routes. In [16], Chang and Tassiulas try to find the optimal traffic split (distributing the traffic among the multiple routes that exist between the source and the destination), taking into consideration the remaining charge of the battery. The assumptions made are as follows. An ad hoc wireless network with N nodes is considered as a directed graph G(N, A), where A is the set of all directed links (i, j) with i, j ∈ N. Si denotes the set of all nodes reachable from node i, such that for any existing link (i, j), j ∈ Si. The initial battery store of node i is given by Ei. image and image denote the rates at which information is generated and transmitted, respectively, at node i for commodity c ∈ C, where C denotes the set of all commodities. Each commodity contains a set of flows. eij is the energy required to transmit a unit of information from node i to node j, O(c) is the set of source nodes, and D(c) is the set of destination nodes.
image denotes the flow of the commodity, which is the rate at which data is transferred between node i and node j. The optimal flow condition states that the total incoming flows for node n must be equal to the total outgoing flows, as shown in Figure 11.11.

Figure 11.11. Flow condition at node i.

image

The flow augmentation (FA) algorithm is performed at each node to obtain the flow which, in turn, is used to split the incoming traffic. Each node performs the following at each iteration:

• Each source node o ∈ O(c) for a commodity c calculates the shortest cost path to its destination d ∈ D(c).

• Then the flow is increased by an amount of image, where λ is the step size.

• The shortest cost path is recalculated and the process is repeated until the first node runs out of its initial energy.

The main objective of the algorithm is to find the link costs that lead to maximization of the lifetime of the network. The three main factors that influence the cost cij are the energy required to transmit a packet over the link, eij, the initial energy Ei, and the instantaneous residual energy Ēi. A good flow-augmenting path should consume little energy and pass through nodes that have more energy remaining. Simultaneous optimization of these factors is not possible; hence, a balance point is sought between them. Taking this into account, the cost is given by

(11.4.15)

cij = eij^x1 × Ēi^(-x2) × Ei^x3

where x1, x2, and x3 are weighting factors for the terms eij, Ēi, and Ei, respectively. The path cost is the sum of the costs of all the links on the path.
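One plausible instantiation of this link cost and the resulting path cost can be sketched as follows; the exact expression should be checked against [16], and the function names are illustrative:

```python
def link_cost(e_ij, residual_i, initial_i, x1=1.0, x2=1.0, x3=1.0):
    """Assumed form of the flow-augmentation link cost:
    c_ij = e_ij^x1 * residual_i^(-x2) * initial_i^x3.
    Cost grows with the transmit energy e_ij and grows as the node's
    residual energy shrinks, balancing the factors named in the text."""
    return (e_ij ** x1) * (residual_i ** -x2) * (initial_i ** x3)

def path_cost(links):
    """The path cost is the sum of the costs of all links on the path;
    `links` is a sequence of (e_ij, residual_i, initial_i) tuples."""
    return sum(link_cost(*link) for link in links)
```

With x2 > 0, a node whose battery is running low inflates the cost of every link leaving it, steering shortest-cost-path computations around depleted nodes.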

The flow redirection (FR) algorithm is based on the following observation, the proof of which can be found in [16]: "In a single-source, single-destination environment, or in a multiple-source, multiple-destination environment without any constraints on the information generation rates, the minimal lifetime of every path remains the same under the optimal flow condition. In the case of multiple sources and destinations, common-source and common-destination nodes are assumed, with zero-cost links connecting all the sources and destinations, respectively, that is, image." If there exists a flow from source o ∈ O(c) to destination image which uses the minimum total transmitted energy path with a flow value of image, then the steps taken to reroute the flow to a different destination are:

• Determine the paths in which redirection is going to take place.

• Calculate the amount of redirection, that is, the percentage of flows per commodity to be redirected, given by image.

• Redirect the flows through a certain path to image by decrementing an amount image from the outgoing flows and adding the same amount to the flows of the selected path.

11.5 TRANSMISSION POWER MANAGEMENT SCHEMES

The components of the communication module consume a major portion of the energy in ad hoc wireless networks. In this section, we investigate some means of achieving energy conservation through efficient utilization of transmission power, such as the selection of an optimal power level for communication. The transmission power greatly influences the reachability of a node: increasing the transmission range increases coverage, but also increases the power consumption rate at the transmitter. This section deals with finding a trade-off between these two contradictory goals, that is, increasing the coverage of a node and decreasing its battery consumption.

11.5.1 Data Link Layer Solutions

As stated earlier, the transmission power greatly influences the reachability of a node and thus the range covered by it. Power control can be effected at the data link layer by means of topology control and by constructing a power control loop. This section describes different power-based solutions at the data link layer. Recent works in [17], [18], [19], and [20] suggest that a proper selection of power levels for nodes in an ad hoc wireless network can save power and avoid unnecessary wastage of energy. Some of the solutions proposed to calculate the optimum transmission range are as follows:

• Dynamic power adjustment policies

• Distributed topology control algorithms

• Constructing distributed power control loop

• Centralized topology control algorithm

Dynamic Power Adjustment Based on the Link Affinity

Ad hoc wireless networks are prone to frequent link failures due to node mobility; hence, the stability of routes cannot be assured, and frequent link failures lead to reduced throughput. A protocol that selects routes with a very low probability of link failure is proposed in [17]. A parameter called affinity, which decides the stability of a route, is defined. Node m samples a set of signals from node n and calculates the affinity (αnm) as follows:

(11.5.1)

image

that is, the link between two nodes n and m is assumed to be disconnected if the signal strength Snm(current) is well below the threshold signal strength (Sthresh). δSnm(ave) is the average of the rate of change of signal strength over the last few samples.

Each node periodically transmits Hello packets to its neighbors with constant power. As soon as the receiver hears one such Hello packet, it calculates the signal strength of the transmitter (St,t) using the relation specified below. If the time interval between arrivals of Hello packets is represented by τ, then

(11.5.2)

image

where SH is the signal strength of the Hello packet received, τ is the time period between two successive Hello packets, and a is the link affinity between the node and its neighbor. After calculating the signal strength, the node adjusts its transmission power (PT) accordingly. The new adjusted transmission power (Pt,t+τ) is given by

(11.5.3)

image

Each node transmits with the minimum power required to reach the destination; thus dynamic power adjustment is made in a distributed way. The authors of [17] show that around 3% to 5% power saving can be obtained using the above scheme.

Distributed Topology Control Mechanisms

We now discuss the algorithm proposed in [18], which uses a distributed power control mechanism, as opposed to the centralized one used in [21] and [22] and explained in the next section. In this algorithm, each node of the ad hoc wireless network independently runs a localized algorithm and decides the appropriate power level to be used by that node. A node increases its power directionally until it finds at least one node in every direction. It then tries to extend the lifetime of the nodes by reducing the transmission power, and hence the coverage, while guaranteeing the same connectivity as that achieved when the nodes transmit at maximum power. The principle behind this algorithm is that the topology of the network can be changed by choosing appropriate power levels. An improper network topology may suffer from poor network utilization, shorter battery life, and higher delays.

The model used in this algorithm assumes a cone-based topology on a two-dimensional surface, with a set V of n nodes in the plane. Each node consists of a power supply unit, memory, and a processor for performing simple calculations. Any node u can send a broadcast message with power p, where 0 ≤ p ≤ P. Whenever a node initiates a broadcast message, all nodes that receive the message (set N) reply with an acknowledgment; thus the broadcasting node becomes aware of the set N. When any two nodes u and v exchange broadcast and acknowledgment messages, they become aware of each other's directions, which are separated by an angle of π: if node u sees node v in direction θ, then node v sees node u in direction θ + π. Techniques such as angle of arrival (AOA), used to calculate the direction of a node, are assumed to be available. We now look into the algorithm in detail, which consists of two phases.

In the first phase, which is a neighbor discovery process, a distributed algorithm is run on each node so as to form a connected network. This is done as follows. Starting with a small value for the power level, each node sends a broadcast message. Any node receiving it sends an acknowledgment back to the sender. The sender records all the acknowledgments it receives, along with the direction information, and determines whether there exists at least one neighbor in each cone of angle α. Each node u starts with the initial value for the growing power p. If node u discovers a neighbor v, it adds v to its local neighbor set N(u). The node keeps increasing the power until one node is found in each cone or until the power p reaches the maximum value P. This termination condition can be formulated using Figure 11.12: each node in the set N(u) covers a cone around node u, and if the union of all the cones covers the full angle of 2π, the algorithm enters phase 2. The inference made from phase 1 is that if there is a node v in a cone when sending with the maximum power P, then there is always some node v' in that cone when sending with the minimum power p(u); that is, the algorithm is symmetric.
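The phase-1 termination test, namely that there is at least one neighbor in every cone of angle α, reduces to checking that no angular gap between consecutive discovered neighbors exceeds α; a sketch:

```python
import math

def covers_all_cones(neighbor_angles, alpha):
    """Sketch of the cone-coverage termination test: given the
    directions (radians) of the neighbors discovered so far, check
    that every cone of angle `alpha` around the node contains at
    least one neighbor, i.e. that no angular gap between consecutive
    neighbors (including the wrap-around gap) exceeds alpha."""
    if not neighbor_angles:
        return False
    angles = sorted(a % (2 * math.pi) for a in neighbor_angles)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - angles[-1] + angles[0])  # wrap-around gap
    return max(gaps) <= alpha
```

A node would rerun this check each time its growing broadcast power discovers a new neighbor, stopping the power increase as soon as the check passes or p reaches P.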

Figure 11.12. Coverage determination.

image

In the second phase, redundant edges are removed without affecting the network connectivity and without removing the minimum power path. If two nodes v and w exist in the same cone for node u, with v, w ∈ N(u) and w ∈ N(v), then node w is removed from N(u) if it satisfies the condition

(11.5.4)

P(u, v) + P(v, w) ≤ q × P(u, w)

where q ≥ 1 and P(u, w) denotes the power required by node u to reach w. For values of α smaller than image, the algorithm guarantees a maximally connected set.

Constructing Distributed Power Control Loop

The following is a distributed approach that tries to attain an optimal power level for the nodes in an ad hoc wireless network. The authors of [19] proposed a power control loop which increases the battery lifetime by 10-15% and the throughput by around 15%. The algorithm is evaluated on a model that assumes mobility, group communication, and fading due to blockages such as man-made obstacles. The proposed algorithm works at the MAC layer in a distributed fashion. Its main objective is to reduce the energy cost of communication between the nodes, thereby increasing the battery lifetime and the effective bandwidth. The main reasons distributed algorithms are needed in ad hoc wireless networks are node mobility and the absence of a central arbiter that could inform the nodes about the power levels to be used. The algorithm aims at allowing each node to use different power levels when transmitting to different nodes, while at the same time maintaining the connectivity of the network. This is because nodes that are closer require less power for transmission; if a common power level were used, as suggested by Kawadia et al. in [21], interference may increase when a high common power level is used for communicating with nearby nodes. The power control algorithm has been incorporated into the IEEE 802.11 MAC protocol. We now discuss the modifications made to the 802.11 MAC protocol.

• Unlike the usual IEEE 802.11 DCF protocol which uses only one common power level, the algorithm suggested by Agarwal et al. in [19] uses ten different power levels varying with a step size of one tenth of the maximum power level available.

• The format of the message header is also modified as shown in Figure 11.13. The headers of the CTS and Data frames are modified to include the information of the ratio of the signal strength of the last received message to the minimum acceptable signal strength of the node under consideration. When the receiver receives the RTS signal from the transmitter, it attaches to the CTS the ratio information calculated and sends it back to the sender. Similarly, when the sender gets the CTS packet, it includes the ratio in the Data frame and sends it to the receiver. Thus in a single transmission of RTS-CTS-Data-ACK exchange, both the sender and the receiver learn about the transmit power levels of each other.

Figure 11.13. Modifications to IEEE 802.11.

image

• The MAC layer of each node constructs a table which holds information about the transmit power levels of all the neighboring nodes. The information stored in the table is an exponentially weighted average (EWA) history of the ratio for each neighbor. This table tends to be small because of the small number of neighbors in the ad hoc network environment. There are two situations in which the table is consulted. Whenever a message is received from a node, the receiver looks up the EWA history; if the node is not present in the EWA table, a new entry is made. The second situation arises when the receiver is not within the range of the transmitter, that is, when there is no reply to the RTS or the Data frame: the transmitter then increases its power by a step of one tenth of the maximum power available. Similarly, the receiver increments its power level if there is no reply to the CTS it sent. This process continues until the node becomes reachable or the maximum power is reached.
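The per-neighbor table and the step-up rule can be sketched as follows. The ten-level quantization follows the text; the smoothing weight, the initial power assumed for an unknown neighbor, and all names are illustrative assumptions:

```python
class PowerTable:
    """Sketch of the per-neighbor transmit-power table: an
    exponentially weighted average (EWA) of the reported
    signal-strength ratio per neighbor, plus the step-up rule used
    when a handshake packet goes unanswered."""

    def __init__(self, p_max, levels=10, weight=0.8):
        self.step = p_max / levels   # one tenth of max power by default
        self.p_max = p_max
        self.weight = weight         # assumed EWA smoothing factor
        self.ewa = {}                # neighbor -> smoothed ratio history
        self.power = {}              # neighbor -> current transmit power

    def update_ratio(self, neighbor, ratio):
        """Fold a ratio reported in a CTS/Data header into the EWA
        history, creating a new entry for an unknown neighbor."""
        old = self.ewa.get(neighbor)
        self.ewa[neighbor] = ratio if old is None else (
            self.weight * old + (1 - self.weight) * ratio)

    def step_up(self, neighbor):
        """No reply to RTS/CTS/Data: raise power by one step,
        capped at the maximum power."""
        p = self.power.get(neighbor, self.step)
        self.power[neighbor] = min(self.p_max, p + self.step)
        return self.power[neighbor]
```

Repeated `step_up` calls model the retry loop in the text: power climbs one tenth of the maximum at a time until the neighbor answers or p_max is reached.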

Thus each node calculates which level of power is to be used for transmission to each of its neighboring nodes.

The authors of [23] have also proposed a dynamic topology construction algorithm in which nodes calculate their transmission powers by overhearing RTS-CTS packets. The main aim of this power control dual channel (PCDC) protocol is to choose the transmission power in such a way that the connectivity between nodes remains unaffected while the number of simultaneous transmissions in the network, and thereby the throughput, is increased. The main difference between this protocol and the centralized topology control algorithm proposed in [20] lies in the use of dual channels, one for the transmission of control packets and the other for data packets. The use of a separate channel for control packets largely avoids collisions caused by the hidden terminal problem. This works as follows. Whenever a node M hears, on its control channel, the transmission of an RTS packet with power p destined for another node N that would interfere with an ongoing reception at node M, it sends a special CTS packet which makes the RTS sender withdraw its transmission. The duration of the withdrawal is specified by node M based on the estimated remaining duration of its ongoing reception. This increases the end-to-end throughput of the system and also reduces the overall power consumption at the nodes.

Centralized Topology Control Algorithm

The algorithm suggested by Ramanathan and Rosales-Hain in [20] is a centralized algorithm which adjusts the power levels of the nodes to create the desired topology. The problem is formulated as a constrained optimization problem, with the power level as the optimization objective and connectivity or biconnectivity as the constraint.

Unlike the conventional representation of an ad hoc wireless network as a graph, used in [21] and [18], the model assumed in [20] keeps separate the entities contributing to the ability to communicate. Some of these entities are the mobility information, the propagation characteristics, and node parameters such as transmission power and antenna direction. These parameters are defined as follows.

Any graph is said to be k-vertex (edge) connected if and only if there are k vertex (edge)-disjoint paths between every pair of vertices. The graph is connected if k = 1 and biconnected if k = 2. The network is represented as M = (N, L), where N is the set of nodes and L is the location information image, which is the set of coordinates on the plane. The parameter vector of a node is given by P = {f0, f1, ..., fn}, where each fi : N → R is a real-valued function; the parameter vector may include the antenna configuration, spreading code, and hardware parameters. In [20], the authors consider only one parameter, the power of node u (p(u)); hence, P = {fp}. The propagation is represented as γ : L × L → Z, where L represents the set of location coordinates in the plane and γ(li, lj) gives the propagation loss at lj for a packet whose source is at li. For successful reception,

(11.5.5)

p(u) - γ(li, lj) ≥ S

where S is the receiver sensitivity, the threshold strength required for the reception of signals, which is assumed to be known a priori; γ is assumed to be a monotonically increasing function of the geographical distance, given as

(11.5.6)

image

where p must be greater than λ(d) to achieve successful transmission; λ(d) is the least-power function, which specifies the minimum power required to transmit to a node at distance d. Given an ad hoc wireless network represented by M = (N, L), the transmit power function p, and the least-power function λ, the induced graph can be represented as G = (V, E), where V corresponds to the nodes in N and E is the set of undirected edges such that (u, v) ∈ E if and only if both p(u) and p(v) are greater than λ(d(u, v)).

The constrained optimization problem can thus be stated as the connected min-max power (CMP) problem: find a per-node minimal assignment of transmit powers p : N → Z+ such that the graph induced by (M, λ, p) remains connected and the power factor Max_{u ∈ N} p(u) has a minimum value. For the biconnected case, the problem can be stated as the biconnected augmentation min-max power (BAMP) problem: given the graph as in the previous definition, find a per-node minimal set of power increments δ(u) such that the graph induced by (M, λ, p(u) + δ(u)) remains biconnected and the power factor Max_{u ∈ N} (p(u) + δ(u)) has a minimum value. We shall now look into two major algorithms that are used to generate connected and biconnected graphs satisfying these constraints.

The connect algorithm is similar to a minimum cost spanning tree algorithm. The basic idea is to iteratively merge the connected components until only one component is left. The input to the algorithm is a graph M in which the nodes use the minimum power for transmission and hence remain only partially connected. The following steps are performed:

Step 1: The connected node pairs are sorted in increasing order of mutual distance.

Step 2: If the nodes of a pair are in different network components, their powers are increased so as to reach each other.

Step 3: Step 2 is repeated until the whole network becomes connected.
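The connect steps above resemble Kruskal's minimum spanning tree construction with a union-find over components. The sketch below, for simplicity, equates the power needed to reach a node with the distance to it; the real algorithm uses the least-power function λ(d):

```python
def connect(nodes, distances):
    """Sketch of the `connect` step: sort node pairs by mutual
    distance and, whenever a pair spans two components, raise both
    endpoints' powers to the distance needed to reach each other.
    A union-find structure tracks the components."""
    parent = {n: n for n in nodes}

    def find(n):
        # Union-find root lookup with path compression.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    power = {n: 0.0 for n in nodes}
    for d, u, v in sorted((d, u, v) for (u, v), d in distances.items()):
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: merge them
            parent[ru] = rv
            power[u] = max(power[u], d)
            power[v] = max(power[v], d)
    return power
```

Because pairs are processed in increasing order of distance, each power increase is the smallest one that merges two components, which is why the result approximates a per-node minimal connected assignment.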

The biconnect algorithm attempts to discover a biconnected graph from the given graph M so as to satisfy the objectives and the constraints. The extension to the biconnected network from the algorithm connect can be done as follows:

Step 1: The biconnected components are identified in the graph induced by the algorithm connect based on the depth-first search method.

Step 2: The connected node pairs are arranged in non-decreasing order of mutual distance, as done in the previous algorithm.

Step 3: Nodes which are in different components of the network are connected by adjusting the power appropriately, and this step is repeated until the network becomes biconnected.

Now the graph obtained may not be per-node minimal, because adjusting the power to reach the nodes of other components may lead to the addition of edges that are not critical. These edges are termed side-effect edges, as shown in Figure 11.14. Labels of the form s - p denote the step number s and the power p assigned during that step, and d(s) denotes the distance d between the corresponding nodes along with the step s during which the edge was formed. Figure 11.14 (a) is per-node minimal, and Figure 11.14 (b) is obtained by reducing the power level to 1 while still maintaining the connectivity. To restore per-node minimality for the graph shown in Figure 11.14 (b), a post-processing phase is carried out after applying the aforementioned algorithms.

Figure 11.14. Side-effect edges.

image

Let S be the list of sorted node pairs. In the post-processing algorithm, the power of the nodes is reduced to the minimum possible value without affecting the connectivity of the induced graph.

In Figure 11.14 (b), Step 1 connects A and C and Step 2 connects B and D with power level 1. By increasing the power level to 2 in Step 3, A and B get connected. In Steps 4 and 5, CE and DF are formed with power level 3. This creates a side-effect edge CD. Hence, the power of A and B can be reduced back to 1, without affecting the graph connectivity.
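
The post-processing phase can be sketched as follows: each node's power is lowered to the smallest candidate level that keeps the induced graph connected. The Euclidean positions and the d^α power model are again illustrative assumptions of this sketch:

```python
def post_process(positions, power, alpha=2.0):
    """Reduce each node's power to the smallest level that keeps the
    induced graph connected, removing side-effect redundancy."""
    n = len(positions)

    def dist(i, j):
        (x1, y1), (x2, y2) = positions[i], positions[j]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    def connected(pw):
        # Edge (u, v) exists iff both endpoints can cover d(u, v).
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in range(n):
                if v not in seen and \
                   min(pw[u], pw[v]) >= dist(u, v) ** alpha:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    for u in range(n):
        # Candidate levels: just enough power to reach each other node.
        for lvl in sorted(dist(u, v) ** alpha for v in range(n) if v != u):
            if lvl >= power[u]:
                break                  # no further reduction possible
            trial = power.copy()
            trial[u] = lvl
            if connected(trial):       # keep the reduction if safe
                power[u] = lvl
                break
    return power
```

For instance, if all three nodes of the earlier chain are over-provisioned at power 4, the sketch lowers the end node's power back to 1 without disconnecting the graph.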

11.5.2 Network Layer Solutions

The power at the network layer can be conserved by reducing the power consumed for two main operations, namely, communication and computation. The communication-related power consumption is mainly due to the transmit-receive module present in the nodes. Table 11.1 lists the power consumption of the communication module during different modes of operation. Whenever a node remains active, that is, during transmission or reception of a packet, power gets consumed. Even when the node is not actively participating in communication but is in the listening mode waiting for packets, the battery keeps discharging. The computation power refers to the power spent on the calculations that take place in the nodes during routing and power adjustments. The following sections discuss some power-efficient routing algorithms. In general, a routing protocol that does not require large tables to be downloaded or a large number of calculations is preferable. Also, compressing data before transmission decreases the communication power but increases the computation load. Hence, a balance must be struck between the computation and communication tasks performed by the node, which place contradictory demands on its energy.

Table 11.1. Power consumed by Lucent ORiNOCO wireless LAN PC card in different modes

image

Common Power Protocol

In [21], the authors propose a common power protocol (COMPOW) that attempts to satisfy three major objectives: increasing the battery lifetime of all the nodes, increasing the traffic-carrying capacity of the network, and reducing the contention among the nodes.

The main reason behind the need for an optimal transmit power level for the nodes in an ad hoc wireless network is that battery power is saved by reducing the transmission range of the node. This also leads to a connected network with minimum interference. In [21], the authors put forth the following reasons for a common power level in an ad hoc wireless network:

• For the proper functioning of the RTS-CTS mechanism: If there are different power levels for each node, CTS of a lesser-powered node may not be heard by its neighboring nodes. Hence, a neighboring node may start transmitting, which leads to collision.

• For proper functioning of link-level acknowledgments: Whenever the transmitter (T) sends a packet to a receiver (R), the power level of R must be at least equal to that of T so that the acknowledgment sent by node R reaches node T. This implies that the power levels of any two neighbors in an ad hoc wireless network must be equal. By transitivity, this is extended to multi-hop neighbors, and thus a common power level is necessary for all the nodes in the network.

If the common power level selected is very high, it may lead to interference among the nodes, as shown in Figure 11.15 (a). On the other hand, if the value is too low, the reachability of nodes may become very weak, which in turn may leave the network partitioned, as shown in Figure 11.15 (b). Hence, choosing an optimum value is a difficult task. For calculating the common power level, the following solution is proposed in [21]. A network with n nodes and a data rate of W bits/sec, in a circular area of A square meters, is considered for study. The common range to be calculated is assumed to be r. A successful transmission from node T to node R requires that no other transmission occur within a distance of (1 + Δ)r from R, where Δ > 0. Now, let us consider two simultaneous transmissions, one from T to R and another from T' to R', over distances r1 and r2, respectively, as shown in Figure 11.16. Then the distance between the receivers R and R' is given by

(11.5.7)

d(R, R') ≥ Δ(r1 + r2)/2, where r1 = d(T, R) and r2 = d(T', R')

Figure 11.15. Power levels and the range. (a) Interference due to higher transmission range (T1). (b) Partition of the network due to lower transmission range (T2)

image

Figure 11.16. Successful simultaneous transmissions.

image

A conclusion drawn from the above discussion is that the circles of radius Δr1/2 and Δr2/2 centered at the receivers R and R' are disjoint. Hence, the distance between any transmitter and its receiver must be less than the common range r. If the common range for n nodes is r(n), then the problem can be stated as

(11.5.8)

lim_(n→∞) Prob[network with common range r(n) is connected] = 1

if and only if

(11.5.9)

πr(n)² = A(log n + c(n))/n, where c(n) → ∞ as n → ∞

The maximum throughput that can be supported by the network is given by

(11.5.10)

λ(n) = Θ(W/√(n log n)) bits/sec per node

In a practical wireless environment, factors such as the number of nodes and the area of the domain may not be known a priori. In such cases, rather than dealing with the range factor, it is convenient to deal directly with the power level P. To find the smallest power level that ensures network connectivity, [21] proposes the following network feedback strategy. The power level of a node j is given by Pj. Let R(P) denote the set of nodes that are connected when the common power level (P1 = P2 = ... = Pn) is maintained at a particular value P, and let R(Pmax) be the maximal reachable set. By analyzing the feedback and adjusting the power, the smallest power level required can be obtained. That is,

(11.5.11)

P*(t) = min{P : R(P)(t) = R(Pmax)(t)}

Here, t denotes an instant from a set of sampling times at which the power levels are changed. R(P) can be obtained from the routing tables, and hence the common power level can be computed in practice.
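
The feedback strategy can be sketched as a search over the discrete power levels a radio supports; `reachable` is a hypothetical callback standing in for the routing-table inspection described above:

```python
def smallest_common_power(levels, reachable):
    """COMPOW-style feedback sketch: 'levels' is the sorted list of
    discrete power levels the radio supports; 'reachable(P)' returns
    the set of nodes present in the routing table when every node
    transmits at power P. Returns the smallest P whose reachable set
    equals the maximal set R(P_max)."""
    target = reachable(levels[-1])      # R(P_max), the maximal set
    for p in levels:                    # probe levels in ascending order
        if reachable(p) == target:
            return p
    return levels[-1]
```

For example, with hypothetical card levels of 1, 5, 20, and 50 mW, and a topology that becomes fully connected at 20 mW, the function returns 20.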

Kawadia and Kumar showed that the COMPOW protocol works well only in a network with a homogeneous distribution of nodes and exists only as a special case of the CLUSTERPOW protocol proposed by them in [24]. They extended COMPOW to work in the presence of a non-homogeneous dispersion of nodes. This extended power control protocol, called CLUSTERPOW, is a power control clustering protocol in which each node runs a distributed algorithm to choose the minimum power p needed to reach the destination through multiple hops. Unlike COMPOW, where all the nodes of the network agree on a common power level, in CLUSTERPOW the value of p can differ from node to node and is proved to form a non-increasing sequence along the path toward the destination. The authors of [24] have provided an architectural design to implement CLUSTERPOW at the network layer. This loop-free power control protocol can work with any underlying routing protocol.

Globalized Power-Aware Routing Techniques

Minimizing the overall transmission power and distributing the power consumption evenly among the nodes are contradictory goals. We shall now look into the routing algorithms suggested in [25] and [26], which attempt to strike a balance between these two factors by using new node metrics, described below, for the route selection process. In [27], the authors have proposed a power-optimal scheduling and routing protocol that tries to minimize the total average power in the network, subject to constraints such as the peak transmission power of the nodes and the achievable data rate per link.

Minimum Power Consumption Routing

The power required to transmit a packet from node A to node B is proportional to the nth power of the distance d between them, that is, P ∝ d^n, where n varies from 2 to 4 depending on the distance and the terrain between the nodes. A successful transmission from node ni to node nj requires the signal-to-noise ratio (SNR) at node nj to be greater than a specific threshold value. That is, for a successful transmission, the SNR at the receiver node nj, given by SNRj, must satisfy the condition:

(11.5.12)

SNRj = Pi Gij / (Σk≠i Pk Gkj + ηj) ≥ SNRthresh

where Pi is the transmission power of host ni; Gij is the path gain between hosts ni and nj; ηj is the thermal noise at host nj; and the threshold SNRthresh is determined by the maximum tolerable bit error rate (BER).

The total transmission power for route l is the sum of the transmission powers of all nodes in the route. According to minimum power consumption routing (MPCR), the preferred route is the one with minimum total transmission power among all the available routes between the source and the destination. This routing algorithm can be realized by modifying Dijkstra's shortest path algorithm. But this may select a path with a greater number of hops that passes through nodes requiring less power, and hence may increase the end-to-end delay of packets in the network. In addition, involving a greater number of hops may reduce the stability of the route because of node mobility, which is one of the inherent characteristics of ad hoc wireless networks. Hence, the Bellman-Ford algorithm is considered, which takes transmission power into account as the cost metric. The power cost is given by

(11.5.13)

P(ni, nj) = Ptransmit(ni, nj) + Ptransceiver(nj)

where Ptransmit(ni, nj) is the transmission power needed for node ni to reach node nj, and Ptransceiver(nj) is the transceiver power of node nj; the latter term makes the metric favor routes with fewer hops. The cost function at node ni is given by

(11.5.14)

Cost(ni) = Min j∈NH(i) [Cost(nj) + P(ni, nj)]

where NH(i) = {j : nj is a neighbor node of ni}. This algorithm tries to reduce the overall power consumption of the network, but it lacks the ability to reduce the power consumed at individual nodes. As shown in Figure 11.17, node n may be a common node used by multiple flows simultaneously. This may deplete the battery charge of node n faster than that of the other nodes in the network.

Figure 11.17. An illustration of the disadvantage in using shortest path routing.

image
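
The MPCR computation can be sketched with Dijkstra's algorithm, using the per-hop power cost of Equation 11.5.13 as the edge weight; the adjacency-map representation is an assumption of this sketch:

```python
import heapq

def mpcr_route(adj, src, dst):
    """Minimum power consumption routing sketch: Dijkstra's algorithm
    with the per-hop power cost (transmit + transceiver power) as the
    edge weight. 'adj[u]' maps each neighbor v to the cost P(u, v)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")          # destination unreachable
    # Reconstruct the minimum total-power route.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

For a small graph where S reaches D either via A (costs 1 + 1) or via B (costs 4 + 1), the sketch returns the route S-A-D with total power cost 2.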

Minimum Variance in Node Power Levels

The main motivation behind this metric is to ensure that all the nodes are given equal importance and no node is drained at a faster rate than the other nodes in the network. This problem is similar to the load-sharing problem in distributed systems, which is an NP-complete problem. Woo et al. in [26] suggest a scheme called join the shortest queue (JSQ) that tries to attain the optimal solution. According to this algorithm, for transmitting a packet, a node selects as next hop the neighbor that has the least amount of traffic among all its neighbors. The objective can also be realized by applying a round-robin selection of next-hop neighbors.

Minimum Battery Cost Routing

In the minimum battery cost routing (MBCR) algorithm, individual battery charges are taken into consideration while selecting the route; that is, the path selected must not contain nodes that have little remaining battery capacity. This may be done in many ways. Let ci(t) denote the remaining battery capacity of host ni at time instant t, and let fi(ci(t)) represent the battery cost function of host ni. If the function is chosen to reflect the remaining battery capacity of the node, then

(11.5.15)

fi(ci(t)) = 1/ci(t)

which means that the higher the value of the function fi, the more unwilling the node is to participate in the route selection. If a route contains N nodes, then the total cost of the route is the sum of the cost functions of all these N nodes. The routing algorithm selects the path with the minimum total cost among all the routes that exist between the source and the destination:

(11.5.16)

Cost(Rj) = Σ ni∈Rj fi(ci(t)),    R* = Min j∈A Cost(Rj)

Here A is the set of all routes from the source to the destination. The major drawback of this scheme is that using the sum of the nodes' battery costs as a metric selects the path whose nodes have more remaining energy on average, rather than considering the individual nodes. This may lead to the situation shown in Figure 11.18, where Route 1 is selected in spite of some of its nodes having less battery capacity than the nodes of Route 2. In Figure 11.18, "Node x(y)" denotes node x with y equal to the value of the battery cost function fx(cx(t)) for node x at time instant t. Although the battery capacity of node 3 is very low, the path containing node 3 is selected for a connection from source to destination because Route 1 has a lower total battery cost due to the other nodes in the path.

Figure 11.18. Illustrating the disadvantage of minimum-cost routing.

image

The algorithm works well when all the nodes have high battery capacity, but once the batteries of the network nodes are nearly drained, some discharge control mechanism is required to ensure uniform drainage of the batteries. The main advantage of this algorithm is that the metrics used can be directly incorporated into the routing protocol.

Min-Max Battery Cost Routing

The objective of the min-max battery cost routing (MMBCR) algorithm is to ensure that route selection takes into account the battery capacity of every individual node. Hence, the battery cost of a route is defined as the cost of its weakest node:

(11.5.17)

Cost(Rj) = Max ni∈Rj fi(ci(t))

Therefore, the desired route is given by R* = Min j∈A Cost(Rj), where A is the set containing all possible routes. A variant of this routing algorithm minimizes the maximum cost after routing N packets to the destination or after a time period of t seconds. This tries to postpone the first node failure, which ultimately leads to a longer network lifetime. While this algorithm ensures uniform discharge of the batteries, a closer look reveals that the path chosen does not ensure minimum transmission power, and hence it more rapidly reduces the lifetime of all the nodes.

Conditional Min-Max Battery Cost Routing

In order to resolve the contradictory goals of the algorithms mentioned previously, conditional min-max battery cost routing (CMMBCR) considers the battery capacity directly instead of using a battery cost function. A threshold value γ is defined for the remaining battery capacity of the nodes. Only those paths in which every node has capacity sufficiently higher than the threshold participate in the routing algorithm. Once the competing routes are determined, the minimum total transmission power routing (MTPR) algorithm, that is, the MPCR scheme described earlier, is applied to them so as to choose the path with minimum total transmission power. The battery capacity of route j at time t is defined as the capacity of its weakest node:

(11.5.18)

Cj(t) = Min ni∈Rj ci(t)

Any route j can participate in the routing process only if Cj(t) ≥ γ.
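
The three battery-aware metrics can be contrasted in a few lines. In this sketch, `cap[n]` (remaining capacity) and `tx_power[n]` (per-hop transmission power) are hypothetical per-node tables, and the battery cost function is taken to be the reciprocal of the remaining capacity:

```python
def mbcr(routes, cap):
    """MBCR: pick the route minimizing the SUM of battery costs
    f_i = 1/c_i over its nodes (cap[n] = remaining capacity)."""
    return min(routes, key=lambda r: sum(1.0 / cap[n] for n in r))

def mmbcr(routes, cap):
    """MMBCR: pick the route minimizing the MAXIMUM battery cost,
    i.e. protect the weakest node on each route."""
    return min(routes, key=lambda r: max(1.0 / cap[n] for n in r))

def cmmbcr(routes, cap, tx_power, gamma):
    """CMMBCR: keep only routes whose every node has capacity >= gamma,
    then apply minimum total transmission power (MTPR) among them;
    fall back to MMBCR when no route survives the threshold."""
    ok = [r for r in routes if min(cap[n] for n in r) >= gamma]
    if not ok:
        return mmbcr(routes, cap)
    return min(ok, key=lambda r: sum(tx_power[n] for n in r))
```

On two candidate routes where one contains a nearly drained node but is otherwise strong, MBCR may still pick that route (its total cost is lower), MMBCR avoids it, and CMMBCR avoids it whenever the weak node falls below γ.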

Minimum Energy Disjoint Path Routing

In [28], Srinivas and Modiano have proposed minimum energy disjoint path routing for two cases: (a) the node-disjoint case and (b) the link-disjoint case. Disjoint paths are important in wireless networks, and especially in ad hoc wireless networks, because of the need for reliable packet transmission and energy efficiency. Ad hoc networks are highly unreliable due to the mobility of nodes, and hence the probability of link failure is quite high in such networks. This problem can be overcome by means of link-disjoint routing. Also, since the ad hoc nodes have stringent battery constraints, node-disjoint routing considerably increases the lifetime of the network by choosing different routes for transmitting packets at different points of time. The routing schemes assume that the topology is known a priori in the form of an energy-cost graph. The authors have proposed optimal algorithms for finding k minimum energy link-disjoint and node-disjoint paths in polynomial time, with complexities of O(kN³) and O(kN⁵), respectively, where N is the number of nodes in the network. They have also proposed a number of sub-optimal heuristics with a run-time of O(kN²), as well as a distributed version of the optimal algorithms.

Localized Power-Aware Routing Techniques

The aim of this routing protocol is to find routes that increase the net lifetime of the power source (battery), using localized algorithms. Localized algorithms are distributed greedy algorithms that try to achieve a global objective based on the information available locally at each node. In this section, we will look into one of the power-cost-aware routing protocols, proposed by Stojmenovic and Lin in [29]. This protocol is based on the principle that, by placing intermediate nodes at suitable locations between two nodes separated by a distance d, the transmission power can be made proportional to d rather than to d^α [25], where α ≥ 2. The protocol tries to find the route that minimizes the total power needed, which increases the battery lifetime. The protocol is designed to satisfy the following main objectives:

• Use of location dependent routing: The distance between the nodes is a vital piece of information that has to be taken into account to minimize the energy required for each routing task. The location of the nodes can be obtained by the methods specified below:

– Using the global positioning system (GPS), the location of the nodes can be obtained by using the information obtained from the satellite.

– Receiving control messages from the neighbors at regular intervals and observing the signal strengths obtained at different points of time provides data about their distance from the node concerned. This process, which is explained in [17], is discussed in Section 11.5.1.

• The routing protocol must be loop-free. This is to ensure that the path selected uses minimum power for transmission.

• When shortest path routing is applied to a graph, it may so happen that the same node is involved in several routing tasks. This eventually decreases the lifetime of that node. The protocol must distribute the traffic load so as to avoid this problem.

• The routing protocol must be designed in such a way to reduce the amount of information exchanged among the nodes, since communication incurs loss of energy. Increase in the number of communication tasks also increases the traffic in the network, which results in loss of data, retransmissions, and hence more energy consumption. The number of communication tasks can be decreased by avoiding centralized algorithms and avoiding maintenance of large routing tables.

• Selection of routes must be flexible and should try to avoid memorizing past traffic or the routes, thereby avoiding large storage.

• Another main objective is to achieve maximum packet delivery for dense networks.

• Adaptation of the routing algorithm to the continuous topological changes is important as far as ad hoc wireless networks are concerned.

The algorithm aims at selecting, for a particular routing task, a single path that guarantees delivery of packets. This is because the mobile nature of the nodes in ad hoc wireless networks leads to frequent topological changes. The model suggested by Stojmenovic and Lin in [29] makes use of min-power graphs, which are constructed as follows. The concept used here is similar to that suggested in [22]. Two nodes are said to be connected neighbors if and only if they satisfy the condition given below:

(11.5.19)

d(A, B) ≤ min(t(A), t(B))

where t(x) and d(A, B) denote the transmission range of node x and the distance between nodes A and B, respectively. Min-power graphs are built using this condition. If t(x) is the same for every node x in the set of nodes, then the graph is said to be a unit graph. Before going into its details, we will discuss the model used in describing the properties.

• The power needed for transmitting and receiving is assumed to be u(d) = ad^α + c, where c is a constant dependent on the energy spent on processing for encoding and decoding. The parameter a is adjusted according to the physical environment.

• When sender S sends packets directly to the destination D, let the distance between nodes S and D be given by |SD| = d. If the packets traverse through an intermediate node A, let |SA| = x and |AD| = d - x. Let the intermediate node be placed at any arbitrary location. The following properties hold for the prescribed model:

Lemma 1: There always exists an intermediate node A between source S and destination D that reduces the energy consumption, if packets from S destined to D are routed through it, whenever the condition d > (c/(a(1 − 2^(1−α))))^(1/α) holds. Maximum power saving is achieved when A is placed exactly at the midpoint of SD.

Lemma 2: If d > (c/(a(1 − 2^(1−α))))^(1/α), then maximum power saving is obtained by dividing SD into n equal intervals, n being the nearest integer to d(a(α − 1)/c)^(1/α). The minimal power is then given by

(11.5.20)

v(d) = d c (α/(α − 1)) (a(α − 1)/c)^(1/α)
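
The two lemmas are easy to check numerically. A small sketch follows, with illustrative constants a = 1, c = 10, and α = 2 (any values satisfying the model u(d) = ad^α + c would do):

```python
def direct_power(d, a=1.0, c=10.0, alpha=2.0):
    """Power for a single direct transmission over distance d."""
    return a * d ** alpha + c

def relayed_power(d, n, a=1.0, c=10.0, alpha=2.0):
    """Total power when distance d is split into n equal hops."""
    return n * direct_power(d / n, a, c, alpha)

def best_hops(d, a=1.0, c=10.0, alpha=2.0):
    """Lemma 2: the optimal hop count is the nearest integer to
    d * (a * (alpha - 1) / c) ** (1 / alpha)."""
    return max(1, round(d * (a * (alpha - 1) / c) ** (1 / alpha)))
```

For d = 20 the optimal split is n = 6 hops, and the resulting total power is lower than both the neighboring splits (n = 5 and n = 7) and far below the direct-transmission power a·d^α + c.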

First we discuss a power-saving algorithm, then a cost-saving algorithm, and finally an efficient power-cost saving algorithm derived from the previous two algorithms.

Power-Saving Localized Routing (SP-Power) Algorithm

The centralized version of the above algorithm can be effected by using Dijkstra's single-source shortest weighted path algorithm, where the weight of an edge of length d is u(d) = ad^α + c. This is referred to as the SP-power algorithm. The corresponding localized algorithm is as follows.

Power Calculation: We now calculate the power required to transmit a packet from node B (the source or an intermediate node) toward node D. Let node A be a neighbor of B, and let |AB| = r, |BD| = d, and |AD| = s. The power needed to transmit from node B to node A is u(r) = ar^α + c. Assuming that the remaining transmissions from node A to node D use optimally spaced relays, applying Lemma 2 above gives

(11.5.21)

v(s) = s c (α/(α − 1)) (a(α − 1)/c)^(1/α)

The power-saving localized routing algorithm from source S to destination D is given below.

Step 1: Let A := S.

Step 2: Let B := A.

Step 3: Each node B, which may be a source or intermediate node, will select one of its neighbors A so as to minimize the power p(B, A)=u(r) + tv(s) and sends the message to neighbor node A.

Step 4: Steps 2 and 3 are repeated until the destination is reached, that is, A = D, or the delivery has failed, in which case A = B.
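
The steps above can be sketched as a greedy forwarding loop. The neighbor lists, the position table, and the constant t are assumptions of this sketch; v(s) is the Lemma 2 estimate of the power for the remaining distance:

```python
def sp_power_route(neighbors, pos, src, dst,
                   a=1.0, c=10.0, alpha=2.0, t=1.0):
    """SP-power localized routing sketch: at each hop, the current
    node B picks the neighbor A minimizing p(B, A) = u(r) + t*v(s),
    where u(r) = a*r**alpha + c is the power to reach A and v(s)
    estimates the remaining power over distance s = |AD|."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def u(r):
        return a * r ** alpha + c

    def v(s):  # minimal power over distance s, per Lemma 2
        return s * c * (alpha / (alpha - 1)) * \
               (a * (alpha - 1) / c) ** (1.0 / alpha)

    path, B = [src], src
    while B != dst:
        if dst in neighbors[B]:
            A = dst
        else:
            if not neighbors[B]:
                return None            # delivery failed
            A = min(neighbors[B],
                    key=lambda x: u(d(pos[B], pos[x])) +
                                  t * v(d(pos[x], pos[dst])))
        if A in path:
            return None                # delivery failed (loop)
        path.append(A)
        B = A
    return path
```

On a simple three-node line S-M-D, the sketch forwards through M and delivers at D.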

Cost-Saving Localized Routing (SP-Cost) Algorithm

This algorithm is based on the relation f(A) = 1/g(A), where f(A) is the cost of node A and g(A) denotes the remaining lifetime of node A, normalized to lie in the interval (0, 1). The localized version of this algorithm under constant power is described below.

Cost Calculation: The total cost c(A) incurred when a packet moves from node B to node D via intermediate node A is the sum of the node's cost f(A) = 1/g(A) and the estimated cost of the route from node A to node D. The cost f(A) of the neighbor node A holding the packet is known to node B. The cost of the nodes in the rest of the path is proportional to the number of hops between nodes A and D, which is in turn proportional to the distance s = |AD| between nodes A and D and inversely proportional to the transmission radius R. Thus the estimated cost of the remaining path is ts/R, where t is a constant. Hence, the total cost can be taken as c(A) = ts/R + f(A) or c(A) = tsf(A)/R. The cost-saving localized routing algorithm from source S to destination D is given below.

Step 1: Let A := S.

Step 2: Let B := A.

Step 3: Let each node B, which may be a source or intermediate node, select the neighbor node A that minimizes the cost c(A). If node D is one of the neighbors, send the message to node D, else send it to node A.

Step 4: Steps 2 and 3 are repeated till the destination is reached, that is, A = D, or the delivery has failed, in which case A = B.

Power-Cost-Saving Localized Routing Algorithm

In order to arrive at the power-cost algorithm, the power-cost of sending a message from node B to its neighbor node A must be known; it can be defined as power-cost(B, A) = f(A)u(r), where |AB| = r, or as power-cost(B, A) = αu(r) + βf(A), where α and β are constants. Depending on which form is used, product or sum, the power-cost-saving localized routing algorithm from source S to destination D is referred to as the SP-Power*Cost or the SP-Power+Cost algorithm.

Step 1: Let A := S.

Step 2: Let B := A.

Step 3: Let each node B, which may be a source or intermediate node, select the neighbor node A that minimizes the value pc(B, A) = power-cost(B, A) + v(s)f'(A). Send the message to A.

Step 4: Steps 2 and 3 are repeated till the destination is reached, that is, A = D, or the delivery has failed, in which case A = B.

Energy-Efficient Ad Hoc Wireless Networks Design Algorithm

In ad hoc wireless networks, cluster formation is done to provide better local coordination, hierarchical addressing, and better scheduling of resources. Such cluster formation involves distributed identification of a cluster-head that has the responsibilities of internal coordination and scheduling. Inter-cluster communication is achieved through cluster gateway nodes. In a power-constrained network, these additional responsibilities deplete the energy reserve of the cluster-head faster than that of the other nodes in the cluster. This results in constant change of the cluster-heads and an inefficient topology. We will now discuss an energy-efficient network design algorithm, called the ad hoc network design algorithm (ANDA), which tries to maximize the network lifetime for a static network whose cluster-heads are known a priori. The basic idea used in this protocol is that the cluster-heads dynamically adjust their power levels, and hence the cluster sizes, based on their remaining battery charge. Thus energy is drained uniformly from the cluster-heads, which in turn increases the lifetime of the network.

Let SC and SN be the set of cluster-heads and the set of member nodes in the network, respectively. Assuming that the cluster-heads are known a priori and fixed, the following functions are proposed by Chiasserini et al. in [30]:

Covering: This function tries to find an optimal coverage for all cluster-heads. The algorithm states that all the nodes choose one among the set of cluster-heads that projects maximum lifetime. The resulting network configuration guarantees minimum energy consumption.

Reconfiguration: Finding an appropriate time for network reconfiguration is a crucial task; it depends on the remaining energy of the cluster-heads. Assuming that all cluster-heads initially have the same energy and that the nodes select their cluster-heads based on the covering algorithm specified above, the remaining energy at the cluster-heads after a time interval t from the last reconfiguration is given by

(11.5.22)

Ei = Ei − t(α ri² + β ni),    i = 1, ..., C

where Ei (i = 1, ..., C) indicates the remaining energy at cluster-head i; α and β are constant weighting factors; ri is the radius of coverage of cluster-head i; and ni denotes the number of nodes under cluster-head i. The function covering is then performed on the nodes again to reassign them to the corresponding cluster-heads. Thus, load balancing of the nodes is carried out in the network to increase the network lifetime.
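
A greedy sketch of the covering function follows: each node joins the cluster-head whose projected lifetime, estimated as Ei/(α ri² + β ni) per the energy model above, remains largest after accepting it. The one-pass incremental assignment order is an assumption of this sketch, not part of ANDA's specification:

```python
def covering(heads, nodes, energy, alpha=1.0, beta=1.0):
    """ANDA-style covering sketch: assign every node to the
    cluster-head whose projected lifetime E_i/(alpha*r_i**2 + beta*n_i)
    is largest after accepting it, where r_i is the head's coverage
    radius and n_i the number of covered nodes."""
    def d2(p, q):  # squared Euclidean distance
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    assign = {h: [] for h in heads}
    radius2 = {h: 0.0 for h in heads}      # squared coverage radius
    for node, npos in nodes.items():
        best, best_life = None, -1.0
        for h, hpos in heads.items():
            r2 = max(radius2[h], d2(hpos, npos))  # radius if accepted
            n = len(assign[h]) + 1                # load if accepted
            life = energy[h] / (alpha * r2 + beta * n)
            if life > best_life:
                best, best_life = h, life
        assign[best].append(node)
        radius2[best] = max(radius2[best], d2(heads[best], npos))
    return assign
```

With two equally charged heads at opposite ends of an area, each node is assigned to the nearer head, since covering a distant node would inflate the head's radius and shorten its projected lifetime.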

Determination of Critical Transmission Range

The authors of [22] propose a centralized algorithm that calculates, based on global information from all the nodes, the minimum power level each node requires to maintain network connectivity. This minimum value of the node's transmission range is termed the critical transmission range. The algorithm aims at finding, for each node, the transmission range that acts as a tradeoff among network connectivity, spatial reusability, and battery-life extension. The optimal value of the transmission range depends on the following factors:

Mobility: Due to the mobile nature of ad hoc wireless networks, the links are frequently formed and broken. This greatly affects the optimum range to be considered.

Propagation models: Different fading mechanisms exist, some of which are frequency-dependent. The propagation model that results from these mechanisms has been developed for different network operational conditions such as atmospheric conditions and man-made obstacles. The use of an appropriate propagation model can help in studying the changes required in the physical layer under different environmental conditions.

Power level: The larger the power level, the larger is the transmission range covered. This leads to faster depletion of the battery charge and hence reduces the lifetime of the node.

Antenna configuration: Another factor to be considered is how the power is spatially distributed, which actually depends on the antenna model used.

The problem is to find the critical transmission range for each of the nodes so that the network remains connected. But finding the nodes within the transmission range is unpredictable due to node mobility. Some mobility model has to be assumed to simulate the changing topology. This model can be viewed as a set of snapshots, separated by small time intervals, which represent the locations of the mobile nodes. The authors of [22] show that most of the results discussed are not sensitive to the mobility pattern under consideration. Since the topology keeps changing, maintaining the same power level for all the nodes, as suggested in [21], is not possible. Hence, further investigation is done in this regard.

The network can be represented as a graph G(v, e) with vertex set v and edge set e. The links of a network are divided into two types: essential links, which are necessary to maintain a fully connected network, and the rest of the links, which form the set of redundant links. In order to find the essential links, a new concept called direct neighbor is introduced by Sanchez et al. in [22]. Nodes a and b are said to be direct neighbors of one another if and only if the node closest to a is b, and vice versa. Now the problem can be stated as

(11.5.23)

Find l* = min{l : Conn(G) = 1}

where

(11.5.24)

G = (v, e(l)),    e(l) = {(i, j) : d(xi, xj) ≤ l}

where xi denotes the location of node i, l is the transmission range, and Conn(G) is the connectivity of the graph, that is, the ratio of the number of pairs of nodes that are connected to the maximum possible number of such connections; it varies between 0 and 1. A link between two nodes i and j is said to be a critical link (CL) if its removal leaves the network partitioned. It is the link between two direct neighbors that determines the critical range. Thus, direct neighbors can communicate without the intervention of any other node. As shown in Figure 11.19, (a, b), (b, c), and (c, d) are direct neighbors, whereas (b, d) is not, as c acts as a link between them. From this, a direct neighbor graph (DNG) Gd is created:

Figure 11.19. A sample network.

image

(11.5.25)

image

Now the problem of finding a critical link is the same as finding the longest link in the DNG. The CL can be found by removing the loops in the DNG, which is done by deleting links in increasing order of their lengths without affecting the network connectivity. The edges of the DNG are a subset of e*. A minimum spanning tree (MST) of the graph G, which covers all the network nodes while minimizing some link metric, can be used to construct the DNG. This is given by

(11.5.26)

image

where et is the set of edges that provides network connectivity with the minimum sum of edge lengths; it is also a subset of the direct neighbor edge set ed. Thus, by creating the DNG and applying the MST algorithm to it, the critical link of the network graph can be found. The length of this CL gives the critical range and hence the optimal power level for the corresponding nodes.
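The MST-based computation above can be illustrated with a small sketch (a hypothetical, non-optimized implementation; the function name is our own). It relies on the fact that the MST of the node locations is also a bottleneck spanning tree, so its longest edge is the critical link, and its length is the critical transmission range for a static snapshot of the network:

```python
import math

def critical_range(positions):
    """Longest edge of a minimum spanning tree over the node
    locations: with this transmission range the network is just
    connected (Prim's algorithm, O(n^2))."""
    n = len(positions)
    dist = lambda a, b: math.dist(positions[a], positions[b])
    # best[v] = shortest distance from v to any node already in the tree
    best = {i: dist(0, i) for i in range(1, n)}
    longest = 0.0
    while best:
        u = min(best, key=best.get)        # closest node outside the tree
        longest = max(longest, best.pop(u))
        for v in best:                     # relax distances via the new node
            best[v] = min(best[v], dist(u, v))
    return longest

nodes = [(0, 0), (1, 0), (3, 0), (3, 2)]
print(critical_range(nodes))  # 2.0: link (3,0)-(3,2) is the critical link
```

In a mobile setting this computation would be repeated for each topology snapshot, as described above.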

11.5.3 Higher Layer Solutions

This section describes some of the power-aware techniques employed at the TCP/IP and application layers of the protocol stack. The protocols used at these layers incorporate power control and energy conservation mechanisms.

Congestion Control and Transmission Policies at the TCP/IP Layer

Due to the mobile nature of ad hoc wireless networks and the absence of a central coordinator such as a base station, the links are highly error-prone. This results in a large number of retransmissions, which in turn constantly invoke congestion control mechanisms. Such a situation is intolerable in ad hoc wireless networks because of the limited energy source of the nodes. Protocols at the TCP layer have to take the energy reserve into account while allowing retransmissions. Some of the suggestions made in [8] are listed below.

• The number of retransmissions can be considerably reduced by making a few modifications in the error-control protocols at the TCP/IP layer, such as selective retransmissions and explicit loss notification (ELN), which try to differentiate between congestion and other types of losses.

• Error correlation greatly affects the energy conservation of ad hoc wireless networks. TCP layer protocols that take error bursts into consideration and back off accordingly improve the energy efficiency of the system.

• If expected packets are not received, a probe cycle may be initiated by exchanging probe messages with the transmitter, rather than directly invoking congestion control algorithms. This achieves higher throughput while consuming less energy, due to the reduced traffic in the network.
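The probe-cycle idea in the last item can be sketched as follows. This is our own simplification, not the protocol of [8]: `exchange_probe` is a hypothetical callable standing in for the probe message exchange, and a real implementation would embed this logic in the retransmission timeout path.

```python
def react_to_loss(exchange_probe, max_probes=3):
    """When expected packets fail to arrive, exchange lightweight
    probe messages with the peer before reacting. A successful probe
    exchange indicates a transient (non-congestion) loss, so plain
    retransmission suffices; repeated probe failures indicate
    congestion. `exchange_probe` returns True when a probe response
    arrives."""
    for _ in range(max_probes):
        if exchange_probe():
            return "retransmit"          # link loss: keep the window intact
    return "congestion_control"          # back off and shrink the window

print(react_to_loss(lambda: True))       # retransmit
```

The energy saving comes from the branch taken on transient losses: the window is not collapsed, so the connection avoids the extra retransmissions that a spurious congestion response would cause.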

OS/Middleware and Application Layer Solutions

A quadratic relationship exists between power and voltage. Reducing the circuit speed allows the supply voltage to be lowered, which significantly reduces the power consumed at the node; the resulting loss in speed can be compensated for by parallelism and pipelining. Thus, throughput can be maintained, or even increased, at a small energy cost. Furthermore, some activities are sporadic and triggered only by external events; hence, the system can be shut down during the intervening periods of inactivity.

At the application layer, protocols such as the advanced configuration and power interface (ACPI) and power management tools such as power monitor have been developed to assist programmers in creating power-efficient applications. Using proxy servers results in redirection or reduction of traffic; for example, in multimedia applications the video quality may be scaled down while only the audio is sent, and local traffic can be redirected when it arrives along with the network traffic. This results in a large amount of energy conservation. Energy-conserving metrics, such as energy per transaction, can also be used in database design. In the context of multimedia, where traffic consumes a large amount of energy in ad hoc wireless systems, energy conservation can be achieved by reducing the number of bits in the compressed video and by transmitting only selected information.

11.6 SYSTEM POWER MANAGEMENT SCHEMES

This section deals with power control in the peripherals and the processor of nodes in an ad hoc wireless network. Efficient design of the hardware brings about significant reduction in the power consumed. This can be effected by operating some of the peripheral devices in power-saving mode by turning them off under idle conditions. System power consists of the power used by all hardware units of the node. This power can be conserved significantly by applying the following schemes:

• Processor power management schemes

• Device power management schemes

11.6.1 Processor Power Management Schemes

Processor power management schemes deal with techniques that try to reduce the power consumed by the processor, such as reducing the number of calculations performed. In this section, we discuss some of the power management techniques that are applied at the hardware level when there is a request from the higher layers.

Power-Saving Modes

The nodes in an ad hoc wireless network consume a substantial amount of power even when they are in an idle state, since they keep listening to the channel, awaiting request packets from their neighbors. In order to avoid this, the nodes are switched off during idle conditions and switched on only when a request packet arrives. This approach has two main advantages: it reduces the power wasted when the node is in listen mode, and it provides idle time for the batteries of the node to recover charge. Since the arrival of request packets is not known a priori, it is difficult to calculate the duration for which the node has to be switched off. One solution to this problem, suggested in [31], calculates the node's switch-off time based on the quality of service (QoS) requirements. An assumption made in these systems is that the battery subsystem of the node can be remotely powered on by means of a wake-up signal based on RF tag technology. RF tags are used as transponders for remote localization, activation, and identification of objects within a short range. Hard QoS requirements make the node stay active most of the time, which results in high consumption of the battery charge.

In [31], the authors suggest a distributed technique in which a sleep pattern is selected by each node, that is, the node selects different timeout values (sleep durations) depending on the traffic delay and the remaining charge of its battery. The model assumes that {1, ..., L} are the sleep states, with L corresponding to the fully active state and 1 corresponding to the deepest sleep state. The deeper the sleep state of the node, the longer the lifetime of the battery, but the larger the delay encountered in packet transmission or reception. To implement remote activation of the nodes, a remote activated switch (RAS) is used, as shown in Figure 11.20. As soon as the node enters the idle state, it is switched off by the RAS. The receiver of the RAS still listens to the channel; it is designed to be either fully passive or powered by the battery. A remote neighbor sends the wake-up signal followed by a sequence. The receiver, on receiving the wake-up signal, detects the sequence, and the logic circuit compares it with the standard sequence for the node. It switches on the node only if the two sequences match.

Figure 11.20. Remote activated switch.

image

We now discuss a power management scheme using RAS which exploits the aforementioned technique of switching off the nodes. The model used in this scheme assumes the following:

• {1, ..., L} are the sleep states, where L is the fully active state.

• Let the power consumption when the node is in one of the sleep states be denoted by Pi(i = 1, ..., L) and the corresponding delay overhead be given by Wi(i = 1, ..., L-1), where P1 < P2 < ... < PL and W1 > W2 > ... > WL-1.

• Let image be the power spent on the transition from state l (l=1, ..., L-1) to state L, and Zl be the minimum time that a node has to stay in a state in order to achieve a positive energy gain. The value of sleep time Zl in state l is given by

(11.6.1)

image

• The nodes select the sleep patterns image based on the constraint

(11.6.2)

image

where q = {1, ..., Q}, Yl is a system parameter greater than Zl, and the value of Q depends on the battery status and the QoS requirements.

• Any node currently in state l wakes up if and only if

(11.6.3)

image

where τj is the time spent by the node in state j.

The power management scheme using RAS is shown in Figure 11.21. The figure shows the steps followed by transmitters and receivers that use this scheme.

Figure 11.21. Power management scheme using remote activated switch.

image

Power-Aware Multi-Access Signaling

Power-aware multi-access signaling (PAMAS) [32] is another approach for determining the time duration for which the node should be turned off. This scheme suggests the addition of a separate signaling channel in the MACA protocol [33]. The RTS-CTS signaling takes place in this separate channel, which determines the time period for which the node has to be powered off. The algorithm is divided into two parts:

Addition of separate signaling channel: This can be explained through a state diagram as shown in Figure 11.22. A node can be in any one of the six states represented within the boxes. The algorithm for the MAC layer using PAMAS is described below. Initially, when a node neither transmits nor receives packets, it stays in the idle state.

Packet transmission:

– As soon as the node gets a packet for transmission, it transmits an RTS and enters the Await CTS state.

– If it does not receive the CTS, it enters the binary exponential back-off (BEB) state. A node also enters the BEB state if it hears a busy tone sent on the control channel by an actively transmitting neighbor.

– After receiving the CTS, it enters the Transmit packet state and starts transmitting the packet.

Packet reception:

– As soon as a node receives an RTS, it sends a CTS back to the sender and enters the Await packet state, provided that none of its neighboring nodes is in the Await CTS or Transmit packet state.

– If packets arrive on time, the node enters the Receive packet state and starts receiving the packets.

– If the packet does not arrive on time, the node enters the idle state again.

Figure 11.22. PAMAS protocol.

image

Powering off the radios: We now discuss the conditions under which the node enters the power-off mode:

Condition 1: The node has no packets for transmission.

Condition 2: A neighbor node is transmitting or receiving packets, that is, the channel is busy.

The following protocol is carried out by each node to decide the duration for which it should be powered off. When a node has packets to be transmitted but the channel is occupied by one of its neighbors, it knows when the neighbor will finish its transmission and the channel will become free. If the neighbor takes t1 seconds to finish its transmission, then the node powers itself off for t1 seconds.

When the node powers on again and hears an ongoing transmission started by some other node, it goes back to the sleep state. The time (t2) for which it remains asleep is based on an additional feature of the protocol, described as follows. As soon as the node wakes up and has packets to send, it sends a probe packet t_probe(l) to all its neighbors on the control channel, where l is the maximum packet length. If there exist transmitters whose transmissions are expected to end in the time period (l/2, l), they respond to the probe packet with a t_probe_response(t) packet, where t is the duration for which the transmission is expected to last. The node, on receiving these packets, decides on the power-off time. The t_probe_response messages may collide with other packets; in such cases, the receiver probes the channel at different intervals of time to receive the probe response packet, that is, if the node hears a collision in the interval (t1, t2), it turns itself off for a time period of t1. This enables power saving during the probing of the channel and also ensures that there are no packet losses.

When the node has a packet to transmit, as soon as it powers on, it sends an RTS on the control channel. All the nodes that undergo transmission or reception send a busy tone on the control channel. If there is a collision, the node probes the channel as before. It then remains powered off for the time period min(t, r), where t and r are the times when the last transmitter and last receiver finished its transmission and reception, respectively.

In all the above cases, if the probe messages themselves collide, the node stays powered on all the time.
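The power-off durations above can be condensed into a small sketch. This is our own simplification: the probing that produces the two remaining-time values is omitted, and the function name is hypothetical.

```python
def power_off_time(has_packet, t_tx_ends, t_rx_ends):
    """Seconds for which a PAMAS node powers itself off.
    t_tx_ends / t_rx_ends: time until the last ongoing neighbor
    transmission / reception finishes, learned from the control
    channel or by probing."""
    if has_packet:
        # A node with queued packets sleeps only until the earlier of
        # the two events, i.e., min(t, r), and then contends again.
        return min(t_tx_ends, t_rx_ends)
    # Nothing to send: sleep until the ongoing transfer ends.
    return t_tx_ends

print(power_off_time(True, 4.0, 2.5))   # 2.5
```

Waking at min(t, r) rather than max(t, r) matters: a node with traffic must re-probe as soon as either the last transmitter or the last receiver finishes, or it may miss its chance to contend for the channel.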

In [34], the authors have proposed an on-demand power management strategy in which the decision on the duration of a node's sleep time is based on the traffic load in the network. Figure 11.23 explains the working of this power management strategy. As shown in the figure, every node in the ad hoc wireless network can be in one of two modes: power-saving mode (PSM) and active mode (AM). In the AM, the node remains awake and can transmit and receive packets. In the PSM, the node remains in the sleep state and wakes up periodically to check for messages. The transition from PSM to AM is triggered by communication events such as routing messages, and the reverse transition is triggered by the expiration of timers called keep-alive timers. The timer value is calculated based on the type of the packet received most recently. For example, a route update message does not set the timer value and hence does not trigger a transition of the node to the active state, whereas an RREQ packet does set the timer value. Different packet types set the timers to different values. The PSM status of the neighboring nodes, which can be obtained through the Hello packets, also influences the timer value of a node. It has been shown that, by choosing an appropriate sleep time for the nodes based on these timers, a balance in the trade-off among packet delay, energy consumption, and throughput can be attained.

Figure 11.23. Illustration of on-demand power management.

image
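A minimal sketch of these mode transitions follows. The packet types, timer values, and class layout are illustrative assumptions, not the protocol of [34]; only the structure (per-type keep-alive extensions, timer-driven fallback to PSM) mirrors the description above.

```python
# Keep-alive extension per packet type (seconds; values illustrative).
# A zero entry, such as a route update, does not wake the node.
TIMER = {"RREQ": 5.0, "DATA": 10.0, "ROUTE_UPDATE": 0.0}

class Node:
    def __init__(self):
        self.mode = "PSM"          # start in power-saving mode
        self.keep_alive = 0.0      # absolute expiry time of the timer

    def on_packet(self, kind, now):
        extend = TIMER.get(kind, 0.0)
        if extend > 0:
            self.mode = "AM"       # communication event: wake up
            self.keep_alive = max(self.keep_alive, now + extend)

    def tick(self, now):
        if self.mode == "AM" and now >= self.keep_alive:
            self.mode = "PSM"      # keep-alive timer expired: sleep

n = Node()
n.on_packet("ROUTE_UPDATE", now=0.0)
print(n.mode)                      # PSM: route updates do not wake it
n.on_packet("RREQ", now=0.0)
n.tick(now=3.0); print(n.mode)     # AM: timer runs until t = 5
n.tick(now=6.0); print(n.mode)     # PSM again
```

Because each packet type carries its own extension, heavy traffic keeps the node awake (low delay) while sparse traffic lets it sleep (low energy), which is exactly the trade-off the keep-alive timers tune.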

In [35], Zheng et al. have proposed an asynchronous wakeup schedule that proves advantageous in the presence of various traffic patterns. The wakeup schedule is modeled and solved as a block design problem. The resulting wakeup protocol has been shown to be resilient to packet collisions and network dynamics.

11.6.2 Device Power Management Schemes

Some of the major consumers of power in ad hoc wireless networks are the hardware devices present in the nodes. Various hardware design schemes have been proposed that minimize the power consumption.

Low-Power Design of Hardware

Low-power design of hardware results in a significant improvement in energy conservation. Some of the low-power design suggestions include variable clock speed CPUs, disk spin-down, and flash memory. We now look into some of the sources of power consumption in ad hoc wireless networks and the corresponding solutions suggested in [8] to reduce it.

• Major sources of power consumption in ad hoc wireless networks are the transmitters and receivers of the communication module. The design of the transceivers has a significant effect on the power consumption; hence, great care must be taken while designing them. Switching off various units of the hardware while idling reduces the energy consumption. Instead of switching a unit off fully, it may be stepped through several stages, each with a different power requirement.

• The main hardware of a mobile node, in general, consists of the LCD display, DRAM, CD ROM drive, CPU, wireless interface card (in the case of a computer), and I/O subsystems. The percentage of power consumed by some of these components is shown in Figure 11.24. The section that follows will give a brief overview of the various means of power consumption and some effective solutions suggested for low-power design of the hardware devices.

Figure 11.24. Power consumed by various units of hardware.

image

CPU Power Consumption

The energy required for CPU operation depends largely on the clock frequency (F). As the clock rate increases, frequent switching of the logic gates between different voltage levels (V), that is, the ground voltage and the peak voltage, takes place, which leads to higher power consumption. Another effect that significantly influences the CPU power consumption is the charging of the transistors: the larger the capacitance (C) of these transistors, the higher the energy required. Hence, the total power required by the CPU is proportional to CV^2F. The solutions suggested are as follows:

• The parameter C can be set during the chip design.

• The values of F and V can be set dynamically at run-time which, along with power-aware CPU scheduling policies, reduces the power consumption significantly.
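A short worked example of the CV^2F relation illustrates why setting F and V together pays off. The capacitance value and the voltage/frequency pairs below are illustrative assumptions, not measured figures for any particular CPU.

```python
def cpu_power(C, V, F, k=1.0):
    """Dynamic power of CMOS logic, P = k * C * V^2 * F
    (k is an activity constant)."""
    return k * C * V**2 * F

# Halving the clock allows the supply voltage to be lowered too
# (here 3.3 V -> 2.0 V, a hypothetical pair), so power drops far
# more than the factor of 2 that frequency scaling alone would give.
p_full = cpu_power(C=1e-9, V=3.3, F=200e6)
p_slow = cpu_power(C=1e-9, V=2.0, F=100e6)
print(round(p_full / p_slow, 1))  # 5.4
```

The quadratic V term dominates: halving F alone halves the power, but the accompanying voltage reduction roughly quintuples the saving, which is the motivation for the scheduling policies discussed next.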

Power-Aware CPU Scheduling

A small reduction in the voltage V produces a quadratic reduction in the power consumed. This is the motivation behind the power-reduction techniques proposed by Weiser et al. in [36]. But the voltage V cannot be reduced unless the clock rate is also reduced. Some of the approaches to reducing the clock rate suggested in [36] are given below. The CPU utilization can be balanced between peak usage and idle periods. Idle periods, or sleep times, can be classified into hard and soft sleep times. Hard sleep times are unavoidable and result in compulsory idle times, whereas soft sleep times can be deferred to the maximum extent possible. Both kinds of sleep times must be taken into account while balancing the CPU activities. The following algorithms suggest a few methods for deciding the CPU clock cycle time:

OPT algorithm: This algorithm works under the ideal condition that future activities and CPU usage are known a priori, so the system can adjust its CPU clock rate accordingly; maximum reduction in power consumption is then obtained by balancing the periods of activity. Though this algorithm is unrealistic, it can be used as a benchmark for other algorithms.

FUTURE algorithm: The basic principle behind the FUTURE algorithm is similar to that of OPT, the only difference being the window used for prediction. In the FUTURE algorithm, optimizations are made over a small window. Hence, it is realizable in a practical wireless environment.

PAST algorithm: Rather than peering into the future, this algorithm looks into the past to decide the clock rate. It is less reliable because it assumes that the near future will resemble the recent past.

The window size on which these algorithms operate controls the trade-off between power consumption and response time.
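The PAST idea can be sketched as an interval scheduler that adjusts the clock once per measurement window. The thresholds, step size, and clamp range below are illustrative assumptions, not the values used in [36].

```python
def past_speed(prev, busy_fraction, low=0.5, high=0.7, step=0.2):
    """One window of a PAST-style interval scheduler: raise the clock
    if the last window was mostly busy, lower it if mostly idle.
    Speed is a fraction of the maximum clock rate."""
    if busy_fraction > high:
        prev += step
    elif busy_fraction < low:
        prev -= step
    return min(1.0, max(0.2, prev))    # clamp to the feasible range

speed = 1.0
for util in [0.9, 0.3, 0.2, 0.8]:      # measured utilization per window
    speed = past_speed(speed, util)
print(round(speed, 1))                 # 0.8
```

A short window tracks load closely (better response time, less saving); a long window smooths the speed changes (more saving, sluggish response), which is the trade-off noted above.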

Hard Disk Drive (HDD) Power Consumption

As mentioned earlier, the basic source of power consumption in hard disks is the disk spin. Various approaches have been suggested for turning off the drives and for bringing down the spinning speed. We now see how spin-down can be performed on disk drives.

• By using historical data: One suggested method is based on traces of disk usage collected over a long period of time. By analyzing various spin-down thresholds, an optimal threshold value can be chosen that balances the two contradictory requirements of reducing power consumption and reducing access delays.

• Spin-up/spin-down policies: Some of the policies used in deciding the time at which the hard disk speed has to be varied are given below.

– Optimal-optimal policy: According to this policy, with complete knowledge of the future, the optimal spin-down values can be obtained: when the disk has to be spun down to obtain maximum efficiency, and when it has to be spun up to be ready for the next disk operation. This is an unrealistic policy.

– Threshold-demand policy: This policy spins down the disk only after the idle time exceeds a certain threshold value, and spins it up only when there is a request for the disk.

– Predictive-predictive policy: Both the spin-up and spin-down time values are predicted based on the past values.
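The threshold-demand policy can be simulated in a few lines. The one-second tick granularity and the request trace are illustrative assumptions.

```python
def spin_seconds(requests, threshold):
    """Seconds the disk spends spinning under a threshold-demand
    policy: spin up only on a request, spin down once the disk has
    been idle for more than `threshold` consecutive seconds.
    `requests` is a per-second trace; truthy entries are accesses."""
    spinning, idle, total = True, 0, 0
    for has_request in requests:
        if has_request:
            spinning, idle = True, 0   # demand spin-up, reset idle clock
        if spinning:
            total += 1
            idle += 1
            if idle > threshold:
                spinning = False       # idle too long: spin down
    return total

# A 10-second trace with accesses at t = 0 and t = 6:
print(spin_seconds([1, 0, 0, 0, 0, 0, 1, 0, 0, 0], threshold=2))  # 6
```

Sweeping `threshold` over such traces is exactly the historical-data analysis described above: a small threshold saves power but makes more requests wait for spin-up, a large one does the opposite.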

11.7 SUMMARY

This chapter listed various issues regarding energy management at the different layers of the protocol stack. It mainly concentrated on three major divisions of energy management: battery management, transmission power management, and system power management techniques. A summary of the solutions and future directions at the device level, the data link layer, and the network and higher layers is given in Tables 11.2, 11.3, and 11.4, respectively.

Table 11.2. Device-dependent energy management schemes

image

Table 11.3. Solutions of energy management at the data link layer

image

Table 11.4. Solutions of energy management at the network and higher layers

image

Developing a battery-efficient system architecture with low cost and complexity remains a crucial issue. Designing smart battery packs that can select appropriate battery discharge policies under different load conditions is a challenging problem. Other issues at the physical layer include efficient battery-scheduling techniques, selection of an optimal transmission power for the nodes, and finding the appropriate time duration for switching off the nodes. Further investigation at the data link layer is required to address issues regarding relay traffic, such as finding an optimal strategy that decides the amount of allowable relay traffic for a node. Developing battery-aware MAC scheduling algorithms that increase the lifetime of the nodes is an important issue. At the network layer, many research issues remain open, such as designing an efficient routing algorithm that increases the network lifetime by selecting optimal relay nodes. Energy efficiency at the application layer is becoming an important area of research. It includes the development of power management analysis tools and ACPIs, which assist in writing power-aware application programs.

11.8 PROBLEMS

  1. List at least two battery technologies that have the following properties. List your answers in the order of performance.
    1. Higher energy density
    2. Flat discharge characteristics
    3. Low cost
    4. Fast charging capability
    5. Higher lifetime
  2. Which battery is commonly used for portable mobile nodes such as laptops? Give a few reasons to support your answer.
  3. Which of the three delay-free battery-scheduling techniques suggested in Section 11.4.2 performs better under (a) high-traffic and (b) low-traffic scenarios of an ad hoc wireless network consisting of large number of nodes? Assume that the network uses a MAC protocol that provides short-term fairness among the nodes.
  4. Let the maximum nominal and theoretical capacities N and C of a battery be equal to 25 units and 200 units, respectively, and let the current nominal and theoretical capacities of the battery be (a) ni = 23 and ci = 190, respectively. If the battery remains idle for three slots and then is subjected to two current pulses followed by three more idle slots, what will be the remaining capacities of the battery at the end of the idle slots when the binary pulsed discharge pattern is followed? Analyze and summarize your results. Repeat the same with (b) ni = 22 and ci = 1 and (c) ni = 2 and ci = 130. In case (c) assume one idle slot followed by transmission of five packets followed by five idle slots. Assume that the battery always transmits whenever it has a packet queued up for transmission and always gains one unit of charge when it idles for one time slot.
  5. Suggest a few metrics that can be associated with battery-aware routing techniques.
  6. List the possible steps of the algorithms executed at the source and the intermediate nodes of an ad hoc wireless network that follow the following strategies: (a) random strategy and (b) pay-for-it strategy. Assume a session between source s and destination d. Let R(s, d) be the set containing the available routes between s and d, sympathy(k, r) be the sympathy level of the kth node in route r, and credit(k, r) and debit(k, r) be the credit and debit of kth node in route r, respectively.
  7. What are the advantages of distributed power control algorithms in ad hoc wireless networks over the centralized power control algorithms?
  8. Let S and D be the source and the destination nodes of an ad hoc wireless network, and let the distance between S and D be denoted by |SD| = d. If A is an intermediate node between S and D, then prove that the greatest energy saving is obtained when A is in the middle of S and D and d > (c/(a(1 - 2^(1-α))))^(1/α). The variables c, a, and α are used in the same context as in Section 11.5.2.
  9. Consider the ad hoc network described in the previous problem. If the distance between S and D is divided into n subintervals with n > 0, prove that the greatest power saving is obtained when the intermediate nodes are equally spaced and the optimal value of n is approximately equal to d(a(α - 1)/c)^(1/α).
  10. Prove that the localized power-efficient routing algorithm discussed in Section 11.5.2 is loop-free.
  11. In the maximum-battery-cost-routing algorithm (MBCR), the cost of a node is a function of the remaining battery capacity of the node. Express the cost function of a node in terms of some parameters of the node other than battery capacity.
  12. What are the disadvantages of clustering in ad hoc wireless networks?
  13. What are the pros and cons of adding a separate signaling channel for an ad hoc wireless network? Suggest a few methods for calculating the time for which the nodes of an ad hoc wireless network that uses single channel should be switched off while they remain in the idle state.
  14. Describe how an energy-efficient multimedia processing and transmission can be achieved.

BIBLIOGRAPHY

[1] M. Adamou and S. Sarkar, "A Framework for Optimal Battery Management for Wireless Nodes," Proceedings of IEEE INFOCOM 2002, pp. 1783-1792, June 2002.

[2] D. Bruni, L. Benini, and B. Ricco, "System Lifetime Extension by Battery Management: An Experimental Work," Proceedings of International Conference on Compilers, Architecture, and Synthesis for Embedded Systems 2002, pp. 232-237, October 2002.

[3] C. F. Chiasserini and R. R. Rao, "Improving Battery Performance by Using Traffic-Shaping Techniques," IEEE Journal on Selected Areas of Communications, vol. 19, no. 7, pp. 1385-1394, July 2001.

[4] C. F. Chiasserini and R. R. Rao, "Pulsed Battery Discharge in Communication Devices," Proceedings of MOBICOM 1999, pp. 88-95, November 1999.

[5] D. Panigrahi, C. F. Chiasserini, S. Dey, and R. R. Rao, "Battery Life Estimation of Mobile Embedded Systems," Proceedings of IEEE VLSI Design 2001, pp. 55-63, January 2001.

[6] C. F. Chiasserini and R. R. Rao, "Energy-Efficient Battery Management," Proceedings of IEEE INFOCOM 2000, vol. 2, pp. 396-403, March 2000.

[7] T. Simunic, L. Benini, and G. D. Micheli, "Energy-Efficient Design of Battery-Powered Embedded Systems," IEEE Transactions on VLSI Design, vol. 9, no. 1, pp. 15-28, May 2001.

[8] K. Lahiri, A. Raghunathan, S. Dey, and D. Panigrahi, "Battery-Driven System Design: A New Frontier in Low-Power Design," Proceedings of ASP-DAC/VLSI Design 2002, pp. 261-267, January 2002.

[9] http://www.duracell.com/oem/primary/lithium/performance.asp

[10] J. Luo and N. K. Jha, "Battery-Aware Static Scheduling for Distributed Real-Time Embedded Systems," Proceedings of IEEE DAC 2001, pp. 444-449, June 2001.

[11] P. Rong and M. Pedram, "Battery-Aware Power Management Based on Markovian Decision Processes," Proceedings of ICCAD 2002, pp. 712-717, November 2002.

[12] B. Prabhakar, E. U. Biyikoglu, and A. E. Gamal, "Energy-Efficient Transmission over a Wireless Link via Lazy Packet Scheduling," Proceedings of IEEE INFOCOM 2001, vol. 1, pp. 386-394, April 2001.

[13] C. F. Chiasserini, P. Nuggehalli, and V. Srinivasan, "Energy-Efficient Communication Protocols," Proceedings of IEEE DAC 2002, pp. 824-829, June 2002.

[14] S. Jayashree, B. S. Manoj, and C. Siva Ram Murthy, "A Battery-Aware MAC Protocol for Ad Hoc Wireless Networks," Technical Report, Department of Computer Science and Engineering, Indian Institute of Technology, Madras, India, October 2003.

[15] V. Srinivasan, P. Nuggehalli, C. F. Chiasserini, and R. R. Rao, "Energy Efficiency of Ad Hoc Wireless Networks with Selfish Users," Proceedings of EW 2002, February 2002.

[16] J. H. Chang and L. Tassiulas, "Energy-Conserving Routing in Wireless Ad Hoc Networks," Proceedings of IEEE INFOCOM 2000, pp. 22-31, March 2000.

[17] S. Agarwal, A. Ahuja, and J. P. Singh, "Route-Lifetime Assessment-Based Routing (RABR) Protocol for Mobile Ad Hoc Networks," Proceedings of IEEE ICC 2000, vol. 3, pp. 1697-1701, June 2000.

[18] R. Wattenhofer, L. Li, P. Bahl, and Y. M. Wang, "Distributed Topology Control for Power-Efficient Operation in Multi-Hop Wireless Ad Hoc Networks," Proceedings of IEEE INFOCOM 2001, pp. 1388-1397, April 2001.

[19] S. Agarwal, R. H. Katz, S. V. Krishnamurthy, and S. K. Dao, "Distributed Power Control in Ad Hoc Wireless Networks," Proceedings of IEEE PIMRC 2001, vol. 2, pp. 59-66, October 2001.

[20] R. Ramanathan and R. Rosales-Hain, "Topology Control of Multi-Hop Wireless Networks using Transmit Power Adjustment," Proceedings of IEEE INFOCOM 2000, pp. 404-413, March 2000.

[21] V. Kawadia, S. Narayanaswamy, R. Rozovsky, R. S. Sreenivas, and P. R. Kumar, "Protocols for Media Access Control and Power Control in Wireless Networks," Proceedings of IEEE Conference on Decision and Control 2001, vol. 2, pp. 1935-1940, December 2001.

[22] M. Sanchez, P. Manzoni, and Z. J. Haas, "Determination of Critical Transmission Range in Ad Hoc Networks," Proceedings of MMT 1999, October 1999.

[23] A. Muqattash and M. Krunz, "Power-Controlled Dual Channel Medium Access Protocol for Wireless Ad Hoc Networks," Proceedings of IEEE INFOCOM 2003, vol. 1, pp. 470-480, April 2003.

[24] V. Kawadia and P. R. Kumar, "Power Control and Clustering in Ad Hoc Networks," Proceedings of IEEE INFOCOM 2003, vol. 1, pp. 459-469, April 2003.

[25] C. K. Toh, "Maximum Battery Life Routing to Support Ubiquitous Mobile Computing in Wireless Ad Hoc Networks," IEEE Communications Magazine, vol. 39, no. 6, pp. 138-147, June 2001.

[26] M. Woo, S. Singh, and C. S. Raghavendra, "Power-Aware Routing in Mobile Ad Hoc Networks," Proceedings of IEEE MOBICOM 1998, pp. 181-190, October 1998.

[27] R. L. Cruz and A. R. Santhanam, "Optimal Routing, Link Scheduling, and Power Control in Multi-Hop Wireless Networks," Proceedings of INFOCOM 2003, vol. 1, pp. 702-711, April 2003.

[28] A. Srinivas and E. Modiano, "Minimum Energy Disjoint Path Routing in Wireless Ad Hoc Networks," Proceedings of MOBICOM 2003, pp. 122-133, September 2003.

[29] I. Stojmenovic and X. Lin, "Power-Aware Localized Routing in Wireless Networks," Proceedings of IPDPS 2000, vol. 2, no. 11, pp. 1122-1133, May 2000.

[30] C. F. Chiasserini, I. Chlamtac, P. Monti, and A. Nucci, "Energy-Efficient Design of Wireless Ad Hoc Networks," Proceedings of Networking 2002, pp. 376-386, May 2002.

[31] C. F. Chiasserini and R. R. Rao, "A Distributed Power Management Policy for Wireless Ad Hoc Networks," Proceedings of IEEE WCNC 2000, vol. 3, pp. 1209-1213, September 2000.

[32] S. Singh and C. S. Raghavendra, "Power-Aware Multi-Access Protocol with Signaling for Ad Hoc Networks," ACM Computer Communication Review, vol. 28, no. 3, pp. 5-26, July 1998.

[33] P. Karn, "MACA — A New Channel Access Method for Packet Radio," Proceedings of ARRL/CRRL Amateur Radio Computer Networking Conference 1990, pp. 134-140, September 1990.

[34] R. Zheng and R. Kravets, "On-Demand Power Management for Ad Hoc Networks," Proceedings of IEEE INFOCOM 2003, vol. 1, pp. 481-491, April 2003.

[35] R. Zheng, J. C. Hou, and L. Sha, "Asynchronous Wakeup for Ad Hoc Networks," Proceedings of ACM MOBIHOC 2003, pp. 35-45, June 2003.

[36] M. Weiser, B. Welch, A. Demers, and S. Shenker, "Scheduling for Reduced CPU Energy," Proceedings of USENIX OSDI 1994, pp. 13-23, November 1994.

[37] S. Jayashree, B. S. Manoj, and C. Siva Ram Murthy, "Energy Management in Ad Hoc Wireless Networks: A Survey of Issues and Solutions," Technical Report, Department of Computer Science and Engineering, Indian Institute of Technology, Madras, India, March, 2003.
