Chapter 3. WIRELESS WANS AND MANS

3.1 INTRODUCTION

This chapter first introduces the fundamental concepts of cellular (wide area) wireless networks. The cellular architecture is described and the evolution of different standards for cellular networks is traced. Then wireless local loop (WLL), which brings wireless access into the residence and office, is described. An overview of the major WLL standards adopted all over the world is presented, and broadband access implementations of WLL are also briefly described. Wireless ATM, which supports seamless interconnection with backbone ATM networks and QoS for mobile users, is then discussed. Finally, two standards for wireless broadband access (wireless metropolitan area networks), IEEE 802.16 and HIPERACCESS, are presented.

3.2 THE CELLULAR CONCEPT

The cellular concept is a novel way to ensure efficient utilization of the available radio spectrum. The area to be covered by a cellular network is divided into cells, which are usually considered to be hexagonal. This is because, of the shapes which can completely cover a two-dimensional region without overlaps, such as the triangle, square, and hexagon, the hexagon most closely resembles the nearly circular coverage area of a transmitter. An idealized model of the cellular radio system consists of an array of hexagonal cells with a base station (BS) located at the center of each cell. The available spectrum in a cell is used for uplink channels for mobile terminals (MTs) to communicate with the BS, and for downlink channels, for the BS to communicate with MTs.

The fundamental and elegant concept of cells relies on frequency reuse, that is, the usage of the same frequency by different users separated by a distance, without interfering with each other. Frequency reuse depends on the fact that the signal strength of an electromagnetic wave gets attenuated with distance. A cluster is a group of cells which uses the entire radio spectrum. The cluster size N is the number of cells in each cluster. No two cells within a cluster use channels of the same frequency. Clustering ensures that cells which use the same frequency are separated by a minimum distance, called the reuse distance D. Figure 3.1 depicts the concept of clustering. Cells that use different sets of frequencies are labeled differently, and the outlined set of seven cells forms a cluster. If the radius of a cell is R, then the distance between the centers of two adjacent cells is √3R. For a hexagonal cellular structure, the permissible values of the cluster size N are of the form

N = i² + ij + j²

where i and j are any non-negative integers. The lowest cluster size of three is obtained by setting i = j = 1. Figure 3.1 shows a cluster size of seven, given by i = 1 and j = 2. Let d be the distance between two adjacent cells. Applying the cosine law to the triangle ABC with sides a, b, and c,

c² = a² + b² − 2ab cos(120°) = a² + b² + ab

Hence

With a = id, b = jd, and c = D (the reuse distance), D² = (id)² + (jd)² + (id)(jd) = (i² + ij + j²)d² = Nd²

which gives D = d√N. Since d = √3R, the reuse distance is D = √(3N)R. In general, it can be shown that the reuse factor q = D/R = √(3N).
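These relations are easy to check numerically. The sketch below enumerates the permissible cluster sizes N = i² + ij + j² and computes the reuse distance D = R√(3N); the cell radius used is illustrative:

```python
import math

def cluster_size(i: int, j: int) -> int:
    """Permissible cluster size for a hexagonal layout: N = i^2 + ij + j^2."""
    return i * i + i * j + j * j

def reuse_distance(radius: float, n: int) -> float:
    """Co-channel reuse distance D = R * sqrt(3N) for cell radius R."""
    return radius * math.sqrt(3 * n)

# Enumerate the small permissible cluster sizes (dropping the degenerate 0).
sizes = sorted({cluster_size(i, j) for i in range(4) for j in range(4)} - {0})
print(sizes)                    # begins 1, 3, 4, 7, 9, 12, ...
print(reuse_distance(2.0, 7))   # D in Km for R = 2 Km, N = 7
```

Note that i = j = 1 indeed gives N = 3, and i = 1, j = 2 gives the cluster size of seven used in Figure 3.1.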

Figure 3.1. The cellular concept.


Two types of interference come into play when the cellular concept is used in the design of a network. The first one is the co-channel interference which results from the use of same frequencies in cells of different clusters. The reuse distance should be such that co-channel interference does not adversely affect the signal strength. On the other hand, adjacent channel interference results due to usage of adjacent frequencies within a cluster. The channel assignment to different cells within the cluster must minimize adjacent channel interference by not assigning neighboring frequencies to the same cell.

The advantage of the cellular concept is that it can increase the number of users who can be supported in a given area, as illustrated in the following example. Consider a spectrum of bandwidth 25 MHz. Assume that every user requires 30 KHz bandwidth. Then a single city-wide cell can support only 25,000/30 ≈ 833 users. However, if the city is split into hexagonal cells with seven cells per cluster, only 1/7th of the spectrum is available in any cell. Hence each cell can support 833/7 ≈ 119 users. If there are 20 cells in the city, then the system can support 20 × 119 = 2,380 users simultaneously.

In general, if S is the total available spectrum, W is the bandwidth needed per user, N is the cluster size, and k is the number of cells required to cover a given area, the number of users supported simultaneously (capacity) is

n = k × S / (N × W)

In the previous example, with S = 25 MHz, W = 30 KHz, N = 7, and k = 20, n was calculated to be 2,380.
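As a quick check of the capacity formula, a small sketch that reproduces the numbers of the example above:

```python
def system_capacity(spectrum_khz: float, per_user_khz: float,
                    cluster_size: int, num_cells: int) -> int:
    """n = k * S / (N * W): each cell gets S/N of the spectrum,
    hence S/(N*W) channels, and there are k cells in the area."""
    channels_per_cell = int(spectrum_khz / (cluster_size * per_user_khz))
    return num_cells * channels_per_cell

# S = 25 MHz, W = 30 KHz, N = 7, k = 20 cells
print(system_capacity(25_000, 30, 7, 20))  # 2380 users
```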

3.2.1 Capacity Enhancement

Each cellular service provider is allotted a certain band of frequencies. Considering the interference constraints, this restricts the number of channels that can be allotted to each cell. So, methods have been devised to enhance the capacity of cellular networks. It has been observed that the main reasons for reduction of cellular network capacity are off-center placement of antennas in the cell, limited frequency reuse imposed by a strict clustering scheme, and inhomogeneous propagation conditions. The nearest BS is not always the best for a mobile station, due to shadowing, reflections, and other propagation-based features. The simplistic model of cellular networks has to be modified to account for these variations. A few methods used to improve the capacity of cellular networks are discussed.

Cell-Splitting

Non-uniform traffic demand patterns create hotspot regions in cellular networks, which are small pockets with very high demand for channel access. In order to satisfy QoS constraints, the blocking probability in a hotspot region must not be allowed to shoot up. This gave rise to the concept of cell-splitting. A different layer of cells which are smaller in size, and support users with lower mobility rates, is overlaid on the existing (macro-cells) cellular network. These are called micro-cells. While macro-cells typically span across tens of kilometers, micro-cells are usually less than 1 Km in radius. Very small cells called pico-cells, of a few meters' radius, are also in use to cover indoor areas. This is shown in Figure 3.2. Depending on the demands in traffic at a given point of time, channels can be allocated explicitly for the micro-cellular layer, or a sharing algorithm can be envisaged with the macro-cellular layer, by which channels can be allotted to the micro-cells as the demand arises. Users with lower rates of mobility are handled by the layer of smaller cell size. If a highly mobile user is handled by the micro-cellular layer, there is an overhead of too many handoffs. In fact, the handoffs may not be fast enough to let the call continue uninterrupted [2].

Figure 3.2. Cell-splitting.


The introduction of a micro- or pico-cellular layer increases the available number of channels in hotspot regions and enables better frequency planning, so that the whole network does not have to be designed to handle the worst-case demand. This flexibility in frequency management leads to capacity enhancement.

Sectorization

This concept uses space division multiple access (SDMA) to let more channels be reused within a shorter distance. Antennas are modified from omnidirectional to sectorized, so that their signals are beamed only in a particular sector, instead of being transmitted symmetrically all around. This greatly reduces the downlink interference. A cell is normally partitioned into three 120-degree sectors or six 60-degree sectors. Further, the antennas can be down-tilted to reduce co-channel interference even more. Figure 3.3 shows the difference between omnidirectional and 60-degree sectored antennas. Reduction in interference allows shorter reuse distance, and hence increases capacity.

Figure 3.3. Omnidirectional and sectorized antennas.


Cellular networks rely on trunking, the statistical multiplexing of a large number of users on a limited number of channels, to support many users on the wireless spectrum. Trunking efficiency increases when a larger channel pool is available, instead of many smaller subsets of channels. When sectoring is used, the available channels of the cell must be subdivided and allocated to the different sectors. This breaks up the available channel pool into smaller sub-groups and reduces trunking efficiency, that is, blocking probability may increase. The number of handoffs also increases, because inter-sector handoffs are introduced.
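The loss of trunking efficiency from sectoring can be illustrated with the Erlang B formula, which gives the blocking probability for a pool of channels offered a given load. The sketch below compares one pooled group of 24 channels against three sectors of 8 channels each, carrying the same total offered load; the channel counts and the 15-Erlang load are illustrative choices, not from the text:

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Erlang B blocking probability, via the standard stable recurrence
    B(A, c) = A*B(A, c-1) / (c + A*B(A, c-1)), with B(A, 0) = 1."""
    b = 1.0
    for c in range(1, channels + 1):
        b = traffic_erlangs * b / (c + traffic_erlangs * b)
    return b

# One pooled group of 24 channels vs. three sectors of 8 channels each,
# with the same total load of 15 Erlangs (5 Erlangs per sector).
pooled = erlang_b(15.0, 24)
sectored = erlang_b(5.0, 8)
print(f"pooled blocking: {pooled:.4f}, per-sector blocking: {sectored:.4f}")
```

The per-sector blocking probability comes out noticeably higher than the pooled one, even though the total number of channels and the total load are identical.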

Power Control

Cellular networks face the "near-far" problem. An MT which is very close to the BS receives very strong signals from the BS, and its signals are also extremely strong at the BS. This can possibly drown out a weak signal of some far-away MT which is on an adjacent frequency. To avoid this problem, the BS issues power control commands to the MTs so that it receives a fairly constant, equal power from all MTs, irrespective of their distance from the BS. MTs which are farther away from the BS transmit at higher power than nearby MTs, so that the received power at the BS is equal. This saves power for the MTs near the BS, and avoids excessive interference. Reduction in interference increases the capacity of the cellular network.
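A minimal sketch of this idea, assuming open-loop control toward a fixed received level at the BS (the −100 dBm target and the 33 dBm handset cap are illustrative assumptions, not taken from any standard):

```python
def uplink_tx_power_dbm(target_rx_dbm: float, path_loss_db: float,
                        max_tx_dbm: float = 33.0) -> float:
    """Transmit power (dBm) needed so the BS receives target_rx_dbm,
    capped at the handset's maximum. path_loss_db is the estimated
    MT-to-BS path loss."""
    return min(target_rx_dbm + path_loss_db, max_tx_dbm)

# A nearby MT (90 dB path loss) vs. a cell-edge MT (120 dB path loss),
# both aiming at a received level of -100 dBm at the BS.
print(uplink_tx_power_dbm(-100, 90))   # -10 dBm
print(uplink_tx_power_dbm(-100, 120))  # 20 dBm
```

The far MT transmits 30 dB hotter than the near one, so both arrive at the BS with equal strength.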

3.2.2 Channel Allocation Algorithms

Efficient allocation of channels to the different cells can greatly improve the overall throughput of the network, in terms of the number of calls supported successfully. The channel allocation algorithms that can be used vary greatly in complexity and effectiveness. Fixed channel allocation algorithms allot a set of channels to each cell, and the number of channels per cell is determined a priori. Cells may be allowed to borrow some channels from their neighboring cells to tide over a temporary increase in demand for channels. This should not violate the minimum reuse distance constraints. So, one borrowing may prevent some other cells from borrowing certain channels, since that would bring identical frequencies into use too close to each other. This is termed "channel locking." The main drawback of fixed channel allocation algorithms is that they assume a constant, or at least a predictable, distribution of load over the different cells. But, in reality, demands are very unpredictable, which may lead to a scarcity of channels in some cells and an excess in others.

The dynamic channel allocation algorithms do not have any local channels allotted to cells. The decision of which channel to allocate to a call is made dynamically, as requests arrive. This makes dynamic algorithms extremely flexible and capable of dealing with large variations in demand, but they involve a lot of computation. Dynamic algorithms require a centralized arbitrator to allocate channels, which becomes a bottleneck. Hence, distributed channel allocation algorithms are preferred, especially for micro-cells. All BSs look at the local information about their possible interferers, and decide on a suitable allocation of channels to maximize bandwidth utilization.

The hybrid channel allocation schemes give two sets of channels to each cell; A is the set of local channels and B is the set of borrowable channels. Cells allow borrowing only from set B, and reallocate calls to their local channel set as soon as possible. The hybrid allocation algorithms introduce some flexibility into the channel allocation scheme. A subset of the channels can be borrowed to account for temporary variations in demand, while there is a minimum guarantee of channels for all cells at all times. These algorithms are intermediate in terms of complexity and efficiency of bandwidth utilization.
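The reuse-distance check that underlies channel borrowing can be sketched as follows; the cell coordinates, the channel numbers, and the reuse distance of 3 cell radii are illustrative values, not from the text:

```python
import math

REUSE_DISTANCE = 3.0  # minimum separation (in cell-radius units) for co-channel cells

def can_borrow(channel: int, borrower_xy: tuple, in_use: dict) -> bool:
    """A cell may borrow `channel` only if every cell currently using it is
    at least REUSE_DISTANCE away. `in_use` maps channel -> list of (x, y)
    centers of cells where that channel is active."""
    bx, by = borrower_xy
    return all(math.hypot(bx - x, by - y) >= REUSE_DISTANCE
               for (x, y) in in_use.get(channel, []))

in_use = {42: [(0.0, 0.0)]}                # channel 42 active at the origin cell
print(can_borrow(42, (1.0, 1.0), in_use))  # False: too close, would cause channel locking
print(can_borrow(42, (4.0, 0.0), in_use))  # True: beyond the reuse distance
```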

3.2.3 Handoffs

An important concept that is essential for the functioning of cellular networks is handoffs, also called handovers. When a user moves from the coverage area of one BS to the adjacent one, a handoff has to be executed to continue the call. There are two main parts to the handoff procedure: the first is to find an uplink-downlink channel pair from the new cell to carry on the call, and the second is to drop the link from the first BS.

Issues Involved in Handoffs

Certain issues that need to be addressed in a handoff algorithm are listed below.

Optimal BS selection: The BS nearest to an MT may not necessarily be the best in terms of signal strength. Especially on the cell boundaries, it is very difficult to clearly decide to which BS the MT must be assigned.

Ping-pong effect: If the handoff strategy has very strictly demarcated boundaries, there could be a series of handoffs between two BSs whose cells touch each other. The call gets bounced back and forth between them like a ping-pong ball. Such handoffs should be avoided entirely, since they do not significantly improve signal strength.

Data loss: The interruption due to handoff may cause a loss of data. While the delay between relinquishing the channel in the old cell and resuming the call in the new cell may be acceptable for a voice call, it may cause a loss of a few bits of data.

Detection of handoff requirement: Handoffs may be mobile-initiated, in which case the MT monitors the signal strength received from the BS and requests a handoff when the signal strength drops below a threshold. In the case of network-initiated handoff, the BS forces a handoff if the signals from an MT weaken. The BS inquires of all its neighboring BSs about the signal strength they receive from the particular MT, and deduces to which BS the call should be handed over. The mobile-assisted scheme is a combination of the network- and mobile-initiated schemes: the MT measures and reports signal strengths, but the final handoff decision is made by the BS.

Handoff Quality

Handoff quality is measured by a number of parameters, and the performance of a handoff algorithm is judged in terms of how it improves these parameters. Some of the measures of handoff quality are as follows:

Handoff delay: The signaling during a handoff causes a delay in the transfer of an on-going call from the current cell to the new cell. If the delay is too large, the signal may fall below the minimum carrier to interference ratio (C/I) required for continuation of the call, and the call may get dropped. The handoff protocol should aim to minimize this delay.

Duration of interruption: In a conventional handoff algorithm, called hard handoff, the channel pair from the current BS is canceled, and then the channel pair from the next BS is used to continue the call. This can cause an interruption in the call. Though this is imperceptible to humans and the speech loss can be interpolated by the human ear, it adversely affects data transmission. So, handoff interruption must be minimized.

Handoff success: The probability of a successful handoff, that is, continuation of the call when a mobile user crosses a cell boundary, is called the handoff success rate. This is influenced by the number of available channel pairs in the adjacent cells, and the capacity to switch before the signal falls below the acceptable C/I. The handoff strategies should maximize the handoff success rate.

Probability of unnecessary handoff: Unnecessary handoffs, as in the ping-pong effect, increase the signaling overhead on the network and lead to unwanted delays and interruptions in calls. So, the probability of such unnecessary handoffs should be minimized.

Improved Handoff Strategies

Several improvements have been explored over the conventional hard handoff method, in order to tackle the issues related to handoffs.

Prioritization: In order to reduce handoff failure, handoffs are given priority over new call requests. A certain number of channels may be reserved in each cell explicitly for handoffs. There is a trade-off here between probability of dropping a call due to handoff failure, and bandwidth utilization.

Relative signal strength: There is a hysteresis margin required by which the signal strength received from the new BS has to be greater than the signal strength of the current BS signal, to ensure that the handoff leads to significant improvement in the quality of reception. This reduces the probability of unnecessary handoffs. There is a minimum time for which an MT must be in a cell before it can request a handoff, called the dwell time. The dwell timer keeps track of how long the MT has been in the cell. Dwell time has to be decided depending on the speed of the mobile users in the region. A very large dwell time may not allow a handoff when it really becomes necessary for a rapidly moving mobile. The concept of dwell time reduces the ping-pong effect as frequent handoffs will not occur on the border of two cells.
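Combining the hysteresis margin and the dwell timer, the handoff decision might be sketched as below; the 3 dB margin and 5-second dwell time are illustrative assumptions, to be tuned to the mobility of users in the region:

```python
def should_handoff(current_dbm: float, candidate_dbm: float,
                   time_in_cell_s: float,
                   hysteresis_db: float = 3.0,
                   dwell_time_s: float = 5.0) -> bool:
    """Hand off only if the candidate BS exceeds the current BS by the
    hysteresis margin AND the MT has dwelt in the current cell long enough.
    Both conditions together suppress ping-pong handoffs."""
    return (candidate_dbm >= current_dbm + hysteresis_db
            and time_in_cell_s >= dwell_time_s)

print(should_handoff(-95, -93, 10))  # False: only 2 dB better, within hysteresis
print(should_handoff(-95, -91, 10))  # True: 4 dB better, dwell timer satisfied
print(should_handoff(-95, -91, 2))   # False: dwell timer has not expired
```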

Soft handoffs: Instead of a strict "handing over the baton" type of handoff, where one BS transfers the call to another, a period of time when more than one BS handles a call can be allowed, in the region of overlap between two or more adjacent cells. Fuzzy logic may be used to model the fading of one BS signal and the increasing intensity of the other. In the overlap region, the MT "belongs" to both cells, and listens to both BSs. This requires the MT to have the capability to tune in to two frequencies simultaneously.

Predictive handoffs: The mobility patterns of users can be predicted by analysis of their regular movements, using the location tracking data. This can help in the prediction of the requirements of channels and handoffs in different cells.

Adaptive handoffs: Users may have to be shifted across different layers, from micro- to macro-cellular or pico-cellular to micro-cellular, if their mobility differs during the call.

3.3 CELLULAR ARCHITECTURE

Every cell has a BS to which all the MTs in the cell transmit. The BS acts as an interface between the mobile subscriber and the cellular radio system. The BSs are themselves connected to a mobile switching center (MSC), as shown in Figure 3.4. The MSC acts as an interface between the cellular radio system and the public switched telephone network (PSTN). It performs overall supervision and control of the mobile communications, which include location update, call delivery, and user identification. The authentication center (AuC) validates the MTs by verifying their identity with the equipment identity register (EIR). The MSCs are linked through a signaling system 7 (SS7) network, which controls the setting up, managing, and releasing of telephone calls. The SS7 protocol introduces certain nodes called signaling transfer points (STPs) which help in call routing. For a detailed description of mobile network architecture, refer to [3].

Figure 3.4. Cellular architecture.


A subscriber in a mobile network should be efficiently traced to deliver calls to him/her, and he/she should be able to access the network through the mobile end-system, irrespective of his/her location. This involves management of different databases in the cellular network architecture, and is called location management. The method used currently for location management requires the MT, also called the mobile station (MS), to report its position to the network periodically. The network stores the location information of each MT in location databases and this information is retrieved during call delivery.

In the current location management scheme, each user is permanently associated with the home location register (HLR) in his/her subscribed cellular network. This HLR contains the user profile consisting of the services subscribed by the user (such as SMS and paging), billing information, and location information. The visitor location register (VLR) maintains the information regarding roaming users in the cell. VLRs download the information from the users' respective HLRs. The number and placement of the VLRs vary among networks. Registration of an MT in a new cell entails updates of its HLR, and the VLRs of its old and new cells. For ease of management, many cells may be grouped into one registration area (RA) and updates performed only on movement out of the RA. Delivery of a call requires tracing the current location of the handset by requesting the information from the HLR.

Recent improvements in location management include the use of distributed databases and replication of user profiles to enable faster access to user information. Partition-based architectures have been used to group MSCs into partitions, and update databases only if the MT moves out of the partition. This greatly reduces the number of updates, but the size of the database to be maintained at each partition increases.

3.4 THE FIRST-GENERATION CELLULAR SYSTEMS

After the basic concepts common to most cellular networks, the specific generations of cellular networks and the standards of each generation are discussed in the following sections. The first implementations of the cellular concept constitute the first-generation (1G) systems. These systems, such as the advanced mobile phone system (AMPS) in the United States and Nordic mobile telephony (NMT) deployed in several European countries, are analog-based (i.e., they employ analog modulation and send information as a continuously varying waveform).

3.4.1 Advanced Mobile Phone System

AMPS divides the 800 MHz part of the frequency spectrum into several channels, each 30 KHz wide. The cellular structure uses a cluster size of seven, and each cell is roughly 10-20 Km across. The AMPS system uses 832 full-duplex channels, with FDM to separate the uplink and downlink channels. The channels are classified into four main categories: downlink control channels for system management, downlink paging channels for paging an MT (locating an MT in the network and alerting it when it receives a call), bidirectional access channels for call setup and channel assignment, and bidirectional data channels to carry user voice/data. AMPS provides a maximum data transmission rate of 10 Kbps.

When the MT is powered on, it scans for the most powerful control channel and broadcasts its 32-bit serial number and 10-digit telephone number on it. The BS which hears this broadcast registers the MT with the mobile switching office (MSO) and informs the MT's HLR of its present location. The MT updates its position once every 15 minutes. To make a call, the MT sends the number to be called through an access channel. The BS sends this request to the MSO, which assigns a duplex channel for the call. MTs are alerted to incoming calls through the paging channel. The call is routed through the home network to the MT's current location. Once the MT is located, it takes up the call on the allotted voice channel.

In order to conserve the battery power of the handset, the MT goes into a "sleep state" when it is idle. The designed sleep time is 46.3 ms in the AMPS system. The MT periodically wakes up to scan the paging channel and check if there is any call addressed to it.

The service of a mobile network has performance measures similar to those of a wired network. The concept of trunking, used in all communication networks, exploits the statistical behavior of users to enable a fixed number of channels to be shared by a much larger number of users. The grade of service (GoS) of a trunked network is a measure of accessibility of the network during its peak traffic time. GoS is specified as the probability of a call being blocked, or of a call experiencing a queuing delay greater than a threshold value. The AMPS system was designed for a GoS of 2% blocking.

3.5 THE SECOND-GENERATION CELLULAR SYSTEMS

As mentioned earlier, 1G systems are analog, which leads to the following problems: (i) No use of encryption (1G systems do not encrypt traffic to provide privacy and security, as analog signals do not permit efficient encryption schemes, and thus voice calls are subject to eavesdropping). (ii) Inferior call quality (This is due to the analog traffic, which is degraded by interference; in contrast to a digital traffic stream, no coding or error correction is applied to combat interference). (iii) Spectrum inefficiency [This is because at any given time a channel is allocated to only one user, regardless of whether the user is active (speaking) or not. With digital traffic it is possible for many users to share a channel (which implies multiple conversations on a single channel) using TDMA or CDMA; further, digital signals allow compression, which reduces the amount of capacity needed to send data by looking for repeated patterns].

The second-generation (2G) systems, the successors of 1G systems, are digital [i.e., they convert speech into digital code (a series of pulses) which results in a clearer signal] and thus they overcome the deficiencies of 1G systems mentioned above. It may be noted that the user traffic (computer data which is inherently digital, or digitized voice) must be converted into radio waves (analog signals) before it can be sent (between MT and BS). A 2G system is called personal communications services (PCS) in the marketing literature. There are several 2G standards followed in various parts of the world. Some of them are global system for mobile communications (GSM) in Europe, digital-AMPS (DAMPS) in the United States, and personal digital cellular (PDC) in Japan.

3.5.1 Global System for Mobile Communications

GSM1 is an extremely popular, fully digital 2G system, used in over 100 countries [4].

1 GSM originally stood for Groupe Speciale Mobile, the name of the working group that designed it.

There are four variants of GSM:

• Operating around 900 MHz [This first variant reuses the spectrum intended for Europe's analog total access communication system (TACS)]

• Operating around 1,800 MHz [licensed in Europe specifically for GSM. This variant is sometimes called digital communications network (DCN)]

• Operating around 1,900 MHz (used in United States for several different digital networks)

• Operating around 450 MHz (latest variant for replacing aging analog networks based on NMT system)

Apart from voice service, GSM also offers a variety of data services.

The modulation scheme used in GSM is Gaussian minimum shift keying (GMSK). GSM uses frequency duplex communication, and each call is allotted a duplex channel. The duplex channels are separated by 45 MHz. Every channel is of 200 KHz bandwidth. Thus, GSM uses FDM to separate the channels. The downlink (BS-MT) channels are allotted 935-960 MHz, and the uplink (MT-BS) channels are on 890-915 MHz as shown in Figure 3.5. The uplink "frame" of eight slots is shifted by a delay of three slots from the downlink frame, so that the MT does not have to send and receive at the same time. A procedure called adaptive frame alignment is used to account for propagation delay. An MT which is far away from the BS starts its frame transmission slightly ahead of the actual slot commencement time, so that the reception at the BS is synchronized to the frame structure. The exact advancement is instructed by the BS to the MT, with MTs that are farther away from the BS requiring a greater advancement.
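The required advancement follows from the round-trip propagation delay. In GSM the advance is quantized in bit periods of roughly 3.69 µs, with 63 steps covering distances up to about 35 Km; the sketch below assumes this quantization:

```python
C = 299_792_458.0          # speed of light, m/s
BIT_PERIOD_S = 48e-6 / 13  # GSM bit period, roughly 3.69 microseconds

def timing_advance(distance_m: float) -> int:
    """Timing advance in bit periods: the MT must transmit early by the
    round-trip propagation delay, quantized to whole bit periods and
    clamped to the 0..63 range signaled by the BS."""
    round_trip_s = 2 * distance_m / C
    return min(63, round(round_trip_s / BIT_PERIOD_S))

print(timing_advance(550))     # about 1 bit period
print(timing_advance(35_000))  # saturates near the maximum of 63
```

Each step of the advance thus corresponds to roughly 550 m of distance from the BS, which is why the maximum of 63 steps limits cells to about 35 Km.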

Figure 3.5. GSM frequency bands and TDMA frame.


Each physical channel is divided into eight periodic time slots (each of which is 0.577 ms duration), which are time-shared between as many as eight users (logical channels) using TDMA. A complete cycle of eight time slots is called a (TDMA) frame. A slot comprises the following four parts: (i) header and footer (These are empty space at the beginning and end of the slot to separate a slot from its neighbors and to negate the effects of propagation delay for distances up to 35 Km from the BS), (ii) training sequence (This is to help a receiver lock on to the slot), (iii) stealing bits (These identify whether the slot carries data or control information), and (iv) traffic (This part carries user traffic (voice/data) as well as control information and error correction). Users cannot use all frames; for every 24 GSM frames that carry voice/data, one is "stolen" for signaling and another reserved for other types of traffic (such as short text messages or caller line identification). Thus eight slots make up a TDM frame and 26 TDM frames make up a multiframe. Multiframes are in turn grouped into superframes and hyperframes. Some of the slots here are used to hold several control channels for managing the GSM system.

The control channels in GSM are classified into three broad categories. The broadcast control channel (BCCH) is a downlink channel that contains the BS's identity and channel status. All MTs monitor the BCCH to detect if they have moved into a new cell. The dedicated control channel (DCCH) is used for call-setup, location updates, and all call-management related information exchange. Every call has its own allotted DCCH. The information obtained on the DCCH is used by the BS to keep a record of all the MTs currently in its footprint (coverage area). The third category, common control channel (CCCH), consists of the downlink paging channel, which is used to page any MT to alert it for an incoming call, the random access channel, which supports slotted ALOHA-based (random request without reservation of time slot) request from MT to BS for call-initiation, and the access grant channel, on which the BS informs the MT of an allotted duplex channel for a call.

As each MT is assigned only one slot within each frame, the maximum speed for data services is around 34 Kbps (1/8 of the 270.8 Kbps capacity of a 200 KHz GSM carrier). Forward error correction (FEC) and encryption reduce the data rate (by at least one-third; the precise amount depends on how the handset and network encode voice and data) to around 9.6 Kbps.
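The slot, frame, and rate figures quoted above can be verified with a little arithmetic:

```python
SLOT_MS = 0.577       # duration of one GSM time slot
CARRIER_KBPS = 270.8  # raw capacity of a 200 KHz GSM carrier

frame_ms = 8 * SLOT_MS            # one TDMA frame of eight slots
multiframe_ms = 26 * frame_ms     # a 26-frame multiframe
per_user_kbps = CARRIER_KBPS / 8  # one slot per frame per user

print(f"frame: {frame_ms:.3f} ms")              # about 4.616 ms
print(f"multiframe: {multiframe_ms:.1f} ms")    # about 120 ms
print(f"raw per-user rate: {per_user_kbps:.2f} Kbps")  # about 33.85 Kbps
```

The raw 33.85 Kbps per slot is what FEC, encryption, and signaling overheads whittle down to the 9.6 Kbps user rate.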

An important feature of GSM, from the user's point of view, is the subscriber identity module (SIM), a smart card which is pluggable into a GSM phone (mobile handset). The SIM stores information such as the subscriber's identification number, the networks and countries where the subscriber is entitled to service, and other user-specific information. By inserting the SIM card into another handset, the user is able to use the new handset to make/receive calls while using the same telephone number. Thus SIM provides personal mobility.

Due to conventional call-handling strategies, GSM requires the call to always be routed through the home network of the mobile. This causes a circuitous route for calls to be delivered to roaming subscribers. The HLR of the destination mobile is first contacted, and then pointers are followed to reach the MT's current cell, even if it is the same cell as that of the originating MT. This could be avoided by routing directly to the foreign network, once the location of the receiving mobile is established. On the other hand, such rerouting of traffic through a foreign network may be done intentionally if the call termination charges of the local service provider are very high. This is called tromboning.

3.5.2 Data Over Voice Channel

Cellular systems were primarily designed to support voice calls only. But the demand for supporting other data services, ranging from short messages to Web-browsing, slowly emerged. It was realized that cellular networks have to be modified to support data services. The main problems in using a voice network for data transmission are listed below [5].

Signal distortion: Speech encoders are used on all links to exploit the redundancy in speech and compress it, in order to conserve bandwidth. However, data cannot be expected to have such redundancy, and data receivers cannot interpolate lost content the way a human listener can, even with degradation in speech quality. This issue is compounded when encoders in tandem recompress data, that is, on every hop of the network, speech encoders compress the data, assuming it has the same redundancy as speech.

Handoff error: Handoffs introduce a certain delay in transfer of the call from one cell to another, which may lead to loss of data.

Interfacing with the fixed network: A PSTN modem expects a 2,100 Hz tone from the source when a data call is initiated. On the other hand, PSTN networks provide no indication of a non-voice service. So the cellular network should be able to differentiate between a data call and a voice call.

The main problem is in making the network recognize a data call and handle it differently from a voice call, for example, by disabling some of the optimizations made for voice calls. Some possible solutions that have been suggested are listed here.

• A control message could be transmitted all along the path of the call, to indicate a data call so that voice coding can be disabled.

• A two-stage dial-up operation can be used, first to the cellular carrier and then to the subscriber. The carrier has different numbers for each service it offers. For example, to set up a data call with an MT, the cellular carrier is first dialed, and it then informs the MT of a data call.

• A subscriber could be assigned separate subscriber numbers for each service he/she opts for.

3.5.3 GSM Evolution of Data Services

GSM and other 2G standards started providing data services overlaid on the cellular network. This was meant to be a stop-gap arrangement before the third-generation (3G) networks could be formally deployed, and these data services constituted the 2.5G of cellular networks. It started out with small messages through SMS, and today, Web-browsing is possible over the mobile network. Some of the popular data services provided over GSM are now briefly described.

Short Messaging Service (SMS)

SMS is a connectionless transfer of messages, each up to 160 alphanumeric characters in length. Longer messages are sent either as multiple short messages concatenated at the receiver, or in compressed form. SMS supports point-to-point data service as well as broadcast throughout the cell. The message transfer takes place over the control channel.

High-Speed Circuit-Switched Data (HSCSD)

HSCSD is a simple upgrade to GSM. As the name implies, it is a circuit-switched protocol for large file transfers and multimedia data transmissions. Unlike GSM, which allots one TDMA slot per user, HSCSD can allot up to eight TDMA slots to a single user, hence the increased data rates. By using up to four consecutive time slots, HSCSD can provide a data rate of 57.6 Kbps to a user. Its disadvantage is that it increases the blocking probability, by letting a single user occupy multiple time slots that could otherwise have carried several voice calls.
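The rate arithmetic can be checked directly. A minimal sketch, assuming the 14.4 Kbps enhanced per-slot coding rate (the function name is illustrative, not part of any standard):

```python
PER_SLOT_KBPS = 14.4  # enhanced channel-coding rate of one GSM time slot

def hscsd_rate(slots: int) -> float:
    """Aggregate data rate when `slots` consecutive TDMA slots are bundled."""
    if not 1 <= slots <= 8:
        raise ValueError("a GSM TDMA frame has only 8 slots")
    return slots * PER_SLOT_KBPS

print(hscsd_rate(4))  # 57.6 Kbps, the four-slot figure quoted above
```

Bundling all eight slots would give 115.2 Kbps, which is why HSCSD competes so directly with voice capacity.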

General Packet Radio Service (GPRS)

GPRS, which promises to give every user a high-capacity connection to the Internet, uses TCP/IP and X.252 and offers data rates of up to 171.2 Kbps when all eight time slots of a GSM radio channel are dedicated to GPRS. A variety of services is provided over GPRS, ranging from bursty data transmission to large file downloads. Being a packetized service, GPRS does not need an end-to-end connection; it uses radio resources only during actual data transmission. GPRS is therefore extremely well-suited for short bursts of data, such as e-mail and faxes, and for non-real-time Internet usage. Implementing GPRS on an existing GSM network requires only the addition of packet data switching and gateway nodes, and has minimal impact on the established network. The HLR is enhanced to also record information about GPRS subscriptions.

2 X.25 is an ITU-T (International Telecommunications Union–Telecommunication Standards Sector) protocol standard for packet-switched WAN communications that defines connection establishment and maintenance between user devices and network devices.

Enhanced Data Rates for GSM Evolution (EDGE)

EDGE, also referred to as enhanced GPRS or EGPRS, inherits all the features from GSM and GPRS, including the eight-user TDMA slot structure and even the slot length of 0.577 ms. However, instead of the binary GMSK, it uses 8-PSK (octal PSK) modulation which triples the capacity compared to GSM. It provides cellular operators with a commercially viable and attractive method to support multimedia services. As 8-PSK is more susceptible to errors than GMSK, EDGE has nine different modulation and coding schemes (air interfaces), each designed for a different quality connection.
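The "triples the capacity" claim follows from the bits carried per modulated symbol, since EDGE keeps the GSM symbol rate and slot structure. A quick check (illustrative code, not a standard API):

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    """Bits carried by one symbol of an M-ary modulation scheme."""
    return int(math.log2(constellation_points))

gmsk = bits_per_symbol(2)  # GMSK is binary: 1 bit per symbol
psk8 = bits_per_symbol(8)  # 8-PSK: 3 bits per symbol
print(psk8 / gmsk)         # 3.0, at the same symbol rate and slot length
```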

Cellular Digital Packet Data (CDPD)

CDPD is a packet-based data service that can be overlaid on AMPS and IS-136 systems. It supports a connectionless network service, in which every packet is routed independently to the destination. The advantage of CDPD lies in its ability to detect idle voice channels of the existing cellular network and use them to transmit short data messages without affecting the voice network. The available channels are periodically scanned, and a list of probable candidates for data transmission is prepared. The system uses channel sniffing to detect channels which have been idle and can carry data, and it must keep hopping between channels so that it does not block a voice call on the channel it is currently using. To avoid such channel stealing from the voice network, channel hopping is performed frequently, as often as once every ten seconds. Usually, a timed hop is performed using a dwell timer, which determines how long CDPD can use a channel. If the channel has to be allocated to a voice call in the meantime, CDPD performs an emergency hop. It is therefore essential to find alternate idle channels in real time to ensure that data is not lost.
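The dwell-timer and emergency-hop behavior described above can be sketched as a toy model; all class and method names here are hypothetical, and the semantics are a simplification:

```python
class CdpdChannelHopper:
    DWELL_SECONDS = 10  # CDPD vacates a channel after at most this long

    def __init__(self, idle_channels):
        self.idle_channels = list(idle_channels)  # found by channel sniffing
        self.current = self.idle_channels.pop(0)
        self.time_on_channel = 0

    def tick(self, seconds, voice_call_arrived=False):
        """Advance time; hop if the dwell expires or a voice call claims the channel."""
        self.time_on_channel += seconds
        if voice_call_arrived:
            return self._hop(reason="emergency")
        if self.time_on_channel >= self.DWELL_SECONDS:
            return self._hop(reason="timed")
        return None

    def _hop(self, reason):
        if not self.idle_channels:
            raise RuntimeError("no idle channel found in time; data may be lost")
        self.current = self.idle_channels.pop(0)
        self.time_on_channel = 0
        return reason

hopper = CdpdChannelHopper(idle_channels=[412, 415, 420])
print(hopper.tick(10))                          # 'timed' hop on dwell expiry
print(hopper.tick(2, voice_call_arrived=True))  # 'emergency' hop
```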

3.5.4 Other 2G Standards

Digital-AMPS (D-AMPS) is the digital version of the first-generation AMPS and is designed to coexist with AMPS. It is a TDMA-based system known as IS-54 (Telecommunications Industry Association Interim Standard). It uses AMPS carriers to deploy digital channels, each of which can support three times the users that are supported by AMPS with the same carrier. Digital channels are organized into frames and there are six slots per frame. D-AMPS supports data rates of around 3 Kbps. D-AMPS+, an enhancement of D-AMPS for data (similar to HSCSD and GPRS in GSM), offers increased data rates in the range of 9.6-19.2 Kbps. While D-AMPS maintains the analog channels of AMPS, its successor known as IS-136 is a fully digital standard.

IS-95, another 2G system, also known as cdmaOne or IS-95a, was first deployed in South Korea and Hong Kong in 1993, and then in the United States in 1996. It is a fully digital standard that operates in the 800 MHz band (like AMPS, IS-136) and is the only 2G system based on CDMA. It is incompatible with IS-136. It supports data traffic at the rates of 4.8 and 14.4 Kbps. cdmaTwo or IS-95b, an extension of IS-95, offers support for 115.2 Kbps.

Personal digital cellular (PDC), deployed in Japan, is a system based on D-AMPS. The main reason for PDC's success is i-mode, which is a mobile Internet system developed by NTT DoCoMo (refer to Section 4.5.3). Using 9.6 Kbps or slower data rates, it provides access to thousands of Web sites specially adapted to fit into a mobile phone's low data rate and small screen.

3.6 THE THIRD-GENERATION CELLULAR SYSTEMS

The aim of the third-generation (3G) cellular system is to provide a virtual home environment, formally defined as a uniform and continuous presentation of services, independent of location and access. This means that the user must be able to access the services that he/she has subscribed to, from anywhere in the world, irrespective of his/her method of access, as if he/she were still at home. This requires very stringent QoS adherence, and highly effective and optimized architectures, algorithms, and operations of the network elements. The use of "intelligent network architecture" is foreseen here [6], [7], [8]. The networks should contain advanced algorithms to handle location information retrieval and update, handoffs, authentication, call routing, and pricing. From the user point of view, the 3G systems with very high-speed wireless communications (up to 2 Mbps) plan to offer Internet access (e-mail, Web surfing, including pages with audio and video), telephony (voice, video, fax, etc.), and multimedia (playing music, viewing videos, films, television, etc.) services through a single device. Such services cannot be realized over the present 2G systems; for example, a simple transfer of a 2 MB presentation would take approximately 28 minutes at the 9.6 Kbps GSM data rate.
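The 28-minute figure is simple arithmetic, taking 2 MB as 2,000,000 bytes:

```python
file_bits = 2_000_000 * 8  # a 2 MB presentation, in bits
rate_bps = 9_600           # GSM circuit-switched data rate
minutes = file_bits / rate_bps / 60
print(round(minutes, 1))   # 27.8, i.e., approximately 28 minutes
```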

3.6.1 3G Standards

An evolution plan has been charted out to upgrade 2G technologies into their 3G counterparts. Table 3.1 lists some existing 2G systems and their corresponding 3G versions [9].

Table 3.1. Evolution plan to 3G standards

image

In 1992, the International Telecommunications Union (ITU) initiated the standardization of 3G systems, and the outcome of this effort was called international mobile telecommunications-2000 (IMT-2000). The number 2000 stood for three things: (i) the system would become available in the year 2000, (ii) it would have data rates of 2,000 Kbps, and (iii) it was supposed to operate in the 2,000 MHz region, which ITU wanted to make globally available for the new technology, so that users could roam seamlessly from country to country. None of these three things came to pass, but the name has remained. The service types that the IMT-2000 network is supposed to provide to its users are given in Table 3.2 [10]. As different parts of the world are dominated by different 2G standards, the hope for a single worldwide 3G standard has not materialized, as can be seen in Table 3.1.

Table 3.2. IMT-2000 service types

image

Universal mobile telecommunications system (UMTS) is the European 3G standard which is also called wideband CDMA (W-CDMA) as the standard is based on CDMA (direct sequence spread spectrum) technology. The term "wideband" here denotes use of a wide carrier. W-CDMA uses a 5 MHz carrier (channel bandwidth) which is 25 times that of GSM and four times that of cdmaOne. It is designed to interwork with GSM networks, which means a caller can move from a W-CDMA cell to a GSM cell without losing the call. The first W-CDMA networks are being deployed in Japan by NTT DoCoMo. Of the existing 2G systems, cdmaOne is already based on CDMA. cdma2000, basically an extension of cdmaOne, is not designed to interwork with GSM, which means it cannot hand off calls to a GSM cell. Commercial versions of cdma2000 include 1Xtreme by Motorola and Nokia and high data rate (HDR) by Qualcomm. UWC-136 is a TDMA-based 3G standard developed by the Universal Wireless Communications Consortium, as an upgrade of IS-136.

3.6.2 The Problems with 3G Systems

3G systems fundamentally need greater bandwidth and lower interference to be able to meet QoS requirements for data and multimedia services. Field tests have indicated that CDMA is an attractive option for mobile cellular networks, as it can operate in the presence of interference, and can theoretically support very large bandwidth [11]. The performance of 3G systems is further improved by the use of smart antennas. This refers to the technology of controlling a directional antenna array with an advanced digital signal processing capability to optimize its radiation and/or reception pattern automatically in response to the signal environment [12].

3G systems promised to be the ultimate panacea for all problems of dropped calls, low data rates, and mobility restrictions. The initial time-chart had scheduled complete deployment of 3G by the year 2000, but this has not happened. The spectrum allocation recommended for 3G in IMT-2000 could not be implemented, as the frequencies requested by the ITU were already partially or fully in use in many countries, and it is becoming increasingly difficult to find a common slice of spectrum to enable global roaming.

Another reason why 3G systems did not take off is the disappointing performance of CDMA in practice. CDMA was projected to be the ultimate solution to interference; in theory, a very large number of users could share the same frequency band, since each user's signal is spread across the band with a distinct pseudo-random code. However, practical implementations of CDMA have led critics to believe otherwise. Table 3.3 shows the stark difference between the theoretical and practical performances of CDMA [13].

Table 3.3. CDMA — The debate

image

It would be interesting to speculate where mobile telephony is heading at this point in time. The first event on the agenda, of course, would be to implement 3G networks completely [14]. The trend seems to be headed toward having cells of different sizes: on the one hand, large cells to handle global access, and at the other extreme, pico-cells to handle indoor communication.

Even though 3G networks are not yet fully deployed, some researchers have already started working on the next generation of systems, called 4G systems, targeted for deployment in the year 2010. These systems are intended to provide seamless integration with wired networks (especially IP), high bandwidth (data rates of up to 100 Mbps), ubiquity (connectivity everywhere), high-quality multimedia services, adaptive resource management, and software-defined radio. Efficient and flexible handsets are required to cope with the diverse range of cellular standards/air interfaces; without them, roaming between different networks becomes difficult without significant adjustment or replacement of handsets. Software-defined radio provides a solution to this problem by implementing the radio functionality (such as modulation/demodulation, signal generation, coding, and link-layer protocols) of different standards as software modules running on a generic hardware platform. Software modules which implement new features/services can also be downloaded over the air onto the handsets.
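The software-defined radio idea, with air-interface personalities as replaceable software modules on generic hardware, can be caricatured as follows. Everything here is illustrative; real SDR frameworks are far more involved:

```python
class SoftwareRadio:
    def __init__(self):
        self.modules = {}  # standard name -> waveform implementation

    def install(self, standard, modulator):
        """New personalities can be downloaded over the air as modules."""
        self.modules[standard] = modulator

    def transmit(self, standard, bits):
        if standard not in self.modules:
            raise LookupError(f"no module for {standard}; download required")
        return self.modules[standard](bits)

radio = SoftwareRadio()
radio.install("GSM",  lambda bits: f"GMSK({bits})")   # stand-in modulators
radio.install("EDGE", lambda bits: f"8-PSK({bits})")
print(radio.transmit("EDGE", "1011"))  # 8-PSK(1011)
```

The same handset hardware thus "becomes" a GSM or EDGE terminal depending on which module is loaded.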

3.7 WIRELESS IN LOCAL LOOP

In cellular systems, mobility was the most important factor influencing the design of the networks. Wireless in local loop (WLL) technology, on the other hand, targets a scenario of limited mobility. The circuit (loop) that provides the last-hop connectivity between the subscriber and the PSTN is called the local loop. It has conventionally been implemented using copper wiring. Optical fiber is not a popular local loop technology due to the high cost of deployment. WLL, also known as fixed wireless access (FWA) or radio in the local loop (RLL), is the use of wireless connectivity in the last hop between the subscriber and the telephone exchange (end office). It provides a cost-effective means of connecting far-flung areas. In urban areas where high capacity is required, WLL can provide the means to extend the existing network [15].

WLL has many advantages for both the subscribers and the service providers. The deployment of WLL is much easier, and it can be extended to accommodate more customers as the demand increases. Lower investment, operations, and maintenance costs make WLL an attractive, cost-effective option for the customer. WLL offers a wide range of services from basic telephony to Internet surfing. With the introduction of broadband wireless systems, bandwidth-intensive applications such as video-on-demand can also be supported.

WLL must satisfy QoS requirements to compete with more efficient copper wire transmission technologies such as integrated services digital network (ISDN) and digital subscriber line (DSL). It must match the voice and data quality of regular telephone networks. This means it should adhere to 64 Kbps or 32 Kbps transmission channels, a blocking probability of less than 10^-2, and a BER of less than 10^-3 for voice and 10^-6 for data. The power consumption of subscriber units must be low, since they run on batteries and are not powered by the exchange. It must support authentication and encryption schemes to ensure security and data privacy.

3.7.1 Generic WLL Architecture

The architecture of a WLL system is more or less similar to that of a mobile cellular system. The BS is implemented by the base transceiver station system (BTS) and the base station controller (BSC). The BTS, also called radio port (RP) or radio transceiver unit (RTU), performs channel coding, modulation/demodulation, and implements the radio interface for transmission and reception of radio signals. A BSC, alternatively called the radio port control unit (RPCU), controls one or more BTSs and provides them with an interface to the local exchange.

The fixed subscriber unit (FSU) or radio subscriber unit (RSU) is the interface between the subscriber's wired devices and the WLL network. The FSU performs all physical and data-link layer functions from the subscriber end. The basic architecture of WLL networks is shown in Figure 3.6.

Figure 3.6. WLL architecture.

image

3.7.2 WLL Technologies

WLL technology is at the intersection between cellular and cordless telephony. Though there is no single global standard for WLL, many systems have been developed by suitable adaptation of cellular and cordless telephony standards.

Cellular-Based WLL Systems

WLL standards have been developed based on both TDMA (GSM-based) and CDMA. IS-95-based CDMA WLL supports channel rates of 9.6 Kbps and 14.4 Kbps. Since WLL FSUs are fixed, the BTS and FSU can be arranged in line-of-sight, which reduces interference. The cell can have six or more sectors, and hence increased capacity. For packetized data transfer, some dedicated channels are provided, besides statistical multiplexing with the voice channels. Up to 128 Kbps data services are available on CDMA-based WLL.

Cordless-Based WLL Systems

Cellular systems target maximization of bandwidth efficiency and frequency reuse, high coverage, and easy support for mobility in a high-speed fading environment. On the other hand, low-range (a few hundred meters) cordless-based systems are optimized for low complexity equipment, high voice quality, and low delays for data transfer, in a relatively confined static environment (with respect to user speeds). The range can also be increased using point-to-point microwave hops, where the radio signals are up-converted to microwave or optical frequencies, and down-converted again before the WLL terminal. The major TDMA-based low-range WLL systems are digital enhanced cordless telecommunications (DECT), personal access communication system (PACS), and personal handyphone system (PHS).

  1. DECT: DECT3 is a radio interface standard developed by ETSI. It is used extensively in indoor systems such as wireless private branch exchanges (PBXs) for intra-office connectivity, and in cordless phones for residential areas. The DECT system, operating in the 1,880-1,900 MHz band, allows users to make and receive calls within a BS range of around 100 m indoors and 500 m outdoors. It also supports pedestrian mobility, at speeds of the order of 10 Kmph. The DECT system uses the TDMA/TDD mode for radio communication between handset and BS, with 24 time slots per frame, providing a net data rate of 32 Kbps. DECT provides 120 duplex channels with a bandwidth of 144 KHz/pair, unlike GSM, which offers only 50 KHz/duplex pair. DECT uses a dynamic channel allocation algorithm for channel assignment.

    3 It originally stood for digital European cordless telecommunications and today it stands for digital enhanced cordless telecommunications to underline its claim of being a worldwide standard for cordless telephony.

  2. PACS: The PACS system employs TDMA/TDM on the radio interface using π/4-QPSK modulation. It operates in TDD and FDD modes for the unlicensed and licensed PCS bands, respectively. The radio frame, 2.5 ms in duration, is divided into eight time slots, each of which transports data at 32 Kbps. Channel assignment is performed using quasi-static autonomous frequency assignment (QSAFA), in which an FSU listens to all the channel pairs and chooses the one with the least interference. It then chooses a time slot within the selected channel pair.
  3. PHS: PHS, developed in Japan, supports very high-density pedestrian traffic. It uses π/4-DQPSK modulation at a channel rate of 384 Kbps, in the 1,900 MHz RF band (like DECT), with a bandwidth of 300 KHz per channel. Each 300 KHz wide RF carrier holds four traffic channels, one of which is a dedicated control channel. PHS works in TDMA/TDD mode with a frame duration of 5 ms, and uses a dynamic channel allocation scheme, as in the case of DECT.
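The QSAFA selection step in PACS can be sketched as follows; the measurement data and the function name are hypothetical:

```python
def qsafa_select(interference_dbm, free_slots):
    """interference_dbm: {channel_pair: measured interference power (dBm)}
    free_slots: {channel_pair: list of unoccupied slot indices (0-7)}"""
    # Quietest channel pair = lowest measured interference power.
    pair = min(interference_dbm, key=interference_dbm.get)
    slots = free_slots.get(pair, [])
    if not slots:
        raise RuntimeError("selected pair has no free slot; re-scan needed")
    return pair, slots[0]

measurements = {"pair-A": -82.0, "pair-B": -97.5, "pair-C": -90.1}
availability = {"pair-A": [1, 4], "pair-B": [3, 6, 7], "pair-C": [2]}
print(qsafa_select(measurements, availability))  # ('pair-B', 3)
```

Because the assignment is quasi-static, the FSU repeats this scan only occasionally, not on every call.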

Proprietary Systems

Due to the absence of a universal standard specifically meant for WLL, a number of proprietary systems (independent technologies) have mushroomed. Cellular-based systems such as E-TDMA of Hughes Network Systems (HNS) and Lucent's Wireless Subscriber System attempt to maintain the desirable feature of cellular systems, namely, high coverage, while improving the capacity of the system. E-TDMA is an extension to the IS-136 cellular TDMA standard. It uses discontinuous transmission along with digital speech interpolation. This means both the FSU and BSC transmit only when speech is present, which is about 40% of the time. This leads to effective sharing of bandwidth. Proprietary systems Qualcomm's QCTel and Lucent's Airloop use CDMA for the MAC scheme, which helps in increasing transmission rates [15].

Satellite-Based Systems

In the absence of a common global standard for cellular systems, handheld devices which connect directly to a satellite present an attractive option for achieving worldwide communication. A satellite system consists of a satellite in space which links many Earth (ground) stations, and transponders, which receive, amplify, and retransmit signals to or from satellites. Most commercial communication satellites today use a 500 MHz uplink and downlink bandwidth. Mobile satellite systems complement existing cellular systems by providing coverage for rural areas, where it may not be economically viable to install BSs for mobile communications, and where the need to support vehicular mobility rules out the use of WLL.

Satellites may be geostationary, that is, stationary with respect to a point on Earth. These satellites have an orbit on the equatorial plane at an altitude of 35,786 Km, such that they move with the same angular velocity as the Earth and complete a revolution in 24 hours. Satellites can also be placed in inclined orbits, at different altitudes and on non-equatorial planes. Low Earth orbiting (LEO) satellites, which orbit the Earth at lower altitudes, play a major role in mobile communications.

While geostationary satellites can provide much larger coverage, they require high subscriber unit power because of the high altitudes at which they operate. LEO satellites, on the other hand, operate at low altitudes of the order of hundreds of kilometers; hence, more satellites are required to cover a given area. Another problem with geostationary satellites is that the associated propagation delay is significant. For a height of 36,000 Km, the speed-of-light delay (up to the satellite and back down to the ground) is 0.24 s. LEO satellites, in contrast, have a propagation delay of about 12 ms [16].
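These delay figures follow from the speed of light over a ground-satellite-ground path; the LEO altitude below is chosen simply to reproduce the 12 ms figure and is illustrative:

```python
C_KM_PER_S = 300_000  # approximate speed of light

def bent_pipe_delay_s(altitude_km: float) -> float:
    """Ground -> satellite -> ground delay, satellite straight overhead."""
    return 2 * altitude_km / C_KM_PER_S

print(bent_pipe_delay_s(36_000))              # 0.24 s for a geostationary orbit
print(round(bent_pipe_delay_s(1_800) * 1e3))  # 12 ms for a LEO at 1,800 Km
```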

The special aspect of satellite systems with respect to the MAC layer is that the time taken to detect collisions is very large. This is because collisions can be detected only when the transmission from the satellite down to the ground units arrives. Hence collision avoidance techniques are used instead of collision detection. A TDMA scheme with a centralized coordinator for allocation of time slots reduces contention. But, a fixed time-slot allocation leads to under-utilization of transponder capacity. The assignment should be made more flexible, with channels as well as frames pooled together, and allocated on demand to any Earth station. This method is termed demand assignment multiple access (DAMA), and can be used in conjunction with FDMA or TDMA. Two examples of satellite-based mobile telephony systems are studied below.

Iridium: This was an ambitious project, initiated by Motorola in the early 1990s, with 66 LEO satellites4 providing full coverage of the Earth's surface. The satellites operated at an altitude of 780 Km. The significant point about Iridium's architecture was that the satellites themselves performed call routing and processing. There was line-of-sight (LoS) visibility between neighboring satellites, so the satellites communicated directly with one another to perform these call routing operations. Iridium ended up as a major commercial failure on economic grounds in the year 2000.

4 The project initially called for the use of 77 LEO satellites and this fact gave it the name Iridium, as Iridium is the chemical element with an atomic number of 77.

Globalstar: Iridium required sophisticated switching equipment in the satellites because it relayed calls from satellite to satellite. Globalstar, in contrast, uses a traditional "bent-pipe" design in which 48 LEO satellites act as passive relays. All routing and processing of calls are done by ground stations (or gateways), so a satellite is useful only if there is a ground station underneath it. In this scheme, much of the complexity is on the ground, where it is easier to manage. Further, since a typical Globalstar ground station antenna has a range of many kilometers, only a few such stations are needed to support the system. Globalstar can provide nearly full global coverage, except for regions in the middle of oceans, where ground station deployment is impossible or prohibitively expensive, and regions near the poles.

Global Positioning System (GPS)

Besides the use of satellite systems for WLL, another important use of satellites in the recent past has been the global positioning system (GPS) [17]. The GPS program is funded and controlled by the U.S. Department of Defense. The space segment of GPS consists of satellites which send encoded radio signals to GPS receivers. The system has 24 satellites that orbit the Earth at an altitude of about 20,200 Km every 12 hours, on six equally spaced orbital planes. The satellites move in such a way that five to eight of them are visible from any point on Earth. The control segment consists of five tracking stations located around the world, which monitor the satellites and compute accurate orbital data and clock corrections for them. The user segment consists of the GPS receivers, which compute the four unknowns (the x, y, z coordinates of position, and the time) using the signals received from four GPS satellites.
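The reason four satellites are needed is that each pseudorange measurement mixes the geometric range with the unknown receiver clock bias, giving four equations in four unknowns. A sketch with made-up positions (in Km), verifying that the true position and bias reproduce the measurements (a real receiver solves the inverse problem iteratively):

```python
import math

C = 299_792.458  # speed of light, km/s

def pseudorange(sat, receiver, clock_bias_s):
    """Measured range = true geometric range + c * receiver clock error."""
    return math.dist(sat, receiver) + C * clock_bias_s

# Hypothetical satellite positions and receiver truth, in an Earth-centered frame.
sats = [(15600, 7540, 20140), (18760, 2750, 18610),
        (17610, 14630, 13480), (19170, 610, 18390)]
receiver, bias = (-41.77, -16.79, 6370.06), 1e-4

measured = [pseudorange(s, receiver, bias) for s in sats]

# A candidate solution fits only if it reproduces every pseudorange:
residuals = [m - pseudorange(s, receiver, bias) for m, s in zip(measured, sats)]
print(all(abs(r) < 1e-9 for r in residuals))  # True
```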

GPS has been used primarily for navigation, on aircraft, ships, ground vehicles and by individuals. Relative positioning and time data has also been used in research areas such as plate tectonics, atmospheric parameter measurements, astronomical observatories, and telecommunications.

3.7.3 Broadband Wireless Access

Fixed wireless systems are either point-to-point (where a separate antenna transceiver is used for each user) or multipoint (where a single antenna transceiver is used to provide links to many users). The latter is the more popular and useful option for WLL. A multipoint system is similar to a conventional cellular system, except that in a multipoint system (i) cells do not overlap, (ii) the same frequency is reused in each cell, and (iii) no handoff is provided, as users are fixed. Two commonly used fixed wireless systems, typically used for high-speed Internet access, are local multipoint distribution service (LMDS) and multichannel multipoint distribution service (MMDS). These can be regarded as metropolitan area networks (MANs).

LMDS

LMDS, operating at around 28 GHz in the United States and at around 40 GHz in Europe, is a broadband wireless technology used to provide voice, data, Internet, and video services. Due to the high frequency at which LMDS systems operate, they are organized into smaller cells of 1-2 Km (against 45 Km in MMDS), which necessitates the use of a relatively large number of BSs in order to service a specific area. However, LMDS systems offer very high data rates (maximum cell capacity of 155 Mbps).

MMDS

MMDS operates at around 2.5 GHz in the licensed band and at 5.8 GHz in the unlicensed band in the United States, and LoS is required in most cases. The lower frequencies (or larger wavelengths of MMDS signals) facilitate longer ranges of around 45 Km. Due to the fact that equipment operating at low frequencies is less expensive, MMDS systems are also simpler and cheaper. However, they offer much less bandwidth (the maximum capacity of an MMDS cell is 36 Mbps). MMDS is also referred to as wireless cable as this service provides broadcast of TV channels in rural areas which are not reachable by broadcast TV or cable.

3.8 WIRELESS ATM

3.8.1 ATM — An Overview

ATM has been the preferred network mechanism for multimedia applications, LAN interconnection, imaging, video distribution, and other applications that require quality of service (QoS) guarantees. ATM uses an asynchronous mode of transfer [asynchronous time-division or statistical multiplexing (flexible bandwidth allocation)] to cope with the varying requirements of broadband services. ATM defines a fixed-size cell (the basic unit of data exchange) of length 53 bytes, comprising a 5-byte header and a 48-byte payload. The ATM layer provides the higher layers with functions such as routing and generic flow control. First, a virtual connection is established between the source and the destination, identified by a virtual path identifier (VPI) and a virtual channel identifier (VCI). After the connection is set up, all cells are sent over the same virtual connection, carrying the VPI/VCI in the cell header. The VPI/VCI is used by a switching node to identify the virtual path and virtual channel on the link, and the routing table established at connection setup time is used to route the cell to the correct output port. Cells are transmitted without consideration of their service requirements. The ATM adaptation layer (AAL) is responsible for providing the QoS requirements of the application. Each AAL protocol supports a subset of traffic types and, by changing the AAL protocol, different traffic requirements of the application can be transparently passed on to the ATM layer. The AAL's primary function is to provide the data flow to the upper layers, taking into account the cell loss and incorrect delivery caused by congestion or delay variation in the ATM layer.
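The VPI/VCI that switches use sit at fixed bit positions in the 5-byte header; at the user-network interface the layout is GFC:4, VPI:8, VCI:16, PT:3, CLP:1, HEC:8 bits. A sketch of extracting the fields (the sample bytes are made up):

```python
def parse_atm_uni_header(hdr: bytes) -> dict:
    """Split a 5-byte ATM UNI cell header into its fields."""
    assert len(hdr) == 5, "an ATM cell header is always 5 bytes (cell is 53)"
    b0, b1, b2, b3, b4 = hdr
    return {
        "gfc": b0 >> 4,                                   # generic flow control
        "vpi": ((b0 & 0x0F) << 4) | (b1 >> 4),            # virtual path id
        "vci": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),  # virtual channel id
        "pt":  (b3 >> 1) & 0x07,                          # payload type
        "clp": b3 & 0x01,                                 # cell loss priority
        "hec": b4,                                        # header error control
    }

cell_header = bytes([0x01, 0x23, 0x00, 0x64, 0x5A])
print(parse_atm_uni_header(cell_header))
```

A switch rewrites the VPI/VCI at every hop according to its routing table, which is why the fields occupy these easily accessible positions.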

In order to minimize the number of AAL protocols needed, services are classified on the basis of the following parameters:

• Timing relationship between source and destination

• Bit rate (constant or variable)

• Type of connection (connection-oriented or connection-less)

The service classes are further classified as constant bit rate (CBR), variable bit rate (VBR), available bit rate (ABR), and unspecified bit rate (UBR), on the basis of the data generation rate.

3.8.2 Motivation for WATM

The reasons for extending the ATM architecture to the wireless domain include supporting integrated multimedia services in next-generation wireless networks, seamless integration of wired ATM networks with wireless networks, the need for a scalable framework for QoS provisioning, and support for mobility. The introduction of ATM in the wireless domain creates new challenges as the original design did not consider varying conditions of the wireless domain. The issues involved in extending wired ATM to the wireless domain are location management, mobility management, maintaining end-to-end connection, support for QoS, and dealing with the characteristics of radio (wireless) links. Recall that HIPERLAN/2, a WLAN system, offers the capabilities of WATM.

3.8.3 Generic Reference Model

Figure 3.7 schematically depicts the generic reference model used in WATM networks. Mobile WATM terminals (MTs) have radio links to base stations (BSs, or radio transceivers), which are connected by wires to a WATM access point (WATM-AP/AP). The APs are connected to mobility-enhanced ATM switches at the edge of the network (EMAS-E). The APs are traffic concentrators and lack routing capabilities. Rerouting is usually performed at the EMAS-E, and the node that performs the rerouting is termed the crossover switch (COS). (Note that some APs may have routing capabilities, in which case they perform the functions of both AP and EMAS-E.) The EMAS-Es are in turn connected to mobility-enhanced ATM switches within the network (EMAS-N). EMAS-E and EMAS-N are important entities that support mobility in WATM. The EMAS-N switches are finally connected to the ATM switches of the traditional ATM network, enabling wired and wireless networks to coexist.

Figure 3.7. Generic reference model.

image

3.8.4 MAC Layer for WATM

MAC protocols lay down the rules that control access to the shared wireless medium among a number of users. The main challenge for a WATM MAC protocol is to efficiently support the standard ATM services (CBR, VBR, ABR, and UBR) in the wireless environment. The three main categories of MAC protocols are:

  1. Fixed assignment: Fixed assignment schemes essentially apportion the available resource (time or frequency) among the users in a definite manner in order to provide the QoS required. Fixed assignment schemes such as TDMA and FDMA suffer from inefficient bandwidth usage and, in the case of radio spectrum (where the frequency range is very limited), this turns out to be a serious drawback. These traditional schemes are good for CBR traffic but not for VBR traffic, which is the most dominant traffic in WATM.
  2. Random assignment: As opposed to fixed assignment schemes, random assignment involves random allocation of the resource, namely, the wireless channel. These schemes suffer from large delay due to the contention resolution algorithms. Schemes that use backoff techniques are unpredictable and hence cannot provide guaranteed QoS for WATM. An example of such a scheme is the CSMA/CA.
  3. Demand assignment: Users, in contention with other users, explicitly or implicitly state their need for bandwidth; once a demand is accepted, they can transfer their packets in a contention-free environment. If a user enters an idle period, the bandwidth assigned to it can be used by other users. Unlike in fixed assignment, bandwidth is not wasted, as it is assigned only on demand. Further, the bandwidth wastage due to collisions is reduced, since only the request phase is subject to contention and the subsequent transmission is contention-free. Downlink frames can be transferred on a different channel (frequency division) or on the same channel, time-multiplexed with the uplink sub-frame (time division). Time division provides better flexibility, since the durations of the downlink and uplink sub-frames can be varied. Some of the proposed frequency division MAC protocols for WATM are DQRUMA [18], PRMA/DA [19], and DSA++ [20]. Time division MAC protocols include MASCARA [21], PRMA/ATDD [22], and DTDMA/TDD [23].
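
The demand-assignment cycle described above can be sketched as follows. This is an illustrative toy model, not any of the cited protocols: the station ids, slot counts, and the randomized stand-in for contention resolution in the request phase are all assumptions.

```python
import random

def demand_assignment_round(requests, slots_available, seed=1):
    """One demand-assignment cycle: stations contend only during the request
    phase; once a request is granted, the station transmits in dedicated,
    collision-free slots.

    requests: dict mapping station id -> number of data slots requested.
    Returns a schedule mapping station id -> list of granted slot indices.
    """
    rng = random.Random(seed)
    # Request phase: in a real MAC the requests contend for request minislots;
    # here a randomized order stands in for contention resolution.
    order = sorted(requests)
    rng.shuffle(order)

    schedule, next_slot = {}, 0
    for station in order:
        granted = min(requests[station], slots_available - next_slot)
        if granted > 0:
            # Data phase: these slots are dedicated to this station, so there
            # are no collisions, and idle stations consume no bandwidth.
            schedule[station] = list(range(next_slot, next_slot + granted))
            next_slot += granted
    return schedule
```

Note that when total demand exceeds the available slots, only part of the demand is granted; a real protocol would carry the residue over to the next cycle.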

3.8.5 Handoff Issues in WATM

Handoff is said to occur when a mobile terminal (MT) moves from the control of one BS to another, as discussed earlier. A handoff tends to disrupt existing connections of the MT, hence care needs to be taken during handoffs. This section deals with the issues related to handoffs and the proposed solutions to tackle the problems that arise. There are two levels of handoff, one at the radio layer and the other at the data link layer.

Types of Handoffs

Handoffs can be classified into two types. The first type occurs when the MT detects that a handoff may be needed and the EMAS-E chooses the next BS from the prospective candidates. This is called a backward handoff. During this handoff, there is a smooth transition in the power levels of the source and the destination BSs. The second type occurs when the MT itself decides on the BS to which the handoff is to take place. This is called a forward handoff. There is an abrupt break in the radio connection during a forward handoff.

Different Situations of Backward Handoff

A backward handoff can occur in one of the following three situations:

  1. Intra AP: In this case, the source and destination radio BSs belong to the same AP. The EMAS-E merely finds out the resource availability at the new BS and forwards the response to the MT. The issues involved in an Intra-AP handoff decision are similar to those in cellular networks.
  2. Inter AP/Intra EMAS-E: In this case, the BSs belong to different APs connected to the same EMAS-E. The EMAS-E inquires about the availability of resources at the destination AP and forwards the response to the MT.
  3. Inter EMAS-E: In this case, the BSs belong to different EMAS-Es. This is the most complicated of the handoffs. The MT asks the destination EMAS-E for the availability of resources. The destination EMAS-E in turn queries the corresponding AP and the response is sent back. The source EMAS-E now requests that the destination EMAS-E reserve resources and the handoff is performed.

Once the handoff is performed, the paths need to be reconfigured from the COS.

Different Situations of Forward Handoff

Similar to the backward handoff, the forward handoff takes place in the following three ways:

  1. Intra AP: When a radio disruption occurs, the MT disassociates itself from the BS. The MT then tries to associate with another BS under the same AP. This is conveyed to the EMAS-E using a series of messages from MT and AP.
  2. Inter AP/Intra EMAS-E: This case is similar to the Intra AP handoff except for the fact that source and destination APs are different.
  3. Inter EMAS-E: The MT disassociates itself from the old BS and associates itself with the new BS. This is conveyed to the EMAS-E1 (source EMAS-E or the EMAS-E from which the handoff has taken place) and EMAS-E2 (destination EMAS-E or the EMAS-E to which the handoff has taken place) by MT. EMAS-E2 tries to reserve resources for the connection requested by the MT at the corresponding AP. EMAS-E2 establishes a connection to the COS. EMAS-E1 then releases the connection to the COS.

Protocols for Rerouting After Handoff

When an MT and its connections are handed over from one BS to another, the connections need to be reestablished for the data transfer to continue. In case of the intra AP handoff and inter AP/intra EMAS-E handoff, a change in the routing tables of the AP and EMAS-E, respectively, is enough to handle rerouting. However, the inter EMAS-E handoffs involve wide area network (WAN) rerouting, and hence are more challenging. This section describes the generic methods for rerouting. There are three generic methods for rerouting [24]:

  1. Partial path rerouting scheme: This involves the tearing down of a portion of the current path and the establishing of a new sub-path. The new sub-path is formed from the COS to the destination switch. One way of implementing this scheme is to probe each switch on the path from the source switch to the end-point, to find the switch with which the current QoS requirements are best satisfied.
  2. Path splicing: A probe packet is sent from the destination switch toward the source, searching for a switch that can act as the COS point. Once such a switch is found, the new path between the destination and the source passes through this COS, splicing the old path.
  3. Path extension: This is the simplest of the rerouting schemes. Here the current path is extended from the source switch to the destination switch. This method usually results in a non-optimal path. Moreover, if the MT moves back to the source switch, a loop will be formed. This will result in needless delay and wastage of bandwidth. The protocols for implementing this scheme have to take the above aspects into consideration.

The important point to note in the above discussion is the trade-off between computational complexity and the optimality of the generated route. Some specific examples of rerouting are: Yuan-Biswas rerouting scheme, Bahama rerouting for WATM LAN, virtual connection tree rerouting scheme [25], source routing mobile circuit (SRMC) rerouting scheme [26], and nearest common node rerouting (NCNR) scheme [27].
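
The path extension scheme, together with the loop removal it requires, can be sketched as follows. The switch ids are hypothetical, and the loop-erasure step is one simple way to meet the requirement noted above; it is not taken from any of the cited schemes.

```python
def extend_path(path, extension):
    """Path-extension rerouting: the existing path is extended from the old
    serving switch to the new one, and any loop created when the MT moves
    back toward a switch already on the path is erased afterward.

    path: list of switch ids ending at the old serving switch.
    extension: list of switch ids from the old serving switch to the new one.
    """
    assert path[-1] == extension[0], "extension must start at the current end"
    erased = []
    for switch in path + extension[1:]:
        if switch in erased:
            # Revisited switch: cut out the loop between its two visits,
            # avoiding the needless delay and bandwidth wastage of a loop.
            erased = erased[:erased.index(switch) + 1]
        else:
            erased.append(switch)
    return erased
```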

Effect of Handoff on QoS

Handoffs have a major bearing on the QoS performance of a WATM service. The reasons for this are as follows [24]:

  1. When a handoff occurs, there is a likelihood of incompatibility between the QoS requirements of the MT and the destination switch handling it.
  2. There also exists a possibility of disruption of the active connection during the handoff.

The former is a post-handoff situation, whereas the latter occurs during the handoff.

When a network accepts a connection to a non-mobile end-point, it provides consistent QoS to the traffic on the connection for the lifetime of the connection. However, this cannot be guaranteed in case the destination is an MT. The failure may occur when the BS to which an MT is migrating is heavily loaded and hence cannot satisfy the QoS requirements of the MT. In such cases, one or more of the connections may have to be torn down. There are three ways in which this problem can be handled:

  1. In case of a backward handoff, the EMAS-E chooses the best possible BS so that the number of connections affected by the handoff is minimized [28].
  2. The probability of a switch not being able to match the migrating MT's QoS requirements is high; therefore, one method of preventing frequent complete tear-downs of connections is to use soft parameters. In this paradigm, each parameter is a range of acceptable values instead of a fixed number. As long as the destination can provide guaranteed QoS within this range, the MT continues with its connections; if the range is violated, the connection must be torn down.
  3. In the worst case, when the network is forced to tear down a connection, one way of improving the situation is to provide some priority to the connections so that the MT can set some preferences for the higher priority connections. Low-priority connections are sacrificed to maintain QoS parameters of the higher priority ones.
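
The soft-parameter idea in item 2 above can be sketched as a simple range check; the parameter names and ranges below are illustrative assumptions.

```python
def handoff_decision(soft_params, offered):
    """Soft-parameter QoS check after a handoff: each parameter is an
    acceptable (low, high) range rather than a fixed number. The connection
    survives only if every offered value falls within its range."""
    for name, (low, high) in soft_params.items():
        value = offered.get(name)
        if value is None or not (low <= value <= high):
            return False  # range violated: the connection must be torn down
    return True

# Illustrative soft parameters for one connection.
params = {"bandwidth_kbps": (64, 128), "delay_ms": (0, 40)}
ok = handoff_decision(params, {"bandwidth_kbps": 80, "delay_ms": 25})
```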

QoS violations can also result from disruption of the connection during the course of the handoff. During a handoff, the connection gets temporarily disconnected for a small amount of time, and no data can be exchanged during this interval. Hence, the objective is to reduce this time as well as the disruption caused by it. This disruption can be of two kinds: loss of cells and corruption of their temporal sequence.

In order to prevent loss of cells, they are required to be buffered in the switches on the route between the source and the destination. A major problem associated with buffering is the storage required at the EMAS-Es. One way out is to buffer only when necessary and discard the cells belonging to real-time traffic whose value expires after a small amount of time. On the other hand, throughput-sensitive traffic cannot bear cell loss and hence all cells in transit during handoff have to be buffered.

3.8.6 Location Management

In a WATM network, the MTs are mobile and move from one BS to another over a period of time. Therefore, in order to enable communication with them some methods need to be developed to keep track of their current locations. This process comes under the purview of location management (LM). The following are some of the requirements of an LM system [29]:

  1. Transparency: The LM system should be designed in such a manner that the user is able to communicate irrespective of mobility.
  2. Security: The system must guard against unauthorized access to the database of MT addresses and MT locations.
  3. Unambiguous identification: The LM system should be capable of uniquely identifying MTs and their locations.
  4. Scalability: The system must be scalable to various sizes of networks. For doing this efficiently, it should harness the advantages of the hierarchical nature of networks.

Location management needs to address three broad issues:

  1. Addressing: This involves how the various entities such as MTs, switches, APs, and BSs are addressed. It specifies the location of the terminal in the network so that, given the address, a route between communicating terminals can be established.
  2. Location updating: This involves how the LM system is notified about the change in an MT's location and how the LM system maintains this information.
  3. Location resolution: This involves how the information maintained by the LM system is used to locate a specific MT.

Location Update

There are several entities that play a role in location update:

  1. Location server (LS): The LS is responsible for maintaining the current location information of the MTs. Each LS is associated with an EMAS and maintains the addresses of MTs.
  2. Authentication server (AuS): The AuS is responsible for maintaining authentication information for each MT. Each AuS is associated with an EMAS. The AuS stores a table of the permanent unique ids of the MTs and the corresponding authentication keys, which the MTs need to supply during communication.
  3. End-user mobility-supporting ATM switch (EMAS): An EMAS maintains the addresses of the LS and AuS associated with it. If it is on the edge of the network, it also maintains the ids of the MTs currently associated with it.

Location Resolution

Location resolution deals with the methodology of obtaining the current location of an MT so that a user can communicate with it. It is important to note that, if the location maintained by the LS is the full address of the MT, that is, down to its current BS, then the LS needs to be updated every time the MT migrates to a new BS. This is inefficient, because the update messages cause substantial bandwidth overhead. A better method is to store the address of the switch in whose domain the MT is currently located; as long as the MT moves within that switch's domain, the LS need not be updated.
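
The saving from switch-level granularity can be sketched as follows; the MT id and switch ids are hypothetical.

```python
class LocationServer:
    """Switch-granularity location tracking: the LS records only the switch
    (EMAS) in whose domain the MT sits, so BS-to-BS moves inside one domain
    generate no update traffic."""

    def __init__(self):
        self.location = {}   # mt_id -> serving switch
        self.updates = 0     # count of update messages received

    def move(self, mt_id, new_switch):
        if self.location.get(mt_id) != new_switch:
            self.location[mt_id] = new_switch
            self.updates += 1  # only inter-domain moves cost an update

ls = LocationServer()
for switch in ["EMAS-1", "EMAS-1", "EMAS-1", "EMAS-2"]:
    ls.move("mt-7", switch)
# Three of the four moves stayed inside EMAS-1's domain, so only two updates.
```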

3.9 IEEE 802.16 STANDARD

Metropolitan area networks (MANs) are networks that span several kilometers and usually cover large parts of cities. These networks are much larger in size than LANs and their functionalities differ markedly from those of LANs. MANs often connect large buildings, each of which may contain several computers. Using fiber, coaxial cables, or twisted pair for interconnection is prohibitively expensive in such scenarios. The usage of broadband wireless access (BWA) for this purpose is an economical alternative which has been explored by researchers as well as by the industry. This led to the requirement of a standard for BWA to be used in wireless MANs (WMANs) and wireless local loops (WLLs). IEEE 802.16, which is officially called air interface for fixed broadband wireless access systems, was the result of the standardization efforts.

IEEE 802.16 is based on the OSI model, as shown in Figure 3.8 [30]. IEEE 802.16 specifies the air interface, including the data link layer (DLL) and the physical layer, of fixed point-to-multipoint BWA systems. The DLL is capable of supporting multiple physical layer specifications optimized for the frequency bands of the application. Base stations (BSs) are connected to public networks. A BS serves several subscriber stations (SSs), which in turn serve buildings. Thus the BS provides the SS with a last-mile (or first-mile) access to public networks. It may be noted that BSs and SSs are stationary. The challenge faced by this standard was to provide support for multiple services with different QoS requirements and priorities simultaneously.

Figure 3.8. IEEE 802.16 protocol stack.

image

3.9.1 Differences Between IEEE 802.11 and IEEE 802.16

While IEEE 802.11 has been a successful standard for WLANs, it is not suited for use in BWA. This fact can be appreciated when the differences between IEEE 802.11 and IEEE 802.16, listed below, are studied.

• IEEE 802.11 has been designed for mobile terminals, whereas mobility is irrelevant in the context of fixed BWA; IEEE 802.16 has been designed for broadband data such as digital video and telephony.

• The number of users and the bandwidth usage per user are much higher in a typical IEEE 802.16 network than in a typical IEEE 802.11 basic service set. This calls for the usage of a larger frequency spectrum in IEEE 802.16, as against the ISM bands used by IEEE 802.11. BWA typically uses millimeter wave and microwave bands (frequencies above 1 GHz).

• IEEE 802.16 is completely connection-oriented, and QoS guarantees are made for all transmissions. Though IEEE 802.11 provides some QoS support for real-time data (in the PCF mode), it has not been designed to support QoS for broadband usage.

3.9.2 Physical Layer

The physical layer uses traditional narrow-band radio (10-66 GHz) with conventional modulation schemes for transmission. Above the physical transmission layer, there is a convergence sublayer that hides the transmission technology from the DLL. Efforts are underway to add two new protocols, IEEE 802.16a and IEEE 802.16b, at the physical layer, which attempt to bring IEEE 802.16 closer to IEEE 802.11. While IEEE 802.16a operates in the 2-11 GHz frequency range, IEEE 802.16b operates in the 5 GHz ISM band.

The signal strength of millimeter waves falls off sharply with distance from the BS, which results in a reduction in the signal to noise ratio (SNR). To account for this, the following three modulation schemes are used. The modulation scheme to be used is chosen depending on the distance of the SS from the BS.

  1. QAM-64, which offers 6 bits/baud, is used by subscribers that are located near the BS.
  2. QAM-16, which offers 4 bits/baud, is used by subscribers that are located at an intermediate distance from the BS.
  3. QPSK, which offers 2 bits/baud, is used by subscribers that are located far away from the BS.

If we assume 30 MHz of spectrum, QAM-64 offers 180 Mbps, QAM-16 offers 120 Mbps, and QPSK offers 60 Mbps. It is apparent that subscribers farther away get lower data rates. It may also be noted that millimeter waves travel in straight lines, unlike the longer microwaves. This allows BSs to have multiple antennas, which point at different sectors of the surrounding terrain. The high error rates associated with millimeter waves have called for the usage of Hamming codes to do forward error correction in the physical layer. This is in contrast to most other networks where checksums detect errors and request retransmission of frames that are in error. The physical layer can pack multiple MAC frames in a single physical transmission to gain improved spectral efficiency, because of the reduction in the number of preambles and physical layer headers.
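
The rates quoted above follow directly from the bits/baud figures, assuming the modulator achieves one baud per hertz of spectrum:

```python
def data_rate_mbps(spectrum_mhz, bits_per_baud):
    """Peak data rate, assuming one baud per hertz of spectrum."""
    return spectrum_mhz * bits_per_baud

# The three IEEE 802.16 modulation schemes over the assumed 30 MHz of spectrum:
rates = {name: data_rate_mbps(30, bits)
         for name, bits in [("QAM-64", 6), ("QAM-16", 4), ("QPSK", 2)]}
# rates -> {'QAM-64': 180, 'QAM-16': 120, 'QPSK': 60}  (Mbps)
```

Moving an SS from the near zone to the far zone thus costs a factor of three in peak rate.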

While voice traffic is generally symmetric, other applications such as Internet access have more downstream traffic than upstream traffic. To accommodate this asymmetry, IEEE 802.16 provides a flexible way to allocate bandwidth, using frequency division duplexing (FDD) and time division duplexing (TDD). The bandwidth devoted to each direction can be changed dynamically to match the traffic in that direction.

3.9.3 Data Link Layer

The DLL was designed with the wireless environment in mind, which demands an efficient usage of the spectrum. Broadband services call for very high uplink and downlink bit rates, and a range of QoS requirements. Security issues also assume importance in this scenario. It is preferred that the DLL be a protocol-independent engine, that is, the DLL should have convergence layers for all protocols including ATM, IP, and Ethernet. The DLL of IEEE 802.16 was designed to meet all these requirements. The DLL of IEEE 802.16 can be subdivided into three sublayers, whose functionalities are explained in this section. The sublayers are listed from the bottom up.

  1. Security sublayer: This is the bottom-most sublayer, which deals with privacy and security. This is crucial for public outdoor networks, where transmissions can be overheard across a city. This layer manages encryption, decryption, and key management. It may be noted that only the payloads are encrypted; the headers are kept intact. This means that a snooper can identify the participants in a transmission, but cannot read the data being transmitted.
  2. MAC sublayer common part: This is the protocol-independent core, which deals with channel management and slot allocation to stations. Here the BS controls the system. MAC frames are integral multiples of physical layer time slots. Each frame contains the downstream (BS to SS) and upstream (SS to BS) maps, which indicate the traffic in the various time slots and other useful information. The MAC sublayer strikes a trade-off between the stability of contention-less operation and the efficiency of contention-based operation, using a TDM/TDMA mechanism. On the downlink, data to the SS is multiplexed using TDM, and on the uplink, the medium is shared by the SSs using TDMA.

    All services offered by IEEE 802.16 are connection-oriented, and each connection (uplink) is given one of the following four classes of service:

    1. Constant bit rate service: This is intended for transmitting uncompressed voice, where a predetermined amount of data is generated at fixed time intervals. Certain time slots are dedicated to each connection of this type and they are available automatically without explicit request.
    2. Real-time variable bit rate service: This is intended for compressed multimedia and soft real-time5 applications, where the amount of bandwidth required may vary with time. To accommodate such variances, the BS polls the SS periodically to query the bandwidth needed for the following period.

      5 This refers to the category of real-time traffic where the missing of deadlines results in non-catastrophic events such as degradation of the quality of communication.

    3. Non-real-time variable bit rate service: This is intended for large file transfers and other such heavy transmissions that are not real-time. To accommodate them, the BS polls the SS often, but not periodically.
    4. Best effort service: All other applications contend for best effort service. Polling is absent and SSs contend by sending requests for bandwidth in time slots marked in the upstream map as available for contention. Collisions are reduced by using the binary exponential back-off algorithm.
  3. Service-specific convergence sublayer: This is the topmost sublayer in the DLL; its function is to interface to the network layer, a role similar to that of the logical link control sublayer in other 802 protocols. IEEE 802.16 has been designed to integrate seamlessly with both connection-less protocols such as PPP, IP, and Ethernet, and connection-oriented protocols such as ATM. While mapping ATM connections onto IEEE 802.16 is quite straightforward, mapping packets onto IEEE 802.16 connections is done judiciously by this sublayer.

    A request/grant scheme is used for handling bandwidth allocation. Bandwidth requests are always made per connection, but bandwidth grants may be made per connection (GPC) or per SS (GPSS). GPSS is suitable when there are many connections per SS; it offloads work from the BS, and the SS redistributes the granted bandwidth among its connections while maintaining QoS and service-level agreements. This method allows sophisticated QoS guarantees with low overhead, but requires complex SSs. GPC is suitable when there are only a few users per SS; the BS grants bandwidth to each connection individually, which incurs a higher overhead but allows a simpler SS.
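
The binary exponential backoff used for best-effort bandwidth requests can be sketched as follows. The window limits `cw_min` and `cw_max` are illustrative assumptions, not values from the standard.

```python
import random

def contention_window(attempt, cw_min=8, cw_max=1024):
    """Window size after `attempt` consecutive collisions: it doubles each
    time, capped at cw_max. (cw_min/cw_max values are illustrative only.)"""
    return min(cw_min * (2 ** attempt), cw_max)

def backoff_slots(attempt, rng=random.Random(42)):
    """An SS defers a random number of contention request slots, drawn
    uniformly from the current window, before retrying its request."""
    return rng.randrange(contention_window(attempt))

# The window doubles after each collision, up to the cap.
windows = [contention_window(a) for a in range(9)]
```

Doubling the window spreads repeated requests over more slots, so the collision probability drops as contention persists.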

3.10 HIPERACCESS

The HIPERACCESS standard of the ETSI BRAN (discussed in the previous chapter) pertains to broadband radio access systems [4] and is the European counterpart to the IEEE 802.16 standard. It uses fixed bidirectional radio connections to convey broadband services between users' premises and a broadband core network. HIPERACCESS systems are the means by which residential customers and small- to medium-sized enterprises can gain access to broadband communications delivered to their premises by radio. They provide support for a wide range of voice and data services and facilities. They use radio to connect the premises to other users and networks, and offer "bandwidth on demand" to deliver the appropriate data rate needed for various services. These systems are intended to compete with other broadband wired access systems such as xDSL and cable modems.

HIPERACCESS network deployments can potentially cover large areas. Owing to the large capacity requirements of the network, millimeter wave spectrum is employed, which limits the transmission range to a few kilometers. Hence, a typical network consists of a number of cells, each operating in a point-to-multipoint (PMP) manner; each cell consists of access point (AP) equipment located approximately at the cell center and a number of access termination (AT) devices spread across the cell. The cell is divided into four sectors to increase the spectral efficiency by reusing the available radio frequency (RF) channels in a systematic manner within the deployment region. The protocol stack of the HIPERACCESS standard consists of the physical layer, the convergence layer, and the data link control (DLC) layer.

3.10.1 Physical Layer

The physical layer involves adaptive coding of the data obtained from the DLC layer, transmission of data, and support of the different duplex schemes, namely, frequency division duplexing (FDD), half-FDD (H-FDD), and time division duplexing (TDD). The AP equipment handles more than one RF channel and more than one user (AT), hence its architecture is different from that of the AT.

Modulation

Modulation techniques employed are based on QAM, much along the lines of IEEE 802.16 (with a 2^M-point constellation, where M is the number of bits transmitted per modulated symbol). For the DL, QPSK (M = 2) and 16-QAM (M = 4) are mandatory and 64-QAM (M = 6) is optional. For the UL, QPSK is mandatory and 16-QAM is optional.
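
The relation between M and the constellation size can be checked with a one-liner:

```python
def constellation_points(m):
    """A QAM constellation carrying m bits per modulated symbol has 2^m points."""
    return 2 ** m

# DL PHY options: QPSK (M = 2), 16-QAM (M = 4), 64-QAM (M = 6).
sizes = [constellation_points(m) for m in (2, 4, 6)]
# sizes -> [4, 16, 64]
```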

PHY-Modes

A PHYsical (PHY) mode comprises a modulation scheme and a coding (FEC) scheme. Several sets of PHY-modes are specified for the DL. The reason for specifying different sets of PHY-modes is to offer higher flexibility in HIPERACCESS deployments, where the appropriate set of PHY-modes is determined by the parameters of the deployment scenario, such as coverage, interference, and rain zone. The Reed-Solomon coding scheme is generally employed for coding the data stream.

Duplex Schemes

As the communication channel between the AP and the ATs is bidirectional, DL and UL paths must be established utilizing the spectrum available to the operator in an efficient manner. Two duplex schemes are available: one frequency-domain-based and one time-domain-based. FDD partitions the available spectrum into a DL block and an UL block. In HIPERACCESS, the DL and UL channels are equal in size, each 28 MHz wide. In the H-FDD case, the AT radio equipment is limited to half-duplex operation to reduce its cost. The AP in this case takes into account the fact that switching from transmission to reception (and vice versa) at the AT is not instantaneous. It is emphasized that half-duplex operation is an AT feature only; employing it affects both the deployment cost and the system capacity. In contrast to FDD, TDD uses the same RF channel for DL and UL communications. The DL and UL transmissions are established by time-sharing the radio channel, such that DL and UL transmission events never overlap. In HIPERACCESS, the channel is 28 MHz wide, as in the FDD case. The AP establishes a frame-based transmission and allocates a portion of its frame for DL purposes and the remainder for UL purposes.

3.10.2 Convergence Layer

The task of the convergence layer is to adapt the service requirements of the higher layer applications to the services offered by the DLC layer. There are two types of convergence layers, namely, the cell-based convergence layer and the packet-based convergence layer. This classification is similar to the one discussed in the HIPERLAN/2 standard. The convergence layer comprises two parts, namely, the service-independent common part (CP) and the service-specific convergence sublayer (SSCS). This classification is similar to that discussed for the data link layer of IEEE 802.16.

3.10.3 DLC Layer

The basic features of the DLC layer are efficient use of the radio spectrum, high multiplex gain, and maintenance of QoS. Multiplexing schemes are employed to make better use of the available frequency spectrum at a lower equipment cost. Multiple access differs from multiplexing in that every subscriber has access to every channel, rather than receiving a fixed assignment as in most multiplex systems.

There are broadly two kinds of transmissions: uplink (UL) transmission (from AT to AP) and downlink (DL) transmission (from AP to AT). For the AP to control the access of ATs, TDMA is employed. UL transmission events of the ATs are scheduled by the AP that controls them. The DL data stream is multiplexed in the time domain (TDM). Each TDM region (part of the DL frame) is assigned a specific physical mode (consisting of coding and modulation schemes). The TDM regions are allocated in a robustness-descending order; for example, an AT with excellent link conditions, which is assigned to a spectrally efficient physical mode, starts its reception process at the beginning of the frame and continues through all TDM regions, ending the reception process with its associated TDM region. An AT with worse link conditions will be assigned to a more robust physical mode and its reception process will end before the AT of the previous example.
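
The robustness-descending layout can be sketched as a simple sort; the AT names and bits/symbol assignments below are illustrative.

```python
def order_tdm_regions(phy_modes):
    """Lay out DL TDM regions in robustness-descending order. phy_modes maps
    an AT id to the bits/symbol of its assigned PHY mode; fewer bits/symbol
    means a more robust mode, so those regions come first. Each AT then
    receives from the start of the frame up to and including its own region."""
    return sorted(phy_modes, key=lambda at: phy_modes[at])

# Illustrative ATs: the cell edge uses QPSK (2 bits/symbol), mid-range uses
# 16-QAM (4), and an AT near the AP uses 64-QAM (6).
order = order_tdm_regions({"AT-near": 6, "AT-mid": 4, "AT-edge": 2})
# order -> ['AT-edge', 'AT-mid', 'AT-near']
```

An AT can thus stop decoding once its own region ends, which is why the AT with the worst link finishes reception earliest.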

In addition to the DL TDM region, there could be TDMA transmissions present in a TDMA region on the DL. In this scheme, an AT may be assigned to receive DL transmissions either in a TDM region or in a TDMA region. With this option, the AT may seek DL reception opportunities immediately after it has stopped its UL transmission within the current DL frame.

The DLC layer is connection-oriented to guarantee QoS. Connections are set up over the air during the initialization of an AT, and additional new connections may be established when new services are required.

Radio Link Control (RLC) Sublayer

The RLC sublayer performs functions pertaining to radio resource control, initialization control, and connection control.

Radio resource control: This comprises all mechanisms for load-leveling, power-leveling, and change of physical modes. These functions are specific to each AT.

Initialization control: This includes functions for the initial access and release of a terminal to or from the network as well as the reinitialization process required in the case of link interruptions. These mechanisms are AT-specific.

DLC connection control: This includes functions for the setup and release of connections and connection aggregates (groups of connections). These functions are usually connection-specific.

The ARQ protocol is implemented at the DLC level. It is based on a selective repeat approach, where only the erroneously received PDUs are retransmitted.

3.11 SUMMARY

Cellular systems offer city-wide or country-wide mobility, and may even provide compatibility across certain countries, but they are not truly global at present. Satellite-based systems offer mobility across the world. A WLL system aims at support for limited mobility, at pedestrian speeds only, with the main purpose being a cost-effective alternative to other wired local loop technologies. Therefore, it does not require elaborate diversity mechanisms, whereas frequency and spatial diversity are essential to cellular networks. The main purpose of broadband wireless networks is to offer huge bandwidth for applications such as multimedia and video-on-demand.

An increasing demand for mobile communications has led to efforts for capacity and QoS improvement. Superior algorithms are used for dynamic channel allocation, in cellular, WLL, and satellite networks. Location management techniques have been streamlined to involve minimal and quick database access. Strategies have been devised to minimize the handoff delay and maximize the probability of a successful handoff. The main standards and implementations have been discussed for the different kinds of wireless communication systems. It has been observed that there are problems in reaching a global consensus on common standards for cellular networks and WLL, due to large investments already made in these networks employing different standards in different countries.

The mobile communication systems of the future aim at low cost, universal coverage, and better QoS in terms of higher bandwidth and lower delay. The ultimate goal is to provide seamless, high-bandwidth communication through a single, globally compatible handset. This chapter also described the various issues in the design of a WATM network. As wireless technology and devices become increasingly popular, users will expect better QoS and reliability, and WATM is one of the more promising ways of satisfying these needs. Special mechanisms are needed to handle handoffs, to reroute connections after a handoff, and to perform location management. This chapter also described the IEEE 802.16 and HIPERACCESS standards, which represent the state of the art in broadband wireless access. Table 3.4 compares the technical features of IEEE 802.11b WLANs, IEEE 802.16 WMANs, and GSM WWANs.

Table 3.4. A brief comparison among IEEE 802.11b WLANs, IEEE 802.16 WMANs, and GSM WWANs

image

3.12 PROBLEMS

  1. An alternative to the cellular system is the walkie-talkie system, which provides a direct link between mobile terminals. Compare the two systems.
  2. How do you separate the different layers (macro, micro, and pico) of a cellular network in order to avoid co-channel interference across layers?
  3. How does frequency reuse enhance cellular network capacity? Consider an area of 1,000 sq. km to be covered by a cellular network. If each user requires 25 kHz for communication, and the total available spectrum is 50 MHz, how many users can be supported without frequency reuse? If cells of area 50 sq. km are used, how many users can be supported with cluster sizes of 3, 4, and 7? Besides the number of users, what other major factor influences the decision on cluster size?
  4. A particular cellular system has the following characteristics: cluster size = 7, uniform cell size (circular cells), user density = 100 users/sq. km, allocated frequency spectrum = 900-949 MHz, bit rate required per user = 10 kbps uplink and 10 kbps downlink, and modulation code rate = 1 bps/Hz. Calculate the average cell radius for the above system if FDMA/FDD is used.
  5. Using the same data as in the previous problem, if TDMA/TDD is adopted, explain how the wireless medium is shared.
  6. Due to practical limitations, it is impossible to use TDMA over the whole 3.5 MHz spectrum calculated in Problem 4. Hence the spectrum is divided into 35 subchannels, and TDMA is employed within each subchannel. Answer the following questions assuming the data and results of Problem 4.
    1. How many time slots are needed in a TDMA frame to support the required number of users?
    2. To have negligible delays, the frame is defined as 10 ms. How long is each time slot?
    3. What is the data rate for each user? How many bits are transmitted in a time slot?
    4. If one time slot of each frame is used for control and synchronization, and the same cell radius is maintained, how many users will the whole system support?
    5. How long will it take for a signal from an MT farthest away from the BS to reach the BS?
    6. The TDMA slots must be synchronized in time at the BS, but different MTs are at different distances from the BS and hence experience different propagation delays to it. If all the nodes have synchronized clocks and each MT begins transmitting its slot at time t = t0, what guard time must be left in each frame so that data will not overlap in time?
    7. Suppose the clock synchronization error can be ±10 ns. What guard time is needed to ensure uncorrupted data? What is the maximum data rate?
    8. Suppose the previous guard time is not acceptable for our system and that the clocks are not synchronized at all. The MTs have a mechanism to advance or delay transmission of their data, and the delay will be communicated to the MTs using the control channel. What can the MTs use as a reference to synchronize their data? How can a BS calculate the delay for each MT?
  7. Briefly explain what happens to the cell size if each user needs to double the data rate.
  8. What are the power-conserving strategies used in cellular systems?
  9. A vehicle moves on a highway at an average speed of 60 km/h. In city traffic, the average speed drops to 30 km/h. The macro-cell radius is 35 km, and the micro-cell radius is 3 km. Assume the macro-cellular layer is used on highways and the micro-cellular layer in the city.
    1. How many handoffs are expected over a journey of six hours on a highway?
    2. How many handoffs are there in a one-hour drive through the city?
    3. What would happen if there were absolutely no traffic and the vehicle could move at 70 km/h in the city?
    4. What does this show about deciding which layer should handle a call?
  10. The Internet is fast becoming a dominant form of communication. How does this affect the local loop?
  11. Why are the ETSI recommended bit error rates lower for data than voice?
  12. Consider a WLL system using FDMA and 64-QAM modulation. Each channel is 200 kHz wide and the total bandwidth is 10 MHz, used symmetrically in both directions. Assume that the BS employs 120-degree beam-width antennas. (The spectral efficiency of 64-QAM is given as 5 bps/Hz.) Find:
    1. The number of subscribers supported per cell.
    2. The data capacity per channel.
    3. The data capacity per cell.
    4. The total data capacity, assuming 40 cells.
  13. Frequency allocation is done automatically (dynamically) by DECT, PACS, and PHS. The alternative is manual assignment. How is manual (static/fixed) assignment of frequencies done?
  14. Compare the use of satellite systems for last-mile connectivity to rural areas, with cellular and WLL systems.
  15. GSM requires 48 bits for synchronization and 24 bits for guard time. Compute:
    1. The size of a WATM packet.
    2. The size of a WATM request packet if the request contains only MT ID of 1 byte.
    3. Number of mini-slots per slot.
  16. Suppose that in the case of TDD protocols the frame has a fixed period T, and that the RTT between an MT and the BS is t. Calculate how long an MT waits before it receives the ACK for its request, under the TDD and FDD schemes. Assume that the number of packets the BS sends is only 70% of what the MTs send to the BS. Compute the percentage of used slots for FDD and for an adaptive TDD.
  17. In location management, it was said that the home base EMAS-E redirects the connection to the current location of the MT. Using a scheme similar to partial path rerouting, describe a way of doing this.
  18. In the case of an inter-EMAS-E forward handoff, unlike the backward handoff case, the destination EMAS-E has to notify the old EMAS-E about the handoff request from the MT. Why is this so?
  19. Why is the route generated by the path splicing scheme at least as good as (if not better than) that generated by the partial path rerouting scheme?
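Several of the problems above (for instance, Problems 3 and 12) turn on the same frequency-reuse arithmetic: each cluster reuses the entire spectrum, so per-cell capacity shrinks with the cluster size N while total capacity grows with the number of cells. The following is a back-of-the-envelope sketch of that calculation using Problem 3's figures; the helper name is ours and not part of any standard.

```python
# Frequency-reuse capacity sketch (cf. Problem 3). Each cluster is
# allotted the whole spectrum; within a cluster the spectrum is split
# among `cluster_size` cells, and every cell in the coverage area
# carries its share of channels simultaneously.

def users_with_reuse(total_bw_hz, per_user_bw_hz, area_sqkm,
                     cell_area_sqkm, cluster_size):
    channels_total = total_bw_hz // per_user_bw_hz   # without any reuse
    channels_per_cell = channels_total // cluster_size
    num_cells = area_sqkm // cell_area_sqkm
    return int(num_cells * channels_per_cell)

# Problem 3 numbers: 50 MHz spectrum, 25 kHz per user,
# 1,000 sq. km covered by 50 sq. km cells.
for n in (3, 4, 7):
    print(n, users_with_reuse(50_000_000, 25_000, 1_000, 50, n))
```

Note that without reuse the whole area supports only `total_bw_hz // per_user_bw_hz` users, whereas with reuse every cell contributes its per-cell share, which is why smaller cluster sizes (subject to acceptable co-channel interference, the "other major factor" of Problem 3) yield higher capacity.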

