Chapter 14
Spare Capacity Monetization by Opportunistic Content Scheduling

BELL LABS and ALCATEL-LUCENT

14.1 Summary

In recent years, mobile data traffic has been growing at an exponential rate, with traffic volume doubling every year and some studies predicting more than an 18-fold increase in data traffic over the next 5 years [1]. This rapid increase in traffic is making it necessary to grow network capacity at a much faster rate than ever before. Unfortunately, data revenue is growing much more slowly than traffic, raising the specter that at some point capacity costs may begin to outweigh revenues, greatly lowering the incentive for capital investment. At the same time, highly uneven usage by data users has resulted in significant network inefficiencies, whereby even congested networks are heavily underutilized most of the time. In this chapter, we present PLUTUS, a capacity monetization system that taps this stranded network capacity for additional revenue and enhanced network efficiency.

Many factors have contributed to the mobile data tsunami, including the introduction of smartphones along with a wide variety of multimedia-focused mobile applications and the increase in performance of mobile technology as it evolved from 2G to 4G. The industry's bold but short-lived experimentation with unlimited data plans allowed subscribers armed with powerful devices to unleash an onslaught of uncontrolled traffic. The resulting data usage patterns are chaotic at best, creating random, intermittent bottlenecks and congested hotspots spread across time and space. This has created a situation where, even at low average utilization, the peak network load can be very high. As CAPEX decisions are mostly driven by peak load, this has driven up the cost of carrying data over the mobile network. Operators have responded by putting in place tiers, caps, and throttling in order to regulate bandwidth demand. However, by overpricing data and overly restricting usage, operators run the risk of stifling the demand for mobile data, thus losing the opportunity to generate higher revenue from increasing data usage.

There is growing recognition on the part of operators that they must tap into the abundant unused spare capacity in their networks, both to create new sources of revenue and to keep data delivery costs low. According to some studies [2, 3], the ratio of peak load to average utilization in mobile networks can be as high as four, with most of the traffic carried during a few busy hours while the network stays underutilized the rest of the time. As the capacity that sits idle during underutilized periods is already a sunk cost for the operator, additional traffic can be carried during such periods at negligible marginal cost. As reported in [1, 4], every other bit carried on mobile networks is now video, with more than 25% coming from YouTube alone. As audio and video traffic involves downloading large files or chunks thereof (e.g., using progressive download or adaptive streaming), it can easily be time shifted to periods of low network utilization. Likewise, content recommended on social networks (e.g., Facebook) and media sites (e.g., Netflix, Pandora) that has a high chance of being watched could be preloaded and cached onto users' devices during off-peak times. In addition, the abundant capacity of WiFi and small cell networks can also be tapped for carrying time-shifted traffic. Operators can, therefore, start generating additional revenue by selling their unused capacity at a relatively low price as well as increase network efficiency by encouraging subscribers to shift their usage to appropriate times on available networks. This controlled rearrangement of data usage patterns can reduce the peak-to-average ratio, ideally flattening the load curve, and thus slow the rate of network expansion by accommodating more traffic without investing in capacity enhancements. Similar ideas have been tried in other industries: smart electric grids [5, 6] reduce peak load by shifting demand toward off-peak hours when electricity is cheaper, road congestion is alleviated by the use of tolls, and yield management [7] approaches are practiced in the airline, hotel, and other industries for increased efficiency.

In this chapter, we present the design of PLUTUS, a capacity monetization and network efficiency enhancement system. PLUTUS lets the operator realize extra revenue by delivering additional data using the unused capacity of diverse access networks. The system is best suited for the delivery of large files (multimedia content, app updates, etc.) that constitute the majority of the traffic on the mobile network and are also the biggest source of congestion. A data file marked for delivery using PLUTUS is opportunistically transferred at the right time using the best available network. This includes dynamically resuming or suspending data transfers in reaction to the availability of unused capacity, as well as seamless network switchovers that maintain session continuity. In addition, PLUTUS data transfers are tightly controlled to make the best use of the spare capacity by evenly spreading out the network load. In PLUTUS, data items transferred to the user's handset are cached locally in device storage. This enables local playback of multimedia content, providing a high-quality experience.

As 80% of the mobile network cost is in the radio access network (RAN), which is also the most capacity constrained portion of the network, monetizing the spare capacity in RAN can bring the most benefit. As reported in previous studies [2, 3], the cell loads fluctuate unpredictably and rapidly at the granularity of minutes and there may not be much correlation among cells in different locations. The PLUTUS system is designed for fast identification and reaction to changes in available capacity at the cell/sector/carrier level without introducing significant signaling or data overheads, which has been a challenging problem [8–10]. In addition, the PLUTUS system dynamically and quickly adapts to changes in the traffic patterns to avoid creating any new peaks or congestion in the network. All this is accomplished while factoring in the mobility of users.

In order to increase users' confidence and gain widespread acceptance, the PLUTUS system also provides reasonable estimates of the additional delays introduced by time shifting. These estimates are derived from data collected on the historic patterns of users' mobility, network access, and usage. In order to stay within the estimated delays, a combination of offline and online controlled scheduling of data transfers that adapts dynamically to changes in network and user state is applied. This scheduling also aims to minimize battery drain and to avoid tying up scarce RAN resources with data transfers that could stretch out over long periods when there is limited capacity to share. By preloading and locally caching content in advance of consumption, PLUTUS can further reduce peak loads and also enhance the user experience. The PLUTUS system also addresses efficient ways of interfacing with existing applications and services that want to take advantage of the spare capacity in the network. The rest of the chapter is structured as follows. We start by surveying the related work in this space. Next, we describe the main ideas used in the PLUTUS system, including how pricing plans can be designed to give the right incentives to users while ensuring that operators are able to profitably monetize the spare capacity in their networks. Then we present the design and architecture of the PLUTUS system. Finally, we present implementation details and performance results from various trials with subscribers and operators around the world.

14.2 Background

Pricing plans for wireless data have seen a number of changes over the past few years. First, to drive up user adoption, the expensive metered data plans were replaced by the popular unlimited all-you-can-eat plans. As data traffic was initially not very high, these plans generated good revenue for the Internet service providers (ISPs). In recent years, however, these plans have contributed to the exponential growth in data traffic, making it unprofitable for the service providers to continue offering them. The ISPs have responded by replacing them with tiered and sometimes usage-based plans [11, 12]. Some of these plans include surcharges for exceeding tier limits while others throttle the data transfers once the limit is crossed. ISPs are also trying other ways of charging for data, namely, based on time [13] and application [14].

Data pricing has been an active area of research, with many ideas influenced by pricing work in other industries (e.g., utilities, road transportation, airlines). As early as the 1950s, Houthakker [15], Steiner, and others proposed dynamic pricing schemes where prices for peak periods are higher than for off-peak periods and traffic is charged based on when it traverses the network. Odlyzko [16] has proposed static partitioning of the network resources into separate logical traffic classes that are charged differently. The interested reader is referred to Sen et al. [17] for a comprehensive survey of the large body of work in this area. Unfortunately, so far there has been little adoption by ISPs of any of these proposed pricing schemes. However, this is likely to change as ISPs must implement efficient schemes to manage their growing traffic costs and remain profitable. We believe that the key to adoption is a pricing scheme that can not only be easily implemented by the ISP but is also predictable and simple for the user. This is the motivation behind the approach taken in this chapter.

The ability to quickly detect and react to congestion is key to efficient monetization of spare capacity. Many schemes have been proposed for wireline networks [18], but there is limited work for wide area broadband wireless networks. This is because the shared nature of the transmission channel, interference from external sources, and the use of rate adaptation algorithms (e.g., proportional fair) make it a challenging problem. It has been proposed that channel utilization may be a better measure of congestion in wireless networks [19]. Solutions to determine channel utilization are based on both active probing [8, 9, 19, 20, 35] and passive probing [10]. We describe a client-based active probing approach that we have developed for PLUTUS.

ISPs are also implementing other solutions for managing traffic growth, including video optimizations (e.g., transcoding, transrating, pacing, adaptive streaming), offloading (to WiFi, small cells), upgrading to more efficient IP and 4G technologies, and so on. These solutions are orthogonal to PLUTUS: their use does not eliminate the need for time shifting, and, when used in conjunction with PLUTUS, they can achieve even greater network utilization. A detailed discussion of these solutions, however, is out of the scope of this chapter.

14.3 The PLUTUS Approach

The PLUTUS system enables the operator to deliver additional data using capacity that is already built into the network and is not currently in use. The main idea is to classify data transfers into one of two classes, the “Any Time Data” (ATD) class for transfers that can happen at any time irrespective of the state of the network and the “Surplus Data” (SD) class for delivery using capacity that is not in use by the ATD class (however, some minimum capacity may be reserved for the SD class). Note that the ATD class corresponds to the present mode of operation without any time shifting, while the SD class represents data transfers at opportunistic times when the network has unused capacity or when other networks (e.g., WiFi, small cells) are available. The SD class data transfers may, therefore, incur additional delays but have negligible marginal cost for delivery and, therefore, can include price discounts to incentivize user adoption. The PLUTUS system manages all aspects of delivery for data items in the SD class including classification, opportunistic scheduling, temporary or long-term caching, differential accounting and charging, and so on. The ATD class data transfers, however, continue to operate as before.

The typical use case for PLUTUS is as follows. An application or user uses PLUTUS to issue an SD class data transfer request at any time, irrespective of the loading state of the network. On submission, a delivery estimate is provided, followed by status updates and notifications as the data is delivered. In that respect, PLUTUS bears resemblance to the “Digital FedEx Service” [21] proposal for deeply discounted delivery of content using spare network capacity, which in turn is inspired by courier-based delivery of physical goods scheduled to happen by a fixed set of deadlines. The PLUTUS system fully manages the data delivery, including transfers at times when the network has unused capacity and scheduling to evenly spread out the network load. PLUTUS suspends or resumes the data delivery in reaction to changes in the availability of unused capacity, and it caches the transferred data locally at the receiver. In PLUTUS, transferred data is consumed directly from the local cache. PLUTUS uses a progressive mode of delivery and can let the user start using the data before it is fully transferred (e.g., for multimedia content playback). Unlike existing pricing approaches [16, 23], which only provide price incentives to time shift usage and offer limited guarantees on the user experience during or after the time shift, PLUTUS can ensure a high quality post-delivery user experience that is not impacted by network impairments [22], with delivered data available for use even when not connected to the network. The PLUTUS system is, therefore, very well suited for the delivery of multimedia clips and files, e-mail attachments, application and software updates, and so on, which constitute a significant portion of the traffic on the mobile network [4].

With PLUTUS, the network can be thought of as being partitioned into separate logical channels that differ only in the price paid by the users to use them. The higher priced channel, referred to as the ATD channel, retains commonly used pricing structures that are currently in place for mobile networks (flat, tiered, etc.). The discounted channel, referred to as the SD channel, offers discounts over the ATD channel. A typical SD channel pricing offering may include a tiered data plan at a discount over an identical ATD plan, or a zero-rated media application for a fixed monthly subscription such that the user is not charged for any data usage within the application, all of which is carried over the SD channel. Thus PLUTUS maintains the simplicity of logical channels, each with relatively constant and easily understood pricing. This is in contrast to much of the recent work [23] that relies heavily on incentives that are time and space dependent. We believe that staying with pricing plans that users are already comfortable with, as well as not requiring significant enhancements to the network accounting and charging systems, can make for easier adoption. Section 14.3.1 describes a more formal mathematical model for setting the discounts on the SD channel.

PLUTUS uses priority as one of the mechanisms for creating logical channels: data transfers over the SD channel are carried out using only the capacity that is not in use by the ATD channel. The SD channel capacity is, therefore, not fixed but can exhibit wide temporal and spatial variations depending on the usage on the ATD channel. This means that, left to themselves, SD channel data sessions may not only experience wide throughput variations but also suffer periods of interruption when no capacity is available. In a modified form, some minimum capacity may be reserved for SD sessions, thereby partitioning network capacity between the SD and the ATD channels. In the rest of the chapter, we mostly focus on the priority-based approach and, where needed, point out the extensions for the capacity partitioning approach.

Even with time shifting, the network can quickly become unusable if the traffic load shifted from the peak gets concentrated and creates new peaks. If this happens, then pricing mechanisms will not be very effective in fully exploiting the available spare capacity in the network. PLUTUS solves this problem by load balancing the traffic on the SD channel and by caching the transferred data in the local cache, thus avoiding loading the network at the time of consumption. Load balancing is managed using an efficient delay- and interruption-tolerant protocol and via special proxies that tightly schedule the transfer of data over the SD channel. This scheduling also minimizes the amount of time that sessions are kept radio active, which reduces battery drain on the user's device and avoids unnecessarily tying up network resources during the transfers. In addition, because data is routed via proxies, it can be closely tracked, identified, (de)prioritized, and differentially charged by the network.

The pricing mechanisms considered in this chapter bear close resemblance to the Paris Metro Pricing (PMP) proposal [16] that uses price differentiation. As in PLUTUS, the basic idea in PMP is to partition the network into separate logical channels such that the user is charged different prices for using them. The rationale is that the channels with higher prices would stay less congested than those with lower prices, thus providing a better service. PMP's mechanisms for creating logical channels include fixed partitioning of network capacity among channels or giving higher priority to traffic carried over higher priced channels. The biggest advantage of PMP comes from its simplicity. However, PMP is not without its challenges [16]. This is especially so when PMP is used as a mechanism for flattening the peaks and for encouraging the use of spare capacity in mobile networks. First and foremost, the adoption of an approach such as PMP may be limited if users are not satisfied with the even lower quality of service (QoS) of the lower priced channels, especially in mobile networks where capacity is already constrained. Second, such an approach may not be very effective in managing congestion because multimedia content, which is a significant portion of the mobile traffic, has high QoS requirements and is, therefore, not well suited for shifting to lower capacity channels. With PLUTUS, on the other hand, the quality of experience stays high, especially for multimedia content, because the data is cached and served from local storage. Unlike PLUTUS, PMP does not include any additional provisions for evenly distributing traffic load. Thus with PMP, the network channels may quickly become unusable because of traffic getting concentrated around (new) peak periods. Other issues with PMP [16] are how to set prices and allocate capacities for the channels and how to ensure predictable performance of the channels. Finally, the interface modifications needed for applications and the network to make use of the different channels are a challenge as well. These challenges are addressed by PLUTUS as we describe in the following text, starting with how to set discounts for the SD channel.

14.3.1 Pricing Model

One of the biggest challenges faced by service providers in pricing data is how to set price-based incentives and discounts. There is the fear that highly discounted plans can end up cannibalizing the premium full-price plans, resulting in lower overall revenue. On the other hand, if the discounts are not substantial, then they may not be effective in driving the shift of traffic over to surplus capacity. In this section, we present a pricing scheme under which the service provider revenue can only increase, no matter how heavily the discounted plans are subscribed. This is possible with our approach because the traffic on the SD channel can be very evenly load balanced and, therefore, any revenue lost from discounts is made up by the stranded capacity that gets released when traffic is moved over to the SD channel. In addition, with the tight control that can be exerted on data transfer delays in our approach, the right amount of segregation can be created between the end-user's experience on data transfers using the SD channel versus the ATD channel. For example, content delivery could be artificially delayed even when there is spare capacity, to make the SD channel less appealing to users who demand high quality. As pointed out by Odlyzko [16] and Deneckere and McAfee [24], such segregation mechanisms, which are widely practiced by many industries (e.g., couriers, high tech), can result in stable performance of the different channels with even load distribution. Furthermore, it has been established [24] that such a mechanism, when used for price discrimination, can be a strict Pareto improvement: it can promote social welfare with strict benefits for all parties (service providers and users).

As mentioned earlier, network cost is driven by peak load, because new capacity is added when the peak load reaches the current capacity. However, even at peak loads, the overall network is heavily underutilized. Typically, the peak-to-average-load ratio $\rho$ is in the range 3–4. This implies that the highest average network utilization $u = 1/\rho$ is mostly between 25% and 35% (assuming that capacity exhaust happens at peak loads corresponding to full network usage). We show that with our pricing scheme, the spare capacity can be discounted by as much as $100(1-u)$ percent while maintaining or increasing the ISP's revenues. Therefore, under our scheme, the spare capacity can be sold at almost one-third the price of any time data in most networks. In reality, the actual discount offered by the ISP may be less, because the ISP would want to set the discount just enough to maximize the overall revenue. We also show how to compute this optimal discount under our scheme.
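To make the revenue argument concrete, the following sketch illustrates the discount bound $100(1-u)$ and a simplified revenue comparison. The assumptions here are ours for illustration and not the chapter's formal model: revenue is proportional to bytes sold, SD bytes have negligible marginal cost, and the peak capacity freed by shifting traffic to the SD channel is fully resold to new premium demand.

```python
# Illustrative sketch (assumed revenue model) of the discount bound and of why
# shifting traffic to the SD channel cannot reduce revenue under these assumptions.
def max_discount_pct(peak_to_avg_ratio):
    u = 1.0 / peak_to_avg_ratio          # average utilization implied by the ratio
    return 100.0 * (1.0 - u)             # discount bound discussed in the text

def revenue_ratio(shift_fraction, discount_pct):
    """Revenue with an SD plan relative to the baseline without one.
    shift_fraction: fraction of premium bytes users move to the SD channel.
    """
    d = discount_pct / 100.0
    kept = 1.0 - shift_fraction           # bytes still sold at full price
    shifted = shift_fraction * (1.0 - d)  # same bytes sold at the discounted price
    resold = shift_fraction               # freed peak capacity resold at full price
    return kept + shifted + resold        # >= 1 for any discount d <= 100%

if __name__ == "__main__":
    for rho in (3.0, 4.0):
        print(rho, round(max_discount_pct(rho), 1))  # 66.7 and 75.0 percent
    print(revenue_ratio(0.4, 70.0))                  # 1.12: revenue still increases
```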

So far we have assumed a single pricing tier for each of the premium and nonpremium classes. In practice, pricing plans may offer multiple tiers (e.g., a monthly data bundle of 5 GB for $50 or 3 GB for $30, etc.), and users may have many possible ways of adopting new pricing plans, such as moving from a higher to a lower tier of the premium class and making up for the reduced data bundle by supplementing it with a nonpremium pricing plan tier. The analysis presented above can also be extended to this more general case; however, we omit the details for lack of space.

The pricing plan presented above can also be offered in a “bundled” mode where instead of offering new surplus capacity plans, the ISP may just increase the amount of data provided to the user at a slightly higher cost. The additional data would be allowed only at off-peak times (over the SD channel). The analysis presented above directly applies to this case by virtue of the logical partitioning of each user into the premium and the nonpremium classes.

14.3.1.1 Other Scenarios

14.3.1.1.1 Network Efficiency Enhancements Without Pricing Plan Changes

With our SD-channel-based delivery approach, network peaks can also be flattened and the user experience enhanced by anticipating, delivering, and locally caching data in advance of usage. This can even be accomplished without price-based incentives as long as there are effective mechanisms for predicting user behavior. Although no prediction mechanism is perfect and, therefore, much of the data preloaded into the local cache may get wasted and not be consumed, this is not a significant problem because the marginal cost of delivering data using spare capacity is negligible. Note that in such a mode of operation, no user behavior change is required (adoption of new pricing schemes or delayed data consumption) and the user may even be oblivious to any such preloading and caching of data.

14.3.1.1.2 Sender-Pays or Two-Sided Pricing

The approach described here can easily accommodate pricing schemes where the content provider is charged for all or a portion of the content delivery. Such pricing models have been tried in the past (e.g., e-book delivery for the Kindle) and are being considered as an equivalent of toll-free service for data traffic. For such pricing mechanisms to be appealing to content providers, the delivery costs, especially for large multimedia files (movies, shows), should be fairly low. This is possible when the content is delivered using surplus network capacity because it has negligible marginal cost. With this approach, the additional benefits are predictable delivery estimates and a higher quality playback experience, as the content is cached locally on the user's device [potentially with digital rights management (DRM) and access controls for premium content]. Also, with PLUTUS, the charging can be simple: the content provider can simply pay according to the size of the delivered data, and PLUTUS can ensure that the user's data plan is not charged for the bytes delivered on the SD channel. This addresses many of the issues with sender-pays plans [25].

14.4 Architecture and Design

There can be wide variations in the SD channel capacity, and therefore, it may not be well suited for session-oriented protocols such as the Transmission Control Protocol (TCP) that time out or exhibit poor performance under impairments. Moreover, keeping sessions “alive” for long periods of time can drain the handset battery quickly and also unnecessarily tie up limited radio resources (e.g., traffic channels). Therefore, the implementation of the SD channel requires more than just strictly prioritizing ATD channel data packets over SD channel packets in the network routers, switches, and RAN elements. The end applications that use the SD channel need to be resilient to session interruptions and must be capable of relaunching sessions after capacity becomes available again. Also, the network must be able to convey capacity availability to the applications, which must always stay ready in standby mode to take advantage of this capacity. The system could, therefore, get complex if each application and the network were to be modified to work this way. A more scalable approach is to enhance the network to provide most of these features, thus minimizing the impact on individual applications. This motivates our design for the PLUTUS system, whose main components are shown in Figure 14.1.


Figure 14.1 Architecture of the PLUTUS system.

The Control and Scheduling Component (CSC) tracks and controls all the data transfers on the SD channel. These data transfers are routed via proxy components [the Server Proxy Component (SPC) and the Client Proxy Component (CPC)] that can suspend or resume data transfers as needed. The Server Monitoring Component (SMC) and Client Monitoring Component (CMC) together track the network state, including the availability of capacity on the SD channel and the availability of other network accesses (e.g., WiFi, Femto), as well as the user's radio link state and device state (battery, storage). The information on alternative network availability is used by the PLUTUS Client Connection Manager (CCM) component for automatically switching data transfers over to the best network. Owing to the possibility of long interruptions, the SD channel data sessions operate mostly in the background, with data caching performed by the Server Caching Component (SCC) and Client Caching Component (CCC). PLUTUS exposes application programming interfaces (APIs) for applications to initiate data transfers over the SD channel, to get notifications on the status of data transfers, and to access their transferred data from the PLUTUS caches. The PLUTUS system has additional Admin Components (AC) for accounting and charging of data transferred over the SD channel and for provisioning of operator policies and user preferences that control the data transfer process. We describe these components in detail in the following sections.

14.4.1 Components

14.4.1.1 The Scheduling Component

The CSC is responsible for computing estimates of the delays for data transfers on the SD channel and for scheduling the data transfers within the estimated delays. Using historic data to predict anticipated user and network dynamics, the CSC constructs a globally efficient, low-delay data transfer schedule. Data transfers are then executed according to this schedule, with changes applied based on the actual system dynamics.

First, the amount of capacity predicted to be available on the SD channel of each cell in each time period, along with the predicted availability of the access points (e.g., cells, WiFi, small cells) to each device during each time period, is computed. This is based on past and current trends in network and device availability, user mobility, device usage, traffic load, and so on. Each data transfer job is then characterized by the deadline by which it needs to finish (its delay estimate), the outstanding data to be transferred (its size), and the set of other jobs that must be processed before it (e.g., based on an ordering specified by the user or their submission time). Next, a global scheduling problem is solved for assigning the (pre-emptive) processing of the data transfer jobs to selected access points at times when the SD channel of the access points is predicted to be available to the respective jobs. It is acceptable to schedule a single data transfer job over several nonconsecutive time slots or different accesses. The objective is to complete jobs by their deadlines such that all the job ordering constraints are enforced. The solution to the scheduling problem is the temporal assignment of jobs, and portions thereof, to the access points and the rates for the underlying data transfers. The solution to the scheduling problem also provides data transfer delay estimates in the case when a priori deadlines are not given.

We now show how to mathematically model and solve this scheduling problem. We assume that the scheduling is performed at the granularity of time slots, each of which is $\tau$ seconds long. In other words, the deadlines for job completion are expressed at the granularity of $\tau$, and within each time slot a job is either scheduled or not. The choice of $\tau$ depends on whether we want the schedule to be more flexible in adapting to dynamic changes ($\tau$ should be small) or we want to reduce the overhead of recomputing the schedule frequently ($\tau$ should be large). In practice, an appropriately chosen $\tau$ can provide a good trade-off between these two requirements.

Let $A$ denote the set of access points (e.g., cells, WiFi, small cells). For an access point $a \in A$, let $T_a$ denote the set of time slots when it is predicted to be available. Let $c_{a,t}$ denote the available capacity in bytes of access point $a$ in time slot $t \in T_a$. Thus $c_{a,t}$ is $\tau$ times the average available rate of access point $a$ in time slot $t$. Let $U$ be the set of users, and let $J_u$ be the list of jobs of user $u \in U$ arranged in the order in which they need to be processed. Each job $j \in J_u$ is characterized by its size $s_j$ in bytes, its arrival time $r_j$, and its deadline $d_j$. Let $p_u(t) \in A$ denote the preferred access point of user $u$ among the access points that are available to user $u$ at time slot $t$. Note that $p_u(t)$ is not defined if user $u$ is not in proximity of any available access point. In case there are multiple such access points, we assume that there is a mechanism to pick the preferred one (e.g., prefer WiFi to 3G).

The schedule is determined by solving a max-flow problem in a graph constructed as follows. The graph has four types of nodes: user nodes ($V_U$), one per user $u \in U$; job nodes ($V_J$), one for each job of each user,

$$V_J = \{\, v_j : j \in J_u,\; u \in U \,\},$$

available time slot nodes ($V_T$), one per access point and time slot in which it is available,

$$V_T = \{\, v_{a,t} : a \in A,\; t \in T_a \,\},$$

and a pair of source and sink nodes $s$ and $z$, respectively. In this graph, the edges and their capacities are as follows. From the source node $s$, there is a directed edge of infinite capacity to each of the nodes in $V_U$. From a node $v_u \in V_U$ that corresponds to user $u$, there is a directed edge to every node $v_j \in V_J$ that corresponds to a job $j$ of user $u$. This edge has capacity set to the size $s_j$ of job $j$. From every node $v_j \in V_J$ that corresponds to the job $j$ of user $u$, there is a directed edge to every time slot node $v_{a,t} \in V_T$ that has the following properties: the time slot $t$ corresponding to $v_{a,t}$ must lie in the interval $[r_j, d_j]$ in which the job is available to be processed, and the node $v_{a,t}$ must correspond to the access point $a = p_u(t)$ that is available to user $u$ in time slot $t$. The capacity of this edge is set to $c_{a,t}$, the available capacity of access point $a$ in time slot $t$. Finally, there is a directed edge of capacity $c_{a,t}$ from every time slot node $v_{a,t}$, which corresponds to time slot $t$ and access point $a$, to the sink node $z$. It can be seen that a feasible flow from $s$ to $z$ in this graph corresponds to a schedule that satisfies all constraints except possibly the constraint on the ordering among the jobs of a user. However, the latter can be easily attained by rearranging the schedule individually for each user so that the jobs are scheduled in the desired order on the set of access points and time slots determined by the flow computation. The max flow in this graph corresponds to a schedule that maximizes the total amount of processing of the jobs, and in particular, it can be used to test the feasibility of processing all jobs. Other objective functions (besides maximizing the total flow) can also be used to enforce additional requirements (e.g., fairness).
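As a concrete, hedged illustration of this formulation (not the chapter's production code), the sketch below builds the graph just described with networkx and reads a schedule off the maximum flow; the node naming, the data structures for jobs and capacities, and the use of networkx itself are our assumptions. The per-user job ordering constraint is not enforced by the flow and would be restored afterward by rearranging each user's jobs over the slots assigned to them, as noted above.

```python
# Illustrative max-flow scheduling sketch using networkx (assumed structures).
import networkx as nx

def build_schedule(users, jobs, capacity, preferred):
    """
    users:     iterable of user ids
    jobs:      dict user -> list of (job_id, size_bytes, arrival_slot, deadline_slot)
    capacity:  dict (access_point, slot) -> available SD channel bytes
    preferred: dict (user, slot) -> access_point (key absent if no coverage)
    Returns dict (job_id, access_point, slot) -> bytes scheduled.
    """
    G = nx.DiGraph()
    big = sum(s for js in jobs.values() for _, s, _, _ in js) + 1  # stands in for "infinite"

    for u in users:
        G.add_edge("src", ("user", u), capacity=big)
        for job_id, size, arrival, deadline in jobs[u]:
            G.add_edge(("user", u), ("job", job_id), capacity=size)
            for t in range(arrival, deadline + 1):
                a = preferred.get((u, t))
                if a is not None and (a, t) in capacity:
                    G.add_edge(("job", job_id), ("slot", a, t),
                               capacity=capacity[(a, t)])
    for (a, t), c in capacity.items():
        G.add_edge(("slot", a, t), "snk", capacity=c)

    # Maximum flow = maximum total bytes that can be served within deadlines.
    _, flow = nx.maximum_flow(G, "src", "snk")
    schedule = {}
    for u in users:
        for job_id, *_ in jobs[u]:
            for node, f in flow.get(("job", job_id), {}).items():
                if f > 0:                      # flow on job -> slot edges only
                    _, a, t = node
                    schedule[(job_id, a, t)] = f
    return schedule
```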

The scheduling problem as defined can be solved using linear programming techniques but can quickly become intractable even for medium-sized networks with a few million users. However, we believe that fast heuristics with bounded performance should be possible under practical assumptions. Additional constraints may be introduced in the problem formulation for a more efficient schedule. These include constraints to minimize the delay of data transfers. They may also include constraints to avoid high signaling load on the network and high device battery drain caused by frequent job interruptions (e.g., because of pre-emptions, lack of capacity, or network switchovers) resulting in inefficient traversals between different radio states [26]. In addition, constraints may be introduced to take advantage of spectrally efficient wireless broadcast and multicast capacity, if available, for data transfers. This involves identifying overlaps across different users' jobs that can be scheduled together using multicast [27].

This “offline” solution to the scheduling problem serves as a guide for the “online” scheduling of the data transfers. It provides the necessary input for deciding when to start, suspend, and resume each data transfer and at what rate. As the offline solution is based on predictions regarding the availability of capacity at the access points, the mobility of the users, and the parameters of the currently submitted data transfer jobs (sizes, deadlines, etc.), it can quickly become suboptimal unless it is dynamically updated to deal with occasional deviations in user actions, mobility, network conditions, and unanticipated traffic loads. In addition, the schedule must be dynamically updated to accommodate new data transfer jobs. Most of the time, it should be possible to handle these changes efficiently and locally with only incremental updates, thus requiring very infrequent computationally intensive recomputations of the schedule.

Online scheduling optimizations in PLUTUS also include adjusting the schedule, including the job priorities, to give more opportunities to data transfers that have fallen behind or at times when they can make more efficient use of network resources (e.g., when the radio link is of higher quality), thus increasing network efficiency. An additional consideration is to keep the SD sessions radio active for the least amount of time in order to minimize battery drain and to avoid tying up scarce RAN resources. By scheduling data transfers that share limited capacity (e.g., within one cell) one at a time rather than all at the same time, the CSC not only lowers the average transfer time but also reduces the average time spent in the radio active state. The benefit of serial over parallel data transfer scheduling within a single cell is illustrated in Figure 14.2. When all three data transfers are scheduled together (as in Fig. 14.2a), they all take a long time to finish because each one gets only one-third of the cell's capacity and, therefore, also stays active for the whole time. When scheduled one at a time (as in Fig. 14.2b), they all still finish within the same time period. However, some data transfers finish earlier (the first and second), and more importantly, each of them stays active for only one-third of the time. More generally, the CSC scheduler not only schedules a few data transfers at any given time but also pre-empts and switches data transfers in order to make fair progress across all of them. Such fairness is required to gain the trust of the end users, possibly by also putting a neutral entity in charge of overseeing the scheduling operations.


Figure 14.2 Serial scheduling is more efficient. (a) Parallel data transfer and (b) serial data transfer.
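As a simple numeric illustration of the effect shown in Figure 14.2 (with assumed job sizes and cell rate; the helper functions below are ours, not part of PLUTUS), serial scheduling leaves the completion of the last job unchanged while sharply reducing each job's radio active time:

```python
# Three equal transfers sharing one cell: parallel versus serial scheduling.
def parallel(n_jobs, job_bytes, cell_rate):
    # All jobs share the cell evenly, so every job finishes at the same time
    # and stays radio active for the entire duration.
    finish = n_jobs * job_bytes / cell_rate
    return [finish] * n_jobs, [finish] * n_jobs   # (completion times, active times)

def serial(n_jobs, job_bytes, cell_rate):
    # One job at a time at the full cell rate.
    per_job = job_bytes / cell_rate
    completions = [(i + 1) * per_job for i in range(n_jobs)]
    return completions, [per_job] * n_jobs

if __name__ == "__main__":
    print(parallel(3, 300e6, 1e6))  # all finish at 900 s, each active 900 s
    print(serial(3, 300e6, 1e6))    # finish at 300/600/900 s, each active only 300 s
```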

14.4.1.2 The Proxy Components and the Data Transfer Protocol

In PLUTUS, the SD channel application's data session is proxied via the CPC, which runs in the background on the client side, and is optionally proxied via the network-resident SPC. The session data flow for a download involving both the CPC and the SPC is illustrated in Figure 14.1. Upload works in the same way and is, therefore, not described. The download can have two logical phases. In the first phase, data is downloaded by the SPC from the data source in the network and temporarily cached in the server cache SCC. In the second phase, the data cached in the SCC is downloaded by the CPC to the client cache CCC using the disruption-tolerant data transfer (DTDT) protocol running between the SPC and the CPC. The second phase does not have to wait for the first phase to finish; rather, the download to the client may start as soon as the data is available in the server cache. The DTDT takes care of suspending or resuming content transfer and stopping or restarting sessions in reaction to scheduling controls from the CSC. The DTDT can be implemented using any standard data transfer protocol (e.g., TCP) when enhanced with the additional capability of automatically stopping and starting data connections and rate limiting. All the state about how much data has been transferred and where to resume the transfer from is maintained by the CPC. The SPC is not required if (for a download) the data source server is capable of resuming the data transfer from any specified offset. Such capability is widely supported for HTTP downloads from web servers on the World Wide Web. In such cases, the CPC requests the data source server to resume the data transfer from the position where it terminated the last time (e.g., via a range request for HTTP downloads). This is, therefore, well suited for multimedia content because commonly used protocols such as progressive download and adaptive bitrate streaming are all based on HTTP download. The preferred mode of operation for the PLUTUS system is to avoid proxying all the data transfers via an SPC, as this results in a more distributed and scalable architecture.
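A minimal sketch of the resume-from-offset mechanism described above, using a standard HTTP Range request; the URL, chunk size, and function name are placeholders, and this is only the underlying primitive a DTDT-like protocol could build on, not the protocol itself:

```python
# Resume an interrupted HTTP download from the current size of the local file.
import os
import requests

def resume_download(url, local_path, chunk_size=256 * 1024):
    offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # 206 Partial Content means the server honored the Range header;
        # 200 means it did not, so the file must be rewritten from the start.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(local_path, mode) as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                # A scheduler could suspend here by closing the connection;
                # the next call resumes from the new offset.
                f.write(chunk)
    return os.path.getsize(local_path)
```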

14.4.1.3 The Monitoring and Data Collection Component

The CMC and the SMC together monitor the surplus capacity of the network. Note that it is not sufficient to just use network-based QoS mechanisms to prioritize ATD traffic over SD traffic. Monitoring is needed to identify opportune times when SD channel data transfers can be started; otherwise, data transfers must be kept alive all the time, with serious impact on battery life and network performance. In PLUTUS, the focus is mostly on the RAN and in particular on cell level (per radio link) monitoring. Typical monitoring systems make the assumption that the available capacity can be determined as the difference between the capacity of the bottleneck link and the offered load. However, in a mobile network, this is not always the case because the capacity of the radio link is very much dependent on the channel quality of the users and, hence, can be time varying. Thus, even though the observed load on the cell may be low, the cell may still not have any capacity left over. Therefore, a monitoring solution has to measure the cell load in conjunction with the users' channel quality. On the network side, such monitoring is best done from the cell's base transceiver station (BTS), where both the users' channel quality and the allocated BTS resource usage can be monitored. This results in more accurate monitoring but can be a challenge to deploy because of (i) the significant cost and challenge of upgrading all the BTSs, many with proprietary interfaces, (ii) the additional cell (e.g., backhaul) overhead to send monitored data from the BTSs to the PLUTUS system, and (iii) the resulting high delays in informing the PLUTUS clients of changes in network load, thus slowing their reaction time.

On the client side, a combination of active and passive probing techniques can be used for monitoring local conditions such as the user's channel quality as well as for estimating global network state, including cell congestion. Network availability can be accurately estimated by passively monitoring the radio channel slot occupancy [28]. However, this can be challenging because it requires access to proprietary vendor-specific APIs for the radio modem of the device. Active probing, on the other hand, involves sending probe packets to estimate the available bandwidth and, therefore, is mostly independent of the device firmware, hardware, operating system, and so on. The challenge with active probing, however, is the additional signaling and data overhead it can introduce. We describe in Section 14.4.2 how active-probing-based client-side monitoring of available capacity is implemented in PLUTUS.

In addition to tracking the user and network state, the CMC monitors the device state, including battery level, charging status, storage usage, processor occupancy, and so on. This is to ensure that device resource usage is according to user preferences and settings (e.g., SD channel data transfers only when the battery level is high or the handset is being charged). Data is also collected by the CMC on the availability of other networks (WiFi, small cells) so that data transfers can be moved to the least cost network. Data may also be collected by the CMC on the user's network and content access and mobility patterns. This may include data on the locations frequently visited by the user, the types of networks available there, their loading state, and their probability of being used by the user. Such historic data can be a sufficiently accurate predictor of the future [2, 29, 30] and, therefore, helps in deciding when and where to schedule the user's data transfers on the SD channel. Likewise, data collected on the user's content access patterns helps predict content of interest that can be preloaded in advance of the user's request. Although there can be privacy concerns related to the collection of such personal data, these may be ameliorated by opt-out and anonymization mechanisms [31].

14.4.1.4 The Admin Component

As transferring large files may not only incur significant cost to the user but also result in substantial battery drain and quickly fill up the storage on their device, PLUTUS lets the user set preferences via the AC to control the transfer process. This may include allowing content transfer only above certain battery levels, or only during charging periods, or only when there is enough storage on the device. It may also include preferring the use of their home WiFi and requiring automatic switchover to a preferred access network when available. Likewise, the operator may configure policies to block data transfers during certain time periods or may allow the use of their managed WiFi hotspots. In order to enforce these preferences and policies, PLUTUS closely tracks the network and device state, including the availability of network accesses.
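The sketch below shows one plausible way such preference and policy checks could be evaluated before (re)starting an SD transfer; the fields, thresholds, and defaults are illustrative assumptions rather than PLUTUS's actual provisioning schema.

```python
# Hypothetical preference/policy gate for SD channel transfers.
from dataclasses import dataclass

@dataclass
class DeviceState:
    battery_pct: int
    charging: bool
    free_storage_mb: int
    network: str                  # e.g. "home_wifi", "operator_wifi", "cellular"

@dataclass
class Preferences:
    min_battery_pct: int = 30
    require_charging: bool = False
    min_free_storage_mb: int = 500
    allowed_networks: tuple = ("home_wifi", "operator_wifi", "cellular")
    blocked_hours: tuple = ()     # operator policy, e.g. (18, 19, 20)

def transfer_allowed(state: DeviceState, prefs: Preferences, hour_of_day: int) -> bool:
    if hour_of_day in prefs.blocked_hours:
        return False
    if state.network not in prefs.allowed_networks:
        return False
    if prefs.require_charging and not state.charging:
        return False
    if not state.charging and state.battery_pct < prefs.min_battery_pct:
        return False
    return state.free_storage_mb >= prefs.min_free_storage_mb
```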

The AC also interfaces with policy and charging functions (e.g., the PCRF) in the network to convey session information (e.g., the TCP connection's 5-tuple) for every data transfer over the SD channel. This helps the network gateways identify, classify, differentially account, and charge all the packets belonging to the SD channel traffic.
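For illustration, a per-session record conveyed to the charging functions might look like the following; the field names and values are hypothetical and do not represent a standardized PCRF interface format.

```python
# Hypothetical per-session record for differential charging of SD traffic.
import json

sd_session_record = {
    "subscriber_id": "IMSI-001010123456789",   # placeholder identifier
    "channel": "SD",
    "five_tuple": {
        "src_ip": "10.20.30.40",
        "dst_ip": "203.0.113.7",
        "src_port": 49512,
        "dst_port": 443,
        "protocol": "TCP",
    },
    "rating_group": "sd-discounted",           # maps to a charging rule
    "valid_until": "2013-06-01T18:30:00Z",
}

print(json.dumps(sd_session_record, indent=2))
```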

14.4.1.5 The Application and User Interfacing Component

Applications interface via the PLUTUS APIs to request data transfers on the SD channel. A request may be triggered by a direct user action, where the user explicitly selects a digital item to be delivered via their SD channel data plan. A request may also be triggered by an indirect user action, where the user has allowed the application to automatically deliver data items that become available on their social networks, subscribed channels, playlists, video queues, camera folder, and so on. The data transfer is then routed via the SD proxies and is scheduled for delivery by the CSC using the DTDT protocol. The PLUTUS APIs provide delivery state updates to the application, including when the data item is available for use. The delivered item is cached locally and is accessed by the application using the PLUTUS APIs.
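To make the application-facing flow concrete, here is a hypothetical sketch of how such an API could be used from a media application; the class and method names are invented for illustration and do not correspond to an actual PLUTUS SDK.

```python
# Hypothetical application-facing interface for SD channel delivery.
class PlutusClient:
    def request_transfer(self, url: str, on_status=None) -> str:
        """Submit an SD channel transfer request; returns a job id, with a
        delivery estimate and later state updates reported via on_status."""
        raise NotImplementedError  # provided by the platform SDK in practice

    def open_cached(self, job_id: str):
        """Open the locally cached item once delivery has completed."""
        raise NotImplementedError

# Typical usage by a media application:
# client = PlutusClient()
# job = client.request_transfer("https://example.com/show.mp4",
#                               on_status=lambda s: print("status:", s))
# ... after a "delivered" notification, play from the local cache ...
# with client.open_cached(job) as f:
#     play_locally(f)
```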

14.4.1.6 The Prediction and Preloading Component

The PLUTUS system is ideally suited for preloading content onto the user's device in advance of consumption. An important challenge, however, is to deliver content that has the greatest chance of being consumed by the user. More and more traffic on the mobile network is from recommendation-based content. This includes much of the usage on Pandora, Netflix, and even YouTube [32], three of the biggest sources of mobile data traffic [4]. In addition, social-network-based recommendations can play a big role in this preloading, as Facebook, Twitter, and other social networks are becoming a significant source of referrals [33], especially for video content [34]. This is in addition to subscription-based preloading of content from media channels, playlists, movie queues, watch-it-later applications, magazines, news web sites, and so on. The best delivery options for preloading popular content may be mobile multicast and broadcast. Thus a combination of unicast and multicast may be used for preloading content [27].

14.4.2 Client-Side Monitoring of Available Capacity

PLUTUS uses an active probing mechanism whereby traffic is occasionally downloaded by the client to allow it to sample the available bandwidth [18] on the SD channel. One of the parameters that the client measures is the download throughput. However, throughput by itself is not enough to estimate the available capacity. This is because in wireless networks, the throughput depends not only on the available capacity but also on the maximum cell capacity (which can vary from cell to cell based on its technology, frequency, number of carriers, etc.) and the client's channel quality. In particular, even with only one client on the cell, the download throughput may be low if the client has poor channel quality. In addition, the throughput also depends on the number of users, or level of sharing, in the cell. This is because the base station or eNodeB uses proportional fair scheduling to divide radio resources fairly and evenly among all active users attached to a cell.

The estimation of the available capacity by the client proceeds as follows. First, by correlating historic data about channel conditions, as measured by parameters such as the signal-to-noise ratio (SNR) and received signal strength indication (RSSI), with user throughput, PLUTUS computes the maximum cell capacity for each channel condition. This is performed separately for each cell. Then, in real time, the client measures the download throughput and simultaneously evaluates its channel conditions over a sequence of time windows. Each of these time windows is short enough that within it, either there is no (ATD channel) data transfer from any other user or the cell capacity is fully utilized by such data transfers. Thus in these time windows, a PLUTUS client gets 1/n-th of the maximum cell throughput possible for the given channel condition, where n is the number of users active in the cell during the time window, including this client. This is illustrated in Figure 14.3.

Let the client observe throughput $x_i$ over a time period defined by a sequence of time windows $i = 1, \ldots, k$. Let the maximum channel capacity in these time windows (as estimated by the channel quality measurements) be $C_i$. Then the average available capacity over this time period spanning the $k$ time windows is estimated as $m/k$, where $m$ is the number of time windows in which the throughput exceeds half the cell's maximum capacity, that is, those time windows $i$ for which $x_i > C_i/2$. This is because in such time windows there is no other active user; otherwise the throughput would be at most $C_i/2$. This fraction is then compared to a cell-specific capacity threshold $\theta$. Only if the fraction is not less than $\theta$ can the cell's available capacity be utilized for SD channel data transfers in the subsequent time period.
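A short sketch of this estimate, with the symbols above mapped to plain arrays; the threshold value, example numbers, and function name are illustrative assumptions.

```python
# Client-side estimate of SD channel availability from probing samples.
def sd_capacity_available(x, C, theta=0.5):
    """
    x:     measured download throughput per time window (same units as C)
    C:     estimated max cell capacity per window, from the SNR/RSSI-to-throughput
           history for this cell
    theta: cell-specific threshold on the fraction of "alone in the cell" windows
           required before SD transfers are (re)enabled
    """
    assert len(x) == len(C) and len(x) > 0
    # Throughput above C/2 in a window implies no other active user, since
    # proportional fair sharing with even one other user caps the client near C/2.
    m = sum(1 for xi, ci in zip(x, C) if xi > ci / 2)
    fraction = m / len(x)
    return fraction >= theta, fraction

# Example: six windows, client effectively alone in four of them.
# ok, frac = sd_capacity_available([5.1, 2.0, 5.4, 4.9, 1.8, 5.0], [6.0] * 6)
```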

Usually, active probing as described earlier operates independently of the client's data transfers. However, in PLUTUS, the two are combined. The PLUTUS proxy components monitor the capacity availability as they transfer the user's data. In other words, no special data transfers are needed for active probing, thus eliminating the overheads. The only exception is when there is no available capacity on the cell and, hence, the user's data transfers are kept suspended. In that case, there is some impact on the data transfers on the ATD channel from the active probing for SD channel capacity. However, this is minimized by appropriately spacing out the active probing activities while still ensuring that capacity availability can be detected in a timely manner. Further optimizations include scheduling probing activities across clients within a cell to ensure that only a small number of clients are actively probing in the cell at any given time. The selection of clients for active probing within each cell is dynamically adjusted to keep the probing load spread across the clients, thus minimizing their battery drain from probing.


Figure 14.3 Active probing throughput in three time windows.

14.5 Performance Evaluation

We implemented the PLUTUS system with client software running on Android and iOS handsets and other control and server software resident in the network. Here we present some results from analyzing the performance of PLUTUS on a commercial network for which data was collected using a tool developed internally. The data includes network load and other information for a day from more than c14-math-0152 3G cell sites. We also present results on the performance of the PLUTUS system in the controlled laboratory environment of mobile operators.

14.5.1 Network Utilization

The traditional understanding is that during peak hours the network is very heavily loaded and there is not much spare capacity during those times. As a result, most solutions for utilizing spare capacity are primarily designed to look for spare capacity during “off-peak” hours. However, we found that this view is not accurate and that even busy cell sites can exhibit many short periods of network availability during peak hours. This effect is particularly pronounced when the network load is analyzed at the smaller time granularity of minutes. As a result, we have designed PLUTUS to quickly identify and react to the availability of spare capacity even at these short intervals, thus maximizing the efficiency gains.

Figure 14.4 shows the typical load of a congested cell. The top graph shows the hourly distribution of load, where hours 15–19 are seen to be the most loaded and can be identified as peak hours. However, as we drill down to c14-math-0153-min data intervals for the peak hours, it is clearly seen that there are many time intervals where the network load is quite low, for example, between the peaks at A, B and C, D, and this spare capacity can be leveraged for data transfers on the SD channel even during peak hours. The PLUTUS system's capability to detect spare capacity even in peak periods not only enables it to maximize the utilization gains but also helps it reduce the wait time for SD data sessions, making it more attractive to the user.


Figure 14.4 Network load.

14.5.2 Delay

In PLUTUS, the SD channel data sessions have a lower priority than the ATD sessions and, hence, can incur additional delays as they wait for capacity to become available. We used the network load data from the top 200 loaded cells of the aforementioned 3G network to analyze the delay introduced by using the SD channel instead of the ATD channel. We used the current network load as the baseline and introduced new data sessions of different sizes. The comparison was done for an equal number of SD and ATD sessions over a 24-h period. However, the number of sessions varied in proportion to the existing load on the system at a given time to reflect higher user activity at busy times compared to nonbusy times. To compute delay, we compare the completion times of the new data session on an ATD versus an SD channel.
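The following is a simplified replay of this comparison (the trace values, congestion threshold, and function are our illustrative assumptions, not the exact trial methodology): the ATD session consumes whatever capacity is left over in each minute, whereas the SD session is additionally deferred whenever the existing load exceeds a congestion threshold.

```python
# Replay a per-minute cell load trace and compute completion time for a new
# session under ATD rules (always served from leftover capacity) or SD rules
# (deferred while the cell is considered congested).
def completion_minutes(load_mb_per_min, cell_cap_mb_per_min, size_mb,
                       sd=False, congestion_threshold=0.95):
    remaining = size_mb
    for minute, load in enumerate(load_mb_per_min, start=1):
        spare = max(cell_cap_mb_per_min - load, 0.0)
        if sd and load > congestion_threshold * cell_cap_mb_per_min:
            spare = 0.0          # SD traffic waits while the cell is congested
        remaining -= spare
        if remaining <= 0:
            return minute
    return None                  # did not finish within the trace

# Example with an assumed trace and the 60 MB/min cell capacity cited in the text:
# trace = [20, 58, 59, 30, 10, 25, 57, 12]
# atd = completion_minutes(trace, 60, 500)
# sd = completion_minutes(trace, 60, 500, sd=True)
# extra_delay = (sd - atd) if atd and sd else None
```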

The 200 cells used for this analysis have an average utilization of c14-math-0154. Although we do not have any data on how close these cells are to exhausting their capacity, given that these are the top loaded cells of a heavily utilized network, we can reasonably assume that their peak-to-load ratio is close to c14-math-0155.

In our analysis, we allow the SD channel to use available capacity from the network only if the total load on the network at that time is c14-math-0156 of the network capacity. If the network load is higher, the network is considered congested and only ATD traffic is allowed. As seen in Figure 14.5, almost c14-math-0157 of the smallest (10 MB) SD channel data sessions incur no additional delay compared to equivalent ATD channel data sessions. Also, c14-math-0158 of large data sessions (500 MB) incur no additional delays. In this network with a maximum throughput of 60 MB/min, the available throughput at an average network utilization of 35% is 39 MB/min. Therefore, a 500-MB ATD session can take approximately 13 min to finish, while only c14-math-0160 of equivalent SD sessions incur additional delays of 4 min or more. This means that c14-math-0161 of 500-MB SD sessions incur no more than c14-math-0162 additional delay. Also, more than c14-math-0163 of small and medium sized sessions incur no additional delay at all. The relatively short delay on the SD channel is due to the availability of enough intermittent network capacity even during peak periods, as was shown in the previous section. In addition, during peak periods, because an ATD session gets additional service (compared to an equivalent SD session) only at times when the network is highly loaded (c14-math-0165 or more), it cannot finish much faster.


Figure 14.5 SD channel additional delay.

We also analyzed the delay performance of PLUTUS in a laboratory environment under different congestion conditions. In particular, we considered two scenarios in which the network is congested either c14-math-0166 or c14-math-0167 of the time. In this laboratory setup, the maximum throughput of the clients ranged between 500 and 600 Kbps. As shown in Figure 14.6, when the network is congested c14-math-0168 of the time, there is an additional SD channel delay of less than a minute for a 63-MB download that takes 20 min on the ATD channel. Even for networks with 50% congestion, when the SD channel has spare capacity only half the time, a 45-MB user download that takes over 20 min sees an additional delay of just c14-math-0170 min, or c14-math-0171. As the peak period in most networks is less than c14-math-0172, it follows that users will see very short additional delays between sessions on the ATD and SD channels. Moreover, although an ATD session would stay active for the entire download, the SD session would be active only c14-math-0174 of the time: c14-math-0175 of the time when the network is not congested and an additional c14-math-0176 because of the delay. As SD sessions are active for a shorter duration during the download, the PLUTUS system can not only improve device and battery performance but also reduce the impact on the network by not tying up precious resources for as long.


Figure 14.6 SD channel delay versus congestion.


Figure 14.7 User performance.

14.5.3 User Experience

In the laboratory environment, we also studied the impact on session quality when users move some of their sessions to the subsidized SD channel. Figure 14.7 shows the network with four peaks created by ATD session activity. The grey curve depicts the average throughput of six ATD sessions that were active for 5 min and then inactive for the next 5 min, thereby creating a pattern of 5-min peak and off-peak periods. The black curve depicts user throughput after three of the six ATD sessions were converted into SD sessions, so that only three ATD sessions stayed active during peak times and the activity of the three SD sessions was shifted to off-peak times. As can be seen, this raised the throughput of the individual ATD sessions from 100 Kbps to more than 150 Kbps, meaning that premium ATD users see less buffering and better quality even during peak times. The sessions that were time shifted as SD sessions to off-peak periods achieved an even better average throughput of more than 250 Kbps. This is because the PLUTUS system scheduled at most two SD sessions simultaneously, in order to avoid creating new peaks and to reduce the length of time that SD sessions are active. Thus, by spreading the load more evenly across the network, PLUTUS improves device throughput across the board, thereby increasing user satisfaction.
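The following is a minimal sketch of the off-peak scheduling policy described above: defer SD sessions to off-peak minutes and admit at most two of them concurrently so that no new peak is created. The FIFO queue, the load threshold, and the fixed session durations are illustrative assumptions, not the actual PLUTUS scheduler.

# Sketch of the "at most two concurrent SD sessions, off-peak only" policy.
from collections import deque

MAX_CONCURRENT_SD = 2
PEAK_THRESHOLD = 0.65            # hypothetical load level treated as "peak"

def schedule_sd(pending, load_per_minute):
    """Return {minute: [session ids started]} for a FIFO queue of SD sessions,
    where each pending item is (session_id, duration_in_minutes)."""
    queue, active, schedule = deque(pending), [], {}
    for minute, load in enumerate(load_per_minute):
        active = [(sid, end) for sid, end in active if end > minute]  # drop finished
        if load < PEAK_THRESHOLD:
            while queue and len(active) < MAX_CONCURRENT_SD:
                sid, duration = queue.popleft()
                active.append((sid, minute + duration))
                schedule.setdefault(minute, []).append(sid)
    return schedule

# Six 5-min sessions against alternating 5-min peak/off-peak periods.
load = ([0.9] * 5 + [0.3] * 5) * 2
print(schedule_sd([(i, 5) for i in range(6)], load))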

14.6 Conclusions and Future Work

In this chapter, we presented PLUTUS, a capacity monetization and network efficiency enhancement system for efficiently and cost-effectively delivering multimedia content using the unused capacity of diverse access networks. It works by scheduling data transfers at the appropriate times over the appropriate access networks and by caching data in the large storage available on end devices. We showed that with PLUTUS an increase in operator revenue can be ensured using only simple and predictable pricing plans. We addressed the many challenges of designing the scheduling and monitoring components of the PLUTUS system. We also presented initial results on viability, along with the efficiency gains from deploying the PLUTUS system in both commercial and laboratory environments.

Future research directions include creating improved models for predicting network state and user behavior based on the data collected by the PLUTUS clients. A linear-programming-based approach to modeling and solving the basic scheduling problem can be quite powerful but can also be computationally intensive; future work therefore includes the design of faster heuristics with bounded performance guarantees that can also flexibly accommodate the various constraints of the scheduling problem. Future work also includes personalized recommendation systems that can maximize the “hit” ratio while balancing the limits on the network capacity available for delivering content and the storage available on users' devices for preloading content.
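As one illustration of why the linear-programming approach can become computationally heavy, the sketch below sets up a small time-slotted scheduling LP, choosing how many megabytes of each deferred SD session to serve in each slot subject to per-slot spare capacity, and solves it with SciPy. The objective, the slot granularity, and all the numbers are assumptions made for this sketch and are not the chapter's formulation; note that the variable count grows as sessions × slots, which is what motivates faster heuristics.

# Illustrative time-slotted scheduling LP (not the chapter's formulation).
import numpy as np
from scipy.optimize import linprog

n_sessions, n_slots = 3, 4
spare = np.array([10.0, 40.0, 5.0, 35.0])      # assumed spare MB per slot
demand = np.array([30.0, 25.0, 20.0])          # MB each session still needs

# Decision variables x[i, t] = MB of session i served in slot t (flattened row-major).
c = -np.ones(n_sessions * n_slots)             # maximize total delivered MB

# Per-slot capacity constraints: sum_i x[i, t] <= spare[t]
A_cap = np.zeros((n_slots, n_sessions * n_slots))
for t in range(n_slots):
    A_cap[t, t::n_slots] = 1.0

# Per-session demand constraints: sum_t x[i, t] <= demand[i]
A_dem = np.zeros((n_sessions, n_sessions * n_slots))
for i in range(n_sessions):
    A_dem[i, i * n_slots:(i + 1) * n_slots] = 1.0

res = linprog(c, A_ub=np.vstack([A_cap, A_dem]),
              b_ub=np.concatenate([spare, demand]), bounds=(0, None))
print("MB scheduled per session/slot:\n", res.x.reshape(n_sessions, n_slots).round(1))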

Acknowledgments

We thank the entire Kaveri team at Alcatel-Lucent Ventures for their contributions to the PLUTUS system.

References

  1. Cisco. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast, 2011–2016.
  2. U. Paul, A. P. Subramanian, M. M. Buddhikot, and S. R. Das. Understanding traffic dynamics in cellular data networks. In Proceedings of IEEE INFOCOM, 2011.
  3. M. El-Sayed, A. Mukhopadhyay, C. Urrutia-Valdés, and Z. J. Zhao. Mobile data explosion: monetizing the opportunity through dynamic policies and QoS pipes. Bell Labs Technical Journal, 16(2), 2011, 79–100.
  4. Sandvine. Global Internet Phenomena Report 1H 2012. Available at: http://www.sandvine.com/downloads/documents/Phenomena_1H_2012/Sandvine_Global_Internet_Phenomena_Report_1H_2012.pdf, 2012.
  5. P. Du and N. Lu. Appliance commitment for household load scheduling. IEEE Transactions on Smart Grid, 2(2), 2011, 411–419.
  6. A. H. Mohsenian-Rad, V. W. S. Wong, J. Jatskevich, and R. Schober. Optimal and autonomous incentive-based energy consumption scheduling algorithm for smart grid. In Innovative Smart Grid Technologies 2010. IEEE, 2010.
  7. A. Ingold, I. Yeoman, and U. McMahon. Yield Management: Strategies for the Service Industries, 2001.
  8. K. Lakshminarayanan, V. Padmanabhan, and J. Padhye. Bandwidth estimation in broadband access networks. In Proceedings of the ACM Internet Measurement Conference, Taormina, Oct. 2004.
  9. D. Koutsonikolas and Y. C. Hu. On the feasibility of bandwidth estimation in 1x EVDO networks. In MICNET, 2009.
  10. A. Gerber, J. Pang, O. Spatscheck, and S. Venkataraman. Speed testing without speed tests: estimating achievable download speed from passive measurements. In IMC, 2010.
  11. K. C. Tofel, Gigaom. AT&T ends flat-rate mobile plans. Available at: http://gigaom.com/mobile/att-shuts-down-the-mobile-broadband-buffet/2010
  12. Ars Technica. Verizon confirms the future of 3G data is tiered. Available at: http://arstechnica.com/gadgets/2010/09/verizon-confirms-the-future-of-3g-data-is-tiered/2010
  13. S. Higginbotham, Gigaom. Differential data rates based on time and apps. Available at: http://gigaom.com/2010/12/14/mobile-operators-want-to-charge-based-on-time-and-apps/2010
  14. MTN. MTN provides free Facebook. Available at: http://www.mtn.co.ug/MTN-Services/Communication/Facebook.aspx, 2011.
  15. H. S. Houthakker. Can speculators forecast prices? The Review of Economics and Statistics, 1957.
  16. A. Odlyzko. Paris metro pricing for the Internet. In Proceedings of the 1st ACM Conference on Electronic Commerce, pp. 140–147, ACM, 1999.
  17. S. Sen, S. Ha, C. Joe-Wong, and M. Chiang. Pricing data: a look at past proposals, current plans, and future trends. Available at: http://arxiv.org/abs/1201.4197
  18. R. S. Prasad, M. Murray, C. Dovrolis, and K. Claffy. Bandwidth estimation: metrics, measurement techniques, and tools. IEEE Network, 17, 2003, 27–35.
  19. A. Jardosh, K. Ramachandran, K. Almeroth, and E. Belding-Royer. Understanding congestion in IEEE 802.11b wireless networks. In Proceedings of the Internet Measurement Conference, Oct. 2005.
  20. P. Acharya, A. Sharma, E. M. Belding, K. C. Almeroth, and K. Papagiannaki. Rate adaptation in congested wireless networks through real-time measurements. IEEE Transactions on Mobile Computing, 9(11), 2010, 1535–1550.
  21. S. Humair. Yield management for telecommunication networks: defining a new landscape. PhD Thesis, MIT, 2001.
  22. F. Qian, K. S. Quah, J. Huang, J. Erman, A. Gerber, Z. M. Mao, S. Sen, and O. Spatscheck. Web caching on smartphones: ideal vs. reality. In MobiSys, 2012.
  23. S. Ha, S. Sen, C. Joe-Wong, Y. Im, and M. Chiang. TUBE: time-dependent pricing for mobile data. In SIGCOMM, 2012.
  24. R. J. Deneckere and R. P. McAfee. Damaged goods. Journal of Economics and Management Strategy, 5(2), 1996, 149–174.
  25. D. Bubley. New report: 10 reasons why the “toll-free” 1-800 apps concept won't work. Available at: http://disruptivewireless.blogspot.com/2012/07/new-report-10-reasons-why-toll-free-1.html, 2012.
  26. J. Huang, F. Qian, A. Gerber, Z. M. Mao, S. Sen, and O. Spatscheck. A close examination of performance and power characteristics of 4G LTE networks. In MobiSys, 2012.
  27. R. Bhatia, G. Narlikar, I. Rimac, and A. Beck. UNAP: user-centric network-aware push for mobile content delivery. In IEEE INFOCOM 2009, April 2009.
  28. N. Shah, T. Kamakaris, U. Tureli, and M. Buddhikot. Wideband spectrum sensing probe for distributed measurements in cellular band. In International Workshop on Technology and Policy for Accessing Spectrum, ACM, vol. 222, Aug. 2006.
  29. P. Deshpande, A. Kashyap, C. Sung, and S. R. Das. Predictive methods for improved vehicular WiFi access. In Proceedings of MobiSys '09, June 2009.
  30. A. J. Nicholson and B. D. Noble. BreadCrumbs: forecasting mobile connectivity. In Proceedings of the Annual International Conference on Mobile Computing and Networking (MobiCom), pp. 46–57, 2008.
  31. C. Shepard, A. Rahmati, C. Tossell, L. Zhong, and P. Kortum. LiveLab: measuring wireless networks and smartphone users in the field. ACM SIGMETRICS Performance Evaluation Review, 38(3), 2010, 15–20.
  32. R. Zhou, S. Khemmarat, and L. Gao. The impact of YouTube recommendation system on video views. In Internet Measurement Conference, 2010.
  33. Z. Fox, Mashable. Impact of social media referrals. Available at: http://mashable.com/2012/02/01/pinterest-traffic-study/
  34. ReelSEO. Facebook is the 2nd largest referral source for online video. Available at: http://www.reelseo.com/facebook-2nd-video/2010
  35. S. Sen, J. Yoon, J. Hare, J. Ormont, and S. Banerjee. Can they hear me now? A case for a client-assisted approach to monitoring wide-area wireless networks. In Internet Measurement Conference, 2011.