Chapter 6
Human Factors in Smart Data Pricing

SOUMYA SEN, CARLEE JOE-WONG, SANGTAE HA, and MUNG CHIANG

6.1 Introduction

Smart data pricing (SDP) explores mechanisms beyond traditional flat-rate or usage-based pricing models to alleviate network congestion, monetize bandwidth, and help consumers save on their monthly bills. But realizing such mechanisms requires the creation of the right economic incentives and tools to understand and modify consumer behavior in a desired manner. The need for such a holistic approach makes SDP a truly interdisciplinary research topic that draws on principles from network engineering, economics, and human–computer interaction (HCI). Therefore, understanding how to enable consumers to respond to SDP incentives remains the most important factor in the successful adoption of such data pricing plans. HCI researchers have explored related issues in the context of computer networks and electricity markets, and in this chapter, we introduce these earlier findings and build on them to address the needs of mobile networks. In mobile networks, the feasibility of SDP is particularly promising because of the variance in users’ content consumption behaviors, varying degrees of elasticity of demand of different content types, and the richness of capabilities afforded by smart devices to make them a part of the network congestion management infrastructure.

Past research has shown that consumers prefer simplicity in billing, predictability in their expense, and ease in understanding feedback (e.g., pricing signals) from the network. Hence, for SDP to be successful, these user concerns must be addressed in both the realization of the data plan (i.e., creation of economic incentives) and the means of communicating with users (i.e., the design of the user interfaces (UIs) for the feedback-control loop between the users and the network operators). To achieve this, we need to understand user psychology through HCI techniques such as focus group studies, testing of user reaction to designs of interfaces that convey the pricing signals, development of experimental testbeds and application prototypes for offering new pricing plans, and field trials of these pricing plans.

In this chapter, we first discuss some of the basic principles and methodologies of HCI research in Section 6.2, followed by a detailed overview of findings from previous HCI works that have studied user psychology in energy markets (Section 6.3), home network management (Section 6.4), and bandwidth pricing (Section 6.5). We then discuss in Section 6.6 how day-ahead dynamic time-dependent pricing (DDTDP) has the potential to account for the lessons learned from previous HCI research and meet the design goals of creating a “win-win” for both consumers and operators. We follow this introduction to DDTDP with a discussion of the perspectives of various stakeholders of the Internet ecosystem (i.e., network operators, users, application developers, content providers, and regulators) on the feasibility and promise of DDTDP in Section 6.7. In Section 6.8, we then discuss our trial of DDTDP, the system implementation and the graphical user interface (GUI) features, followed by the results from the trial using both HCI-based qualitative findings (e.g., from focus groups, trial debriefing) and quantitative validation (e.g., from usage logs). Finally, we conclude in Section 6.9 with a discussion of the lessons learned from this trial and past HCI works, as well as future directions that can be explored as SDP practices continue to evolve.

6.2 Methodology

In this section, we give an overview of HCI research techniques relevant to SDP. We focus on the process of designing and testing the effectiveness of SDP systems, for example, designing UIs for offering prices and then evaluating them through expert analysis and field or laboratory studies. For a more detailed overview of HCI in general, including its theoretical underpinnings, we refer the reader to Reference 1. Readers familiar with the basic design principles and methodologies may skip ahead to Section 6.3 for information on prior HCI works on related topics.

6.2.1 Designing Systems with Users in Mind

Arguably, the most important factor in the successful adoption of SDP mechanisms is the usability and convenience of the application interfaces designed to modify user behavior. As many SDP mechanisms involve direct user interaction with the prices [e.g., selecting a desired quality-of-service (QoS) or delaying traffic to later times], complete SDP systems usually require a UI to facilitate this interaction. This interface should effectively communicate relevant information and simplify actions desired by users, for example, making it easy for them to delay a particular app until a cheaper time.

Interface design begins by considering the goals of target users. In the context of SDP, these would be SDP customers, with the general goal of saving as much money as possible without compromising their experience. More specifically, we can construct scenarios of how a user might interact with the interface; for instance, suppose an Internet Service Provider (ISP) offers a QoS-based pricing plan, in which a user can choose to pay more for higher QoS on a given session. One scenario might be that “Alice is streaming YouTube on her Galaxy tablet during her lunch break and wants to view an HD video with higher QoS. She opens the pricing app on her tablet, scrolls through a list of apps to the YouTube app, and checks the box with ‘higher QoS.’ She then clicks on the YouTube icon, opening the YouTube app, and starts playing the video with higher QoS.”

Scenarios allow designers to imagine tasks that a user wishes to perform, which express the dynamics of the interface and allow problems to be spotted in transitioning from one stage of a task to another. Designers can also imagine scenarios in which users make errors and wish to correct them—for instance, if a user accidentally selects the wrong app for higher QoS, is there a clear sequence of actions for unselecting it? By focusing on a user's perspective, scenarios allow a designer to simplify user interactions and transitions between app screens.

To facilitate navigation among screens, a designer needs to ensure that links to different screens are clearly labeled and visible. In particular, users should understand the effect of clicking any given button, for example, whether it will simply bring them to another screen or take some action that affects the amount spent on their data plan. If a user is required to provide input to the interface, such as specifying particular apps for higher QoS, then the interface must first indicate the types of information being requested. We illustrate some key principles of interface design by discussing DataWiz,1 a free mobile application for data monitoring for iOS and Android platforms, which was developed by the authors and colleagues at DataMi.2 DataWiz is a monitoring and alerting application with several desirable functionalities, but does not include pricing components. These GUIs, shown in Figure 6.1, were based on several focus group studies, feedback from multiple network operators, and the help of HCI researchers and professional graphic designers and app developers:

  • Consistency. Navigation sequences, layout, and terminology should be as consistent as possible throughout the app, for example, similar displays for daily and monthly information. DataWiz's main screen (Fig. 6.1a) allows users to tap the weekly and monthly bars to display weekly or monthly usage on the center pie chart, in a manner analogous to the daily usage shown in the figure.
  • Shortcuts. Users should be able to quickly execute repeated sequences of actions, for example, selecting some actions as “default.” For instance, tapping the monthly bar of the main screen in Figure 6.1a displays the monthly usage on the pie chart, while the default was set to daily usage to give the user an immediate update on how much data he/she has used up from his/her “virtual” daily quota.
  • Informative Feedback. Should a user input something into the interface, there should be some confirmation, for example, highlighting an app after it is selected, to confirm that this input was received. In Figure 6.1b, users can switch between displaying 3G and WiFi, as well as daily, weekly, and monthly usage. Their selection is highlighted accordingly, for example, 3G monthly usage in the figure.
  • Closure. Users should receive an indication when an action has been completed, for example, confirmation that they have finished selecting higher QoS. For instance, when toggling between DataWiz's graph displays in Figure 6.1b, a new graph is displayed when the selection has been completed.
  • Error Prevention and Handling. Users should be able to easily reverse actions or go back to a previous screen, aiding them in correcting errors and reducing user anxiety at accidentally pressing a wrong button. For instance, users at DataWiz's usage prediction screen (Fig. 6.1c) can press the back button to easily return to the home screen of Figure 6.1a.
  • Simplicity. Few screens and simple displays are easier for users to understand and take less storage on the interface's physical device. In DataWiz, the number of screens is minimized, with the app's main screens easily accessible from the four links on the top of the screens in Figure 6.1c–e.
image

Figure 6.1 DataWiz screenshots, illustrating basic design principles. (a) Home screen, (b) usage graphs, (c) usage prediction, (d) usage details, (e) maps screen, and (f) settings screen.

To achieve these goals, designers can manipulate the following basic visual tools of interface design to suggest appropriate actions to users.

  • Font and Color. Font style and color can be used to further differentiate different groupings. For instance, one section might have a white and another a dark gray background. On DataWiz's usage prediction screen (Fig. 6.1c), colors separate the past from the future: the white bars indicate future (i.e., predicted) usage, whereas green bars indicate past usage. The larger font of the dates of the usage details screen in Figure 6.1d shows how font size can be used to draw attention to blocks of text, while the orange icons in the map screen of Figure 6.1e draw attention to the locations where the user has consumed data.
  • Grouping and Structure. Information that is logically separate should be separated either by space or by placing it on different lines, for example, prices for different times of the day might be separated by extra space between them. On DataWiz's settings page in Figure 6.1f, we see this grouping with thicker lines dividing the billing data, data cap, and alert settings.
  • Alignment and Order. Users generally read from top to bottom and left to right; thus information should be placed to reflect this order. For instance, in a list of prices and corresponding times, we might put the most recent time at the top to reflect the fact that it is most immediately important. In the DataWiz app, the monthly data cap setting is listed above the weekly and daily caps on the settings screen (Fig. 6.1f) to reflect its greater importance.
  • Icons. Design of intuitive icons is an essential component of a well-designed interface as it helps users to navigate with ease. Figure 6.1c–e shows navigational icons on the top (from left to right) for (a) “back” to the home screen (Fig. 6.1a), (b) the “usage prediction” screen (Fig. 6.1c), (c) the “usage details” screen (Fig. 6.1d), and (d) the “usage location” or map screen (Fig. 6.1e). Similarly, on the home screen (Fig. 6.1a), the user has access to two additional icons at the bottom of the screen for modifying “settings” and viewing the “usage location.” In between these icons is an icon at the bottom center of the screen that can be swiped upwards to pull out the usage graphs.
  • White Space. White space allows the designer to separate blocks of related elements, enforcing the desired grouping and structure. Including extra space around a particular button or piece of information also highlights this information. On DataWiz's main screen (Fig. 6.1a), we see white space used around the center circle to draw attention to its information.

Throughout the design process, a designer should continually evaluate the interface prototype and incorporate feedback to improve the design. Even at the earliest iterations, evaluation is important, because it is much easier to correct major mistakes at an early stage. In the case of SDP, experts might review the design to anticipate any problems users would have or users might be asked to explore the interface in a laboratory or a natural setting. In the remainder of this chapter, we consider these methods for evaluating the success of SDP UIs.

6.2.2 Expert Evaluations

While trials with future users would provide the most accurate evaluation of a prototype interface design, it is infeasible to conduct such trials at every design iteration. Thus, we first focus on methods for expert analysis, which have lower overhead and can help identify obvious problems. These fall into two categories: ensuring that designs adhere to accepted cognitive principles and that they are consistent with known empirical results.

An expert or designer can conduct a cognitive walkthrough with the aid of scenario descriptions like the ones used in the interface design. First, the expert specifies a user's desired task (e.g., delaying an app store download from 8 pm until midnight to take advantage of lower prices at midnight). Given this task, the evaluator writes down the specific list of actions required to accomplish the task—for instance, clicking on the “Deferral” tab in the app, selecting the “App Store” application, and entering “midnight” as the desired time for the App Store to start downloading. For each task, the evaluator also writes down the system response (e.g., the App Store icon is highlighted after it is selected by the user). The evaluator then asks the following four questions in order to assess whether a user can easily complete the given task.

  • Does the Action Achieve the User's Goal at That Point? Is it reasonable to assume that, as part of achieving the larger task, a user would set the action's effect as an intermediate goal? For instance, the user might make “selecting the App Store” a subgoal of delaying App Store downloads.
  • Will the User See That the Action Is Available? With each action, the user will generally need to press a button or provide input. An undesirable design would require intermediate steps such as navigating to a separate screen on the app to select the appropriate button.
  • Will Users Be Able to Identify the Appropriate Action? Given that the action's effect is a subgoal and that the user can see that the action is available, the designer must ensure that users can recognize how to execute this action, for example, pressing the correct button.
  • Will Users Understand the Feedback from the Action? After a user takes an action and feedback is given, the user must understand the meaning of this feedback, for example, that highlighting a button means it was successfully selected.

More generally, an evaluator can examine the entire system to assess whether it conforms to general design guidelines, called heuristics. While many different heuristics have been suggested, most generally conform to the design principles suggested in Section 6.2.1. Additionally, the designer should create a help screen allowing users to search for documentation and instructions should they require them.

6.2.3 Conducting a Field Trial

A trial with real users is the most effective way to test an interface, as the interface's success ultimately rests on its reception by users. While user responses can be best tested in the field, that is, in a user's natural environment, in some situations a controlled laboratory setting may yield more insight. For instance, it is easier to test specific features of the design when users work with the interface in a controlled setting while the features are varied. Field studies are also usually more expensive, as a researcher may need to travel to participants’ homes or workplaces and purchase portable recording equipment. Laboratory studies, on the other hand, require participants to take the time to come to a research laboratory. In the following discussion, we outline different aspects of the trial design for both options.

Participants. It is important to recruit participants representative of actual interface users, and ideally to choose people who will become actual users. In particular, participants’ knowledge of computing devices and their experience with similar interfaces should be similar to that of real users. The number of participants recruited depends on the purpose of the trial: if a statistical analysis is used, at least 10 participants are necessary, while 3–5 are generally adequate to identify usability problems [2].

Variables and Hypotheses. Before running an experiment, researchers should identify the dependent and independent variables. For instance, are researchers comparing the speed of user response with two different designs? These variables are especially important in a laboratory setting, where experiments can be set up to evaluate the dependent variables for all possible combinations of independent variables. The experiments are designed to test hypotheses, that is, predictions of how the independent variables affect the dependent ones. For instance, we might hypothesize that one design yields a faster user response than another. The experimental results will then either support the hypothesis or fail to reject the null hypothesis, that is, the baseline assumption that the dependent variable is not affected by the independent one. Hypothesis testing is especially powerful when combined with a careful experimental design that allows statistical methods to be used for calculating quantitative confidence metrics of the hypothesis's validity.
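As an illustration, such a hypothesis test can be sketched in Python with a two-sample Welch's t-statistic; all task-completion times below are invented for illustration, and the interpretation threshold is only approximate:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical task-completion times (seconds) under two interface designs.
design_a = [12.1, 10.4, 11.8, 13.0, 10.9, 12.5]
design_b = [14.2, 15.1, 13.8, 16.0, 14.7, 15.3]

t = welch_t(design_a, design_b)
# A |t| well above ~2 suggests rejecting the null hypothesis that the two
# designs yield the same mean completion time; the exact critical value
# depends on the degrees of freedom and chosen significance level.
print(f"t = {t:.2f}")
```

In practice, a statistics package would also report the p-value; the point here is only that the dependent variable (completion time) is compared across levels of the independent variable (interface design).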

Experimental Design. Once a hypothesis has been chosen, the experimenter must choose whether to vary the independent variables between subjects or within subjects. Between-subject variation involves randomly sorting subjects into different groups, each of which will experience a different combination of the independent variables. One of the groups, designated as the “control” group, experiences no manipulation of the experimental variables, which helps to ensure that observed differences in the dependent variables are in fact due to changes in the independent variables. This type of variation eliminates any learning effect, in which a user's experience with one set of variables affects his or her experience with another.

The learning effect can be a serious disadvantage of within-subject or repeated experiments, in which the same group of subjects is exposed to a variety of experimental conditions in sequence. By varying the sequence, the learning effect can be somewhat controlled, though not completely. Within-subject experiments have the advantage of controlling for subject variation; they also require fewer total subjects and thus fewer experimental resources. Another possibility is a mixed design, in which one variable is manipulated between groups and another within groups.
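The random sorting step of a between-subject design can be sketched as follows; the participant labels and condition names are hypothetical:

```python
import random

def assign_between_subjects(subjects, conditions, seed=0):
    """Randomly sort subjects into one group per condition (between-subject
    design). By convention here, the first condition is the control group."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    # Deal shuffled subjects round-robin so group sizes stay balanced.
    for i, s in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(s)
    return groups

participants = [f"P{i}" for i in range(12)]
groups = assign_between_subjects(participants, ["control", "design_A", "design_B"])
# With 12 participants and 3 conditions, each group receives 4 subjects.
```

A within-subject design would instead expose every participant to all conditions, varying only the order in which conditions are presented.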

Quantitative Analysis. Quantitative methods can be combined with standard statistical tools for hypothesis testing on quantitative variables. SDP's main quantitative aspect is the user's willingness to consume less data during congested hours, which may be tested for different interface designs and pricing plans. However, depending on the type of SDP in question, other quantitative metrics may also be relevant, for example, measuring the average amount of time that a user delays apps given a set of time-dependent prices. Standard statistical tests and comparisons can be used to conduct such tests [3].

Qualitative Analysis. We identify two main methods of soliciting qualitative data on participants’ experiences: protocol analysis and interviews or questionnaires. Protocol analysis can involve analyzing audio or video recordings of the user interacting with the interface; however, in the context of SDP, it is perhaps more applicable to think of recording keystrokes within the interface, as pricing interfaces generally do not require active engagement aside from pressing buttons and typing in input. These keystrokes can tell us how often users press the wrong buttons, how often they use different features of the interface, and how rapidly they perform different tasks. One potentially significant disadvantage is the volume of data collected, especially from very engaged users; this can be mitigated by automated analysis that summarizes the raw keystroke logs.
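A minimal sketch of such automated summarization, assuming a simple log of (timestamp, button) events, is shown below; the event names and the use of “undo” presses as an error proxy are invented for illustration:

```python
from collections import Counter

def summarize_log(events):
    """Condense a raw interaction log into per-feature counts, an error rate,
    and session length. Each event is a (timestamp_seconds, button) tuple;
    'undo' presses serve as a rough proxy for user errors."""
    presses = Counter(button for _, button in events)
    total = sum(presses.values())
    errors = presses.get("undo", 0)
    duration = events[-1][0] - events[0][0] if len(events) > 1 else 0
    return {
        "presses_per_feature": dict(presses),
        "error_rate": errors / total if total else 0.0,
        "session_seconds": duration,
    }

# Hypothetical keystroke log from one session with a pricing interface.
log = [(0, "home"), (4, "defer_app"), (9, "undo"), (12, "defer_app"), (20, "settings")]
summary = summarize_log(log)
```

Aggregating logs this way across sessions keeps the data volume manageable while still revealing how often each feature is used and where users stumble.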

The quantitative data provided by protocol analysis can be supplemented by more subjective data provided directly by experiment participants. Structured postexperiment interviews can provide high-level insights into participants’ experience and identify any unexpected design flaws. These are generally conducted according to an interview template of topics to cover and specific questions to ask. The interviewer usually begins with a very general question and then focuses on specific points, either following the question script or jumping off of comments in the subject's responses.

A more fixed form of questioning uses prepared questionnaires or surveys. While less flexible, these methods also have less overhead and are thus especially effective with large numbers of subjects or ones in remote locations. Given these limitations, most experimenters conduct pilot studies with the questionnaire to identify possible points of confusion or misunderstanding before releasing it to more respondents. Questionnaires usually need to be distributed to large numbers of people in order to yield a reasonable number of responses.

Questionnaires should begin by explaining the type of information sought and then proceed to ask questions serving this purpose. Given that survey respondents are often self-selecting, the survey should include general questions on a user's background, to put the other responses in context for the experimenter and ensure a representative sample population. These will then be followed by open-ended, scalar, multiple-choice, or ranking questions. Open-ended questions are the most flexible question format but also require more effort on the part of the participant and are the least likely to yield consistent answers. Thus, it is usually best to employ other types of questions. Scalar or rank questions ask a user either to assign numbers, for example, on a scale from 1 to 10, to describe the accuracy of a specific statement; or to rank several choices, usually in order of preference. These types of questions give more consistent results; however, it is important to specify what the numbers on a scale mean. Moreover, while too many choices can paralyze users or result in arbitrary decisions, too few can skew responses and miss subtleties in user opinions. If scalar or ranking questions are not appropriate, multiple-choice questions are usually employed.
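A sketch of how scalar responses might be aggregated follows; the question wording, ratings, and the choice to discard out-of-range entries are all illustrative assumptions:

```python
from statistics import mean, median

def summarize_scale_question(responses, scale=(1, 10)):
    """Summarize answers to a scalar question, discarding out-of-range entries
    (e.g., mis-typed values) so they do not skew the averages."""
    lo, hi = scale
    valid = [r for r in responses if lo <= r <= hi]
    return {
        "n": len(valid),
        "discarded": len(responses) - len(valid),
        "mean": mean(valid),
        "median": median(valid),
    }

# Hypothetical ratings of "the pricing display was easy to understand" (1-10).
ratings = [8, 7, 9, 6, 10, 7, 8, 0]   # the 0 is an invalid entry
stats = summarize_scale_question(ratings)
```

Reporting the median alongside the mean also guards against a few extreme ratings dominating the summary.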

6.2.4 Choosing an Evaluation Method

In the above discussion, we identified five methods for evaluating an interface: cognitive walkthroughs, heuristic analysis, statistical methods, laboratory studies, and field trials. In the latter two cases, we can identify two types of user feedback: interview- or questionnaire-based and protocol analyses. In Table 6.1, we summarize key features of these five types of user feedback, as described in Reference 1, to aid researchers in choosing the evaluation method most appropriate for their situation.

In practice, experimenters would likely combine several of the techniques in Table 6.1 to gain a comprehensive evaluation. For instance, one could use statistical and protocol methods to yield objective, evidence-based, quantifiable information and then use information from cognitive walkthroughs, heuristics, or interviews to ensure that subjective user impressions match this empirical evidence. Interviews and heuristic analyses would focus more on high-level insights, while other methods yield insights into the specific effects of different interface features. In terms of necessary resources, statistical data collection and keystroke logging for protocol analysis require the most resources, while cognitive walkthroughs require the most expertise. None of these methods is very intrusive or obvious to users during their interaction with the interface, introducing only minimal bias into the trial results.

Table 6.1 Features of Different Evaluation Methods

                 Cognitive      Heuristic    Statistical       Interview    Protocol
                 Walkthrough
  Objectivity    No             No           Yes               No           Yes
  Information    Low level      High level   Low/high level    High level   Low level
  Equipment      Low            Low          Medium            Low          Medium
  Expertise      High           Medium       Medium            Low          High

In the next three sections, we explore HCI research works that have utilized the above methodologies to investigate SDP and related topics in various contexts. We first discuss lessons on interface design from HCI works in energy markets and then give an overview of HCI studies on Internet usage and data pricing.

6.3 HCI Lessons from the Energy Market

Like broadband networks, electricity grids suffer from periods of peak demand. To relieve this peak demand, schemes such as time-dependent pricing (TDP) and usage visualization have been practiced in the electricity market, and hence, lessons from it can be useful in the context of data networks. HCI works that have studied users’ energy consumption have ranged from power strips that change color to show the energy used by individual electrical sockets [4] to a large-scale media art installation visualizing energy consumption in an office building [5]. These studies on user-friendly interface design bring out a key trade-off in HCI design, which is whether to visualize the information on energy usage pictorially or numerically.

The energy consumption monitoring application WattBot used color coding as a qualitatively simple way to indicate the usage amounts to the users [6]. However, users have also been found to quantitatively track changes in their energy usage behavior by viewing their usage history [7] and to care about the convenience of monitoring their usage. For example, researchers testing a desktop widget that showed computer energy efficiency found that users appreciated the inconspicuous, easy-access nature of the widget [8]. These lessons on using color coding for qualitative feedback and numerical usage history for quantitative feedback can be transferred to the design of client-side GUIs in broadband pricing applications, as discussed in Section 6.8.3.

6.4 User Psychology in Home Networks

In Sections 6.4.1 and 6.4.2, we discuss related research efforts from HCI researchers on understanding the interaction between users and networks. We first explore works that have focused on studying user psychology and its role in interface designs for developing home network management and control tools and then investigate how users respond to nonmonetary penalty mechanisms (e.g., throttling and capping) imposed by ISPs on their home bandwidth usage. In Section 6.5, we cover lessons learned about user behavior and psychology from HCI research that involved incentives and pricing signals from the network to the end users to modify their usage behavior.

6.4.1 Network Management and QoS Control

Today, bandwidth is becoming a new type of resource that has to be shared and managed by multiple agents, often operating in a distributed framework. This effect can be observed in the case of shared data plans offered by wireless network operators in which a data cap is shared across different members and devices. Similarly, in the case of wired networks, multiple household members and Internet-enabled devices need to share the same broadband access link. This sharing burdens users with the task of “digital housekeeping” in which they have to set up, maintain, and troubleshoot their connectivity as well as understand, monitor, and share limited bandwidth speed or a data cap. HCI and networking researchers have been grappling with this issue [9, 10] and have created a few useful tools that can help household users understand, diagnose, and manage their home bandwidth usage. One such domestic tool for bandwidth management is the Home Watcher.

The Home Watcher project [11] provides an appliance that shows who is using the bandwidth as well as how much each user is using. It also allows users to limit each other's bandwidth (and therefore, their Internet-related activities) from a publicly situated, open-access display. This allowed the researchers to study the social consequences of revealing real-time resource usage and contention in the household.

The Home Watcher field study with 24 occupants in six households revealed several interesting insights. Making bandwidth usage activities visible reinforced existing beliefs about other household occupants’ computing activities and increased people's awareness of their routines and activities, for example, alerting people about who else is online at given times of the day. Thus, Home Watcher raised some questions of user privacy: information about household members’ bandwidth consumption patterns can affect daily routines and relationships. As Home Watcher also allows members to control each other's bandwidth, it revealed the politics of visibility and control. Participants desired more control to limit usage based on the time of day and importance of each user's activities on shared machines. They also wanted mechanisms to let others know when and why they were being throttled. However, there are concerns that existing power hierarchies in the home (e.g., between parents and children) may be threatened by the option to control others’ Internet usage. These consequences and desires for finer-grained control and communication between users are thus relevant to developing usage control applications in wired and wireless bandwidth and data cap sharing environments.

Other HCI researchers have focused on creating interactive visual tools for home network access as well as bandwidth controls. Eden [12] is an “interactive, direct manipulation home network management system” that aims to make it easier for household users to visualize the physical location of different devices in their home and perform membership management, access authentication, network monitoring, and QoS policy setting for bandwidth prioritization. It uses a customized Linux-based wireless router/access point and provides users with personalized visual representations of network devices and settings that can be managed with simple drag-and-drop moves. The Eden system also provides users with an interface element called a badge, which associates particular network control and prioritization properties with individual or groups of devices (e.g., a sites restriction badge, application restriction badge, faster badge, and slower badge). A trial of Eden with 20 participants showed that most users were able to understand the mapping of spatial home boundaries to the logical boundary of their home network and were able to use the badges effectively for parental control and other bandwidth prioritization activities.

While the previous systems worked within the limits of the underlying physical infrastructure by treating it as a preordained fixed set of facilities, the Homework project [13] took a new approach to provide users with access to parts of the home network infrastructure that have traditionally remained closed or hidden. One of the motivating reasons for this was that traditional Internet protocols and architecture were designed for configuration by experienced system administrators. In a home network setting, however, bandwidth access is locally negotiated between household members and requires immediate resolution of problems, so these users sometimes need direct control over their home network. Homework adopts a gateway model to change the home network infrastructure by developing a customized dedicated router that can capture traffic information and make it available to applications. It provides users with information and control over which machines can receive an Internet Protocol (IP) address to join the home network by implementing a Dynamic Host Configuration Protocol (DHCP) server within a customized router. It also implements a Domain Name System (DNS) proxy into the router to give fine-grained control over which devices can access which specific sites. The DNS proxy intercepts domain name resolution requests for sites and drops requests if the requesting device is not allowed to access that domain, thus enabling parental controls. The Homework system was tested with 12 households in the United Kingdom and confirmed many of the observations in other works, including increases in social discord and privacy concerns. Many users were found to like having access to such control over their network infrastructure, but they viewed these features as mechanisms to issue threats rather than substitutes for negotiations within the family about network usage.
Nevertheless, these results show the potential for reshaping the home network by allowing a range of interactive possibilities to manage and control the infrastructure.
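The domain-level filtering performed by a DNS proxy of this kind can be sketched as a simple per-device policy check; the data structures and names below are illustrative, not Homework's actual implementation:

```python
# Illustrative per-device parental-control policy: map each device
# (identified here by MAC address) to the set of domains it may not access.
BLOCKED = {
    "aa:bb:cc:dd:ee:01": {"games.example.com", "video.example.com"},
}

def should_forward(device_mac, queried_domain):
    """Return True if the proxy should forward this DNS query upstream,
    or False if it should drop the request to enforce the policy."""
    blocked = BLOCKED.get(device_mac, set())
    # Block the listed domain itself and any of its subdomains.
    return not any(
        queried_domain == d or queried_domain.endswith("." + d)
        for d in blocked
    )
```

A real gateway would sit in the DNS resolution path, applying a check like this to each intercepted query before relaying it to an upstream resolver.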

Such systems for home network management and control provide an understanding of the features that users want, how such information can be presented to them using effective UIs and how user actions can be mapped onto the underlying physical infrastructure. These studies can inform our design choices for user-friendly applications for wired and wireless network pricing and user-mediated bandwidth prioritization and control.

6.4.2 Implications of Throttling

Many US operators have been throttling user bandwidth in an effort to control network congestion [14]. Only 30% of online Americans are reported to receive the “advertised” speed, even though ISPs charge them higher fees for higher speeds [15]. This issue of throttling, along with the net-neutrality issues involved in creating a tiered Internet service, has led HCI researchers to develop a visual network probe to educate and empower consumers about their data speeds. Kermit [16] is a probe that collects data from users’ routers and calculates bandwidth counts and the online status of each machine. Kermit provides users with a visual interface to estimate their broadband connection speed, identify network bottlenecks, and control and prioritize their usage by displaying real-time and historical information about bandwidth usage for all household devices on the home network.

The field trial of Kermit with 10 households provided many design guidelines for future home broadband tools. It showed that users benefit from visualizations that allow for a less cluttered and personalizable display as more devices are added to the home network, as well as from more context about the network upload, download, bandwidth, and speed they are receiving. Users also need to be able to perceive visual differences to understand how their actions of limiting or prioritizing devices are affecting the network. They also like the options of controlling, scheduling, and limiting actions, as well as tools to remind them to reverse such actions. Thus, designers should incorporate these functionalities into the interface over a system that securely stores and handles the large amounts of traffic measurement data collected. These tools can not only educate and empower consumers but can also help them become broadband speed “watchdogs” to effectively participate in net-neutrality debates.

6.4.3 Response to Capping

Besides throttling and poor performance due to network congestion, users are also facing bandwidth caps imposed by ISPs. How users use and interact with the Internet in a metered bandwidth environment is of interest to web designers, application and service developers, and content providers in customizing their user experience. Many regions, including parts of Africa as well as India and Australia, have had low bandwidth caps on digital subscriber line (DSL) service for years, while most US operators introduced data caps only after 2010. In Africa, Chetty et al. [17] conducted studies of users who have only known capped plans for Internet access to find out how data caps change usage patterns and what strategies users use to cope with the limitations. They found that caps affect how, when, and for how long users access the Internet. For example, in Kenya, Internet users deliberately plan their online activities before going online at Internet cafes [18]. In South Africa, some users avoid high bandwidth-consuming sites at the beginning of the month so as to stretch their caps through the monthly billing cycle. Caps, therefore, effectively force people to put a price on their Internet access, as in usage-based pricing. Additionally, to help navigate their bandwidth caps, users require new visual tools that reveal which applications consume the most bandwidth and the cost incurred from visiting different sites.

Further studies on the effects of data caps by Chetty et al. [19] revealed that home users face difficulties with three types of uncertainties about their bandwidth usage, namely, invisible balances (inability to track usage and leftover bandwidth), mysterious processes (unidentifiable users and background processes), and multiple users. A qualitative study of 12 households living with data caps in South Africa showed that caps limit browsing activities and that users preferred limitless consumption to faster but capped connections. Users generally have inadequate tools and warnings to prevent them from going overboard or to identify freeloaders on their wireless networks. Users also liked parental controls to limit children's access to gaming and multimedia-rich content and were even found to avoid downloading software updates, thus increasing security risks to the network. Moreover, because of the caps, many households were found to create informal data sharing networks; families without caps or with workplace connections would download videos, movies, and music to share with their capped friends and relatives through CDs, DVDs, and so on. Family members would sometimes even visit friends and relatives to get online once their own Internet connection hit its cap. Another interesting behavior was that while users self-censor their usage early in the month, they go on “binge downloads” at the end of the billing cycle to use up most of the data cap because of the “use-it-or-lose-it” nature of today's capped plans, thus potentially increasing network congestion. These behaviors suggest that many usage and pricing assumptions relevant in unlimited bandwidth scenarios may not hold in capped environments, which in turn will affect the design of applications and devices operating under usage-based, tiered, and capped plans.

Researchers have also developed another system, BISMark (Broadband Internet Service BenchMARK) [20], with similar features that helps users not only understand but also actively control how different devices in a home can share the available bandwidth cap. BISMark is a router firmware running on network gateways that gathers usage statistics and reports them to a controller to display usage information. BISMark provides users with visibility into which users and applications are consuming more bandwidth, along with policies to control how the cap is allocated across different users, devices, applications, times of day, and so on. BISMark implements and enforces the user-specified policies via a secure OpenFlow control channel to each gateway device. It also allows users within a single household to trade caps with each other to increase overall resource utilization efficiency.
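The kind of cap-allocation and cap-trading policy BISMark supports can be illustrated with a minimal sketch; the weighting scheme and function names here are invented for illustration and are not BISMark's actual interface:

```python
def allocate_cap(total_cap_gb, weights):
    """Split a household's monthly data cap across devices in proportion
    to user- or operator-specified weights (illustrative policy only)."""
    total_weight = sum(weights.values())
    return {dev: total_cap_gb * w / total_weight for dev, w in weights.items()}

def trade_cap(alloc, giver, taker, amount_gb):
    """Transfer part of one member's share to another, mimicking
    BISMark's cap-trading feature (sketch, not the real interface)."""
    if alloc[giver] < amount_gb:
        raise ValueError("insufficient remaining allocation to trade")
    alloc[giver] -= amount_gb
    alloc[taker] += amount_gb
    return alloc

# A 300-GB household cap split across three devices, followed by one trade:
alloc = allocate_cap(300, {"laptop": 2, "tv": 3, "phone": 1})
alloc = trade_cap(alloc, "tv", "laptop", 25)
```

In a deployed system, the resulting per-device shares would be enforced at the gateway rather than merely recorded in a dictionary.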

6.5 User Psychology in Bandwidth Pricing

Section 6.4 focused on HCI research studying user behavior under unlimited, capped, or bandwidth-throttled data plans without explicitly invoking pricing as a tool to modify user behavior. In this section, we discuss works that have studied user psychology and behavior when the network provides explicit pricing signals to users.

6.5.1 Effects of Variable Pricing

One of the early works on user reaction to pricing was conducted by the Berkeley Internet Demand Experiment (INDEX) [21, 22], in which users were exposed to different usage-based pricing models, including flat-rate and pay-per-byte pricing. INDEX is a prototype to offer differentiated-quality service on demand, with prices that reflect resource cost. The INDEX Control Center is an interface running on a desktop through which customers specify their choices and control access to the Internet. The interface also shows users a “spending meter” that reveals the cost incurred so far in an active session. This meter can be toggled to show the cost incurred so far in the day or in the entire month. The spending meter is updated every minute to keep users aware of their spending behavior in real time. INDEX also allows customers to instantaneously shift between different service qualities, even during an ongoing session, by clicking on the appropriate service quality. The INDEX project exposed the participants to various forms of pricing plans, including volume pricing, volume plus capacity charge, self-selecting tariffs, and so on. The experimental data showed that users are highly price sensitive. Moreover, in asymmetric bandwidth settings in which users can separately choose and pay for bandwidth in the upstream and downstream directions, most users were found to be aware of this asymmetry and to exploit it to reduce their bills. The authors also compared user behavior when costs are determined by megabytes of data transferred versus minutes of connected time and found that connected time goes up dramatically under volume pricing compared with connect-time charges.
The most important findings from the INDEX project are [22] that (i) the demand is very sensitive to price and to quality; (ii) differences in demand among users are persistent and large; and (iii) the commodity form of the service (e.g., transport of traffic by volume or time) has a big impact on demand.
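The observation that connected time rises once users are billed by volume rather than by time can be illustrated with a toy billing function; the rates below are invented for illustration and are not INDEX's actual tariffs:

```python
def session_cost(duration_min, mb_transferred, price_per_mb=0.0,
                 price_per_min=0.0):
    """Cost of one session under volume and/or connect-time charges."""
    return mb_transferred * price_per_mb + duration_min * price_per_min

# The same 60-minute, 30-MB browsing session billed two ways:
volume_bill = session_cost(60, 30, price_per_mb=0.02)   # staying connected is free
time_bill = session_cost(60, 30, price_per_min=0.02)    # transferring bytes is free
```

Under volume pricing, idling while connected costs nothing, which is consistent with INDEX's finding that connect time goes up dramatically when only bytes are charged.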

6.5.2 Effects of Speed-Tier Pricing

Recent research has focused on creating experimental testbeds for mobile data pricing. SpeedGate [23] from AT&T is one such testbed allowing experimentation with different dynamic pricing schemes without accessing or modifying the core network elements (e.g., the policy and charging rules function (PCRF) and the policy and charging enforcement function (PCEF)). SpeedGate allows mobile users from any carrier with any type of smartphone to participate in pricing trials (i.e., carrier independence) and maintains persistent connections to smartphones over a virtual private network (VPN) even as users roam between different wireless networks (3G, 4G/LTE, WiFi). Most importantly, it allows dynamic speed-tier assignments: the testbed can redirect smartphone users to different ports on proxy servers in real time and adjust the maximum available bandwidth per user session (i.e., the QoS level) dynamically based on the data pricing strategies used.

A trial of SpeedGate with a total of 29 users, conducted to understand users’ willingness to pay (WTP) for various speed tiers, revealed many interesting results. It showed that users’ WTP depends on the number of bandwidth-intensive applications they use, their access to WiFi networks, and the speed of their 3G/4G connections, which depends on the coverage in the user's location. In particular, some users showed less willingness to pay for higher speed tiers, mainly because there was no real guarantee that they would indeed receive the maximum speed; hence, their WTP depended on their experience with varying network coverage. The results also suggested that most participants showed a limited dynamic range of WTP for wireless services regardless of the speed tiers offered. A second laboratory study with 12 participants, which measured WTP through ratings of applications (stock, Pandora, YouTube), showed that WTP for higher speed tiers was higher for more bandwidth-intensive applications.

The researchers also conducted a separate trial [24] at the Purdue University campus by developing a smartphone application to study users’ responses to incentives and disincentives and to observe the impact of their psychological characteristics. Their results showed that pricing schemes with probabilistic payments of higher incentive amounts have more positive results compared to schemes with deterministic payments of lower incentive amounts, even when the total payout is statistically similar in both schemes. This result is rather interesting given that customers in wired networks have been known to be risk averse, with a preference for flat-rate fees over the uncertainties involved with variable pricing. This new observation may, therefore, indicate that consumers in resource-constrained mobile networks are more risk seeking and want to save more on their monthly bills and, hence, potentially are more responsive to dynamic pricing plans. Furthermore, their study suggested a positive relationship between compliance with incentives and a user's psychological agreeableness and a negative relationship between compliance and a user's neuroticism (i.e., the extent of negative emotions experienced).

6.5.3 Effects of Dynamic Time-Dependent Pricing

The most basic form of TDP in practice is a two-period plan that charges different rates during the daytime and night time. For example, BSNL in India offers unlimited night time (2–8 am) downloads on monthly data plans of Rs 500 ($10) and above. Another variation of TDP is one in which users can choose the time of day when they want higher network performance. For example, in 2010, the European operator Orange introduced a “Dolphin Plan” for £15 ($23.50 USD) per month that allows unlimited web access during a “happy hour” corresponding to users’ morning commute (8–9 am), lunch break (12 noon to 1 pm), late afternoon break (4–5 pm), or late night (10–11 pm) [25, 26]. This plan lets customers self-sort into different times of the day, temporally distributing the peak demand.

More dynamic forms of TDP have been offered for voice calls in India and Africa. In Africa, MTN Uganda's dynamic tariffing plans have shown that fiscally conscious users do respond to dynamic pricing plans that vary the incentives depending on the network conditions. Their customers can check the price discounts available on their handsets and decide on their usage accordingly. At 4 am, when the network would otherwise be little used, usage was found to be as high as 99% if discounts were offered [27]. These dynamic discounts were even found to modify the temporal usage patterns and peak demand periods. For example, in addition to the normal peak hour at 8 am, a new peak hour at 1 am was observed as more people took advantage of cheaper prices for calls after midnight. Customers in developing countries were thus very price sensitive and were willing to stay up late in the night to save money, hinting at similar potential for TDP for mobile data. Another important effect observed with time-dependent price discounts was that incentives can in fact increase the overall demand because of a “sales-day effect” psychology. For example, when Vodacom introduced a similar TDP scheme in Tanzania, overall call volumes increased by 20–30% in areas where dynamic tariffing was practiced.

Similar levels of effectiveness of TDP for voice calls have been reported by networking researchers from TDP field trials conducted in the UC Berkeley student dormitories using a computer-telephony service [28]. The experimental subjects could use the service to make and receive phone calls from their computers or phones. While the trial did not involve real money, users’ budget constraints were simulated through allocating a limited number of tokens per user per week. This experimental setup was then used to conduct a different pricing experiment each week to understand how prices can be used to change user behavior (e.g., entice them to talk less, talk at another time, or use a lower quality connection). The authors found that with their token scheme as a budget constraint, static pricing policies can be used to influence users’ behaviors, but a simple congestion pricing scheme fails to encourage users to talk less. For example, using TDP, they encouraged users to shift 30% of their calls from the peak to the off-peak hours. Similarly, by using call-duration pricing (a variant of usage-based pricing) and charging a higher rate as a call lasts longer, the authors could encourage three times as many calls (18% instead of 6%) to terminate after a price increase. However, when using a simple congestion pricing scheme that charges a rate depending on the number of people calling (i.e., real-time dynamic congestion pricing), they found that this pricing plan did not get users to terminate their calls earlier. They conjecture that users did not adjust their behavior because they did not know how long the price increases or decreases would last.
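Call-duration pricing of the kind used in the trial charges a higher rate the longer a call lasts; one way to realize such an escalating tariff is sketched below (the block size and rate steps are illustrative, as the trial's exact tariff is not reproduced here):

```python
def call_cost(duration_min, base_rate=1, rate_step=1, block_min=5):
    """Escalating call-duration pricing: the per-minute rate (in tokens)
    rises by `rate_step` for each successive `block_min`-minute block,
    so long calls cost disproportionately more than short ones."""
    cost, rate, remaining = 0, base_rate, duration_min
    while remaining > 0:
        minutes = min(block_min, remaining)
        cost += minutes * rate
        remaining -= minutes
        rate += rate_step
    return cost

call_cost(4)   # a short call is billed entirely at the base rate
call_cost(12)  # 5*1 + 5*2 + 2*3 = 21 tokens
```

The rising marginal price is what gives users an incentive to terminate calls when the rate steps up, matching the behavior observed in the trial.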

These observations hint that to make a congestion pricing scheme effective, the price changes need to be of relatively longer duration to incentivize users to change their behavior. The results from these experiments hint at two key lessons: (i) networking and HCI researchers do need to jointly identify effective ways to communicate the pricing signals back to the users and (ii) for dynamic TDP to change user behavior, users need to be made aware of the prices (and how long they will last) in advance, through plans such as DDTDP.

Our own experience from field trials of the TUBE system [29], which offered dynamic day-ahead TDP in the case of mobile data traffic, confirmed similar user behaviors, as discussed in Section 6.8. Our experiments also found a 30% reduction in the maximum observed peak-to-average usage ratio and a 130% increase in usage, particularly in the discounted, off-peak hours. These results are promising in that they indicate the feasibility of introducing such plans if they are designed carefully to facilitate users’ understanding of and reactions to pricing signals from the network.

6.6 Day-Ahead Dynamic TDP

As alluded to above, DDTDP has certain features that make it an attractive pricing option to explore.

  1. Price Guarantees. Users’ general preference for flat-rate pricing is primarily driven by their preference for certainty in their monthly usage bills. Dynamic real-time pricing is, therefore, not very appealing to users, as it is harder for them to react to changing prices at such a fine timescale [28]. Instead, DDTDP provides prices to users a day in advance, giving them an opportunity to plan their usage ahead of time.3
  2. Advance Notice. In Reference 28, the authors noted that unless users are informed of the prices in advance, they usually do not change their usage behavior, as they do not know for how long the offered prices are going to last. In DDTDP, the offered prices are for specified durations (e.g., hourly rates), and hence, users can determine how long they need to wait for discounted rates and decide accordingly whether to stop using data for noncritical applications.
  3. Granular Price Offerings. In contrast to simple two-period pricing, DDTDP provides operators with the opportunity to vary prices at a finer time granularity (e.g., 1-hour or 30-minute time periods). Many mobile applications that may not wait for half a day can wait for an hour or less (sometimes even without the user being involved, as in the case of downloads, backup, caching, M2M applications, etc.). The operator can take advantage of the inherent time elasticity of demand of such applications to set the time-varying incentives.
  4. Adaptive Pricing. The operator can adapt the prices offered for each day based on the observed user behavior given the incentives offered previously [29]. This daily adaptation of prices introduces a “dynamic” aspect into the pricing plan. It also helps the operator spread out the demand temporally without shifting the peak demand from one time of the day to another, as has been observed to happen under simpler two-period pricing (e.g., all subscribers waiting to make longer calls at 1 am [27]).
  5. User Feedback Mechanism. DDTDP will create a control-feedback loop between the network operator and its clients, with the users having GUIs that will help them better understand and react to the prices on offer. It will thus enable end users to easily and conveniently respond to these time-varying pricing signals. DDTDP divides the congestion management functionalities between the network core and the end user devices, thus making these smart devices a part of the network management solution suite.
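The granular, day-ahead prices described above make simple client-side scheduling possible: a delay-tolerant transfer can be deferred to the most discounted hour within the user's patience window. The discount schedule and function below are illustrative sketches, not the TUBE implementation:

```python
def best_start_hour(discounts, current_hour, max_delay_hours):
    """Pick the start hour (possibly past midnight, hence values > 23)
    with the largest discount among the next `max_delay_hours` hours.
    `discounts` holds the day-ahead discount fraction for hours 0-23."""
    candidates = range(current_hour, current_hour + max_delay_hours + 1)
    return max(candidates, key=lambda h: discounts[h % 24])

# Hypothetical day-ahead schedule: deep discounts at night, none at midday.
discounts = [0.4] * 6 + [0.1] * 2 + [0.0] * 10 + [0.2] * 4 + [0.3] * 2

# A large download requested at 17:00 that can wait up to 8 hours
# is deferred to hour 24, i.e., midnight, to capture the 40% discount.
start = best_start_hour(discounts, 17, 8)
```

Because the full day's prices are published in advance, the device can make this decision locally without polling the network, which is exactly what real-time dynamic pricing cannot offer.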

Despite these advantages of DDTDP, introducing any new pricing scheme must account for its possible impact on the entire Internet ecosystem. In Section 6.7, we, therefore, discuss how various stakeholders of this ecosystem that we interacted with through SDP forums and workshops (www.smartdatapricing.org)—namely ISPs, vendors, content providers, software developers, and regulators—view the potential of DDTDP.

6.7 Perspectives of Internet Ecosystem Stakeholders

Changes in pricing policy can affect multiple stakeholders in the Internet ecosystem, including network operators, consumers, content providers, software developers, and regulators. In this section, we provide the perspectives of these different stakeholders to understand the drivers of current changes inbroadband pricing and the benefits that new plans such as DDTDP can bring.

6.7.1 Operator Perspectives

By 2016, ISPs are expected to carry 18.1 exabytes per month in managed IP traffic.4 While this growth promises more revenue for ISPs, many ISPs are concerned about handling such traffic volumes on their networks, as seen in Comcast's initiative to cap its wired network users at 300 GB per month [14]. Rural local exchange carriers (RLECs) suffer from an especially acute problem: although the cost of providing middle-mile bandwidth to their users has declined over the years, the bandwidth requirements of home users have increased quite sharply, raising the DSL demand that must be carried over the middle mile [30]. Indeed, rural customers’ average home broadband speed is still less than the Federal Communications Commission's (FCC) 4 Mbps target, but the digital expansion required to meet this target is hampered by the high cost of middle-mile upgrades in rural areas [30]. New access pricing mechanisms that bring down middle-mile investment costs by incentivizing users to reduce RLEC peak demand and overprovisioning needs can thus be helpful in bridging the digital divide.

ISPs’ current penalty mechanisms (e.g., overage fees, throttling, and termination) to reduce peak traffic, however, can be harmful to the entire Internet ecosystem and ineffective in bringing down peaks. For example, usage-based pricing does not incentivize users to avoid congested, peak-usage times and thus does not reduce peak usage relative to average usage over the day [31]. As Clark pointed out, “the fundamental problem with simple usage fees is that they impose usage costs on users regardless of whether the network is congested or not” [32]. Even two-period pricing, for example, with separate daytime and nighttime rates, cannot always reduce peak usage, as many applications can be deferred for a few hours but are unlikely to wait for half a day to get discounted rates.

Dynamic day-ahead TDP does not suffer from many of the problems posed by these other pricing plans. Many US ISPs at the National Exchange Carrier Association's Expos in 2011 and 2012 thought that DDTDP could benefit both ISPs and customers, informing us that “Clearly, customers do not want any form of usage measurement or control under traditional definitions. I’d be very interested in seeing a pitch for time-dependent pricing” (Delhi ISP, email communication, September 2011). An Indian telecom executive was more specific, telling us that TDP can benefit ISPs by reducing peak traffic as well as customers by letting them save money during lower price times. Given these advantages to ISPs, it is therefore necessary to examine customer viewpoints on TDP, as user responses to the prices offered will ultimately determine TDP's efficacy.

6.7.2 Consumer Viewpoints

Consumers today face increasing costs of Internet subscriptions as well as harsh penalties from network operators in overage fees. For example, Verizon's move in 2012 to only offer shared data plans for all new consumers increased most customers’ monthly bills [33]. To avoid overage fees from going over their monthly data caps, consumers are increasingly relying on usage-tracking and data compression apps (e.g., Onavo, 3G Watchdog Pro, and DataWiz). These trends are also observed outside the United States; in South Africa, for instance, consumers use ISP-provided usage-tracking tools [19] to stay within their data caps and often plan their Internet activities in advance to avoid wasting time online.

Empowering users to monitor and control their spending on Internet usage is emerging as a new area of research that needs to consider economic incentives and HCI aspects in a holistic manner [34]. For instance, under dynamic TDP users’ interactions with the prices offered can be made more convenient by automatically scheduling applications such as large downloads and cloud synchronization to discounted periods even without user intervention.

To better understand user acceptability of TDP plans, we conducted pretrial surveys in India and the United States. The US surveys were conducted online and by the authors in person in Philadelphia and on college campuses, while the survey in India was conducted by a professional marketing firm in five major cities (Kolkata, Mumbai, Delhi, Madras, and Bangalore), as seen in Figure 6.2. We received 155 responses to the US survey and 546 responses to the survey in India.

In both surveys, we asked respondents for the maximum length of time for which they would delay different types of apps in exchange for a given discount on their data plan. More than 50% told us that they would wait 10 minutes to stream YouTube videos and that they would wait 3–5 hours for file downloads. In India, we also found that individuals without mobile phones hesitated to use mobile data because of the high cost of data plans but did realize the usefulness of having a mobile data plan. Residents in rural areas of the United States express similar sentiments [35], suggesting that lower cost time-dependent broadband access fees would boost user Internet adoption and help to bridge the United States’ digital divide.


Figure 6.2 Consumer survey in India.

6.7.3 Content Provider Considerations

Content providers are becoming increasingly sensitive to users’ desire to limit their data usage and avoid paying overage fees for their Internet usage. These concerns have generally manifested as extra options for users to downgrade their quality of experience in exchange for using less data. For instance, Netflix offers such an option for users to stream lower quality videos [36], as well as another option for iPhone users to stream only over WiFi. ISPs’ measures to penalize demand are thus driving changes not just to their users’ behavior but also to the business plans of content providers.

6.7.4 Application Developer Concerns

Incentive-based pricing plans, such as TDP, require a feedback-control loop between the network backend and client-side devices, with the latter requiring new mobile applications that will support such functionalities. In developing these apps, however, developers must contend with the different mobile platforms prevalent today (i.e., iOS, Android, and Windows Mobile). The varying openness of these platforms can make developing such applications difficult; for example, until the release of iOS 7 in 2013, such pricing apps could not run in the background and could not access the usage of specific apps [37, 38]. The Android and Windows platforms, however, have generally been more open.

New TDP pricing plans will create an opportunity for developers of nonpricing apps to optimize their apps according to changing pricing conditions. For instance, many apps can preload content during lower price times, for example, magazine apps preloading articles and pictures. Such preloading can not only save users money but can also reduce the delay in displaying content when users open an app. Moreover, the temporal flattening of demand eases congestion, allowing users to achieve higher throughput and better performance. Video applications, which tend to have the highest usage volumes, can especially benefit from such usage shifting. Implementing such measures, however, would require open APIs that allow apps to run in the background and access network prices in real time. Apps and systems that accommodate such features can thus help developers provide a better quality of experience and savings for their users.

6.7.5 Policy Evolution

Changes in pricing policies often lead to politically charged debates on network neutrality in the United States, but academics have cautioned that this ongoing debate overlooks the need for service providers to have flexibility in exploring different pricing regimes [39]: “Restricting network providers’ ability to experiment with different protocols may also reduce innovation by foreclosing applications and content that depend on a different network architecture and by dampening the price signals needed to stimulate investment in new applications and content.”

However, more recently, US regulators have accepted “the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost” [40], particularly for ISPs operating in the wireless sector. In “New Rules for an Open Internet,” the former Federal Communications Commission Chairman J. Genachowski explicitly specified that [41]: “The rules also recognize that broadband providers need meaningful flexibility to manage their networks to deal with congestion... And we recognize the importance and value of business-model experimentation.”

6.8 Lessons From Day-Ahead Dynamic TDP Field Trials

6.8.1 Trial Objectives

The TUBE (Time-dependent Usage-based Broadband price Engineering) system is an experimental testbed used to study user responses to day-ahead dynamic time-dependent usage-based pricing of bandwidth in mobile networks. For this trial, a prototype was developed that can compute optimized prices in the ISP backend (as described in References 29 and 42). These prices were displayed to the users through client-side UIs, which also let them monitor, control, and modify their behavior in response to the offered prices. The key steps in carrying out the trial were the following.

  • Create a “virtual” ISP (i.e., bandwidth reseller) for the purpose of the trial by operating as a middleman between a real mobile operator and its participating clients, who are offered time-varying prices.
  • Develop a communication infrastructure with server-side (ISP) implementation for computing and sending price information to a client-side app running on client (end user) mobile devices.
  • Conduct pretrial surveys and focus group studies to incorporate feedback in the design of client-side GUIs.
  • Validate results based on qualitative posttrial debriefings and quantitative results collected during the trial.

6.8.2 Trial Structure

6.8.2.1 Pretrial Focus Groups

Focus group studies conducted in the design phase helped us incorporate feedback from users to create intuitive and simple-to-use UIs. During these five-person sessions conducted at the Princeton University campus, we identified several functionality goals for the TDP app and solicited opinions on our app designs and the form factor of the devices used. We also received feedback on the designs from leading industry experts at AT&T, MTA, NECA, Bell Labs, and Reliance Communications. This iterative process of seeking user feedback and modifying the interfaces accordingly continued for a 4-month period to create a prototype of the mobile application for the client-side devices. Many of the functionalities that we incorporated had been found to be desirable by users in the context of energy markets and home network management solutions (e.g., usage monitoring and history, color coding of information, parental control), as described in Section 6.4.

These pretrial studies suggested that participants liked the idea of day-ahead dynamic time-varying prices because it empowers them to choose not only how much but also when to use their device so as to save on their monthly bills. The focus group studies provided insights about the following features of DDTDP.

  • Time Granularity. Participants typically preferred TDP prices that varied at an hourly granularity rather than on the order of minutes (e.g., real time). Real-time price changes are harder for users to track; moreover, uncertainties in the duration of the offered prices make it harder for users to adapt to price changes [28].
  • Time Horizon. The idea of “day-ahead” prices, as practiced in some electricity markets, was found to be appealing as it allows for advance planning of usage. The feasibility of this behavior is supported by reports that price-conscious Internet cafe users in Kenya plan their online activities before going online [18].
  • Information Visualization. Our participants found it useful to view superimposed usage and price history, so as to determine when, and how much of, their data cap they had used.
  • Color Coding. The participants found viewing the TDP price discounts in a color-coded form (e.g., red (<10%), orange (10–19%), yellow (20–29%), and green (≥30%)) to be an intuitive way to communicate the prices. This idea for color-coding TDP prices or discounts was inspired by the traffic light signaling system and previous works on electricity sockets [4, 6]. The price discounts were also made available as numerical values to accommodate users who wished to see more details and as a secondary signal for color-blind users.
  • Application Control. The idea of a parental control feature to block certain apps at certain times of the day when prices are high was well received. Moreover, participants also wanted information on the cost of each application's data usage or, alternatively, a display showing the top bandwidth- (and budget-) consuming applications on their device.

This feedback from pretrial focus group studies was incorporated in the design of the mobile TDP application.
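The color bands above amount to a simple threshold lookup. The sketch below (in Python, purely for illustration) encodes that mapping; treating the green band as 30% and above is our reading, since the focus-group description leaves the 29–30% boundary implicit.

```python
def discount_color(discount_pct):
    """Map a TDP price discount (percent) to an indicator color.

    Thresholds follow the focus-group scheme: red (<10%),
    orange (10-19%), yellow (20-29%), green (assumed >= 30%).
    """
    if discount_pct < 10:
        return "red"       # little or no discount
    if discount_pct < 20:
        return "orange"
    if discount_pct < 30:
        return "yellow"
    return "green"         # deep discount: a good time to use data

# Labeling a hypothetical day-ahead discount schedule (hour -> percent):
schedule = {14: 5, 15: 12, 16: 25, 17: 40}
colors = {hour: discount_color(d) for hour, d in schedule.items()}
```

The same numerical discounts were shown alongside the colors in the trial app, serving users who wanted more detail and acting as a secondary signal for color-blind users.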

6.8.2.2 Participants, Billing, and Timeline

To conduct this day-ahead TDP trial, we recruited 10 primary participants (the “primary participant” paid the monthly bills) in the Princeton, New Jersey, area through word of mouth and email lists. The primary trial participants were each given a jailbroken iPad2 with a 2-GB data plan from AT&T and our client-side app installed.5 We set the devices’ WiFi connectivity to “off” by default, because the goal of the trial was to determine whether 3G traffic could be made to wait for discounted periods when economic incentives are offered. However, to avoid skewing participants’ usage patterns, we also asked participants to use the devices as they normally would, with the result that many of the primary participants shared the device with their family members.

The iPads were the only tablets owned by the participating families, and they were actively engaged with the iPads throughout the trial. Our trial setup did not impede the devices’ mobility in any way (Section 6.8.2.3 discusses the setup implemented with AT&T to realize this). The demographic information of the primary participants and any secondary users who shared the device is given in Table 6.2.

During the trial period, we paid all participants’ monthly AT&T bills for the iPad's 2-GB data plan and overage fees. Participants paid us according to a DDTDP plan with time-varying price discounts on a baseline usage-based fee of $10/GB. Thus, we effectively became a resale ISP of AT&T's connectivity for these participants, as illustrated in Figure 6.3. To avoid creating multiple sources of incentives, we did not give any additional monetary incentive for participating in the trial.

Table 6.2 Demographics of the DDTDP Trial Participants

Participant  Age & Gender  Occupation          Secondary Participants
P1           Female, 40    Personal assistant  None
P2           Male, 33      Graduate student    None
P3           Female, 34    Office staff        Female, 3
P4           Female, 50    Accounts manager    Female, 26 (waitress); male, 23 (waiter);
                                               male, 18 (student); female, 5
P5           Female, 21    Student             None
P6           Female, 48    Admin. assistant    Female, 22 (restaurant cook)
P7           Female, 35    Office manager      Male, 41 (software engineer)
P8           Male, 31      Graduate student    None
P9           Female, 30    HR manager          Male, 32 (theater technician)
P10          Female, 43    Office support      Male, 43 (systems technician); male, 14;
                                               female, 8

Figure 6.3 Money flow diagram for the DDTDP trial.

We performed extensive in-laboratory testing of the TDP system between April and July 2011. The trial was conducted from July 2011 to April 2012, during which we visited participants three to four times. In the first phase of the trial, from July to September 2011, we handed out the iPads without the TDP app6 installed to let our participants familiarize themselves with the device and to monitor their pre-DDTDP usage. In the second phase of the trial, from October to November 2011, we installed the DDTDP client app on participants’ iPads and provided basic operational instructions. We also began offering DDTDP price discounts. From December 2011 to January 2012, we tested participants’ changes in usage in response to different types of price displays. From February to March 2012, we continued to offer DDTDP, visiting the participants again in April for a posttrial debriefing to obtain feedback on their experience during the trial.

Three researchers coded the audio data and transcriptions of all participant interviews. The excerpts presented here are representative of the participants’ interview responses and represent mutually agreed upon themes that emerged from their answers. Where applicable, we support our qualitative findings with observed data on participants’ quantitative usage patterns and pricing history.

6.8.2.3 Implementation and Setup

Becoming a resale ISP required architecting and prototyping a system with two main interfaces: one with the real network operator (here, AT&T) and one with trial participants’ devices. The first interface consisted of an APN (Access Point Name) between AT&T's mobile network core and our laboratory server facilities, which redirected trial participants’ uplink and downlink data traffic to our DNS-enabled and network address translation (NAT)-enabled servers before rerouting it back to the Internet. Thus, participants’ data traffic, but not voice traffic, flowed through our laboratory servers, as illustrated in Figure 6.4.


Figure 6.4 Schematic of the data flow in the DDTDP trial setup.

This APN setup allowed our server to monitor participants’ aggregate usage at different times and to adjust the future offered prices accordingly. We assigned a unique IP address to each device and created a Netfilter rule to measure its usage and store it in a MySQL database. Each hour, the price for the twenty-fourth hour ahead was calculated using a MATLAB-based prediction algorithm that estimated future demand as a function of the prices offered; all prices were thus announced 24 hours in advance (i.e., DDTDP). To interface with the apps on users’ devices, the client-side apps were configured to pull the new prices each hour, so that at any time, a participant could launch our TDP app to see the prices for the next 24 hours and plan their usage accordingly. The server supported JavaScript Object Notation (JSON) over Hypertext Transfer Protocol (HTTP) and could exchange information with any device with browsing capability. All server modules were implemented on a Linux platform with an Intel Xeon 2.0-GHz CPU and 8 GB of RAM.
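As a rough, self-contained sketch of this server-side loop (the real system used Netfilter counters, a MySQL store, and a MATLAB prediction algorithm; the in-memory class and the naive “discount the valleys” rule below are entirely our illustration):

```python
from collections import defaultdict

class PriceServer:
    """Toy stand-in for the trial's server-side accounting and pricing.

    Usage is tallied per hour and per device IP; day-ahead discounts
    are then derived from observed load, discounting low-load hours
    more heavily. The discount formula is our placeholder, not the
    trial's actual prediction algorithm.
    """

    def __init__(self):
        # hour -> device IP -> bytes used
        self.usage = defaultdict(lambda: defaultdict(int))

    def record(self, hour, ip, nbytes):
        """Tally bytes for one device in one hour (Netfilter's role)."""
        self.usage[hour][ip] += nbytes

    def aggregate(self, hour):
        """Total bytes across all devices in a given hour."""
        return sum(self.usage[hour].values())

    def day_ahead_discounts(self, history_hours):
        """Announce a percent discount for each hour, 24 h in advance,
        naively predicting that tomorrow's load resembles today's."""
        peak = max((self.aggregate(h) for h in history_hours), default=1) or 1
        return {h + 24: round(40 * (1 - self.aggregate(h) / peak))
                for h in history_hours}
```

In the trial, clients pulled the resulting 24-hour price table each hour over JSON/HTTP; here the table is simply returned as a dict keyed by hour.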

The client-side (iPad) system module is the mobile application (DDTDP app) with which trial participants interact. In order to bypass iOS platform restrictions that constrained many of the possible TDP functionalities, we jailbroke the iPads to gain root access to the operating system. The jailbreaking allowed us to implement such features as monitoring per-application usage volumes, blocking apps on users’ request, and displaying a price indicator next to the battery indicator on the iPad home screen. We tracked each application's usage by hooking internal functions and running a daemon process to send requests and display the DDTDP prices, as well as to block apps if needed. The app increased the iPads’ battery consumption by no more than 4% compared to typical usage without the DDTDP app.

6.8.3 Application User Interface

We next discuss the UI features of the DDTDP app, grouped by the following functionality goals.

  1. Data Monitoring. Finding intuitive ways to communicate price and usage information to users, educating them about their data consumption behavior.
  2. Consumer Empowerment. Identifying information that will empower consumers to have better control over their usage.
  3. Automation. Understanding if and when consumers would like to automate their usage and scheduling decisions using an “autopilot” mode of operation.

We first focus on the monitoring features, whose purpose is to display price and usage information.

  1. Indicator Bar. Participants could conveniently check the current price by glancing at an indicator bar next to the battery icon on the top icon tray of the iPad's home screen, as shown in Figure 6.5.7 The indicator bar is color coded, with red denoting discounts below 10% and green denoting discounts above 30%, and shows the percentage discount from the baseline price ($10/GB in this trial).
  2. Price Information. The home screen of the DDTDP app displays price and usage information in a split screen, as shown in Figure 6.6a. The user's current DDTDP bill for the month is displayed on top, followed by a scrollable list of color-coded percentage price discounts and usage information. The list includes data on the past and future 24 hours, highlighting the current hour's information; clicking on the entry for an hour toggles the price display between $/GB prices and percentage discounts from the $10/GB baseline. The bottom of the main screen displays the future prices as an easily understandable color-coded bar graph, helping users compare the discounts in different hours. Using the navigation buttons at the bottom of the screen, users can instead view the past price information, overlaid with their usage history by day, week, or month; and a list of the apps that consume the most data.
  3. Usage History. As described earlier, users can monitor their usage from the list and graphs on the DDTDP app's main screen. Interested users can also turn the iPad horizontally to activate a larger landscape view of their usage, as seen in Figure 6.6b. Users can swipe left through their superimposed price and usage history, allowing them to quickly determine how much data they consumed at different price levels and at different times. This data can be viewed on a daily, weekly, or monthly granularity.
  4. Top-Five Apps. In the pretrial focus groups, most people did not know which apps consumed the most data. We thus provided an option on the bottom half of the app's main screen, shown in Figure 6.6c, that displays the data usage volume of the five apps that use the most data.
  5. High Usage Notifications. We tested users’ responsiveness to pop-up notifications of high prices and accompanying usage-deferral recommendations (e.g., notices of future low-price times).

Figure 6.5 Color-coded price discount indicator bar on the iPad home screen for easy reference.


Figure 6.6 Screen views of information displayed by the app. The main screen (a) is split to show a scrollable history list on top and, below, either the color-coded future prices or (c) the top five bandwidth-consuming apps. The landscape view (b) shows the prices and usage superimposed on the same graph.

We next discuss features designed to empower users in scheduling their usage according to the DDTDP prices.

  1. Parental Control. As many of the primary participants shared their devices with younger secondary users, we provided a blocking feature to help the primary participant control others’ usage. The user can select an app from a list of all installed apps and then select times (on a half-hour granularity) in which that app should be blocked from using data, as seen in Figure 6.7a. For instance, participants could prevent their children from streaming media-rich content with high data volumes at high price times.
  2. Budget Adjustment. Users could set a monthly budget for each billing cycle. As seen in Figure 6.7b, they could further control their spending by adjusting weekly budget sliders to divide their monthly budget among the weeks of their monthly billing cycle. By default, the budget was divided equally among the weeks of the billing cycle, with progressive adjustment of the remaining monthly budget across the remaining weeks as the month progresses. Users could also add money to their monthly budget.
  3. App-Delay Sensitivity Settings. Users could set different delay sensitivities for each installed app by adjusting sliders between “high” and “low” tolerance, as shown in Figure 6.7c. When combined with automated application scheduling, this feature allowed the scheduling to account for individual delay preferences while scheduling enough data in low-price periods to keep the user within the specified weekly budget.
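The default weekly split and its progressive adjustment can be sketched as follows. The function name and signature are hypothetical, and spreading the unspent remainder evenly over the remaining weeks is our reading of the “progressive adjustment” described above.

```python
def weekly_budgets(monthly_budget, spent_by_week, weeks=4):
    """Divide a monthly budget across the weeks of a billing cycle.

    spent_by_week holds the amounts actually spent in completed weeks;
    the unspent remainder is split evenly over the remaining weeks.
    """
    remaining = monthly_budget - sum(spent_by_week)
    left = weeks - len(spent_by_week)
    per_week = remaining / left if left else 0.0
    return list(spent_by_week) + [per_week] * left
```

For example, a user who overspends in week 1 sees the later weekly sliders shrink automatically, while adding money to the monthly budget raises them again.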

To fully exploit the potential of user-specified delay sensitivities and the available budget, we also provided an optional autopilot mode of operation. In this mode, the TDP app computed a usage schedule for different apps to keep users within their specified budgets, based on the announced prices for the next day and their predicted usage of different apps. The autopilot mode deferred, as far as possible, only apps marked as delay insensitive; it provided warnings, blocked apps until their scheduled periods, and displayed the schedule to users on a dedicated schedule screen.
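A minimal greedy sketch of such an autopilot scheduler: delay-sensitive apps run “now,” the rest are pushed to the cheapest announced hours, and the resulting bill is checked against the budget. This is our illustration of the idea, not the trial's actual algorithm, and all names are hypothetical.

```python
def autopilot_schedule(apps, prices, budget):
    """Greedy autopilot sketch.

    apps:   dict app name -> (GB needed, delay_sensitive flag)
    prices: dict hour -> $/GB over the announced day-ahead window
    Returns (schedule, cost, over_budget).
    """
    now = min(prices)                          # treat the earliest hour as "now"
    cheapest = iter(sorted(prices, key=prices.get))
    schedule, cost = {}, 0.0
    for app, (gb, sensitive) in apps.items():
        hour = now if sensitive else next(cheapest)  # defer insensitive apps
        schedule[app] = hour
        cost += gb * prices[hour]
    return schedule, cost, cost > budget
```

A real scheduler would also weigh the per-app delay-tolerance sliders rather than treating sensitivity as a binary flag, but the budget check and cheapest-hour assignment capture the core trade-off.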


Figure 6.7 Screen views of different usage control features in the app. Users can (a) manually schedule apps for different times, (b) allocate their budget to different weeks, and (c) adjust delay tolerances for different apps for autopilot scheduling.

6.8.4 Trial Results

We now discuss users’ reactions to the DDTDP prices offered in our trial setup. We evaluated the trial results in two ways: quantitatively, by measuring the change in usage in response to prices offered, and qualitatively, by conducting posttrial interviews with the trial participants. In these interviews, we sought to understand participants’ general attitudes toward pricing before questioning them about their experience with the trial app and how the trial affected their opinions of TDP's potential impact on the Internet ecosystem.

6.8.4.1 Opinion on Smart Devices

About 70% of our participants had used some form of smart device before (e.g., Android phones and iPhones) and were positive about the convenience and usefulness of these devices. One participant (P7) described how the iPad is very useful for leisure-time activities: “in the morning when I exercise I just have Netflix going,” while another (P3) told us about the educational purpose that it serves for her toddler: “She loves apps, and now she chooses her own YouTube videos, like nursery rhymes and stuff.” But in spite of these benefits, not everyone views such smart devices as a necessary gadget: “Do I like having them, are they convenient? Absolutely, but it is definitely a luxury item.” (P6).

6.8.4.2 Understanding of Data Plans and Usage

Before participating in our DDTDP trial, most participants were relatively unaware of their data usage and data plans. About 80% reported that they did not know how much data they used or which apps consumed the most data. A few, however, had some idea: “I would think any kind of video and music sites” (P6) and “I assume that anything that's running constantly or streaming, you know, like a movie is going to use more than just browsing the Internet I would think” (P9). Another participant knew that her tablet consumed more data than her phone: “I also know I've used the iPad a lot more liberally for things like that [TV shows] than I would on my iPhone.”

Despite their relative lack of knowledge about mobile data usage, most of our participants were concerned about recent data plan changes. When asked whether current data plans are reasonable, participants expressed concern over ISPs’ shift from unlimited flat-rate plans to tiered plans with $10/GB overages. One of them explained: “To me, I think it's a fair price, but I know when my children uses it, if they use Netflix or something, then it gets to be too expensive” (P10). These pricing changes prompted some participants to pay more attention to their data usage: “I didn't really have to pay attention to that until this iPad came up because the data plan I have for the family phones is unlimited data for one price.”

6.8.4.3 Effectiveness of DDTDP

DDTDP's goal is to use economic incentives to even out network usage over the day by exploiting the different degrees of time sensitivity of different applications. Such shifting can greatly benefit ISPs, as traffic data from our partner ISPs shows that the network traffic can vary by a factor of 10 over the day and by a factor of 2 within a few minutes. However, its success depends on users’ willingness to pay attention and respond to the prices offered. Our trial results indicate that such economic incentives when offered through intuitive GUIs can indeed help in modifying usage behavior and shifting traffic to off-peak hours. Moreover, the incentives offered can further improve spectrum utilization in these off-peak hours by generating additional new demand.

All of our participants thought DDTDP would be viable “as long as the interface is simple to use” (P2). Some of our participants, however, were concerned about planning for future prices: participant P3, a mother of a toddler, thought that “[I]t's definitely useful, but for families it really needs to be predictable.” Day-ahead pricing, in which prices for each hour are announced 1 day in advance, can provide some predictability, which was enough for some participants. Price-sensitive users “made a conscious effort to look for the discounts” (P10), adapting their usage in order to save money: “I think it is a nice option to have where I can get a discount per month depending on when I use it, and I can schedule my day that way” (P10). Moreover, they did not feel that paying attention to the prices was overly burdensome: “I go to my bank account everyday, so I would think that this would just become a natural thing” (P6). In fact, the DDTDP app helped users avoid unnecessary usage at high price periods: “Yes, and [I] was less likely to goof off and waste more time and data” (P5).

By measuring usage with DDTDP, we found that participants’ efforts to pay attention to the prices do flatten out the traffic level over the day. To measure the reduction in peak traffic, we calculated the peak-to-average ratio (PAR), that is, the ratio of usage in the peak period to average per-period usage, for each day. We then compared the PARs from a control period of pre-DDTDP, that is, no variation in prices over the day, to those observed with DDTDP. We find that DDTDP reduces the PAR from pre-DDTDP usage, with a 30% reduction in the maximum observed PAR: Figure 6.8a shows the distribution of peak-to-average ratios on pre-DDTDP and DDTDP days of the trial.
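Concretely, the PAR for one day of per-period (e.g., hourly) usage is just the busiest period divided by the mean period:

```python
def peak_to_average_ratio(usage):
    """PAR for one day: usage in the peak period divided by the
    average per-period usage. usage is a list of per-period volumes."""
    avg = sum(usage) / len(usage)
    return max(usage) / avg if avg else float("inf")
```

A perfectly flat day gives a PAR of 1; the 30% reduction reported above is in the maximum of these daily ratios across the trial days.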

In addition to reducing the PAR, we observed that participants’ awareness of high discounts induced a “sales-day” effect among several participants; that is, they started using more than they otherwise would have. Figure 6.8b shows the average daily traffic for pre-DDTDP and DDTDP traffic; we see that traffic with DDTDP is much larger than pre-DDTDP, with a 130% overall increase. Combined with Figure 6.8a's result that the PAR decreases, we conclude that participants consumed much more traffic during off-peak periods when DDTDP discounts were offered. When asked about this, participant P9 told us “[laughs] Kind of! But that also goes toward my personality of if it's on sale I must buy it!” This result benefits ISPs: with DDTDP, they can offer discounts to shift traffic from peak to off-peak periods, as well as increase demand in off-peak hours. The result is mutually beneficial, as ISPs benefit from “valley filling” and users gain by consuming more at the discounted rates in off-peak times.

We also measured the types of applications shifted to lower price times. Participants generally delayed streaming and media services: Figure 6.8c shows the distribution of application usage among movies, web traffic, downloads, music, news and magazines, and other types of apps. We see a very significant increase in movie traffic with TDP, and smaller increases in the “music” and “other” categories. When asked about the apps they were willing to shift, participants’ responses were consistent with these measurements, with most participants willing to defer social networking and streaming. Participant P6 gave the examples of “Social networking, emailing, or Skyping, I would definitely wait for that,” while participant P9 said that “If I'm surfing the web, I'm doing it now, I am not waiting, but for my movie usage it's kind of specific, so yeah.” Similarly, participant P1 was willing to shift nonurgent activities: “If I'm trying to look up directions [GPS], it probably couldn't wait. But if I were trying to research on buying something and the discounts weren't good, I could wait a couple of hours to do it later.”


Figure 6.8 Usage statistics for TIP and TDP. (a) Peak-to-average ratios, (b) average daily usage, and (c) app distribution.

We found that TDP helps address consumers’ concerns about the rising cost of mobile data plans; although many users are not fully aware of their monthly usage, they do want to save on their data plans. Given the right economic incentives, users are willing to modify their usage behavior in order to save money, which can also lower network congestion for ISPs. Our DDTDP trial yielded “flatter” traffic usage over the day, with lower peaks and higher off-peak demand.

6.8.4.4 Effectiveness of the User Interface

We next examine (i) the effectiveness of different features of our DDTDP app's UI, (ii) the relative effectiveness of different ways of displaying prices to users (e.g., color coding vs numerical values), and (iii) the effect of showing users their usage history. Although the app's main function was to display prices, our participants viewed the app as a more general informational tool: “To me, this application educates you” (P4). Participants unanimously agreed on the usefulness of three key features: (i) a price indicator bar on the iPad's home screen, (ii) color coding of the price discounts, and (iii) a display of usage history over time.

We first consider participants’ responses to the “passive” price indicator displays (i.e., when no explicit warning or notification message is sent to the users). For this part of the trial, we used a color-coding scheme with orange for price discounts of less than 29% and green for discounts of 30% or more. To quantify the actual effects of the price indicator, we analyzed the usage in periods in which we intentionally varied the discounts by 1% to switch the color from orange to green. We defined two experimental stages and three period types: in the first stage, all period types had a 10% discount and an orange price indicator. In the second stage, Type 1 periods switched to a higher (28%) discount with an orange indicator, Type 2 periods to a slightly higher (30%) discount with a green indicator, and Type 3 periods to a lower (9%) discount with an orange indicator. We then compared Type 2 and Type 3 periods to Type 1 periods to see whether users paid attention to the numerical discount values or just the color coding. Table 6.3 summarizes the three period types.

Table 6.3 Period Types in the Price Display Experiment

Period Type  Second-Stage Color  Second-Stage Discount, %
1            Orange              28
2            Green               30
3            Orange              9

Figure 6.9a plots each participant's average change in usage in Type 2 versus Type 1 periods. The price discounts in both periods increased by comparable amounts in the second experiment stage (28% and 30% respectively), but the indicator color changed from orange to green only for Type 2 periods. We found that participants increased their usage more when the color changed (Type 2 periods): each point on the graph in Figure 6.9a represents one participant, and the size of the point is proportional to their usage amount. We see that most participants’ data points lie above the reference line, so that usage increased more (or decreased less) in Type 2 when compared to Type 1 periods. Wilcoxon's signed rank test yields only a 9.8% probability that the observed differences between the percent changes come from a symmetric distribution about zero. These results indicate that participants changed their usage because of the color of the indicator, despite the comparable numerical discounts.
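For readers who want to reproduce this style of analysis without a statistics package, the sketch below runs an exact sign-flip permutation test, a from-scratch cousin of the Wilcoxon test (it uses the raw sum rather than signed ranks): under symmetry about zero, each difference is equally likely to be positive or negative, and the p-value is the share of sign assignments at least as extreme as the observed one.

```python
from itertools import product

def sign_flip_pvalue(diffs):
    """Exact two-sided sign-flip test of symmetry about zero.

    Enumerates all 2^n sign assignments of |diffs| and reports the
    fraction whose |sum| is at least the observed |sum(diffs)|.
    Only feasible for small n (the trial had 10 participants).
    """
    observed = abs(sum(diffs))
    hits = total = 0
    for signs in product((-1, 1), repeat=len(diffs)):
        total += 1
        if abs(sum(s * abs(d) for s, d in zip(signs, diffs))) >= observed:
            hits += 1
    return hits / total
```

Applied to the per-participant differences in percent usage change between Type 2 and Type 1 periods, a small p-value (such as the 9.8% reported above for the Wilcoxon statistic) suggests the color change, not chance, drove the asymmetry.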

Figure 6.9b shows the average percentage change in usage for each user in Type 1 periods versus Type 3 periods. For both period types, the color did not change, but the discount in Type 1 periods increased significantly (i.e., 28% in Type 1 versus 9% in Type 3 periods). Thus, if participants had reacted to the numerical price discounts, we would expect usage to increase in Type 1 and decrease in Type 3 periods: participants’ data points should lie above the reference line. However, participants’ changes in usage were comparable for both periods, with only half of users’ data points above the reference line in Figure 6.9b. Some participants increased their usage dramatically in both types of periods, while most decreased their usage in both types of periods. Thus, participants were mostly agnostic to the numerical values of the discounts, likely because they only paid attention to the indicator color, which remained the same for both types of periods.

Our posttrial interviews confirmed that the observed usage changes represent conscious decision making on the part of the trial participants. Participants generally did pay attention to the price indicator color and chart of future prices: participant P4 “paid attention to the little color icon that you guys have up on the top, and I tried to tell everyone not to use it unless it was in green [high-discount periods]... I do look at the chart and see at what point the discounts might come in,” while participant P7 checked the price indicator before using data: “I would see if it's a good color for me” (P7). A few participants also checked the numerical discount: “I do look at the percentage. I will go to the app and just see where it's gonna be over the next few hours” (P10).

In a separate phase of the trial, we explicitly reminded participants of high price periods with pop-up notifications that were sent every 10 min if participants used a lot of data in high price periods. While many participants liked being notified, for example, “As a bill-payer I would think it is very useful” (P4), others thought that the notifications interfered with using their devices: “the pop-ups were a bit annoying. I think it would be nice if the warning messages appear on the top of the screen” (P2). The notifications, however, were generally successful: 90% of participants either decreased or did not increase their usage after one notification. About 60–80% of participants decreased their usage in response to multiple consecutive notifications. Figure 6.10 shows the distribution of observed changes in usage in response to the notifications sent; multiple notifications were only sent if participants consumed more than 2 MB of data in consecutive 10-min intervals. We see that only a small minority of participants increased their usage after receiving a notification.
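The trigger rule can be sketched in a few lines; the function name is ours, and the 2-MB threshold per 10-min interval is the one stated above.

```python
def notify_intervals(usage_mb, high_price, threshold_mb=2.0):
    """Flag the 10-min intervals that trigger a high-usage pop-up.

    An interval notifies when it falls in a high-price period and its
    usage exceeds the threshold; consecutive flagged intervals yield
    the repeated notifications discussed above.
    """
    return [high and usage > threshold_mb
            for usage, high in zip(usage_mb, high_price)]
```

Runs of True in the result correspond to the "multiple consecutive notifications" whose effect on usage is plotted in Figure 6.10.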


Figure 6.9 Changes in usage for period types 1, 2, and 3. (a) Period types 1 and 2 and (b) period types 1 and 3.

Some participants suggested that prediction-based pop-ups with usage suggestions, for example, warnings such as “[i]f you watched this movie, you would be over your usage quota” (P9) would be more useful than reminders of high prices. Even alerting users to future low price times would be useful, as that would allow users to plan ahead: “I think it's a great idea, when the iPad would say ‘If you wait for half an hour, you can have 25% discount.’ I thought it was incredibly useful for my decision” (P6).


Figure 6.10 Changes in participant usage volumes in response to notifications sent.

In fact, participants not only paid attention to the prices offered but also used the usage history displays to track their usage and amount spent over time. A mother of three explained how the app allowed her to monitor and control her children's usage when she was at work: “I can go back and see when they were using it!” (P4). She also gave an anecdote on having used this feature to check whether people at an iPad repair shop had used her data plan while fixing her broken screen. Another user (P3) suggested including a usage indicator on the home screen, similar to the price indicator, showing the remaining monthly budget. Our results thus imply that users find icons on this top task bar convenient for monitoring the current price and amount spent but prefer notifications to contain explicit suggestions for better ways to use mobile data.

6.8.4.5 Empowering User Choice

We finally turn to app features that help users decide how to use mobile data so as to save money while still enjoying the use of their device. In addition to the price displays and usage history discussed earlier, we showed users which apps consumed the most data, allowing them to better choose which apps to stop using so as to use less data. We also allowed users to control the usage of secondary users such as children with an explicit scheduling functionality and to control their own usage with an autopilot feature that automatically delayed time-insensitive apps to lower price periods.

We displayed the data used by the top five data-consuming apps to educate our participants about which apps use the most data. Participants found this information useful for keeping track of their usage and controlling how much total data they consumed: “Absolutely. I mean it would be nice if there were an indicator on the app icon themselves.” Even the displays within our TDP app increased their awareness of per-app data usage: “usually it's always the same type of thing—the Internet, YouTube, Facebook, those type of things” (P4). This awareness prompted participant P4 to pay more attention to her usage in general, including that not shown on our app: “If I connect to my email and work and I have a thousand emails in my mailbox and it downloads all that stuff that's a big consumer as well.”

While looking at their usage history allowed participants to monitor the usage of secondary users, many participants wanted to control this usage as well. Most were concerned about their children's usage; as participant P6 stated: “I don't think they realize how long they've been on there.” Participants liked the idea of explicitly blocking certain apps at certain times in order to curtail their children's usage: “I think there's a lot of parents that would use it” (P4). Indeed, as discussed in Section 6.4, some studies have found that parents liked the idea of controlling their children's data usage even on wired networks [11–13].

Finally, we allowed our participants to control their own usage with an “autopilot mode” in which the DDTDP app scheduled usage based on users’ predicted behavior, their usage history, and the future prices. Although this autopilot mode was designed to help users save money by delaying the least time-sensitive apps, our participants preferred to manually control their usage: “I like to control my device manually” (P2). Some were hesitant to use the autopilot mode because of concerns about the loss of control: “It's really annoying to be put in the autopilot mode. I would not want a computer to tell me what to use and when to use it” (P3). As participant P8 pointed out, an autopilot mode would only be accepted by users “if it is sufficiently intelligent to figure out the importance of each application's usage.” In fact, although our app allowed users to input and adjust different apps’ delay sensitivities within the scheduling algorithm, participants were still concerned that automatic scheduling would interfere with their desired usage: “[T]hat might not be the way you want it scheduled” (P4). Participants were more accepting of automated scheduling of noncritical apps: “If you could put it in a queue and let the system figure out when the cheapest time to do it is” (P7). Effectively implementing TDP thus requires HCI researchers to balance users’ desire to control their usage decisions with their willingness to automatically schedule noncritical apps.

6.9 Discussion and Conclusions

Our investigations reveal that consumers today are concerned about the increasing cost of data plans but are neither fully aware of their monthly usage nor equipped to effectively manage and control it to stay within their budget. DDTDP offers a potential “win-win” for both network operators and consumers, but its efficacy rests on whether and how it can modify user behavior. We develop a system that offers DDTDP and conduct a trial that provides an initial validation of DDTDP's promise as a future direction of Internet pricing, following similar trends in other markets such as electricity and transport networks. We find that when economic incentives are offered to users in a simple and comprehensible format, many are willing to act on such feedback signals from the network and change their behavior for noncritical mobile applications.

In our trial, the maximum PAR of data demand decreased by 30% while the total traffic increased by up to 130% with DDTDP, mostly because of the filling up of discounted valley periods from the sales-day effect. This observation bolsters the intuition that DDTDP can lead to a “flatter” demand profile with lower peaks and better spectrum utilization in off-peak hours. Additionally, the availability of such a client-side DDTDP application not only enables users to make deferral decisions but also helps them to self-educate, monitor, and control their usage and expenses. Our trial participants liked UI information displays, such as the color-coded price discounts, the indicator bar on the home screen, and the usage history. The trial also revealed that while users in general are not willing to wait for critical applications, they are quite willing to delay noncritical applications when TDP incentives are provided.
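The peak-to-average ratio (PAR) behind these numbers is simply the maximum hourly demand divided by the mean hourly demand. A small worked example with illustrative numbers (not trial data) shows how valley-filling can raise total traffic while lowering PAR:

```python
def par(demand):
    """Peak-to-average ratio: max hourly demand over mean hourly demand."""
    return max(demand) / (sum(demand) / len(demand))

# Illustrative hourly demand profiles, before and after time-dependent pricing.
before = [10, 12, 40, 50, 12, 8]   # pronounced peak, empty valleys
after  = [25, 28, 35, 35, 30, 27]  # valleys filled, peak shaved

print(round(par(before), 2), round(par(after), 2))
# -> 2.27 1.17
```

Here total traffic rises (132 to 180 units) while PAR falls from 2.27 to 1.17, which is the “flatter” demand profile that lets operators serve more traffic on the same peak capacity.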

The trial also revealed interesting insights on the user-control aspects of DDTDP. First, our participants viewed parental control at the granularity of apps as being useful in managing their usage in a TDP regime. Second, they showed a desire to control their usage manually instead of delegating control to an autopilot mode. When coupled with the desire for parental control, we see that users want to take charge of their consumption behaviors, for themselves and their families, in ways that require transparency and flexibility. Therefore, realizing automated DDTDP would require careful consideration of this trade-off. This also leads to a direction for future research on the trade-off between users’ willingness to implicitly reveal their delay tolerances to network operators and the privacy and trust aspects related to such disclosures.

As HCI tackles increasingly complex sociotechnical ecosystems, for example, mobile Internet, cloud computing, and smart grids, incorporating economic analysis as a part of user behavior studies is becoming important. Our work on investigating user behavior in a new TDP regime is an initial building block in this area of HCI research. Our work also introduces a framework for realistic experimentation with the HCI aspects of network economics. By interposing between ISPs and their customers, we act as a resale ISP with user-facing apps and ISP-side economic analysis. Future research on Internet pricing will need to address related issues on how similar HCI design principles can be effective in enabling other pricing variants, such as app-based or QoS-based pricing. Additionally, HCI within SDP will need to address the latest challenges brought about by the introduction of shared data plans in the United States, which require both real-time rating and charging for all subscribers and new mechanisms for users to allocate and control the shared data caps. Many of the findings and design principles laid out in this chapter will be relevant in addressing these new challenges as SDP continues to evolve.

Acknowledgments

The authors would like to acknowledge Mr. R. Rill and Ms. D. Butnariu for their assistance with application development and Ms. J. Bawa for her help during the trial participant debriefing process. We are also thankful to all our industry partners, including AT&T, MTA, Reliance, and NECA. The work has also immensely benefited from extensive discussions with participants of the Workshop on Smart Data Pricing (SDP 2012, Princeton, USA, and SDP 2013, Turin, Italy).

References

  1. A. Dix, J. Finlay, G. D. Abowd, and R. Beale. Human–Computer Interaction. Pearson Prentice-Hall, Harlow, UK, 3rd edition, 2004.
  2. J. Nielsen and T. K. Landauer. A mathematical model of the finding of usability problems. In Proceedings of the INTERACT and CHI Conference on Human Factors in Computing Systems, pp. 206–213. ACM, 1993.
  3. R. Langley. Practical Statistics Simply Explained, Dover Books Explaining Science Series. Dover Publications, Mineola, New York, 1971.
  4. F. Heller and J. Borchers. PowerSocket: towards on-outlet power consumption visualization. In CHI Extended Abstracts, pp. 1981–1986. ACM, 2011.
  5. T. G. Holmes. Eco-visualization: combining art and technology to reduce energy consumption. In Proceedings of C&C, pp. 153–162. ACM, 2007.
  6. D. Petersen, J. Steele, and J. Wilkerson. WattBot: a residential electricity monitoring and feedback system. In CHI Extended Abstracts, pp. 2847–2852. ACM, 2009.
  7. M. Chetty, D. Tran, and R. E. Grinter. Getting to green: understanding resource consumption in the home. In Proceedings of ACM UbiComp, pp. 242–251. ACM, 2008.
  8. T. Kim, H. Hong, and B. Magerko. Design requirements for ambient display that supports sustainable lifestyle. In Proceedings of DIS, pp. 103–112. ACM, 2010.
  9. R. E. Grinter, W. K. Edwards, M. Chetty, E. S. Poole, J. Y. Sung, J. Yang, A. Crabtree, P. Tolmie, T. Rodden, C. Greenhalgh, and S. Benford. “The ins and outs of home networking: the case for useful and usable domestic networking,” Transactions on Computer-Human Interaction, 16, 2009, 8.
  10. E. S. Poole, M. Chetty, R. E. Grinter, and W. K. Edwards. More than meets the eye: transforming the user experience of home network management. In Proceedings of ACM DIS, pp. 455–464. ACM, 2008.
  11. M. Chetty, R. Banks, R. Harper, T. Regan, A. Sellen, C. Gkantsidis, T. Karagiannis, and P. Key. Who's hogging the bandwidth? The consequences of revealing the invisible in the home. In Proceedings of ACM SIGCHI, pp. 659–668. ACM, 2010.
  12. J. Yang, W. K. Edwards, and D. Haslem. Eden: supporting home network management through interactive visual tools. In Proceedings of ACM UIST, pp. 109–118. ACM, 2010.
  13. R. Mortier, T. Rodden, P. Tolmie, T. Lodge, R. Spencer, A. Crabtree, A. Koliousis, and J. Sventek. Homework: putting interaction into the infrastructure. In Proceedings of ACM UIST, pp. 197–206. ACM, 2012.
  14. Comcast. About excessive use of data, Oct. 2012. Available at: http://customer.comcast.com/help-and-support/internet/data-usage-what-are-the-different-plans-launching.
  15. FCC. Internet access services: status as of June 30, 2009, 2010.
  16. M. Chetty, D. Haslem, A. Baird, U. Ofoha, B. Sumner, and R. Grinter. Why is my Internet slow? Making network speeds visible. In Proceedings of ACM SIGCHI, pp. 1889–1898. ACM, 2011.
  17. M. Chetty, R. Banks, A. J. Bernheim Brush, J. Donner, and R. E. Grinter. While the meter is running: computing in a capped world. Interactions, 18(2), 2011, 72–75.
  18. S. P. Wyche, T. N. Smyth, M. Chetty, P. M. Aoki, and R. E. Grinter. Deliberate interactions: characterizing technology use in Nairobi, Kenya. In Proceedings of ACM SIGCHI, pp. 2593–2602. ACM, 2010.
  19. M. Chetty, R. Banks, A. J. Brush, J. Donner, and R. Grinter. You're capped: understanding the effects of bandwidth caps on broadband use in the home. In Proceedings of ACM SIGCHI, pp. 3021–3030. ACM, 2012.
  20. H. Kim, S. Sundaresan, M. Chetty, N. Feamster, and W. K. Edwards. Communicating with caps: managing usage caps in home networks. In Proceedings of ACM SIGCOMM (Posters and Demos), pp. 470–471. ACM, 2011.
  21. H. Varian. The demand for bandwidth: evidence from the INDEX project, 2002. Available at: http://people.ischool.berkeley.edu/~hal/Papers/brookings.pdf.
  22. R. Edell and P. Varaiya. Providing Internet access: what we learn from INDEX, 1999. Keynote speech, IEEE INFOCOM.
  23. Y.-F. R. Chen and R. Jana. SpeedGate: a smart data pricing testbed based on speed tiers. In Proceedings of the Workshop on Smart Data Pricing (SDP) at INFOCOM 2013, pp. 3195–3200. IEEE, 2013.
  24. J. M. Dyaberi, B. S. Parsons, V. S. Pai, K. Kannan, Y.-F. R. Chen, R. Jana, D. Stern, A. Varshavsky, and B. Wei. “Managing cellular congestion using incentives,” IEEE Communications Magazine, 50(11), 2012, 100–107.
  25. S. Sen, C. Joe-Wong, S. Ha, and M. Chiang. “Incentivizing time-shifting of data: a survey of time-dependent pricing for internet access,” IEEE Communications Magazine, 50(11), 2012, 91–99.
  26. S. Sen, C. Joe-Wong, S. Ha, and M. Chiang. “A survey of broadband data pricing: past proposals, current plans, and future trends,” ACM Computing Surveys, 46(2), 2014, to appear.
  27. The Economist. The Mother of Invention: Network Operators in the Poor World Are Cutting Costs and Increasing Access in Innovative Ways, Sept. 2009. Special Report.
  28. J. Shih, R. Katz, and A. D. Joseph. Pricing experiments for a computer-telephony-service usage allocation. In Proceedings of IEEE GLOBECOM, pp. 2450–2454. IEEE, New York, 2001.
  29. S. Ha, S. Sen, C. Joe-Wong, Y. Im, and M. Chiang. TUBE: time-dependent pricing for mobile data. In Proceedings of ACM SIGCOMM, pp. 247–258. ACM, 2012.
  30. V. Glass, J. Prinzivalli, and S. Stefanova. Persistence of middle mile problems for rural exchanges local carriers. In Smart Data Pricing Workshop, July 2012. Available at: http://scenic.princeton.edu/SDP2012/Talks-VictorGlass.pdf.
  31. A. Odlyzko, B. St. Arnaud, E. Stallman, and M. Weinberg. Know Your Limits: Considering the Role of Data Caps and Usage Based Billing in Internet Access Service. Public Knowledge, 2012. White Paper.
  32. D. D. Clark. “Internet cost allocation and pricing” in L. W. McKnight and J. P. Bailey, eds., Internet Economics, MIT Press, Cambridge, Massachusetts, USA, 1997, pp. 215–252.
  33. B. X. Chen. Shared mobile data plans: who benefits? New York Times, Bits Blog, July 2012.
  34. S. Sen, C. Joe-Wong, S. Ha, J. Bawa, and M. Chiang. When the price is right: enabling time-dependent pricing of mobile data. In Proceedings of ACM SIGCHI, pp. 2477–2486. ACM, 2013.
  35. N. Anderson. US rural broadband: you can get it, but you can't afford it. Ars Technica, 2008. Available at: http://arstechnica.com/uncategorized/2008/03/us-rural-broadband-you-can-get-it-but-you-cant-afford-it/.
  36. J. Newman. Netflix has bandwidth cap sufferers covered. PC World, June 2011. Available at: http://www.pcworld.com/article/230982/Netflix_Has_Bandwidth_Cap_Sufferers_Covered.html.
  37. S. Lossev. iOS 7 feature gem: cellular data usage statistics. For Techies Only, 2013. Available at: http://www.fortechiesonly.com/2013/07/ios-7-feature-gem-cellular-data-usage.html.
  38. S. Perez. The best features of iOS 7. TechCrunch, 2013. Available at: http://techcrunch.com/2013/06/10/the-best-features-of-ios-7/.
  39. C. S. Yoo. Network neutrality, consumers, and innovation. University of Chicago Legal Forum, 25, 2009, 179–262.
  40. A. Schatz and S. E. Ante. “FCC chief backs usage-based broadband pricing,” Wall Street Journal, December 2, 2010.
  41. J. Genachowski. New rules for an open internet. FCC, 2010.
  42. C. Joe-Wong, S. Ha, and M. Chiang. Time-dependent broadband pricing: feasibility and benefits. In Proceedings of IEEE ICDCS, pp. 288–298. IEEE, 2011.