FATIH KOCAK, GEORGE KESIDIS, and SERGE FDIDA
The continuing network (net) neutrality debate (e.g., [1–3, 26]) involves several different entities, such as Internet service providers (ISPs),1 content providers (CPs), users, and governments (including partnerships). Although there are many different perceptions of the definition and coverage of net neutrality, one succinct definition is provided in [4]: “[net neutrality] usually means broadband service providers charge consumers only once for Internet access, do not favor one CP over another, and do not charge CPs for sending information over broadband lines to end users.”
CPs, such as Amazon, Google, Yahoo!, and eBay, typically support net neutrality because, under nonneutral conditions, they expect additional access-networking expenses and additional limitations or exclusions on their access to their customers [5]. In contrast to CPs, ISPs (particularly residential ISPs) such as AT&T, Verizon, Comcast, and Deutsche Telekom typically believe that neutrality regulations threaten the profitability of their enormous infrastructure investments and maintenance costs [1, 5] and that CPs do not pay a fair share of these costs while profiting from advertising that is arguably not requested by consumers.2 Also, flat-rate pricing frameworks leading to “all-you-can-eat” consumer behavior result in high transport costs and congestion in the ISPs’ access networks, e.g., [6], prompting ISPs to take blocking (e.g., Comcast blocking peer-to-peer (P2P) applications [7]) or pricing (e.g., [8]) countermeasures. It has been argued that some of these problems can be compensated for by side payments between CPs and ISPs [9–13]. Alternatively, the introduction of premium service classes has been suggested for critical applications such as health monitoring and home security (which are being increasingly used [4]); streamed spectacle events such as sports activities or newly released movies [5]; and interactive real-time video-conferencing/video-phone sessions. Applications engaged in premium services will obviously receive a higher quality of service (QoS) than applications under best-effort network-access service and will need to pay usage-based costs (perhaps after a quota). Such payments are (content/application) neutral in nature [14] because of the willingness of the users to pay for the premium content [9].
Under net neutrality with flat-rate priced access,3 ISPs may not have the incentive to improve their existing infrastructure by increasing capacity [15] (particularly the router/switch infrastructure to drive fiber to the home (FTTH)) or by improved security measures such as virus and spam filtering [5]. (Note that such usage-based costs may need to be authenticated to the human subscriber/end user.)
Regarding QoS management, the physical location of requested content is obviously important to the goal of decreasing the delay experienced by users [16]. This in turn underscores the importance of caching data proximal to the users, including at their ISP. Some large CPs, such as eBay and Google, cache their content around the world on their own servers, while smaller CPs often use intermediary content distributors, such as Akamai, which have caching agreements with local ISPs at different locations [5]. Such agreements, or more dedicated partnerships between CPs and last-mile (“eyeball”) ISPs, lead to scenarios wherein ISPs may cache each other's content, which raises issues of transit pricing between them; see Figure 3.1. To achieve end-to-end QoS, Bornstaedt et al. [17] argued that a sending ISP should pay for the transport of traffic over an interconnection between ISPs.
Notwithstanding arguments for and against side payments, the necessity of providing a single interface (a single contract including mutual services) to the end user is emphasized by several presenters in [6]. Product offers to the end users are assumed to be made in two main ways: pull (on-demand) or push. Product offers can be prepared in distributed (among ISPs), partially centralized (by any of the ISPs), or fully centralized (by an external single facilitator entity) ways [18]. We herein primarily consider the “pull” demand model for content products, where content requested in the recent past is cached locally in anticipation of similar demand.
In the following, we first give background on our problem setting in Section 3.2. A model involving two different eyeball ISPs connected at peering point(s), where revenue is generated corresponding to net traffic transmitted, is initially considered in Sections 3.3 and 3.4. We consider a caching model captured by a single parameter, Φ, affecting the revenue generated by transit traffic. We assume that there is no limit on the throughput downstream to the users of each ISP. In Section 3.5, we modify the model so that there is an upper bound on the throughput that the users can receive via their ISP. Two possible mechanisms for distributing the allowed throughput between the two types of demand (local or remote content) are then introduced. We next consider the scenario where there are multiple providers competing for the same group of users (without the throughput-limit condition, as in the initial model). User/customer migration among competing ISPs because of the price difference between them is modeled by their “loyalties” to the ISPs. In Section 3.6, consideration of two ISPs competing for the same set of users is added to the model described in Section 3.4. We provide the results of numerical experiments on performance at Nash equilibrium in Section 3.7. We conclude with a discussion of future work in Section 3.8.
Suppose there is an eyeball-ISP provider whose revenue from its subscribers due to its local content is

$$U = p\,D,$$
where p is a usage-based price and D is the total demand at that price. Note that ISPs are continuing to depart from pure flat-rate pricing (based on access bandwidth) for unlimited monthly volume, for example, [19, 20].
Following [21], suppose that there are two broad classes of applications, one of which is significantly sensitive to congestion of access bandwidth, for example, delay-sensitive interactive real-time applications. Assume that applications of the other, best-effort type are unlikely to engage in usage-based pricing for access bandwidth. As the price decreases, the demand for access-bandwidth reservation increases; the resulting congestion makes best-effort service increasingly inadequate for congestion-sensitive applications. Therefore, the demand for usage-priced access-bandwidth reservation may accelerate with reduced price. More specifically, say there is a positive threshold price
such that overall demand sensitivity to price is greater below the threshold than above it. Denoting the threshold price by $p_\theta$ and the maximal price by $p_{\max}$, a convex, piecewise-linear model of demand for access bandwidth would be

$$D(p) = \begin{cases} D_\theta + \kappa_2\,(p_\theta - p), & 0 \le p \le p_\theta,\\ \kappa_1\,(p_{\max} - p), & p_\theta \le p \le p_{\max}, \end{cases} \qquad (3.2)$$

where the demand sensitivities satisfy $\kappa_2 > \kappa_1 > 0$ and

$$D_\theta := \kappa_1\,(p_{\max} - p_\theta),$$

so that $D$ is continuous at $p_\theta$ and convex; see Figure 3.2.

So, in this model, the price range $p_\theta \le p \le p_{\max}$ (equivalently, demand range $0 \le D \le D_\theta$) corresponds to low demand sensitivity to price, $\kappa_1$. The pricing range $0 \le p \le p_\theta$ (demand range $D \ge D_\theta$), when delay-sensitive applications typically need to adopt usage-priced (reserved or priority) access-bandwidth service, corresponds to higher demand sensitivity to price, $\kappa_2$.
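As a concrete illustration, here is a minimal sketch of a convex, piecewise-linear demand curve of this shape. The threshold price and the two slope values are hypothetical, chosen only to make the two-sensitivity structure visible, and are not parameters from the chapter.

```python
def demand_piecewise(p, p_max=10.0, p_theta=4.0, slope_hi=3.0, slope_lo=1.0):
    """Convex, piecewise-linear demand: price-sensitive (steep) below the
    threshold price p_theta, less sensitive (shallow) above it.
    All parameter values here are illustrative assumptions."""
    if not 0.0 <= p <= p_max:
        raise ValueError("price out of range")
    d_theta = slope_lo * (p_max - p_theta)     # demand at the threshold (continuity)
    if p >= p_theta:
        return slope_lo * (p_max - p)          # low-sensitivity segment
    return d_theta + slope_hi * (p_theta - p)  # high-sensitivity segment

# Demand falls by 3 units per unit price below the threshold, but only by
# 1 unit per unit price above it, so the curve is convex and continuous:
# demand_piecewise(0.0) = 18.0, demand_piecewise(4.0) = 6.0, demand_piecewise(10.0) = 0.0
```

The continuity constraint (demand at the threshold computed from the shallow segment) is what makes the two linear pieces join into a single convex curve.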
Alternatively, suppose a convex, differentiable demand model that can approximate (3.2), specifically

$$D(p) = \bar{D}\left(1 - \frac{p}{p_{\max}}\right)^{\alpha}, \qquad \alpha \ge 1. \qquad (3.3)$$

Here, given the parameters of Eq. (3.2), $\bar{D}$ and $\alpha$ may be found by matching, for example, the demand value and slope at the threshold price. The specific forms of demand in Eqs. (3.2) and (3.3) are studied herein because they are tractable.
In [21], we explored the interior Nash equilibria resulting from such convex demand responses. Note how the above models reduce to linear demand response (e.g., by taking $\alpha = 1$), that is, revenue quadratic in prices, as assumed in many prior papers, for example, [22].
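A convex, differentiable family with exactly this reduction property is $\bar{D}(1 - p/p_{\max})^{\alpha}$ with $\alpha \ge 1$; the sketch below assumes this form as a stand-in for Eq. (3.3), with illustrative parameter values.

```python
def demand(p, d_bar=10.0, p_max=10.0, alpha=2.0):
    """Convex, differentiable demand response D(p) = d_bar*(1 - p/p_max)**alpha,
    alpha >= 1. The default parameter values are illustrative assumptions."""
    if not 0.0 <= p <= p_max:
        raise ValueError("price out of range")
    return d_bar * (1.0 - p / p_max) ** alpha

# alpha = 1 recovers linear demand response (so revenue p*D(p) is quadratic):
assert demand(5.0, alpha=1.0) == 5.0
# alpha > 1 gives a strictly convex response, with lower demand at mid prices:
assert demand(5.0, alpha=2.0) == 2.5
```

At the endpoints the two cases agree ($D(0) = \bar{D}$ and $D(p_{\max}) = 0$); the exponent only reshapes the curve in between.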
Again, we consider a game focusing on two different eyeball ISPs, indexed a and b, on a platform of users and CPs; that is, the ISPs also serve as CPs, so no separate pricing by CPs is modeled. For $j, k \in \{a, b\}$, the demand of ISP $j$'s subscribers for ISP $k$'s content is $D_k(p_j)$ when it is based on ISP $j$'s access-bandwidth price $p_j$. In the following, the same price will be used by ISP $j$ irrespective of content source; that is, content is neutrally priced in this sense.
Suppose there are peering points between these two ISPs where the net transit-traffic flow in one direction will correspond to net revenue for the (net) receiving ISP, at transit price $p_t$, paid by the (net) transmitting ISP. For example, France Télécom charges $3 per megabit, whereas pricing from the digital subscriber line access multiplexer (DSLAM) to core, that is, access bandwidth, for their CPs is $40 per megabit [23]. This said, many existing peering agreements among nontransit ISPs have no transit pricing, that is, $p_t = 0$. See [24, 25] for recent studies of models of transit pricing for a network involving a transit ISP between the CPs and end-user ISPs.
Without caching, transit-traffic volume is obviously maximal, and remote content may be subject to additional delay, possibly increasing demand (reducing demand sensitivity) for usage-priced bandwidth reservations. However, poorer delay performance may instead reduce demand for remote content or cause subscribers to switch to ISPs that cache remote content. So, caching will result in reduced demand for premium services by transit traffic; in the following, we model this with a caching factor $\Phi \le 1$ for each ISP. We assume fixed caching factors for each of the ISPs; that is, the caching factors selected by the ISPs do not change no matter how their demand changes. The case where the caching factor is adapted in response to demand changes is left for future work.
By simply separately accounting for the demand for premium-access service by two different user populations with similar content preferences, we take each ISP's utility to be the sum of its access revenue and its transit revenue,
where the caching factors $\Phi_a, \Phi_b \le 1$ appear in the second (transit revenue) terms. Note that $\Phi_k$ will be chosen by ISP $k$ at its minimal value, which we here assume to be strictly positive, again because an ISP that does not cache any remote content may lose subscribers, or demand for remote content may be reduced owing to poor delay performance, cf. Section 3.5. We will also assume that the transit price $p_t$ is fixed and, by volume discount, less than the access prices. Also, we have assumed different “upstream” congestion points for local and remote traffic and no revenue from cached (best-effort) traffic. Moreover, for $\alpha > 1$ (i.e., not linear demand response), note how this model assumes three different congestion points, one at the peering point, one at the local content source, and one at the cached content source, but not a single one further downstream toward the users, cf. Section 3.5. That is, in this section, we consider three separate congestion points per ISP for an example of convex demand (assumptions that include the linear demand-response scenario as a special case).
Again suppose, for $k \in \{a, b\}$, that

$$D_k(p) = \bar{D}_k\left(1 - \frac{p}{p_{\max}}\right)^{\alpha},$$

where the maximal price $p_{\max}$ and the exponent $\alpha \ge 1$ are also assumed to be common parameters for both ISPs to simplify the following expressions for Nash equilibria. Note that the revenue $p\,D_k(p)$ is maximized at $p = p_{\max}/(1 + \alpha)$. Without loss of generality, assume the demand ratio

$$\bar{D}_a/\bar{D}_b \ge 1,$$

that is, demand for ISP a's content is generally higher than that for ISP b's.
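The single-price revenue maximizer can be checked numerically. The sketch below assumes the power-form demand used above (with illustrative $\bar{D}$, $p_{\max}$, and $\alpha$) and compares a grid-search argmax of $p\,D(p)$ against the closed-form point $p_{\max}/(1+\alpha)$.

```python
def revenue(p, d_bar=10.0, p_max=10.0, alpha=2.0):
    # Revenue p * D(p) under the assumed power-form demand response.
    return p * d_bar * (1.0 - p / p_max) ** alpha

p_max, alpha = 10.0, 2.0
p_star = p_max / (1.0 + alpha)  # closed-form maximizer: 10/3
grid = [p_max * i / 100000 for i in range(100001)]
best = max(grid, key=lambda p: revenue(p, p_max=p_max, alpha=alpha))
assert abs(best - p_star) < 1e-3  # grid argmax agrees with the formula
```

The first-order condition behind the formula: differentiating $p(1 - p/p_{\max})^{\alpha}$ and setting it to zero gives $1 - (1+\alpha)p/p_{\max} = 0$.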
The Nash equilibrium is a “stalemate” pricing point $(p_a^*, p_b^*)$ at which neither ISP's utility will improve by a unilateral price change, that is, for $k \in \{a, b\}$,

$$U_k(p_a^*, p_b^*) \ \ge\ U_k(p_k, p_{-k}^*) \quad \text{for all admissible } p_k,$$

where $-k$ denotes the other ISP. The first-order Nash equilibrium conditions and their solutions for three cases are provided in the following text.
In this scenario, at ISP a, the demand for local content and the demand for remote content share a common significant congestion point proximal to the users, for example, in a wireless-access setting. Again, we consider a system where the players (eyeball ISPs) select access prices (plays) $p_a$ and $p_b$.
Given the access prices, we want expressions for the demand for local content at ISP a and the demand for remote content at ISP a with the intuitive property that the two served demands together never exceed the downstream throughput limit, and similarly for ISP b as a function of its own access price.
The following assumed property is also intuitive, because the presence of remotely originated traffic will congest locally originated traffic and vice versa: at ISP a, the served demand of each type decreases as the traffic of the other type grows; and similarly for the other ISP b.
Proportion Rule. Suppose that the throughput limit downstream to the users is $C_k$ for ISP $k \in \{a, b\}$. Then, at ISP a, whenever the total (local plus remote) demand exceeds $C_a$, each demand is scaled down by the common factor $C_a$ divided by the total demand, so that the served demands keep their original proportions and sum to $C_a$. And similarly for ISP b.
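A minimal sketch of the proportion rule, reading it as a common scaling of both demand types whenever their sum exceeds the downstream limit (the function name and arguments are ours, for illustration):

```python
def proportion_rule(d_local, d_remote, cap):
    """Split the downstream throughput limit `cap` between local- and
    remote-content demand in proportion to the unconstrained demands."""
    total = d_local + d_remote
    if total <= cap:           # limit does not bind: serve demands as-is
        return d_local, d_remote
    scale = cap / total        # both demand types shrink by a common factor
    return d_local * scale, d_remote * scale

# With demands 6 and 4 against a limit of 5, both are halved, keeping the 6:4 ratio:
assert proportion_rule(6.0, 4.0, 5.0) == (3.0, 2.0)
```

Note that the rule is neutral between content sources: local and remote demand are throttled by the same factor.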
Critical Price Rule. Another way to split the throughput among the demands is as follows. For ISP a, when the total demand at the posted price $p_a$ exceeds $C_a$, a new price $\tilde{p}_a > p_a$ is chosen so that the total demand at $\tilde{p}_a$ equals $C_a$. If a price below this critical value is used, then congestion will occur.
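Since total demand is decreasing in price, the critical price can be found numerically by bisection; a sketch (the linear total demand in the example is an illustrative assumption, not the chapter's demand model):

```python
def critical_price(total_demand, cap, p_max=10.0, tol=1e-9):
    """Smallest price at which total (decreasing) demand no longer
    exceeds the throughput cap, found by bisection on [0, p_max]."""
    lo, hi = 0.0, p_max
    if total_demand(lo) <= cap:   # capacity never binds at any price
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_demand(mid) > cap:
            lo = mid              # still congested: price must rise
        else:
            hi = mid
    return hi

# Illustrative total demand D(p) = 10 - p with cap 4: demand meets the cap at p = 6.
assert abs(critical_price(lambda p: 10.0 - p, 4.0) - 6.0) < 1e-6
```

Any root-finding method works here; bisection is shown only because monotone demand guarantees it converges.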
So, the expressions for the ISP revenues here can be taken as before, with the served (throughput-limited) demands in place of the unconstrained ones.
In this scenario, ISP a in Figure 3.3 is replaced by two ISPs, namely, ISP a1 and ISP a2, which compete for the same group of subscribers. So, we need to consider three utility functions, $U_{a1}$, $U_{a2}$, and $U_b$; three demand functions; and three access prices for the ISPs' own subscribers, $p_{a1}$, $p_{a2}$, and $p_b$. But the number of caching factors increases to four: $\Phi_{a1,b}$, $\Phi_{a2,b}$, $\Phi_{b,a1}$, and $\Phi_{b,a2}$ ($\Phi_{m,n}$ meaning the willingness of ISP $m$ to cache the content of ISP $n$). And there are two transit prices, $p_{t1}$ (for the traffic between ISP a1 and ISP b) and $p_{t2}$ (for ISPs a2 and b).
where $\sigma_i$ represents customer stickiness (loyalty, inertia) to the $i$th ISP (e.g., [9]); that is, because $\sigma_i > 0$, the subscribers will not completely switch to the ISP with the lowest price.
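As one hypothetical illustration of stickiness (not the chapter's exact expression), each competing ISP can be assigned a loyal share $\sigma_i$ of the common user pool, with only the contestable remainder migrating toward lower prices:

```python
def demand_shares(prices, sigmas):
    """Hypothetical stickiness model: ISP i keeps a loyal share sigmas[i]
    of the common subscriber pool; the contestable remainder is split
    inversely to price (the cheaper ISP attracts more migrating users)."""
    contestable = 1.0 - sum(sigmas)   # fraction of users willing to switch
    inv = [1.0 / p for p in prices]
    tot = sum(inv)
    return [s + contestable * w / tot for s, w in zip(sigmas, inv)]

# Even when ISP 2 charges twice as much, its loyal share keeps it above sigma:
shares = demand_shares([2.0, 4.0], [0.3, 0.3])
assert abs(sum(shares) - 1.0) < 1e-9 and shares[0] > shares[1] > 0.3
```

The key qualitative feature matches the text: as long as every $\sigma_i > 0$, no ISP's share drops to zero, however large the price gap.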
The demand-response model provided in Eq. (3.3) is used here, with two different values of $\alpha$ considered.
First, numerical results were obtained for the scenario where there are three congestion points per ISP (with fixed caching factors, as explained in Section 3.4) for the selected parameter values.
By using synchronous best-response updates (Figures 3.4 and 3.5), the Nash equilibrium point was found.
It was observed that the Nash equilibrium point found by this procedure is the same as the equilibrium point corresponding to the proper case solution provided in Section 3.4 (regardless of the randomly selected starting point), and it was found in just a few iterations (Figures 3.4–3.7).
It can be observed in Figures 3.8 and 3.9 that $U_a > U_b$ and $p_a^* > p_b^*$ for both values of $\alpha$. This is intuitive because of the assumed demand ratio, which implies that the demand for ISP a's content will be larger than that for ISP b's at the same price. This immediately implies a larger gain for ISP a, which also means that ISP a might have some margin for increasing its price in order to gain even more utility. Therefore, $p_a^* > p_b^*$ in this setting.
Next, numerical results were obtained for the model defined in Section 3.5, where the assumptions of one congestion point per ISP and fixed caching factors are used.
Here, the throughput limit is split among the demands according to the proportion rule, cf. Section 3.5, for the selected parameter values. Notice that one of the throughput limits ($C_a$) is selected to be significantly larger than the other ($C_b$), to analyze the scenario where congestion does not occur downstream to the users of ISP a, whereas it does occur for ISP b. If both throughput limits are selected very large, then the problem reduces to the three-congestion-points scenario (Section 3.4), because there will be no distribution of the throughput limit between the two different kinds of demand at the congestion point (of each ISP).
The Nash equilibrium point was again quickly found by using synchronous best-response updates.
In Figures 3.10 and 3.11, behaviors similar to those in Figures 3.8 and 3.9 are observed. But it is worth noting that in Figure 3.11, for the values of $p_b$ where $U_b$ is increasing (for both values of $\alpha$), the capacity $C_b$ is fully utilized. In this region, increasing $p_b$ does not lead to a decrease in the served demand, which means that there is a linear increase in the utility of ISP b. But after the peak, the total demand at ISP b is smaller than $C_b$; therefore, the increase in price leads to decreases in both demand and utility.
Finally, numerical results were obtained for the case where there are multiple providers competing for the same group of subscribers (Section 3.6). Again, synchronous best-response updates are used, but now for three utility functions ($U_{a1}$, $U_{a2}$, and $U_b$) depending on the corresponding three access-pricing parameters ($p_{a1}$, $p_{a2}$, and $p_b$). So, generally, for $n$ competing ISPs ($n = 2$ in our case of ISPs a1 and a2), the synchronous best-response update step ($n + 1$ player synchronous updates) is

$$p_i \leftarrow \arg\max_{p}\, U_i(p,\, p_{-i}),$$

where $i$ is the index of the ISP (including the noncompeting ISP, in our case ISP b), $p_i$ is the price used by ISP $i$, and $p_{-i}$ is the set of prices used by the other ISPs.
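A sketch of this synchronous update, using grid search for each best response; the utility functions in the check are toy stand-ins (decoupled quadratic revenues), not the chapter's utilities.

```python
def best_response(utility, others, p_max=10.0, n_grid=2000):
    """Grid-search one ISP's best price given the others' current prices."""
    grid = [p_max * i / n_grid for i in range(n_grid + 1)]
    return max(grid, key=lambda p: utility(p, others))

def synchronous_updates(utilities, p0, rounds=20, p_max=10.0):
    """All players revise their prices simultaneously against the
    previous round's prices (synchronous best-response dynamics)."""
    prices = list(p0)
    for _ in range(rounds):
        prices = [
            best_response(u, [q for j, q in enumerate(prices) if j != i], p_max)
            for i, u in enumerate(utilities)
        ]
    return prices

# Toy check: with decoupled revenues p*(10 - p), every player settles at
# its monopoly price 5.0 regardless of the starting point.
toy = lambda p, others: p * (10.0 - p)
assert synchronous_updates([toy, toy, toy], [0.0, 9.0, 2.0]) == [5.0, 5.0, 5.0]
```

With coupled utilities, convergence of such dynamics is not guaranteed in general; the text's observation that a fixed point is reached in a few iterations is specific to its model and parameters.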
The parameter values can be selected in various combinations; ours were selected so as to analyze the effect of the (static but different) caching factors of the competing ISPs (ISPs a1 and a2) on the utilities. It can be observed from Figures 3.12 and 3.13 that the ISP with the smaller caching factor Φ (a1) also has (again following intuition) a smaller utility compared to its competitor ISP (a2). The effect of α on the utilities and the equilibrium prices is the same as in the previous cases.
In the future, we will extend our models of demand in the presence of a more complex mixture of applications with different service requirements and will do so by using more diverse, though naturally coupled, demand response models for each type of provider. Moreover, we will consider these problems in the context of competition (multiple providers of each type) and collaboration between providers of different types, where we could use, for example, Shapley values to decide how to share revenue in the latter case.
We will also consider the dynamic selection of a caching factor. For example, we can model the sensitivity of customer loyalty σ to the caching factor Φ so as to reflect the loss in demand, through user migration, caused by poor delay performance in the best-effort service class when caching is inadequate. The utilities of the ISPs will then be affected by this potential loss, and hence they will depend on the caching factors as well as the access prices. ISPs would then adjust their caching factors, in addition to their access prices, capturing the trade-off between user migration and transit-traffic revenue.