8
Packets

Previously, the quantity of data crossing some point in a network has been described by a cumulative arrival function A, where A(t) represents the amount of data that crossed that point until time t. Cumulative arrival functions are by construction non-decreasing and left-continuous, but no further assumption has been made. However, data is often composed of packets with specific sizes (either fixed or taken among finitely many possible sizes), which has an impact on the performance of the system:

  1) There exists a specific component called a packetizer at link interfaces that stores the bits of data until the full packet is received and then releases it. A packetizer cannot be modeled directly by a service curve, which would guarantee a service rate that does not depend on the amount of backlogged data.
  2) Many scheduling policies are defined in terms of packets instead of amounts of data: when a server is ready to serve a packet, a service policy selects one packet waiting to be served. This packet is given the full bandwidth from its first to its last bit. An example of such a policy is non-preemptive priority, which is similar to preemptive priority, except that a low-priority packet whose service has begun will be entirely served even if higher-priority packets arrive in the meantime. Other policies are derived from the round robin (RR) policy, which serves one packet of each flow in a round.

In section 8.1, we will formally define a packetizer, and study the impact on the performances of a server followed by a packetizer. In section 8.2, we will describe several packet-based scheduling policies and derive service curves of individual flows under those scheduling policies.

Most results in the literature model packet sizes with two parameters: the maximal and minimal packet sizes. We will see in section 8.2.4 that refinements are possible.

8.1. Packetizer

Consider a flow composed of a sequence of packets (also sometimes called frames or segments), the i-th packet having length si. A packetizer is a server that groups the data of a flow into its packets: it stores the bits of data of a packet until the whole packet has arrived. When the last bit of the packet arrives, it serves all the bits of the packet simultaneously. This is illustrated in Figure 8.1: A is the cumulative arrival function entering the packetizer, and D the departure function, where data is grouped by packets. At time T1, all s1 bits of the first packet have been received, and the packet is served at time T1. In other words, if A(t) < s1 for all t < T1 and A(T1) = s1, then D(t) = 0 for all t ≤ T1 and D(T1+) = s1, and similarly for the next packets. Therefore, D is a staircase function.


Figure 8.1. Example of a packetizer: an arrival process A and its packetized version D = PL(A). For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip
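
The behavior described above can be sketched as a small discrete-event computation: assuming that the size of each packet and the arrival instant of its last bit are known, the function below builds the packetized departure function D, counting (with the left-continuous convention of this chapter) the packets completed strictly before t. The packet sizes and times in the example are arbitrary.

```python
from bisect import bisect_left

def packetized_output(packets):
    """Build the packetized departure function D from a list of packets.

    `packets` is a list of (size, completion_time) pairs, where completion_time
    is the instant at which the last bit of the packet has arrived (times are
    assumed to be sorted).  Following the left-continuous convention of this
    chapter, D(t) counts the packets whose last bit arrived strictly before t.
    """
    sizes = [s for s, _ in packets]
    times = [t for _, t in packets]
    cumulative = []                      # cumulative packet lengths L1, L2, ...
    total = 0
    for s in sizes:
        total += s
        cumulative.append(total)

    def D(t):
        # number of packets completed strictly before t
        n = bisect_left(times, t)
        return cumulative[n - 1] if n > 0 else 0

    return D

# Example: three packets completed at times T1 = 1.0 and T2 = T3 = 2.5
# (two packets may complete at the same instant).
D = packetized_output([(400, 1.0), (1500, 2.5), (700, 2.5)])
print(D(1.0), D(1.1), D(3.0))   # 0 (first packet completes exactly at 1.0), 400, 2600
```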

Similar to flows that are described by their cumulative arrival function, we describe the sequence of the packet lengths by the cumulative length of the first packets: ∀n ∈ ℕ, Ln = s1 + ··· + sn, the total length of the n first packets. Of course, the length of the i-th packet is given by si = Li − Li−1.

DEFINITION 8.1 (Cumulative packet length sequence).– A cumulative packet length sequence is an increasing sequence (Ln)n∈ℕ, such that L0 = 0 and there exist lmin, lmax > 0, such that ∀n ∈ ℕ, lmin ≤ Ln+1 − Ln ≤ lmax.

The values lmin and lmax are, respectively, called minimum and maximum packet sizes.

Note that lmax is often called the maximum packet size in the literature. With our notation, we want to emphasize that we do not require that lmax is the maximum packet size, but only that lmax is an upper bound of the packet sizes, and, in the same way, that lmin is a lower bound of the packet sizes. Hence, the denomination maximum/minimum packet size is a slight abuse. The minimum packet size is mainly used to ensure the divergence of L, so that every bit of data belongs to a packet. This is not a strong limitation since, in practice, such a bound always exists.

DEFINITION 8.2 (Packetizer).– Let L be a cumulative packet length sequence. An L-packetizer is the function PL : C → C, such that, ∀A ∈ C,

[8.1] PL(A)(0) = 0,
[8.2] ∀t > 0, PL(A)(t) = max{Ln | n ∈ ℕ, ∃u < t, A(u) ≥ Ln}.

We first need to prove that this function is well defined, i.e. ∀A ∈ C, PL(A) ∈ C. In fact, we will prove the stronger statement that PL is a server. By a slight abuse of notation, we denote by PL this server:

PL = {(A, PL(A)) | A ∈ C}.

Let us first prove a preliminary lemma.

LEMMA 8.1.– Let L be a cumulative packet length sequence and PL its associated packetizer. Then, for all A ∈ C and all t ≥ 0, there exists n ∈ ℕ such that

PL(A)(t) = Ln and Ln ≤ A(t) ≤ Ln+1.

Since (Ln)n∈ℕ is diverging, it is obvious that there exists n such that Ln ≤ A(t) ≤ Ln+1. The interesting point is that PL(A)(t) ∈ {Ln, Ln−1}, and the two possibilities exist, as shown in Figure 8.2: PL(A)(u) = L1 and A(u) = L2, whereas PL(A)(v) = L2 and A(v) = L2. Note that for a given n ∈ ℕ, there may exist no t such that PL(A)(t) = Ln: there can be several packets arriving at the same time (at time w, for example).

PROOF.– L0 = A(0) = 0 < L1, so the property is true for t = 0. Now assume that t > 0. Since L is divergent and since PL(A) takes its values in {Ln | n ∈ ℕ}, there exists n ∈ ℕ such that PL(A)(t) = Ln.

By definition of PL(A)(t), there exists u < t such that A(u) ≥ Ln and, since A is non-decreasing, A(t) ≥ A(u) ≥ Ln.

As PL(A)(t) < Ln+1, for all u < t, A(u) < Ln+1. But, since A is left-continuous, A(t) = sup{A(u) | u < t} ≤ Ln+1.


Figure 8.2. Some special cases of a packetizer. Dashed line: arrival A; solid line: departure D = PL (A). For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

PROPOSITION 8.1 (A packetizer is a server).– If L is a cumulative packet length sequence, then PL is a server: ∀A ∈ C, PL(A) ∈ C and PL(A) ≤ A.

PROOF.– PL(A) ≤ A is a direct consequence of Lemma 8.1.

First, PL(A)(0) = 0 by definition of PL and, ∀s ≤ t, {Ln | ∃u < s, A(u) ≥ Ln} ⊆ {Ln | ∃u < t, A(u) ≥ Ln}, so PL(A)(s) ≤ PL(A)(t) and PL(A) is non-decreasing. It remains to show that PL(A) is piecewise continuous and left-continuous.

From Lemma 8.1, for all t > 0, there exists n ∈ ℕ such that PL(A)(t) = Ln, and from the definition, there exists u < t such that A(u) ≥ Ln. But then, ∀v ∈ (u, t], Ln ≤ PL(A)(v) ≤ PL(A)(t) = Ln. Thus, PL(A) = Ln on the interval (u, t], hence it is continuous on this interval. In other words, PL(A) is left-continuous and piecewise continuous.

COROLLARY 8.1 (Maximum buffer size of a packetizer).– Let L be a cumulative packet length sequence with maximum packet length lmax. The backlog of PL is at most lmax:

[8.3] ∀A ∈ C, ∀t ≥ 0, A(t) − PL(A)(t) ≤ lmax.

PROOF.– This is a direct consequence of Lemma 8.1: for all t ≥ 0, there exists n ∈ ℕ such that

A(t) − PL(A)(t) ≤ Ln+1 − Ln ≤ lmax.

DEFINITION 8.3 (Packetized cumulative function).– Let PL be a packetizer. The cumulative function A ∈ C is said to be PL-packetized if A = PL(A).

LEMMA 8.2.– PL is idempotent: PL ∘ PL = PL.

PROOF.– For all A ∈ C, PL(PL(A))(0) = 0 = PL(A)(0) and, ∀t > 0,

PL(PL(A))(t) = max{Ln | ∃u < t, PL(A)(u) ≥ Ln} = max{Ln | ∃u < t, A(u) ≥ Ln} = PL(A)(t),

where the middle equality holds since PL(A)(u) ≤ A(u) and, conversely, if A(u) ≥ Ln for some u < t, then PL(A)(v) ≥ Ln for every v ∈ (u, t).

Since the exact values of the cumulative packet length sequence are often unknown beyond its maximum and minimum packet lengths, the superscript L is often omitted, and a PL-packetizer is called a P-packetizer.

THEOREM 8.1 (Arrival curve after a packetizer).– Let P be a packetizer with maximum packet size lmax, and (A, D) ∈ P. If A has a maximal (respectively minimal) arrival curve αu (respectively αl), then D has the maximal (respectively minimal) arrival curve αu + lmax (respectively [αl − lmax]+).

PROOF.– From Corollary 8.1, ∀s ≤ t,

A(t) − A(s) − lmax ≤ D(t) − D(s) ≤ A(t) − A(s) + lmax,

hence the result from the equivalent definition of arrival curves in Proposition 5.1.

We now study the system consisting of the concatenation of a server S and a packetizer P, as depicted in Figure 8.3. Figure 8.4 shows an example of cumulative arrival functions: the arrival A is packetized and first crosses server S; the intermediate output B is not necessarily packetized; and C, the output of the packetizer, is packetized again. Note that the cumulative packet length sequence is the same at the input and output of the system.


Figure 8.3. Concatenation of a server and a packetizer


Figure 8.4. Cumulative functions in a combined system. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

THEOREM 8.2 (Server/packetizer system).– Let S be a server and P be a packetizer with maximum packet size lmax. The combined system S;P has the following properties:

  1) if S offers a min-plus minimal service curve β, a maximal service curve γ and is a σ-shaper, then S;P offers the minimal service curve [β − lmax]+, the maximal service curve γ and is a (σ + lmax)-shaper (a worked example follows the theorem);
  2) ∀(A, D) ∈ S;P, D is P-packetized (the departure cumulative functions are P-packetized);
  3) the maximum backlog of S;P differs from that of S by at most lmax. More precisely, ∀A ∈ C,
    [8.4] b(A, S) ≤ b(A, S;P) ≤ b(A, S) + lmax;
  4) if the arrival cumulative function is P-packetized, the packetizer P does not introduce any additional delay:
[8.5] d(A, S;P) = d(A, S).
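
To illustrate the first property on a common special case, the sketch below evaluates [β − lmax]+ pointwise when β is a rate-latency curve, writing β(R,T)(t) = R[t − T]+, and checks numerically that it coincides with the rate-latency curve of the same rate and latency T + lmax/R. The numeric values and the sampled points are arbitrary.

```python
def rate_latency(R, T):
    """Rate-latency curve beta_{R,T}(t) = R * max(t - T, 0)."""
    return lambda t: R * max(t - T, 0.0)

def minus_const_plus(beta, c):
    """Pointwise non-negative part [beta - c]^+ ."""
    return lambda t: max(beta(t) - c, 0.0)

# Arbitrary example values: server of rate R, latency T, maximum packet size l_max.
R, T, l_max = 100e6, 2e-3, 12e3          # 100 Mb/s, 2 ms, 1500 bytes in bits
left = minus_const_plus(rate_latency(R, T), l_max)      # [beta_{R,T} - l_max]^+
right = rate_latency(R, T + l_max / R)                  # beta_{R, T + l_max/R}

for t in [0.0, 1e-3, 2e-3, 2.1e-3, 5e-3, 1e-2]:
    assert abs(left(t) - right(t)) < 1e-6
print("[beta_{R,T} - l_max]^+ equals beta_{R, T + l_max/R} at the sampled points")
```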

There is no equivalent of the first property of the theorem concerning strict service curves. Consider the example of Figure 8.5 and suppose that S offers a strict service curve β and that every packet has length 1. Let A = ν1,1,−1 be an arrival function. We use the notations of Figure 8.3. The cumulative function B depicted in Figure 8.5 is admissible, since (A, B) ∈ S, and C = P(B) = ν1,1,−2. On the one hand, for all t > 0, C(t) < A(t), so the system is never empty. On the other hand, if β − 1 were a strict service curve, [β − 1]+ would also be one and, from Theorem 5.5, the length of the backlogged period would be bounded by 2. Thus, β − 1 cannot be a strict service curve.


Figure 8.5. Combined system and strict service. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

The intuition of that last property is the following: the packetizer serves any packet when its last bit has been served by S. However, if the arrival cumulative function has already been packetized, every bit of this packet arrives at the same time. Therefore, the delay for the packet in the system S;P is the delay of the last bit of the packet in S, which is the worst-case delay for any bit of this packet. Thus, intuitively, the packetizer does not introduce any delay for any packet.

This last property also means that the last packetization does not introduce an additional delay. However, it is not true for the concatenation of such systems: if A is P-packetized, we can only write d(A, S1;P;S2;P) = d(A, S1;P;S2); the intermediate packetizer cannot be removed. Indeed, although d(A, S1;P) = d(A, S1), the departure cumulative functions are different and the departure cumulative function of S1;P is more bursty than that of S1.

PROOF.– We use the notations of Figure 8.3.

  1) From Corollary 8.1 and Lemma 2.1, C ≥ B − lmax ≥ (A ⊗ β) − lmax = A ⊗ (β − lmax) and C ≤ B ≤ A ⊗ γ. Moreover, if B is σ-constrained, then, from Theorem 8.1, C is (σ + lmax)-constrained.
  2) This directly follows from Lemma 8.2: PL(C) = PL(PL(B)) = PL(B) = C.
  3) From Corollary 8.1, B − lmax ≤ C ≤ B. Then, ∀t ≥ 0, A(t) − B(t) ≤ A(t) − C(t) ≤ A(t) − B(t) + lmax.
  4) Assume that the cumulative arrival function A is P-packetized and denote by Ti the arrival time of the i-th packet. More precisely, T0 = 0 and Ti = max{t | A(t) < Li}. Due to the left-continuity of A, ∀t ∈ (Ti, Ti+1], A(t) = Li. Similarly, let T′i be the departure time of the i-th packet: ∀t ∈ (T′i, T′i+1], C(t) = Li.

First, d(A, S;P) = sup{hDev(A, C, Ti) | i ∈ ℕ}. Indeed, since A is P-packetized, A is constant on each interval (Ti, Ti+1], so the horizontal deviation between A and C is maximal at the packet arrival times Ti.

Second, for all i ∈ ℕ, hDev(A, C, Ti) = hDev(A, B, Ti). Suppose that this is not true and that hDev(A, C, Ti) > hDev(A, B, Ti). This means that there exists ε > 0 such that B(Ti + hDev(A, B, Ti) + ε) ≥ A(Ti) but C(Ti + hDev(A, B, Ti) + ε) < A(Ti). But, as A is P-packetized, A(Ti) is one of the Lj and, from the definition of a P-packetizer, we should have C(Ti + hDev(A, B, Ti) + ε) ≥ A(Ti), which is a contradiction. Then, hDev(A, C, Ti) ≤ hDev(A, B, Ti). The reverse inequality is also true as C ≤ B, so for all i ∈ ℕ, we have hDev(A, C, Ti) = hDev(A, B, Ti), hence the result.

COROLLARY 8.2 (Packetizer as a delay).– Let S be a server and P be a packetizer. If a cumulative arrival function A is P-packetized, then the system S;P offers the pure-delay min-plus service curve δd(A,S) to A.

PROOF.– From Theorem 6.2, S;P offers the min-plus service curve δd(A,S;P), and from Theorem 8.2, d(A, S;P) = d(A, S).

COROLLARY 8.3 (Arrival curve from a packetizer).– Let S be a server offering a min-plus minimal service curve β, P be a packetizer of maximal packet size lmax, and (A, C) ∈ S;P. If A is P-packetized and α is a maximal arrival curve for A, then the cumulative departure function C has the maximal arrival curve

min((α ⊘ β) + lmax, α ⊘ [β − lmax]+, α ⊘ δd(A,S)).

The term α ⊘ δd(A,S) might in some cases improve the result compared to only using the deconvolution by [β − lmax]+.

PROOF.– The first two terms are a consequence of Theorems 5.3 and 8.2, and the third term is a consequence of the previous corollary. The proof concludes by applying Proposition 5.2: the minimum of two arrival curves for C is also an arrival curve for C.

8.2. Packet-based schedulers

In the previous section, we defined the notion of packetizer, which allows us to model servers crossed by a flow made up of packets. A server, as described in the previous chapters, can be crossed by several flows, and then residual service curves can be computed for individual flows.

The aim of this section is then to study different service (or scheduling) policies, taking into account that one packet has to be served entirely before the next. Some policies, like FIFO, behave exactly as in the fluid case: if a packet arrived before another, it is entirely served before the other. However, some policies behave differently than in the fluid case. One representative example is the fixed priority policy. In the fluid analysis of section 7.3.2, when a packet with high priority arrives, it interrupts the service of the low-priority packet. In practice, this does not happen, and the server finishes serving the lower-priority packet before serving the higher-priority packet. This is the object of section 8.2.1. Similarly, the GPS policy presented in section 7.3.3 has different implementations to take into account the packetization: the packetized GPS described in section 8.2.2 and the deficit round robin (DRR) in section 8.2.3.

Some policies are specific to the packetized nature of the flows, for example, the weighted round robin (WRR) in section 8.2.4 and time-division multiple access (TDMA) in section 8.2.5.

This section focuses on some of the most representative policies, but does not aim at exhaustiveness. For example, the Audio Video Bridging protocol combines priority and credit-based scheduling. It has been studied in [QUE 12, RUI 14].

8.2.1. Non-preemptive static priority (NP-SP)

The static priority policy presented in section 7.3.2 assumes that, when high-priority flows are backlogged, low-priority flows receive no service. In practice, as a server usually serves packets one after the other, when a packet starts being served, it will be entirely served. Hence, if a low-priority packet has started being served, it is served even if a higher-priority packet arrives. We say that the scheduling is non-preemptive.

Figure 8.6 illustrates the difference between these two policies. The server is idle at time 1 and, when busy, serves data at rate 1 exactly. From time 1 to 7, the scheduling is preemptive: a low-priority packet L1 is received at time 1 and a high-priority packet H1 at time 2. Between times 1 and 2, the server serves one unit of L1, then from time 2 it serves packet H1 and suspends the service of L1. After the service of H1, it resumes the service of L1. From time 9 to 15, the scheduling is non-preemptive: low-priority packet L2, which arrived before high-priority packet H2, is entirely served before packet H2 is served.


Figure 8.6. Preemptive vs. non-preemptive scheduling. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

The aim of this section is to capture the non-preemption effect in the network calculus framework. We assume a total order on the priorities of the input flows of an n-server: if i < j, then flow i has higher priority than flow j. First, we can write an equation translating the fact that during any backlogged period [s, t] of the flows with priority at least i, at most one packet of the lower-priority flows is served: for A1,…,An the arrival flows and D1,…,Dn the departure flows,

[8.6] if [s, t] is a backlogged period for A≤i, then D>i(t) − D>i(s) ≤ max{l_j^max | j > i},

where A≤i = A1 + ··· + Ai, with similar definitions for D≤i, A>i and D>i, and l_j^max denotes the maximum packet length of flow j.

This implication is sufficient to derive a (min,plus) residual service curve for flow i, but computing a strict residual service curve is a little more involved. Indeed, the higher-priority flows can themselves be stopped by a packet of lower priority. Set u = StartA<i,D<i(s), the start of the backlogged period of the higher-priority flows. We know that in the interval of time [u, t], at most one packet of priority strictly lower than i can be served (and its service begins at time at most u), and in the interval [u, s], at most one packet of flow i can be served. Moreover, these two possibilities are exclusive, because [u, s] is a backlogged period for the higher-priority flows. Then, the inequality

[8.7] D>i(t) − D>i(u) + Di(s) − Di(u) ≤ max{l_j^max | j ≥ i}

must be satisfied. This is now sufficient to derive a strict service curve for flow i.

THEOREM 8.3 (Non-preemptive static priority (NP-SP)).– Let S be an NP-SP n-server taking the natural order as priority order, offering an aggregate minimal strict service curve β. Suppose that, for all i ∈ {1,…,n}, αi is an arrival curve for arrival flow i and l_i^max is an upper bound on the packet lengths of flow i. Then, βi (respectively β'i) is a min-plus (respectively strict) service curve offered to flow i with

[8.8] βi = [β − (α1 + ··· + αi−1) − max{l_j^max | j > i}]+,
[8.9] β'i = [β − (α1 + ··· + αi−1) − max{l_j^max | j ≥ i}]+.

PROOF.– To simplify the notations, we consider three flows: flow H is the aggregation of the flows with priority strictly higher than i, flow L is that of the flows with priority strictly lower than i, and flow i itself. The arrival curve of flow H is αH = α1 + ··· + αi−1 and the maximum packet length of flow L is l_L^max = max{l_j^max | j > i}.

For all t ≥ 0, let s = StartAH+Ai,DH+Di(t), i.e. the last time s ≤ t such that AH(s) = DH(s) and Ai(s) = Di(s). Then,

Di(t) = Ai(s) + Di(t) − Di(s)
      ≥ Ai(s) + β(t − s) − (DH(t) − DH(s)) − (DL(t) − DL(s))
      ≥ Ai(s) + β(t − s) − (DH(t) − DH(s)) − l_L^max
      ≥ Ai(s) + β(t − s) − (AH(t) − AH(s)) − l_L^max
      ≥ Ai(s) + β(t − s) − αH(t − s) − l_L^max.

First, we used that Ai(s) = Di(s); second, that β is a strict service curve; third, that the server serves at most one packet of the lower-priority flow when the high-priority flows are backlogged (DL(t) − DL(s) ≤ l_L^max, from equation [8.6]); fourth, that DH(t) ≤ AH(t) and AH(s) = DH(s); and finally, that αH is an arrival curve for the high-priority flows.

Moreover, Di is non-decreasing, so Di(t) ≥ Di(s) = Ai(s) and

Di(t) ≥ Ai(s) + [β(t − s) − αH(t − s) − l_L^max]+ ≥ (Ai ⊗ βi)(t).

As a result, βi is a min-plus service curve for flow i.

We now compute a strict service curve for flow i. Let (s, t] be a backlogged period for this flow and u = Start AH,DH(s) be the start of the backlogged period of s for the flows with strictly higher priority.

As flow i is backlogged during (s, t] and flow H during (u, s], the interval (u, t] is a backlogged period for the aggregate flow and, for all v ∈ [u, t],

DH(v) + DL(v) + Di(v) − (DH(u) + DL(u) + Di(u)) ≥ β(v − u),

where β is a strict service curve and u is the start of the backlogged period for flow H. We now bound DL(v) − DL(u) + Di(s) − Di(u). During (u, t], at most one packet from flow L can be served. Moreover, it will be served at the start of the backlogged period. Either one such packet is served, and then no data from flow i can be served before time s, as (u, s] is a backlogged period for flow H, or no such packet is served, but in this case, one packet of flow i might be in service at time u. Therefore, DL(v) − DL(u) + Di(s) − Di(u) ≤ max(l_L^max, l_i^max) (equation [8.7]): we obtain

Di(t) − Di(s) ≥ β(t − u) − (AH(t) − AH(u)) − max(l_L^max, l_i^max) ≥ β(t − u) − αH(t − u) − max(l_L^max, l_i^max).

Note that this equation is true even if u = s. As Di is non-decreasing and t − u ≥ t − s,

Di(t) − Di(s) ≥ [β(t − s) − αH(t − s) − max(l_L^max, l_i^max)]+ = β'i(t − s),

which completes the proof.
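
As an illustration of the min-plus residual curve of Theorem 8.3, the following sketch computes it in the special case (an assumption of this example) where β is a rate-latency curve β(R,T) and each higher-priority flow j has a token-bucket arrival curve αj(t) = bj + rj t; elementary algebra then gives a rate-latency curve of rate R − Σ rj, as encoded below. All numeric values are illustrative.

```python
def npsp_residual_rate_latency(R, T, hp_flows, l_max_lower):
    """Residual min-plus curve [beta_{R,T} - sum_j alpha_j - l_max_lower]^+
    when every higher-priority flow j has a token-bucket curve (r_j, b_j).

    Returns the (rate, latency) pair of the resulting rate-latency curve.
    """
    r_sum = sum(r for r, _ in hp_flows)
    b_sum = sum(b for _, b in hp_flows)
    if r_sum >= R:
        raise ValueError("higher-priority flows saturate the server")
    rate = R - r_sum
    latency = (R * T + b_sum + l_max_lower) / rate
    return rate, latency

# Two higher-priority token buckets (rate, burst), lower-priority packets up to 12 kb.
print(npsp_residual_rate_latency(R=100e6, T=1e-3,
                                 hp_flows=[(10e6, 8e3), (20e6, 16e3)],
                                 l_max_lower=12e3))
```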

The expressions for the residual service curves for flow i are not optimal. Indeed, they do not take into account the fact that when a packet of lower priority is served, it is served at the guaranteed service rate, as illustrated in Figure 8.7. Consider a constant-rate server offering a strict service curve β, shared by two flows, and suppose that the higher-priority flow has arrival curve αH. The residual service curve for the low-priority flow is then [β − αH]+. Now, if all packets of the low-priority flow have length 1, packets will be served at rate 1 when they can be served. More details about this approach can be found in [CHO 08, MAN 11, MAN 12].


Figure 8.7. Non-preemptive static priority with a constant rate service: the lowerpriority packets are served at this rate. Solid line: the service curve obtained with Theorem 8.3; dashed line: a better service curve taking into account the service rate of the packets. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

8.2.2. Packetized GPS

The general processor sharing (GPS) policy, presented in section 7.3.3, is an idealized policy that assumes that each piece of information (a packet or even a bit of data) can be decomposed into infinitesimal pieces to guarantee proportional sharing at every time and data scale. However, bits cannot be decomposed in practice, and packets are usually not fragmented. Consequently, several packet-based scheduling policies have been developed to approximate the idealized GPS policy.

In this section, we present the packetized GPS (PGPS) policy that has been defined in [PAR 93]. It is also known as weighted fair queueing and is certainly one of the most famous packet-by-packet scheduling policies.

Its behavior is based on the GPS policy: each time the server must select a packet (either at the start of a backlogged period or at the end of the service of a packet), the selected packet is the one that has the smallest departure time in the idealized GPS system among the packets present in the system (the packet must not have been served yet, and must have arrived).
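
The selection rule just described can be sketched as follows. The GPS finish times are assumed to be provided by a separate computation (e.g. a simulation of the fluid GPS system); the class and field names are illustrative.

```python
import heapq

class PGPSScheduler:
    """Minimal sketch of PGPS packet selection.

    Each packet is described by (arrival_time, gps_finish_time, packet_id);
    gps_finish_time is its departure time in the idealized GPS system and is
    assumed to be provided by a separate GPS computation.
    """

    def __init__(self):
        self._waiting = []        # packets arrived but not yet selected
        self._future = []         # packets not yet arrived, ordered by arrival

    def add_packet(self, arrival_time, gps_finish_time, packet_id):
        heapq.heappush(self._future, (arrival_time, gps_finish_time, packet_id))

    def select_next(self, now):
        """Return the id of the packet to serve when the server is free at `now`."""
        # move every packet that has arrived by `now` into the waiting heap
        while self._future and self._future[0][0] <= now:
            arrival, finish, pid = heapq.heappop(self._future)
            heapq.heappush(self._waiting, (finish, pid))
        if not self._waiting:
            return None
        return heapq.heappop(self._waiting)[1]   # smallest GPS finish time first

# Example: at time 0 the server picks the packet with the smallest GPS finish time
# among those already present, even if a "better" packet arrives slightly later.
sched = PGPSScheduler()
sched.add_packet(0.0, 3.0, "p1")
sched.add_packet(0.0, 2.0, "p2")
sched.add_packet(0.1, 1.0, "p3")   # not yet arrived at time 0
print(sched.select_next(0.0))      # -> "p2"
```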

The next result is an adaptation of [PAR 93] to our framework.

THEOREM 8.4 (PGPS).– Consider a PGPS n-server with parameters φ1,…,φn, which is a λR-greedy shaper. If the corresponding GPS server offers to flow i the strict service curve βi, then the PGPS server offers to this flow the min-plus minimal service curve [βi − lmax]+, where lmax is an upper bound on the packet lengths of the flows.

PROOF.– Our aim is to compare two servers, S and S', both of which are λR-greedy shapers (in particular, λR is a strict service curve for both systems), where S uses the GPS policy and S' the PGPS policy. We remark that S and S' have the same aggregate behavior, so during each interval of time [s, t] they serve exactly the same amount of data.

The proof has several steps. We first compare the departure times under GPS and under PGPS.

Let us order the packets according to their departure time in PGPS: packet k is the k-th to exit the system under PGPS, and we denote by αk its arrival time in the system, tk its departure time and uk its departure time if the service policy were GPS. Two cases can occur: either for all m < k, uk > um, and then tk ≤ uk: at time tk, S' has served exactly packets 1,…,k, whereas at time uk, S has served packets 1,…,k plus a part of some other packets.

Or there exists m < k with uk < um: packet k arrives too late to be served according to the GPS order. But when it arrives, it waits for the end of service of the packet in service before being served. Therefore, if packet k has length L, tk ≤ αk + lmax/R + L/R. But we also have uk ≥ αk + L/R, so tk ≤ uk + lmax/R.

Thus, in both cases, tk ≤ uk + lmax/R.

Second, we compare the departure processes of each flow in S and S'. For each arrival process (A1,…, An), let (D1,…, Dn) and (D'1,…, D'n) be the departure processes in S and S', respectively. We show that for all i ∈ {1,…,n} and all t ≥ 0, Di(t) − D'i(t) ≤ lmax.

The aggregate service rate of the servers is exactly R. Thus, in S, the service rate of flow i is in the interval [0, R] at each time, and in S', the service rate of flow i is either 0 or R. Then, the difference between Di and D'i is maximized at a time just before the beginning of the service of a packet of flow i in S'. Let t be such a time and let L be the length of the packet beginning its service at time t. The service of that packet ends at time t + L/R in S'. Let τ be the end of service of that packet in S: as the packets of flow i are served in their arrival order, Di(τ) = D'i(t) + L. But we know that t + L/R ≤ τ + lmax/R. Then, Di(t) ≤ Di(τ) + R[t − τ]+ ≤ D'i(t) + L + (lmax − L) and Di(t) − D'i(t) ≤ lmax.

Finally, if βi is a strict service curve for flow i in S, we have that, for all t ≥ 0, with s = StartAi,Di(t),

Di(t) ≥ Di(s) + βi(t − s) = Ai(s) + βi(t − s) ≥ (Ai ⊗ βi)(t).

This, with D'i ≥ Di − lmax, leads to the result.
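
In particular, if the GPS residual curve of flow i is assumed to be the constant-rate curve λri (an assumption of this example), the PGPS guarantee [βi − lmax]+ of Theorem 8.4 is the rate-latency curve of rate ri and latency lmax/ri, i.e. the packetized scheduler shifts the fluid guarantee by at most lmax/ri. A quick numeric check of this identity:

```python
def constant_rate(r):
    return lambda t: r * max(t, 0.0)

def rate_latency(r, T):
    return lambda t: r * max(t - T, 0.0)

def minus_const_plus(curve, c):
    return lambda t: max(curve(t) - c, 0.0)

r_i, l_max = 25e6, 12e3                      # illustrative values: 25 Mb/s, 12 kb
pgps = minus_const_plus(constant_rate(r_i), l_max)    # [lambda_{r_i} - l_max]^+
equivalent = rate_latency(r_i, l_max / r_i)           # rate r_i, latency l_max/r_i

for t in [0.0, 1e-4, 4.8e-4, 1e-3, 1e-2]:
    assert abs(pgps(t) - equivalent(t)) < 1e-6
print("PGPS guarantee = rate-latency with latency l_max / r_i at the sampled points")
```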

8.2.3. Deficit round robin

The DRR [SHR 95] is another popular scheduling policy that mimics GPS by offering a proportion of the bandwidth to each flow (or class of service). Compared to the PGPS, it is simpler and algorithmically more efficient: PGPS requires the computation of the theoretical departure time in GPS, and a comparison with the other departure times. It then requires at least O(log(n)) operations, where n is the number of flows. We will see that DRR only requires O(1) operations at each step.

The principle of this scheduling policy is to associate a quantum Qi (playing the role of the weight φi of GPS) with each flow. The flows are served in rounds, and this quantum approximately represents the amount of data of flow i that can be served in each round.

More precisely, DRR is described in Algorithm 1. Assume that there are n input flows, and that when a packet of flow i enters the system, it is put in queue i. When flow i is selected to be served, a deficit counter DC[i] is increased by Qi and as long as DC[i] is at least equal to the length of the head-of-line packet, this packet is served and DC[i] is decreased accordingly. The service of flow i ends when the flow lacks enough credit to send the next packet or when its queue is empty. In the latter case, DC[i] is set to zero.

Note that no assumption is made about the policy inside each queue. The only restriction is that the head of the queue remains the same from line 7 to line 9 within an iteration of the loop. However, it can change for the next iteration if, for example, a packet with a higher priority arrives in the meantime.
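
As an illustration, here is a minimal Python sketch of one service opportunity of flow i as just described; the function and variable names (deficit counter, quantum, queues) are illustrative and do not reproduce Algorithm 1 literally.

```python
from collections import deque

def drr_serve_flow(queue, deficit, quantum, emit):
    """One service opportunity of a flow under DRR, as described above.

    `queue` is a deque of packet lengths, `deficit` the current value of the
    deficit counter DC[i], `quantum` the quantum Q_i, and `emit` a callback
    called for each packet that is sent.  Returns the new deficit counter.
    """
    deficit += quantum                       # credit the flow for this round
    while queue and queue[0] <= deficit:     # head-of-line packet fits in the credit
        length = queue.popleft()
        deficit -= length
        emit(length)
    if not queue:                            # empty queue: the remaining credit is lost
        deficit = 0
    return deficit

# Example round over two flows with quanta 500 and 300.
queues = {1: deque([400, 400, 200]), 2: deque([250])}
deficits = {1: 0, 2: 0}
quanta = {1: 500, 2: 300}
for i in (1, 2):
    deficits[i] = drr_serve_flow(queues[i], deficits[i], quanta[i],
                                 lambda l, i=i: print(f"flow {i}: packet of {l} bits"))
print(deficits)   # flow 1 keeps a deficit of 100; flow 2 emptied its queue, so 0
```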

THEOREM 8.5 (DRR residual service curve).– Let S be an n-server offering a strict service curve β with the DRR service policy. If each flow i has a maximum packet size l_i^max and a quantum Qi, then flow i is guaranteed the strict service curve βi defined by

eq

with images.

Moreover, if the packet lengths and the Qi are all multiples of a basic unit ε, then the residual service curve can be refined as

eq

with images and images.

Note that a basic unit always exists in practice, because the sizes are multiples of bits or bytes (ε = 1 or ε = 8). From a modeling viewpoint, however, removing this assumption might be more convenient.

This result does not take into account the arrival curves of the flows, so it is a packetized version of Theorem 7.7, and not of Theorem 7.8, which also uses the arrival curve of each flow.

PROOF.– First observe that, at line 5, 0 ≤ DC[i] < l_i^max for all i. Indeed, initially DC[i] = 0 and, after the loop of lines 6–12, either the head-of-line packet has a length strictly larger than DC[i] (hence DC[i] < l_i^max), or there is no packet waiting and DC[i] = 0.

At each execution of the loop of lines 5–12 dedicated to flow i, DC[i] is increased by Qi (line 6), so the quantity of data of flow i served during p such iterations is at most pQi + l_i^max. Moreover, if flow i is always backlogged, line 12 is never executed, and at least pQi − l_i^max is served.

Let (A1,…,An, D1,…,Dn) ∈ S, let (s, t] be a backlogged period for flow i, and let p be the number of complete executions of the loop of lines 5–12 dedicated to flow i (partial executions at the beginning or at the end of this interval are discarded). There are at most p + 1 (complete or partial) executions of the loop for each other flow. Therefore,

Di(t) − Di(s) ≥ pQi − l_i^max

and

β(t − s) ≤ Σj (Dj(t) − Dj(s)) ≤ Di(t) − Di(s) + Σj≠i ((p + 1)Qj + l_j^max),

hence the result (after straightforward manipulations).

For discrete packet lengths that are multiples of ε, the proof is exactly the same; however, at line 5, 0 ≤ DC[i] ≤ l_i^max − ε, and we can replace l_i^max by l_i^max − ε in the proof given above.

8.2.4. Weighted round robin

The RR policy is a very old scheduling policy; it was already mentioned in [KLE 64]. Similar to DRR, flows are served in rounds. When a queue is selected and is not empty, one packet of this flow is served.

One drawback of the RR policy is that flows composed of long packets use more bandwidth than the others. One way to overcome this limitation is to use the weighted round robin (WRR) policy with well-chosen weights. A weight wi is associated with each flow i. It represents the number of packets of flow i that can be served in a row. This policy is precisely described by Algorithm 2.
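
A minimal sketch of one WRR round as just described (the queue and weight names are illustrative, and the round simply lets each queue send up to wi packets in a row):

```python
from collections import deque

def wrr_round(queues, weights, emit):
    """One round of weighted round robin: queue i may send up to weights[i]
    packets in a row (fewer if it empties before that)."""
    for i, queue in queues.items():
        for _ in range(weights[i]):
            if not queue:
                break
            emit(i, queue.popleft())

# Flow 1 has weight 3 (short packets), flow 2 has weight 1 (long packets).
queues = {1: deque([64, 64, 64, 64]), 2: deque([1500, 1500])}
weights = {1: 3, 2: 1}
wrr_round(queues, weights, lambda i, l: print(f"flow {i}: {l} bytes"))
# one round serves three packets of flow 1 and one packet of flow 2
```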

To the best of our knowledge, the first computation of a residual service for WRR in network calculus has been done in [GEO 11], for the specific case of a constant rate server.

Since the lengths of the packets have a strong impact on the performance of the algorithm, it might be useful to give a more precise modeling of the sequence of the packet lengths of a flow.

DEFINITION 8.4 (Packet curves).– Let L = (Ln)n∈ℕ be a cumulative packet length sequence and Ll, Lu be two non-decreasing functions. The sequence allows Ll, Lu as lower and upper packet curves if

∀m, n ∈ ℕ, m ≤ n, Ll(n − m) ≤ Ln − Lm ≤ Lu(n − m).

In other words, Ll(n) and Lu(n) give lower and upper bounds on the total length of n packets that arrive consecutively. Such functions always exist: Ll = 0 and Lu = +∞ are always valid choices. If lmin and lmax are known, a better choice is Ll(n) = n lmin and Lu(n) = n lmax. Moreover, if additional information is available, for example, that packets are of length 1 or 2, and that there are never more than two packets of length 2 in a row, Lu and Ll can take this information into account.
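
For the example just mentioned (packets of length 1 or 2, with never more than two packets of length 2 in a row), an upper packet curve can be obtained by brute force. The closed form Lu(n) = 2n − ⌊n/3⌋ checked below is a derivation made for this illustration, not a formula taken from the text; the lower packet curve Ll(n) = n corresponds to all packets having length 1.

```python
from itertools import product

def max_consecutive_length(n):
    """Largest total length of n consecutive packets, when packets have length
    1 or 2 and at most two packets of length 2 may appear in a row."""
    best = 0
    for seq in product((1, 2), repeat=n):
        if "222" in "".join(map(str, seq)):   # three 2-packets in a row: forbidden
            continue
        best = max(best, sum(seq))
    return best

for n in range(1, 10):
    assert max_consecutive_length(n) == 2 * n - n // 3
print("L^u(n) = 2n - floor(n/3) for n = 1..9;  L^l(n) = n (all packets of length 1)")
```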

THEOREM 8.6 (Residual service curve for WRR servers).– Let S be an n-server offering a strict service curve β with the WRR service policy with weights w1,…,wn. Suppose that, for each flow i, the packets of this flow have lower and upper packet curves Ll_i and Lu_i, and have minimum and maximum packet lengths l_i^min and l_i^max. Then, flow i is guaranteed the strict service curves

[8.10]eq

where images is defined by imagesimages and images.

This theorem gives three residual service curves, with different accuracies and computational complexities, and that depend on knowledge about the packet lengths. The first expression is the most precise, as it uses the packet curves, but it might be complex to compute. The second one takes into account the fact that packets in service use the total bandwidth of the server, whereas the third one can be seen as a linearization. The difference between the last two expressions is illustrated in Figure 8.8, where β = λ1, Qi = 2 and qi = 1.


Figure 8.8. Linear versus shaped staircase residual service curves. Solid line: the service curve obtained with equation [8.12]; dashed line: the one obtained using equation [8.11]. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

PROOF.– We call a full service session for flow j a time interval (x, y] during which the loop of lines 3–7 is executed and dedicated to serving packets of flow j. This interval might be empty if there is no packet of flow j in the server. During each full service session of flow j, at most wj packets are served and, considering a sequence of k full service sessions for flow j, at most kwj packets are served. If those sessions are in the same backlogged period of flow j, then exactly kwj packets are served.

Consider a backlogged period (s, t] of flow i, and let p denote the number of full service sessions of flow i in the interval (s, t]. Using the lower packet curve, Di(t) − Di(s) ≥ Ll_i(pwi). In this interval, there are at most p + 1 full service sessions of each other flow, so, ∀j ≠ i, Dj(t) − Dj(s) ≤ Lu_j((p + 1)wj).

We now can give a lower bound of p: by definition of g, images and images. Now, using the definition of a strict service curve,

eq

Thus, we have images. As f is non-decreasing (images are, and so g is), images.

Now, if minimal and maximal packets lengths are known, we can take imagesimages and images, so that images and imagesimages.

As images has the pseudo-inverse images (see Figure 8.9), we obtain images. The non-negative closure comes from images.

We also have images, which leads to the third expression.


Figure 8.9. Function images and its pseudo-inverse images

8.2.5. TDMA

Time-division multiple access (TDMA) is a scheduling policy in which each flow has a reserved time slot within a global cycle. Flow i can emit only during its allocated slot.

More formally, a cycle of duration c is repeated infinitely. If a cycle begins at time t0, a slot [t0 + bi, t0 + ei) is allocated to flow i. The n intervals are disjoint and all included in [t0, t0 + c), but they do not necessarily form a partition of this interval: there might be, e.g. for synchronization reasons, periods of time allocated to none of the flows.

We assume that the server serves data at a guaranteed rate R. Flow i is served only during the time interval [t0 + bi, t0 + ei), where t0 is the starting time of a cycle, and only if the remaining time is enough for serving the next packet: if l is the length of the packet and t is the time at which its service would start, the inequality t + l/R ≤ t0 + ei must be satisfied.
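
The admission test just described can be written directly; the cycle start, slot bounds and rate below are illustrative parameters.

```python
def can_start_service(t, packet_length, R, t0, b_i, e_i):
    """True if a packet whose service would start at time t fits entirely in
    flow i's slot [t0 + b_i, t0 + e_i) of the current cycle, at rate R."""
    slot_start, slot_end = t0 + b_i, t0 + e_i
    return slot_start <= t and t + packet_length / R <= slot_end

# Slot of 2 ms inside a cycle starting at t0 = 0, server rate 100 Mb/s.
R, t0, b_i, e_i = 100e6, 0.0, 1e-3, 3e-3
print(can_start_service(t=1.5e-3, packet_length=12e3, R=R, t0=t0, b_i=b_i, e_i=e_i))   # True
print(can_start_service(t=2.95e-3, packet_length=12e3, R=R, t0=t0, b_i=b_i, e_i=e_i))  # False
```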

Note that TDMA does not specify the service policy inside a flow: in the example of Figure 8.10, flow 2 may have chosen to send packet images instead of packet images after packet images.


Figure 8.10. The TDMA principle. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

We present here the results of [DAN 14]. In particular, note that as a packet cannot be fragmented, there is a waste of bandwidth.

THEOREM 8.7 (TDMA residual service).– Let S be an n-server offering a strict service curve λR, with R > 0, with a TDMA service policy of cycle length c. Suppose that flow i is allocated a slot i of length si and has a minimum (respectively maximum) packet length l_i^min (respectively l_i^max), such that l_i^max ≤ si R. Then, flow i is guaranteed a strict service curve βi defined by

eq

with images and images, and images otherwise .

Note that from Proposition 3.4, images, so we can also write images. However, we want to emphasize the shape of images as illustrated in Figure 3.2, shifted by images. The construction of the residual service curve for flow i is illustrated in Figure 8.11.


Figure 8.11. TDMA residual service curve. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

PROOF.– The proof is similar to that of DRR and WRR, as it is also mainly based on the number of packets that can be served during each slot. However, as each flow has a dedicated time interval during which it is served, there is no need to consider any other flow.

Let oi represent the minimal amount of data that can be served for flow i during slot i of a cycle, if flow i is always backlogged. If images, the time needed to serve one packet is images, and then images. If images, at least one packet of minimum length is served, and as flow i is always backlogged, the reason a packet could not be served is that the time remaining is not enough, that is at most images. Then, the service offered to flow i during an interval of length si is at least images.

Suppose that (s, t] is a backlogged period for flow i. Let u ∈ (s, t] be the time at which the first packet of flow i starts its service. From that time, there is a cyclic alternation of service phases of at least oi bits and idle phases, the total length of a cycle being c. So Di(t) − Di(u) ≥ oi ⌊(t − u)/c⌋.

It now remains to bound u − s. The longest idle period during a backlogged period is at most c − si + l_i^max/R. Indeed, in each cycle, there is a period c − si that is not dedicated to flow i. Moreover, at the end of the slot dedicated to flow i, there could be a packet arriving for which there is not enough time remaining for its service. This period of time cannot exceed l_i^max/R and is maximal if s is the start of the backlogged period. Hence, u − s ≤ c − si + l_i^max/R, and the result follows.

8.3. Bibliographic notes

The first works on packetizers in the network calculus theory were published in [GEO 96, AGR 96]. The first formal definition is given in [AGR 96, section 6], and the first properties in [GEO 96]. They are also present in the books [CHA 00, Definition 1.7.1] and [LEB 01, Definition 1.7.3].


Figure 8.12. Illustration of the continuity issues in a packetizer. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

Packetization and continuity. In the first definition, the packetizer is instead defined as the function PL: x ↦ max{Ln | Ln ≤ x}, and the packetized function associated with A is then the composition PL ∘ A, which is a right-continuous function that does not fit with our modeling assumptions. Note that this problem had already been noticed in [PAR 93]1. The difference between PL ∘ A and PL(A) is illustrated in Figure 8.12.

This example also illustrates the fact that such a left-continuous packetizer cannot be defined using the composition operator: since the value of A is constant in the interval [1, 3], for any function f, f(A(1)) = f(A(2)), whereas the packetizer PL(A) has different values at 1 and 2. For example, the alternative definition x ↦ max{Ln | Ln < x} composed with A is left-continuous, but in the example shown in Figure 8.12, the resulting function is null up to t = 3, i.e. the packet is not sent to the output just after 1, but stored up to 3, and such a function does not model what a packetizer is.

Toward a general model of packets. Most results only take into account minimal and maximal lengths of packets, whereas, when considering aggregate scheduling, more precise modeling could be done by considering the length distribution in a flow (e.g. a flow primarily composed of short packets and a few long packets does not have the same impact as a flow primarily composed of long packets, even when they have the same arrival curves).

A more refined model has been proposed in [BOU 11b, BOU 12a] with the notion of packet curve. This framework has only been applied to the WRR policy (see section 8.2.4), and the benefit of a more general use of this model is still an open question.

8.4. Summary

Packetizer

PL(A)(0) = 0;  ∀t > 0, PL(A)(t) = max{Ln | ∃u < t, A(u) ≥ Ln}
A − lmax ≤ PL(A) ≤ A

B has arrival curve α ⟹ PL(B) has arrival curve α + lmax

A P-packetized ⟹ d(A, S;P) = d(A, S)

Packet-based scheduling policies. For the notations, refer to the exact statement.

Policy Service curve type Residual service curve Residual service curve type
NP-SP (equation [8.8]) strict image min-plus
NP-SP (equation [8.9]) strict image strict
PGPS (Theorem [8.4]) strict λR image min-plus
DRR (Theorem [8.5]) strict image strict
WRR (equation [8.11]) strict image strict
WRR (equation [8.12]) strict image strict
TDMA (Theorem [8.7]) strict λR image strict

1.“We will adopt the convention that a packet has arrived only after its last bit has arrived” [PAR 93].
