Previously, the quantity of data crossing some point in a network has been described by a cumulative arrival function A, where A(t) represents the amount of data that has crossed that point up to time t. Cumulative arrival functions are by construction non-decreasing and left-continuous, but no further assumption has been made. However, data is often composed of packets with specific sizes (either fixed, or finitely many possible sizes), which has some impact on the performance of the system.
In section 8.1, we will formally define a packetizer, and study its impact on the performance of a server followed by a packetizer. In section 8.2, we will describe several packet-based scheduling policies and derive service curves for individual flows under those scheduling policies.
Most results in the literature model packet sizes with two parameters: the maximal and minimal packet sizes. We will see in section 8.2.4 that refinements are possible.
Consider a flow composed of a sequence of packets (also sometimes called frames or segments), the i-th packet having length si. A packetizer is a server that groups the data of a flow into its packets: it stores the bits of data of a packet until the whole packet has arrived. When the last bit of the packet arrives, it serves all the bits of the packet simultaneously. This is illustrated in Figure 8.1: A is the cumulative arrival function entering the packetizer, and D the departure function, where data is grouped by packets. At time T1, all s1 bits of the first packet have been received, and the packet is then served. In other words, if A(t) < s1 for all t < T1 and A(T1) = s1, then D(t) = 0 for all t ≤ T1 and D(T1+) = s1, and similarly for the next packets. Therefore, D is a staircase function.
Similarly to flows, which are described by their cumulative arrival function, we describe the sequence of packet lengths by the cumulative length of the first packets: Ln = s1 + ⋯ + sn, the total length of the n first packets. Of course, the length of the i-th packet is given by si = Li − Li−1.
DEFINITION 8.1 (Cumulative packet length sequence).– A cumulative packet length sequence is an increasing sequence L = (Ln)n∈ℕ such that L0 = 0 and there exist lmin, lmax > 0 such that, for all n ∈ ℕ, lmin ≤ Ln+1 − Ln ≤ lmax.
The values lmin and lmax are, respectively, called the minimum and maximum packet sizes.
Note that the bound lmax is often simply called the maximum packet size. With our notation, we want to emphasize that we do not require that lmax be the maximum packet size, but only that lmax be an upper bound of the packet sizes and, in the same way, that lmin be a lower bound of the packet sizes. Hence, the denomination maximum/minimum packet size is a slight abuse. The minimum packet size is mainly used to ensure the divergence of L, so that every bit of data belongs to a packet. This is not a strong limitation, since in practice such a bound always exists.
DEFINITION 8.2 (Packetizer).– Let L be a cumulative packet length sequence. An L-packetizer is the function PL : C → C such that, for all A ∈ C, PL(A)(0) = 0 and, for all t > 0,

PL(A)(t) = max{Ln | n ∈ ℕ, ∃u ∈ [0, t), Ln ≤ A(u)}.
We first need to prove that this function is well defined, i.e. that for all A ∈ C, PL(A) ∈ C. In fact, we will prove the stronger statement that PL defines a server. By a slight abuse of notation, we denote by PL this server: PL = {(A, PL(A)) | A ∈ C}.
Let us first prove a preliminary lemma.
LEMMA 8.1.– Let L be a cumulative packet length sequence and PL its associated packetizer. Then, for all A ∈ C and t ≥ 0, there exists n ∈ ℕ such that

Ln = PL(A)(t) ≤ A(t) ≤ Ln+1.
Since (Ln)n∈ℕ is diverging, it is obvious that there exists n such that Ln−1 ≤ A(t) ≤ Ln. The interesting point is that PL(A)(t) ∈ {Ln, Ln−1}, and the two possibilities exist, as shown in Figure 8.2: PL(A)(u) = L1 and A(u) = L2, whereas PL(A)(v) = L2 and A(v) = L2. Note that for a given n ∈ ℕ, there may exist no t such that PL(A)(t) = Ln: there can be several packets arriving at the same time (at time w, for example).
PROOF.– L0 = A(0) = 0 < L1, so the property is true for t = 0. Now assume that t > 0. Since L is divergent and since PL(A) takes its values in {Ln | n ∈ ℕ}, there exists n ∈ ℕ such that PL(A)(t) = Ln.
By definition of PL(A)(t), there exists u < t such that Ln ≤ A(u), and since A is non-decreasing, Ln ≤ A(t).
As PL(A)(t) < Ln+1, for all u < t, A(u) < Ln+1. But, since A is left-continuous, A(t) = sup{A(u) | u < t} ≤ Ln+1.
□
PROPOSITION 8.1 (A packetizer is a server).– If L is a cumulative packet length sequence, then PL is a server: for all A ∈ C, PL(A) ∈ C and PL(A) ≤ A.
PROOF.– PL(A) ≤ A is a direct consequence of Lemma 8.1.
First, PL(A)(0) = 0 by definition of PL, and, for s ≤ t, {Ln | ∃u < s, Ln ≤ A(u)} ⊆ {Ln | ∃u < t, Ln ≤ A(u)}, so PL(A)(s) ≤ PL(A)(t) and PL(A) is non-decreasing. It remains to show that PL(A) is piecewise continuous and left-continuous.
From Lemma 8.1, for all t > 0, there exists n ∈ ℕ such that PL(A)(t) = Ln, and from the definition, there exists u < t such that Ln ≤ A(u). But then, for all v ∈ (u, t], Ln ≤ PL(A)(v) ≤ PL(A)(t) = Ln. Thus, PL(A) = Ln on the interval (u, t], hence it is continuous on this interval. In other words, PL(A) is left-continuous and piecewise continuous.
□
COROLLARY 8.1 (Maximum buffer size of a packetizer).– Let L be a cumulative packet length sequence with maximum packet length lmax. The backlog of PL is at most lmax: for all A ∈ C and t ≥ 0,

A(t) − PL(A)(t) ≤ lmax.
PROOF.– This is a direct consequence of Lemma 8.1: for all t ≥ 0, there exists n ∈ ℕ such that

Ln = PL(A)(t) ≤ A(t) ≤ Ln+1, hence A(t) − PL(A)(t) ≤ Ln+1 − Ln ≤ lmax.
□
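The behavior of a packetizer, and the backlog bound of Corollary 8.1, can be sketched numerically. The fragment below is an illustration only, with time discretized into sampling instants: a packet is released only strictly after the sample at which its last bit has arrived, matching the left-continuous convention of Definition 8.2.

```python
import bisect

def packetizer_output(L, A_samples):
    """Packetizer output sampled at the same instants as A_samples.

    L: cumulative packet lengths, L[0] = 0, increasing.
    A_samples: samples of the (non-decreasing) arrival function A.
    A packet is released only strictly after the sample at which
    its last bit has arrived (left-continuous convention).
    """
    D, seen = [], 0.0   # seen = largest A-value observed at earlier samples
    for a in A_samples:
        n = bisect.bisect_right(L, seen) - 1  # last fully arrived packet
        D.append(L[n])
        seen = max(seen, a)
    return D

A = [0, 1, 2, 2, 3]
D = packetizer_output([0, 2, 3], A)   # two packets, of sizes 2 and 1
```

On this trace, D = [0, 0, 0, 2, 2]: the first packet (size 2) is complete at the third sample and released just after it, and the backlog A − D never exceeds the maximum packet size 2, as stated by Corollary 8.1.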
DEFINITION 8.3 (Packetized cumulative function).– Let PL be a packetizer. The cumulative function A ∈ C is said to be PL-packetized if A = PL(A).
LEMMA 8.2.– PL is idempotent: for all A ∈ C, PL(PL(A)) = PL(A).
PROOF.– For all A ∈ C, PL(PL(A))(0) = 0 = PL(A)(0) and, for all t > 0, PL(PL(A))(t) = max{Ln | ∃u < t, Ln ≤ PL(A)(u)}. As PL(A) is non-decreasing, left-continuous and takes its values in {Ln | n ∈ ℕ}, this maximum is PL(A)(t).
□
Since the exact values of the cumulative packet length sequence are often unknown beyond its maximum and minimum packet lengths, the superscript L is often omitted, and a PL-packetizer is called a P-packetizer.
THEOREM 8.1 (Arrival curve after a packetizer).– Let P be a packetizer with maximum packet size lmax, and (A, D) ∈ P. If A has a maximal (respectively minimal) arrival curve αu (respectively αl), then D has maximal (respectively minimal) arrival curve αu + lmax (respectively [αl − lmax]+).
PROOF.– From Corollary 8.1, for all t, A(t) − lmax ≤ D(t) ≤ A(t). Hence, for all s ≤ t,

D(t) − D(s) ≤ A(t) − A(s) + lmax ≤ αu(t − s) + lmax and D(t) − D(s) ≥ A(t) − A(s) − lmax ≥ αl(t − s) − lmax,

hence the result from the equivalent definition of arrival curves in Proposition 5.1.
□
We now study the system consisting of the concatenation of a server S and a packetizer P, as depicted in Figure 8.3. Figure 8.4 shows an example of cumulative arrival functions: the arrival A is packetized and first crosses server S; the output B is not necessarily packetized; C, the output of the packetizer, is packetized again. Note that the cumulative packet length sequence is the same at the input and the output of the system.
THEOREM 8.2 (Server/packetizer system).– Let S be a server offering a minimal service curve β and P be a packetizer with maximum packet size lmax. The concatenation S;P has the following properties:

– S;P offers the minimal service curve [β − lmax]+;
– if the arrival cumulative function A is P-packetized, then d(A, S;P) = d(A, S).
There is no equivalent of the first property of the theorem concerning strict service curves. Consider the example of Figure 8.5 and suppose that S offers a strict service curve β and that every packet has length 1. Let A = v1,1,−1 be an arrival function. We use the notations of Figure 8.3. The cumulative function B is admissible, since (A, B) ∈ S, and C = P(B) = v1,1,−2. On the one hand, for all t > 0, C(t) < A(t), so the system is never empty. On the other hand, if β − 1 were a strict service curve, [β − 1]+ = β2,1 would also be one and, from Theorem 5.5, the length of the backlogged period would be bounded by 2. Thus, β − 1 cannot be a strict service curve.
The intuition of that last property is the following: the packetizer serves any packet when its last bit has been served by S. However, if the arrival cumulative function has already been packetized, every bit of this packet arrives at the same time. Therefore, the delay for the packet in the system S;P is the delay of the last bit of the packet in S, which is the worst-case delay for any bit of this packet. Thus, intuitively, the packetizer does not introduce any delay for any packet.
This last property also means that the last packetization does not introduce an additional delay. However, the same does not hold for the concatenation of such systems: although d(A, S1;P) = d(A, S1), the departure cumulative functions are different, and the departure cumulative function of S1;P is more bursty than that of S1.
PROOF.– We use the notations of Figure 8.3.
First, for all t, C(t) ≥ B(t) − lmax. Indeed, for all t, there exists n ∈ ℕ such that Ln = C(t) ≤ B(t) ≤ Ln+1, so B(t) − C(t) ≤ Ln+1 − Ln ≤ lmax. The first property follows: C ≥ (A ⊗ β) − lmax ≥ A ⊗ (β − lmax).
Second, for all i, hDev(A, C, Ti) = hDev(A, B, Ti), where Ti denotes the arrival time of the i-th packet. Suppose that this is not true and that hDev(A, C, Ti) > hDev(A, B, Ti). It means that there exists ε > 0 such that C(Ti + hDev(A, B, Ti) + ε) < A(Ti) ≤ B(Ti + hDev(A, B, Ti) + ε). But, since A is P-packetized, A(Ti) is of the form Ln, and from the definition of a P-packetizer, we should have C(Ti + hDev(A, B, Ti) + ε) ≥ Ln, which is in contradiction with the previous inequality. Then, hDev(A, C, Ti) ≤ hDev(A, B, Ti). The reverse inequality is also true, as C ≤ B. So, for all i, we have hDev(A, C, Ti) = hDev(A, B, Ti), hence the result.
□
COROLLARY 8.2 (Packetizer as a delay).– Let S be a server and P be a packetizer. If a cumulative arrival function A is P-packetized, then the system S;P offers the pure-delay min-plus service curve δd(A,S) to A.
PROOF.– From Theorem 6.2, S;P offers the min-plus service curve δd(A,S;P), and from Theorem 8.2, d(A, S;P) = d(A, S).
□
COROLLARY 8.3 (Arrival curve from a packetizer).– Let S be a server offering a minimal service curve β, P be a packetizer of maximal packet size lmax, and (A, C) ∈ S;P. If A is P-packetized and if αu is a maximal arrival curve for A, then the cumulative departure function C has the maximal arrival curve

min((αu ⊘ β) + lmax, αu ⊘ δd(A,S)).
The term αu ⊘ δd(A,S) might in some cases improve the result compared to only using the deconvolution by β.
PROOF.– The first two terms are a consequence of Theorems 5.3 and 8.2, and the third term is a consequence of the previous corollary. The proof concludes by applying Proposition 5.2: the minimum of two arrival curves for C is also an arrival curve for C.
□
In the previous section, we defined the notion of packetizer, which allows us to model servers crossed by a flow made up of packets. A server, as described in the previous chapters, can be crossed by several flows, and then residual service curves can be computed for individual flows.
The aim of this section is then to study different service (or scheduling) policies, taking into account that one packet has to be served entirely before the next one. Some policies, like FIFO, behave exactly as in the fluid case: if a packet arrived before another, it is entirely served before the other. However, some policies behave differently from the fluid case. One representative example is the fixed priority policy. In the fluid analysis of section 7.3.2, when a packet with high priority arrives, it interrupts the service of the low-priority packet. In practice, this does not happen, and the server finishes serving the lower-priority packet before serving the higher-priority packet. This is the object of section 8.2.1. Similarly, the GPS policy presented in section 7.3.3 has different implementations that take the packetization into account: the packetized GPS described in section 8.2.2 and the deficit round robin (DRR) in section 8.2.3.
Some policies are specific to the packetized nature of the flows, for example the weighted round robin (WRR) in section 8.2.4 and time-division multiple access (TDMA) in section 8.2.5.
This section focuses on some of the most representative policies, but does not aim to be exhaustive. For example, the Audio Video Bridging protocol combines priority and credit-based scheduling; it has been studied in [QUE 12, RUI 14].
The static priority policy presented in section 7.3.2 assumes that, when high-priority flows are backlogged, low-priority flows receive no service. In practice, as a server usually serves packets one after the other, when a packet starts being served, it will be entirely served. Hence, if a low-priority packet has started being served, it is served even if a higher-priority packet arrives in the meantime. We say that the scheduling is non-preemptive.
Figure 8.6 illustrates the difference between these two policies. The server is idle before time 1 and, when busy, serves data at rate exactly 1. From time 1 to 7, the scheduling is preemptive: a low-priority packet L1 is received at time 1 and a high-priority packet H1 at time 2. Between times 1 and 2, the server serves one unit of L1; from time 2, it suspends the service of L1 and serves packet H1; after the service of H1, it resumes serving L1. From time 9 to 15, the scheduling is non-preemptive: low-priority packet L2, which arrived before high-priority packet H2, is entirely served before packet H2 is served.
The aim of this section is to capture the non-preemption effect in the network calculus framework. We assume a total order on the priorities of the input flows of an n-server: if i < j, then flow i has higher priority than flow j. First, we can write an equation translating the fact that, during any backlogged period [s, t] of the flows with priority at least i, at most one packet of the lower-priority flows is served: for A1,…,An the arrival flows and D1,…,Dn the departure flows, if [s, t] is a backlogged period of the flows with priority at least i, then

D_{>i}(t) − D_{>i}(s) ≤ l^max_{>i},   [8.6]
where D_{>i} = Σ_{j>i} Dj and l^max_{>i} = max_{j>i} l_j^max, with similar definitions for A_{>i} and for the flows of priority at least i.
This implication is sufficient to derive a (min,plus) residual service curve for flow i, but computing a strict residual service curve is a little more involved. Indeed, the higher-priority flows can themselves be stopped by a packet of lower priority. Let (s, t] be a backlogged period for flow i and let u be the start of the backlogged period of the higher-priority flows at time s. We know that in the interval of time [u, t], at most one packet of priority strictly lower than i can be served (and its service begins at time at most u), and in the interval [u, s], at most one packet of flow i can be served. Moreover, these two possibilities are exclusive, because [u, s] is a backlogged period for the higher-priority flows. Then, the inequality

D_{>i}(t) − D_{>i}(u) + Di(s) − Di(u) ≤ max(l^max_{>i}, l_i^max)   [8.7]

must be satisfied. This is now sufficient to derive a strict service curve for flow i.
THEOREM 8.3 (Non-preemptive static priority (NP-SP)).– Let S be an NP-SP n-server taking the natural order as priority order, offering an aggregate minimal strict service curve β. Suppose that, for all i ∈ {1,…,n}, αi is an arrival curve for arrival flow i. Then, βi (respectively β̃i) is a min-plus (respectively strict) service curve offered to flow i, with

βi = [β − Σ_{j<i} αj − max_{j>i} l_j^max]+,   [8.8]

β̃i = [β − Σ_{j<i} αj − max_{j≥i} l_j^max]+.   [8.9]
PROOF.– To simplify the notations, we consider three flows: flow H is the aggregation of the flows with priority strictly higher than i, flow L that of the flows with priority strictly lower than i, and flow i itself. The arrival curve of flow H is αH = Σ_{j<i} αj and the maximum packet length of flow L is l_L^max = max_{j>i} l_j^max.
For all t ≥ 0, let s = sup{u ≤ t | AH(u) = DH(u) and Ai(u) = Di(u)}, i.e. the last time before t such that AH(s) = DH(s) and Ai(s) = Di(s). Then,

Di(t) − Ai(s) = Di(t) − Di(s)
= (D(t) − D(s)) − (DH(t) − DH(s)) − (DL(t) − DL(s))
≥ β(t − s) − (DH(t) − DH(s)) − l_L^max
≥ β(t − s) − (AH(t) − AH(s)) − l_L^max
≥ β(t − s) − αH(t − s) − l_L^max.

First, we used that Ai(s) = Di(s); second, that β is a strict service curve and that (s, t] is a backlogged period for the aggregate flow; third, that the server serves at most one packet of the lower-priority flow when the other flows are backlogged (from equation [8.6]); fourth, that DH(t) ≤ AH(t) and AH(s) = DH(s); and finally, that αH is an arrival curve for the high-priority flows.
Moreover, Di is non-decreasing and Di(t) ≥ Ai(s), so Di(t) ≥ Ai(s) + [β − αH − l_L^max]+(t − s) = Ai(s) + βi(t − s) ≥ (Ai ⊗ βi)(t). As a result, βi is a min-plus service curve for flow i.
We now compute a strict service curve for flow i. Let (s, t] be a backlogged period for this flow and u = Start_{AH,DH}(s) be the start of the backlogged period of s for the flows with strictly higher priority.
As flow i is backlogged during (s, t] and flow H during (u, s], the interval (u, t] is a backlogged period for the aggregate flow and, for all v ∈ [u, t],

D(v) − D(u) ≥ β(v − u),
where β is a strict service curve and u is the start of the backlogged period for flow H. We now bound DL(v) − DL(u) + Di(s) − Di(u). During (u, t], at most one packet from flow L can be served; moreover, it is served at the start of the backlogged period. Either one such packet is served, and then no data from flow i can be served before time s, as (u, s] is a backlogged period for flow H; or no such packet is served, but in this case, one packet of flow i might be in service at time u. Therefore, DL(v) − DL(u) + Di(s) − Di(u) ≤ max(l_L^max, l_i^max) (equation [8.7]); taking v = t, we obtain

Di(t) − Di(s) ≥ β(t − u) − αH(t − u) − max(l_L^max, l_i^max).
Note that this inequality holds although u may be smaller than s. As Di is non-decreasing and t − u ≥ t − s,

Di(t) − Di(s) ≥ [β − αH − max(l_L^max, l_i^max)]+(t − s) = β̃i(t − s),

which completes the proof.
□
The expressions for the residual service curves for flow i are not optimal. Indeed, they do not take into account the fact that when a packet of lower priority is served, it is served at the guaranteed service rate, as illustrated in Figure 8.7. Consider a server offering a strict service curve β, shared by two flows, and suppose that the higher-priority flow has arrival curve αH. The residual service curve for the low-priority flow is then [β − αH]+. Now, if all packets of the low-priority flow have length 1, these packets are served at the full service rate when they can be served. More details about this approach can be found in [CHO 08, MAN 11, MAN 12].
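The min-plus residual curve of the form [β − αH − l_L^max]+ derived in the proof of Theorem 8.3 can be evaluated pointwise. The sketch below uses an illustrative rate-latency curve β and token-bucket curve αH; all numerical values are assumptions chosen for the example.

```python
def rate_latency(R, T):
    # beta_{R,T}(t) = R * (t - T) if t > T, else 0
    return lambda t: max(0.0, R * (t - T))

def token_bucket(r, b):
    # gamma_{r,b}(t) = b + r * t for t > 0, and 0 at t = 0
    return lambda t: b + r * t if t > 0 else 0.0

def np_sp_residual(beta, alpha_H, l_max_L):
    # pointwise evaluation of [beta - alpha_H - l_max_L]^+
    return lambda t: max(0.0, beta(t) - alpha_H(t) - l_max_L)

beta = rate_latency(R=10.0, T=1.0)     # aggregate strict service curve
alpha_H = token_bucket(r=2.0, b=3.0)   # higher-priority arrivals
beta_i = np_sp_residual(beta, alpha_H, l_max_L=1.0)
```

Here beta_i(2.0) = 2.0 and beta_i(3.0) = 10.0: once the residual curve becomes positive, it grows at the rate R − r = 8.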
The general processor sharing (GPS) policy, presented in section 7.3.3, is an idealized policy that assumes that each piece of information (a packet or even a bit of data) can be decomposed into infinitesimal pieces to guarantee proportional sharing at every time and data scale. However, bits cannot be decomposed in practice, and packets are usually not fragmented. Consequently, several packet-based scheduling policies have been developed to approximate the idealized GPS policy.
In this section, we present the packetized GPS (PGPS) policy that has been defined in [PAR 93]. It is also known as weighted fair queueing and is certainly one of the most famous packet-by-packet scheduling policies.
Its behavior is based on the GPS policy: each time the server must select a packet to serve (either at the start of a backlogged period or at the end of the service of a packet), the selected packet is the one with the smallest departure time under the idealized GPS policy among the packets present in the system (the packet must have arrived and must not have been served yet).
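The selection rule above can be sketched as follows. This is an illustration only: the GPS departure times are supplied as precomputed inputs (names and numerical values are assumptions), since simulating the fluid GPS reference is outside the scope of the sketch.

```python
import heapq

def pgps_schedule(packets, R):
    """Serve packets non-preemptively at rate R; whenever the server picks a
    packet, it chooses, among the packets that have arrived and are not yet
    served, the one with the smallest (precomputed) GPS departure time.
    packets: list of (arrival, length, gps_finish, name)."""
    pending = sorted(packets)               # by arrival time
    ready, out, t, i = [], [], 0.0, 0
    while ready or i < len(pending):
        if not ready and pending[i][0] > t:
            t = pending[i][0]               # jump to the next arrival
        while i < len(pending) and pending[i][0] <= t:
            arr, ln, gf, name = pending[i]
            heapq.heappush(ready, (gf, arr, name, ln))
            i += 1
        gf, arr, name, ln = heapq.heappop(ready)
        t += ln / R                         # non-preemptive service
        out.append((name, t))
    return out
```

When several packets are present, the one with the smallest GPS departure time is chosen; a packet already in service is never preempted, even if a packet with a smaller GPS departure time arrives meanwhile.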
The next result is an adaptation of [PAR 93] to our framework.
THEOREM 8.4 (PGPS).– Consider a PGPS n-server with parameters φ1,…,φn, which is a λR-greedy shaper. If the corresponding GPS server offers to flow i the strict service curve β̃i, then the PGPS server offers to this flow the min-plus minimal service curve [β̃i − lmax]+, where lmax is an upper bound on the packet lengths of the flows.
PROOF.– Our aim is to compare two servers, S and S′, both of which are λR-greedy shapers (in particular, λR is a strict service curve for both systems), where S uses the GPS policy and S′ the PGPS policy. We remark that S and S′ have the same aggregate behavior, so during each interval of time [s, t] they serve exactly the same amount of data.
The proof has several steps. We first compare the departure times under GPS and under PGPS.
Let us order the packets according to their departure times under PGPS: packet k is the k-th to exit the system under PGPS, and we denote by αk its arrival time in the system, tk its departure time and uk its departure time if the service policy were GPS. Two cases can occur. Either uk > um for all m < k: then, at time tk, S′ has served exactly packets 1,…,k, whereas at time uk, S has served packets 1,…,k plus possibly parts of some other packets; as both servers serve the same aggregate amount of data, tk ≤ uk. Or there exists m < k with uk < um: packet k arrives too late to be served according to the GPS order. But when it arrives, it waits at most for the end of the service of the packet in service before being served. Therefore, if packet k has length ℓ, tk ≤ αk + lmax/R + ℓ/R. But we also have αk + ℓ/R ≤ uk, so tk ≤ uk + lmax/R. Thus, in both cases, tk ≤ uk + lmax/R.
Second, we compare the departure processes of each flow in S and S′. For each arrival process (A1,…,An), let (D1,…,Dn) and (D′1,…,D′n) be the departure processes in S and S′, respectively. We show that, for all i ∈ {1,…,n} and all t ≥ 0, Di(t) − D′i(t) ≤ lmax.
The service rate of the servers is exactly R. Thus, in S, the service rate of flow i at each time is in the interval [0, R], and in S′, the service rate of flow i is either 0 or R. Then, the difference between Di and D′i is maximized at a time just before the beginning of the service of a packet of flow i in S′. Let t be such a time and let ℓ be the length of the packet beginning its service at time t. The service of that packet ends at time t + ℓ/R in S′. Let τ be the end of service of that packet in S: Di(τ) = D′i(t) + ℓ. But we know that t + ℓ/R ≤ τ + lmax/R, so t ≤ τ + (lmax − ℓ)/R. Then, as Di increases at rate at most R, Di(t) ≤ Di(τ) + (lmax − ℓ) = D′i(t) + lmax, and Di(t) − D′i(t) ≤ lmax.
Finally, if β̃i is a strict service curve for flow i in S, then, for all t, Di(t) ≥ (Ai ⊗ β̃i)(t), and hence D′i(t) ≥ (Ai ⊗ β̃i)(t) − lmax. This, with 0 ≤ D′i ≤ Ai, leads to the result.
□
The DRR policy [SHR 95] is another popular scheduling policy that mimics GPS by offering a proportion of the bandwidth to each flow (or class of service). Compared to PGPS, it is simpler and algorithmically more efficient: PGPS requires the computation of the theoretical departure time under GPS and a comparison with the other departure times; it therefore requires at least O(log(n)) operations per packet, where n is the number of flows. We will see that DRR requires only O(1) operations at each step.
The principle of this scheduling policy is to associate a quantum Qi (playing the role of the GPS weight φi) with each flow. Flows are served in rounds, and this quantum approximately represents the amount of data of flow i that can be served in each round.
More precisely, DRR is described in Algorithm 1. Assume that there are n input flows, and that when a packet of flow i enters the system, it is put in queue i. When flow i is selected to be served, a deficit counter DC[i] is increased by Qi and as long as DC[i] is at least equal to the length of the head-of-line packet, this packet is served and DC[i] is decreased accordingly. The service of flow i ends when the flow lacks enough credit to send the next packet or when its queue is empty. In the latter case, DC[i] is set to zero.
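A minimal sketch of one DRR round, following the description above. The line numbers in the comments refer to Algorithm 1 as cited in the text; the data values are illustrative.

```python
from collections import deque

def drr_round(queues, quantum, deficit, send):
    """One round of DRR: each backlogged flow i gets its quantum Q_i of extra
    credit, then serves head-of-line packets while the credit suffices.
    queues: dict flow -> deque of packet lengths; deficit: dict flow -> DC[i]."""
    for i, q in queues.items():
        if not q:
            continue                       # flows with empty queues are skipped
        deficit[i] += quantum[i]           # line 6: credit the quantum
        while q and q[0] <= deficit[i]:    # serve while enough credit
            pkt = q.popleft()
            deficit[i] -= pkt
            send(i, pkt)
        if not q:
            deficit[i] = 0                 # line 12: reset when the queue empties

# two flows, quantum 4 each; flow 2's 5-unit packet needs two rounds of credit
queues = {1: deque([3, 2]), 2: deque([5])}
deficit = {1: 0, 2: 0}
sent = []
for _ in range(2):
    drr_round(queues, {1: 4, 2: 4}, deficit, lambda i, l: sent.append((i, l)))
```

After two rounds, sent is [(1, 3), (1, 2), (2, 5)]: flow 2 accumulates credit across rounds until its long packet fits, and the deficit counters are reset once the queues are empty.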
Note that no assumption is made about the policy inside each queue. The only restriction is that the head of the queue remains the same from line 7 to line 9 within an iteration of the loop. However, it can change for the next iteration if, for example, a packet with a higher priority arrives in the meantime.
THEOREM 8.5 (DRR residual service curve).– Let S be an n-server offering a strict service curve β with the DRR service policy. If each flow i has a maximum packet size l_i^max and a quantum Qi, then flow i is guaranteed the strict service curve βi defined by

βi = (Qi / Σ_j Qj) [β − K]+,

with K = Σ_{j≠i} (Qj + l_j^max) + (l_i^max/Qi) Σ_{j≠i} Qj.
Moreover, if the packet lengths and the quanta Qi are all multiples of a basic unit ε, then the residual service curve can be refined as

β′i = (Qi / Σ_j Qj) [β − K′]+,

with K′ obtained from K by replacing each l_j^max by l_j^max − ε.
Note that a basic unit always exists in practice, because packet sizes are multiples of bits or bytes (ε = 1 or ε = 8). From a modeling viewpoint, however, removing this assumption might be more convenient.
This result does not take into account the arrival curves of the flows: it is thus a packetized version of Theorem 7.7, not of Theorem 7.8, which also uses the arrival curves of each flow.
PROOF.– First observe that, at line 5, 0 ≤ DC[i] < l_i^max for all i. Indeed, initially DC[i] = 0, and after the loop of lines 6–12, either the head-of-line packet has length strictly larger than DC[i] (hence DC[i] < l_i^max), or there is no packet waiting and DC[i] = 0.
At each execution of the loop of lines 5–12 dedicated to flow i, DC[i] is increased by Qi (line 6), so the quantity of data of flow i served during p iterations of the loop dedicated to flow i is at most pQi + l_i^max. Moreover, if flow i is always backlogged, line 12 is never executed, and at least pQi − l_i^max is served.
Let (A1,…,An, D1,…,Dn) ∈ S, let (s, t] be a backlogged period for flow i, and let p be the number of complete executions of the loop of lines 5–12 dedicated to flow i (partial executions at the beginning or at the end of this interval are discarded). There are at most p + 1 (complete or partial) executions of the loop for each other flow. Therefore,

Di(t) − Di(s) ≥ pQi − l_i^max and, for all j ≠ i, Dj(t) − Dj(s) ≤ (p + 1)Qj + l_j^max,

and

Σ_{j=1}^n (Dj(t) − Dj(s)) ≥ β(t − s),

hence the result (after straightforward manipulations).
For packet lengths that are multiples of ε, the proof is exactly the same; however, at line 5, DC[i] ≤ l_i^max − ε, and we can replace l_i^max by l_i^max − ε in the proof given above.
□
The RR policy is a very old scheduling policy; it was already mentioned in [KLE 64]. Similar to DRR, flows are served in rounds. When a queue is selected and is not empty, one packet of this flow is served.
One drawback of the RR policy is that flows composed of long packets use more bandwidth than the others. One way to overcome this limitation is to use the weighted round robin (WRR) policy with well-chosen weights. A weight wi is associated with each flow i; it represents the number of packets of flow i that can be served in a row. This policy is described precisely by Algorithm 2.
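One WRR round can be sketched as follows. The example (with illustrative values) also shows the drawback mentioned above: a flow with long packets, here a 10-unit packet against 1-unit packets, consumes more bandwidth than its weight alone suggests.

```python
from collections import deque

def wrr_round(queues, weights, send):
    """One WRR round: flow i may send up to w_i head-of-line packets,
    regardless of their lengths."""
    for i, q in queues.items():
        for _ in range(weights[i]):
            if not q:
                break                 # empty queue: move on to the next flow
            send(i, q.popleft())

# flow 1 (weight 2) has one long packet; flow 2 (weight 1) has short ones
queues = {1: deque([10, 1, 1]), 2: deque([1, 1])}
sent = []
wrr_round(queues, {1: 2, 2: 1}, lambda i, l: sent.append((i, l)))
```

In this round, flow 1 sends 11 units of data against 1 unit for flow 2, although the weights are only 2 against 1: WRR counts packets, not bits.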
To the best of our knowledge, the first computation of a residual service curve for WRR in network calculus was done in [GEO 11], for the specific case of a constant-rate server.
Since the lengths of the packets have a strong impact on the performance of the algorithm, it might be useful to give a more precise modeling of the sequence of the packet lengths of a flow.
DEFINITION 8.4 (Packet curves).– Let L = (Ln)n∈ℕ be a cumulative packet length sequence and Lℓ, Lu be two non-decreasing functions. The sequence L admits Lℓ and Lu as lower and upper packet curves if, for all m, n ∈ ℕ,

Lℓ(n) ≤ Lm+n − Lm ≤ Lu(n).
In other words, Lℓ(n) and Lu(n) give lower and upper bounds on the total length of n packets that arrive consecutively. Such functions always exist: Lℓ = 0 and Lu = +∞ are always valid choices. If lmin and lmax are known, a better choice is Lℓ(n) = n lmin and Lu(n) = n lmax. Moreover, if additional information is available, for example that packets are of length 1 or 2 and that there are never more than two packets of length 2 in a row, Lu and Lℓ can take this information into account.
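Packet curves can be checked mechanically against a given packet length sequence. The sketch below verifies the sliding-window property of Definition 8.4 on the example just mentioned; the tighter upper curve n + ⌈2n/3⌉, encoding the at-most-two-long-packets-in-a-row assumption, is our illustrative choice.

```python
def respects_packet_curves(L, L_lower, L_upper):
    """Check that every group of n consecutive packets has total length
    between L_lower(n) and L_upper(n); L is the cumulative sequence, L[0] = 0."""
    N = len(L) - 1
    return all(
        L_lower(n) <= L[m + n] - L[m] <= L_upper(n)
        for m in range(N + 1)
        for n in range(N + 1 - m)
    )

# packets of length 1 or 2, never more than two 2s in a row
lengths = [1, 2, 2, 1, 2]
L = [0]
for s in lengths:
    L.append(L[-1] + s)

trivial = respects_packet_curves(L, lambda n: n, lambda n: 2 * n)
# among n consecutive packets, at most ceil(2n/3) can have length 2
tight = respects_packet_curves(L, lambda n: n, lambda n: n + -(-2 * n // 3))
```

Both checks succeed here, and the tighter curve gives, for example, Lu(3) = 5 instead of 6, which is exactly the kind of extra information the WRR analysis below can exploit.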
THEOREM 8.6 (Residual service curve for WRR servers).– Let S be an n-server offering a strict service curve β with the WRR service policy with parameters w1,…,wn. Suppose that, for each flow i, the packets of this flow have lower and upper packet curves Lℓ_i and Lu_i, and have minimum and maximum packet lengths l_i^min and l_i^max. Then, flow i is guaranteed the strict service curves

βi = f ∘ β,   β'i = qi ⌊[β − Qi]+/(qi + Qi)⌋,   [8.11]

β''i = [qi (β − qi − 2Qi)/(qi + Qi)]+,   [8.12]

where g is defined by g(p) = Lℓ_i(p wi) + Σ_{j≠i} Lu_j((p + 1) wj), f(x) = Lℓ_i(wi ⌊g^{-1}(x)⌋), qi = wi l_i^min and Qi = Σ_{j≠i} wj l_j^max.
This theorem gives three residual service curves, with different accuracies and computational complexities, which depend on the knowledge available about the packet lengths. The first expression is the most precise, as it uses the packet curves, but it might be complex to compute. The second one takes into account the fact that packets in service use the total bandwidth of the server, whereas the third one can be seen as a linearization of the second. The difference between the last two expressions is illustrated in Figure 8.8, where β = λ1, Qi = 2 and qi = 1.
PROOF.– We call a full service session for flow j a time interval (x, y] during which the loop of lines 3–7 is executed and dedicated to serving packets of flow j. This interval might be empty if there is no packet of flow j in the server. During each full service session of flow j, at most wj packets are served, and considering a sequence of k full service sessions for flow j, at most kwj packets are served. If those sessions are in the same backlogged period of flow j, then exactly kwj packets are served.
Consider a backlogged period (s, t] of flow i, and let p denote the number of full service sessions of flow i in the interval (s, t]. Using the lower packet curve, Di(t) − Di(s) ≥ Lℓ_i(p wi). In this interval, there are at most (p + 1) full service sessions of each other flow j, so Dj(t) − Dj(s) ≤ Lu_j((p + 1) wj).
We can now give a lower bound on p: by definition of g and of a strict service curve,

Di(t) − Di(s) ≥ β(t − s) − Σ_{j≠i} Lu_j((p + 1) wj) = β(t − s) − g(p) + Lℓ_i(p wi).
Thus, in all cases, Di(t) − Di(s) ≥ Lℓ_i(wi ⌊g^{-1}(β(t − s))⌋) = f(β(t − s)). As f is non-decreasing (Lℓ_i and g^{-1} are, and so is f), βi = f ∘ β is a strict service curve for flow i, hence the first expression.
Now, if the minimal and maximal packet lengths are known, we can take Lℓ_i(n) = n l_i^min and Lu_j(n) = n l_j^max, so that g(p) = p qi + (p + 1) Qi and f(x) = qi ⌊g^{-1}(x)⌋, with qi = wi l_i^min and Qi = Σ_{j≠i} wj l_j^max.
As g has the pseudo-inverse g^{-1}(x) = (x − Qi)/(qi + Qi) (see Figure 8.9), we obtain β'i = qi ⌊[β − Qi]+/(qi + Qi)⌋. The non-negative closure comes from the fact that Di(t) − Di(s) ≥ 0.
We also have ⌊x⌋ ≥ x − 1, which leads to the third expression.
□
Time-division multiple access (TDMA) is a scheduling policy in which each flow has a reserved time slot in a global cycle. Flow i can emit only during its allocated slot.
More formally, a cycle of duration c is repeated indefinitely. If a cycle begins at time t0, the slot [t0 + bi, t0 + ei) is allocated to flow i. The n intervals are disjoint and all included in [t0, t0 + c), but they do not necessarily form a partition of this interval: there might be, e.g. for synchronization reasons, periods of time allocated to none of the flows.
We assume that the server serves data at a guaranteed rate R. Flow i is served only during the time intervals [t0 + bi, t0 + ei), where t0 is the starting time of a cycle, and only if the remaining time suffices to serve the next packet: if ℓ is the length of the packet and t the time at which its service would start, the inequality t + ℓ/R ≤ t0 + ei must be satisfied.
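The slot rule above can be sketched with a small FIFO simulation (flow assumed backlogged from time 0; numerical values are illustrative): a packet whose service would overrun the current slot waits for the slot of the next cycle.

```python
def tdma_departures(lengths, R, c, b_i, e_i):
    """Departure times of flow-i packets (FIFO, backlogged from time 0)
    under TDMA: a packet may start service at time t only if it also
    finishes before the end of the current slot [k*c + b_i, k*c + e_i).
    Assumes every packet fits in a slot: length/R <= e_i - b_i."""
    t, k, out = b_i, 0, []
    for l in lengths:
        while t + l / R > k * c + e_i:     # not enough time left in this slot
            k += 1
            t = k * c + b_i                # wait for the next cycle's slot
        t += l / R
        out.append(t)
    return out

# cycle of 10 time units, slot [0, 4) in each cycle, rate 1
deps = tdma_departures([2, 1, 2], R=1.0, c=10.0, b_i=0.0, e_i=4.0)
```

On this trace, deps = [2.0, 3.0, 12.0]: the third packet does not fit in the one remaining time unit of the first slot, so it waits for the next cycle, which is exactly the waste of bandwidth discussed below.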
Note that TDMA does not specify the service policy inside a flow: in the example of Figure 8.10, flow 2 may have chosen to send its packets in a different order.
We present here the results of [DAN 14]. In particular, note that, as a packet cannot be fragmented, there is some waste of bandwidth.
THEOREM 8.7 (TDMA residual service).– Let S be an n-server offering a strict service curve λR, R > 0, with a TDMA service policy of cycle length c. Suppose that flow i is allocated a slot of length si = ei − bi and has a minimum (respectively maximum) packet length l_i^min (respectively l_i^max), such that l_i^max ≤ R si. Then, flow i is guaranteed a strict service curve βi defined by

βi(t) = oi ⌊(t − d)/c⌋ if t ≥ d, and βi(t) = 0 otherwise,

with oi = max(l_i^min, R si − l_i^max) and d = c − si + l_i^max/R.
Note that from Proposition 3.4, βi can equivalently be expressed with the staircase functions of Chapter 3. However, we want to emphasize the shape of βi, as illustrated in Figure 3.2, shifted by d. The construction of the residual service curve for flow i is illustrated in Figure 8.11.
PROOF.– The proof is similar to that of DRR and WRR, as it is also mainly based on the number of packets that can be served during each slot. However, as each flow has a dedicated time interval during which it is served, there is no need to consider any other flow.
Let oi represent the minimal amount of data that can be served for flow i during the slot of a cycle, if flow i is always backlogged. The time needed to serve one packet is at most l_i^max/R, so, as l_i^max ≤ R si, at least one packet of minimum length is served. Moreover, as flow i is always backlogged, the only reason a further packet is not served is that the remaining time is not sufficient, which represents at most l_i^max/R time units. Then, the service offered to flow i during a slot of length si is at least oi = max(l_i^min, R si − l_i^max).
Suppose that (s, t] is a backlogged period for flow i. Let u ∈ (s, t] be the time at which the first packet of flow i starts its service. From that time, there is a cyclic alternation of service phases of at least oi bits and idle phases, the total length of a cycle being c. So Di(t) − Di(s) ≥ oi ⌊(t − u)/c⌋.
It now remains to bound u − s. The longest idle period during a backlogged period is at most c − si + l_i^max/R. Indeed, in each cycle, there is a period c − si that is not dedicated to flow i. Moreover, at the end of the slot dedicated to flow i, there could be a packet arriving for which there is not enough time remaining for its service; this period of time cannot exceed l_i^max/R, and the bound is reached when s is at the start of the backlogged period. Hence, u − s ≤ c − si + l_i^max/R = d, and thus

Di(t) − Di(s) ≥ oi ⌊(t − s − d)/c⌋ ≥ βi(t − s).
□
The first works on packetizers in the network calculus theory were published in [GEO 96, AGR 96]. The first formal definition is given in [AGR 96, section 6], and the first properties in [GEO 96]. They are also present in the books [CHA 00, Definition 1.7.1] and [LEB 01, Definition 1.7.3].
Packetization and continuity. In the first definition, the packetizer is instead defined as the function P̄L : x ↦ max{Ln | Ln ≤ x}, and the packetized function associated with A is then P̄L ∘ A, which is a right-continuous function that does not fit our modeling assumptions. Note that this problem had already been noticed in [PAR 93]1. The difference between P̄L ∘ A and PL(A) is illustrated in Figure 8.12.
This example also illustrates the fact that such a left-continuous packetizer cannot be defined using the composition operator: since the value of A is constant on the interval [1, 3], for any function f, f(A(1)) = f(A(2)), whereas the packetizer PL(A) has different values at times 1 and 2. For example, the alternative definition P̃L(A)(t) = max{Ln | Ln < A(t)} yields a left-continuous function, but in the example shown in Figure 8.12, P̃L(A) is null up to t = 3, i.e. the packet is not sent to the output just after time 1, but stored until time 3, and such a function does not model what a packetizer is.
Toward a general model of packets. Most results only take into account the minimal and maximal lengths of packets, whereas, when considering aggregate scheduling, a more precise modeling could be obtained by considering the length distribution within a flow (e.g. a flow primarily composed of short packets with a few long packets does not have the same impact as a flow primarily composed of long packets, even when they have the same arrival curves).
A more refined model has been proposed in [BOU 11b, BOU 12a] with the notion of packet curve. This framework has only been applied to the WRR policy (see section 8.2.4), and the benefit of a more general use of this model is still an open question.
Packet-based scheduling policies. For the notations, refer to the exact statement.
Policy | Service curve type | Residual service curve | Residual service curve type |
---|---|---|---|
NP-SP (equation [8.8]) | strict β | βi | min-plus |
NP-SP (equation [8.9]) | strict β | β̃i | strict |
PGPS (Theorem 8.4) | strict λR | [β̃i − lmax]+ | min-plus |
DRR (Theorem 8.5) | strict β | βi | strict |
WRR (equation [8.11]) | strict β | β'i | strict |
WRR (equation [8.12]) | strict β | β''i | strict |
TDMA (Theorem 8.7) | strict λR | βi | strict |
1.“We will adopt the convention that a packet has arrived only after its last bit has arrived” [PAR 93].