22

Ping-Pong, Squash, and Difference Equations

22.1 PING-PONG MATH

Everybody has played Ping-Pong at some time in life, usually at a summer camp or at school or at a friend’s home. The scoring rule is simple: a point is made by the winner of each rally (it doesn’t matter who serves), and the first player to reach eleven points wins the game. There is a little caveat about having to be ahead by at least two points—a score of 11 to 9 wins, but 11 to 10 doesn’t—but I’ll ignore that here. The first question in this puzzler is a simple one: if two players, called P and Q, have probabilities p and 1 − p = q, respectively, of winning any given point, what is the probability that P wins the game?

We could get an estimate for this probability as a function of p by using the approach I’ve used a lot in this book, that is, by writing a Monte Carlo simulation. Instead, here I’ll show you a completely different computer solution. It is still based on the sheer number-crunching power of a modern computer, but this alternative technique has the two advantages of speed and accuracy. Its disadvantage is that it can require some pre-coding analysis, but I think you’ll find that effort worthwhile. That’s because the code’s calculations will be as accurate as if we had used a formula: there will be no statistical sampling errors.
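
To make the contrast concrete, here is a minimal sketch of what such a Monte Carlo simulation might look like (this sketch is mine, not part of the analysis that follows; the script name mcpp.m, the value p = 0.55, and the sample size of one million games are simply illustrative choices):

mcpp.m

% Monte Carlo estimate of the probability that P wins a game to 11,
% given that P wins each rally with probability p
p=0.55; N=1e6; wins=0;
for n=1:N
    i=0; j=0;                 % points scored so far by P and by Q
    while i<11 && j<11
        if rand<p
            i=i+1;            % P wins the rally and the point
        else
            j=j+1;            % Q wins the rally and the point
        end
    end
    wins=wins+(i==11);        % count a game win for P
end
wins/N                        % estimate; compare with the exact 0.6790 in the table below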

To set the problem up mathematically is not difficult, and going through the analysis in detail will show us the way to attack the far more difficult question of the next section. Here’s how to answer our warm-up Ping-Pong puzzle. Define the state of the game to be (i, j), which means that P needs i more points to win the game and Q needs j more points to win the game. If a game is in state (i, j), we’ll further define tt(i, j) to be the probability P wins the game from that state (tt for Ping-Pong’s other name, table tennis). The answer to our warm-up question—what is the probability, as the game is about to start, that P wins the game?—is tt(11, 11).

Now, before I show you how to calculate tt(11, 11), three additional observations are required:

(1) tt(0, j) = 1, 1 ≤ j ≤ 11, because these are the probabilities that P wins when P needs zero more points; that is, P has already won;

(2) tt(i, 0) = 0, 1 ≤ i ≤ 11, because these are the probabilities that P wins when Q needs zero more points; that is, Q has already won (and so P has lost);

and

(3) tt(0, 0) has no meaning, as it is impossible for the game to be in a state where both P and Q have won.

Now, how do we find tt(11, 11)?

Suppose the game is in state (i, j). Then, with probability p the state changes to (i − 1, j) because P wins the point, and with probability (1 − p) the state changes to (i, j − 1) because Q wins the point. From this we immediately have the equation

tt(i, j) = p tt(i − 1, j) + (1 − p) tt(i, j − 1),

a double-indexed difference equation. At this point I should tell you that there is a beautiful theory for handling state-transitioning problems like this, called Markov chain theory, after the Russian mathematician Andrei Markov (1856–1922), but it’s far more powerful mathematical artillery than we need here. Instead, we’ll work directly with the above difference equation to arrive at a numerical answer.
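
To see the difference equation in action before tackling the full game, consider (a toy example of mine) a mini-game in which the first player to reach just 2 points wins, so the starting state is (2, 2). Using the boundary values tt(0, j) = 1 and tt(i, 0) = 0 from the observations above, the recursion gives

tt(1, 1) = p tt(0, 1) + (1 − p) tt(1, 0) = p,

tt(1, 2) = p tt(0, 2) + (1 − p) tt(1, 1) = p + (1 − p)p,

tt(2, 1) = p tt(1, 1) + (1 − p) tt(2, 0) = p^2,

and so

tt(2, 2) = p tt(1, 2) + (1 − p) tt(2, 1) = p[p + (1 − p)p] + (1 − p)p^2 = p^2(3 − 2p),

which, as a sanity check, equals 1/2 when p = 1/2.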

To start, think of tt(i, j) as the entry in the i th row and the jth column of a matrix array (called a probability state-transition matrix), as shown in Figure 22.1.1. You’ll notice that the first row and column (i = 0 and j = 0, respectively) have been filled in with the values from observations (1) and (2), above. In addition, tt(1, 1) has been given the value p (I’ll show you why in just a moment). Now, to calculate tt(i, j) directly from the difference equation, we need to refer to the previous row to get tt(i − 1, j) and to the previous column to get tt(i, j − 1). For row 1 and column 1 that means we have to look at the zeroth row and column, and alas, MATLAB® doesn’t support zero-indexing into a matrix. If you have software that does allow zero-indexing then you’re all set to go right now, but with MATLAB® we’ll have to precalculate the entries for both row 1 and column 1. Then we can use the difference equation to find the entries for all the other rows and columns in the tt state transition matrix.


Figure 22.1.1. The probability state-transition matrix tt(i, j) for Ping-Pong.
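
As an aside (this workaround is mine, not the route taken in the text, which continues with the row 1 and column 1 pre-calculation), the zero-index restriction can also be dodged by simply shifting both indices up by one, storing tt(i, j) in the MATLAB entry t(i + 1, j + 1):

% index-shift sketch: t(a,b) holds tt(a-1,b-1), so the "zeroth"
% row and column of the state matrix live in MATLAB row and column 1
p=0.51; q=1-p;
t=zeros(12,12);
t(1,2:12)=1;                % tt(0,j) = 1: P has already won
t(2:12,1)=0;                % tt(i,0) = 0: Q has already won
for a=2:12
    for b=2:12
        t(a,b)=p*t(a-1,b)+q*t(a,b-1);
    end
end
t(12,12)                    % this is tt(11,11)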

To calculate row 1, set i = 1, and so

tt(1, j) = p tt(0, j) + (1 − p) tt(1, j − 1),

or, using observation (1),

tt(1, j) = p + (1 − p) tt(1, j − 1).

Thus,

tt(1, 1) = p + (1 − p) tt(1, 0) = p;

tt(1, 2) = p + (1 − p) tt(1, 1) = p + (1 − p)p;

tt(1, 3) = p + (1 − p) tt(1, 2) = p + (1 − p)p + (1 − p)^2 p;

tt(1, 4) = p + (1 − p) tt(1, 3) = p + (1 − p)p + (1 − p)^2 p + (1 − p)^3 p;

and so on. It should be obvious that, in general,

tt(1, j) = p[1 + (1 − p) + (1 − p)^2 + … + (1 − p)^(j − 1)]

or, recognizing the sum in the brackets as a geometric series,

tt(1, j) = 1 − (1 − p)^j.

In particular, as shown in Figure 22.1.1,

tt(1, 1) = 1 − (1 − p) = p.

To calculate column 1, set j = 1, and so

tt(i, 1) = p tt(i − 1, 1) + (1 − p) tt(i, 0)

or, using observation (2),

tt(i, 1) = p tt(i − 1, 1).

Thus,

tt(1, 1) = p tt(0, 1) = p (which we've already established);

tt(2, 1) = p tt(1, 1) = p^2;

tt(3, 1) = p tt(2, 1) = p^3;

and so on. Obviously,

tt(i, 1) = p^i.
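
Both closed forms also have direct probabilistic interpretations that serve as a check on the algebra: tt(i, 1) = p^i because, with Q needing only one more point, P wins the game only by taking the next i rallies in a row, while tt(1, j) = 1 − (1 − p)^j because, with P needing only one more point, P loses the game only if Q takes the next j rallies in a row.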

The code pp.m (pp for Ping-Pong) first implements these precalculations for the first row and column entries in the tt matrix, and then calculates all the remaining entries using the difference equation. The following table shows the results.

p, Probability P wins a point     tt(11, 11), Probability P wins a game
0.3                               0.0264
0.35                              0.0772
0.4                               0.1744
0.45                              0.3210
0.46                              0.3551
0.47                              0.3903
0.48                              0.4264
0.49                              0.4630
0.50                              0.5
0.51                              0.5370
0.52                              0.5736
0.53                              0.6097
0.54                              0.6449
0.55                              0.6790
0.6                               0.8256
0.65                              0.9228
0.7                               0.9736

One quick check with these numbers adds to our confidence that the code is working correctly: the probability P wins the game for a given value of the point probability p is 1 minus the probability P wins the game for the point probability 1 − p. This is the expected behavior, too, given the symmetry (or perhaps it’s the asymmetry) between P and Q: when P wins Q loses, and vice versa. The probability of P winning a game is fairly sensitive to the value of p: if p < 0.4, P almost always loses, while if p > 0.6, P almost always wins.

pp.m

p=0.51;q=1-p;
% precalculate row 1 and column 1 of the tt matrix:
% tt(1,j) = 1 - (1-p)^j and tt(i,1) = p^i
for j=1:11
    t(1,j)=1-q^j;
    t(j,1)=p^j;
end
% fill in all remaining entries from the difference equation
for i=2:11
    for j=2:11
        t(i,j)=p*t(i-1,j)+q*t(i,j-1);
    end
end
t(11,11)
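
The table above can then be reproduced by wrapping exactly this calculation in a loop over p (the wrapper below, and its particular list of p values, are mine):

% sweep over the table's values of p, re-running the pp.m calculation
pvals=[0.3 0.35 0.4 0.45 0.46 0.47 0.48 0.49 0.5 ...
       0.51 0.52 0.53 0.54 0.55 0.6 0.65 0.7];
for k=1:length(pvals)
    p=pvals(k); q=1-p;
    for j=1:11
        t(1,j)=1-q^j;
        t(j,1)=p^j;
    end
    for i=2:11
        for j=2:11
            t(i,j)=p*t(i-1,j)+q*t(i,j-1);
        end
    end
    fprintf('%4.2f   %6.4f\n',p,t(11,11))
end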

22.2 SQUASH MATH IS HARDER!

Consider now a similar but much more computationally demanding problem, from the game of squash. Squash is very much like Ping-Pong in its scoring rule, with one significant difference. Our two players (still P and Q) can each score a point only by winning a rally in which they served. If the server wins the rally, then he keeps the serve; if he loses the rally, then the next serve goes to the other player. The winner of a squash game is the first player to reach nine points. For this second puzzler, we'll take P and Q as having equal ability; each has probability 1/2 of winning any given rally (but don't forget that the rally winner gets a point only if he was the server). If the server loses a rally, the serve changes hands but the point totals for both players remain unchanged.

Here’s our new question: how big an advantage is there for the player who gets the first service?

We can set this problem up mathematically pretty nearly the same way we did for Ping-Pong, but now we have to take into account the exchange of service. To do that, we’ll define two state-transition matrices based on the same definition for state that we used before:

ps(i, j) = probability P wins the game if P served from state (i, j)

and

qs(i, j) = probability P wins the game if Q served from state (i, j).

We now have two double-indexed difference equations, using the same argument we used in Ping-Pong on how the game state changes as a function of who wins a rally in state (i, j):

ps(i, j) = 1/2 ps(i − 1, j) + 1/2 qs(i, j)

and

qs(i, j) = 1/2 ps(i, j) + 1/2 qs(i, j − 1).

To answer our question, all we need do is compare ps(9, 9) to qs(9, 9). Notice, though, that our two difference equations are cross-coupled. This problem is clearly much more complicated than was Ping-Pong.

As in the Ping-Pong analysis, we can make some preliminary observations:

(1) ps(0, j) = 1, 1 ≤ j ≤ 9;

(2) qs(0, j) = 1, 1 ≤ j ≤ 9;

(3) ps(i, 0) = 0, 1 ≤ i ≤ 9;

(4) qs(i, 0) = 0, 1 ≤ i ≤ 9;

(5) ps(0, 0) and qs(0, 0) have no meaning.

Figure 22.2.1 shows the structures of both the ps and the qs probability state-transition matrices, where you can see that I have used these observations to fill in the values of the zeroth row and column for each matrix, as well as the value of element (1, 1) for each matrix. Now, here’s how I calculated the (1, 1) probabilities, and all the other entries for row 1 and column 1 in each matrix.


Figure 22.2.1. The two probability state-transition matrices for squash.

To calculate the values of column 1 of the ps and qs matrices, set j = 1. Then,

ps(i, 1) = 1/2 ps(i − 1, 1) + 1/2 qs(i, 1)

and

qs(i, 1) = 1/2 qs(i, 0) + 1/2 ps(i, 1) = 1/2 ps(i, 1), using observation (4).

Thus,

ps(i, 1) = 1/2 ps(i − 1, 1) + 1/2[1/2 ps(i, 1)] = 1/2 ps(i − 1, 1) + 1/4 ps(i, 1),

or,

3/4 ps(i, 1) = 1/2 ps(i − 1, 1),

and so

ps(i, 1) = 2/3 ps(i − 1, 1).

Thus,

ps(1, 1) = 2/3 ps(0, 1) = 2/3.

Also,

qs(1, 1) = 1/2 ps(1, 1) = 1/3.

From ps(i, 1) = 2/3 ps(i − 1, 1) we have

ps(2, 1) = 2/3 ps(1, 1) = (2/3)^2;

ps(3, 1) = 2/3 ps(2, 1) = (2/3)^3;

and so on. In general, column 1 of the ps matrix is

ps(i, 1) = (2/3)^i.

And since qs(i, 1) = 1/2 ps(i, 1), column 1 of the qs matrix is

qs(i, 1) = 1/2 (2/3)^i.

To calculate row 1 of the ps and qs matrices, set i = 1, and so

ps(1, j) = 1/2 ps(0, j) + 1/2 qs(1, j) = 1/2 + 1/2 qs(1, j)

and

qs(1, j) = 1/2 qs(1, j − 1) + 1/2 ps(1, j).

Thus,

qs(1, j) = 1/2 qs(1, j − 1) + 1/2[1/2 + 1/2 qs(1, j)]

or, after a couple of easy algebraic steps,

qs(1, j) = 2/3 qs(1, j − 1) + 1/3.

So,

qs(1, 1) = 2/3 qs(1, 0) + 1/3 = 1/3,

which we’ve already established (but it’s nice to see it again!),

qs(1, 2) = 2/3 qs(1, 1) + 1/3 = (2/3)(1/3) + 1/3;

qs(1, 3) = 2/3 qs(1, 2) + 1/3 = (2/3)^2(1/3) + (2/3)(1/3) + 1/3;

and so on. In general,

qs(1, j) = 1/3[1 + 2/3 + (2/3)^2 + … + (2/3)^(j − 1)]

or, recognizing a geometric series, we have (with just a little algebra) that

qs(1, j) = 1 − (2/3)^j.

And finally, since ps(1, j) = 1/2 + 1/2 qs(1, j), we have

ps(1, j) = 1/2 + 1/2[1 − (2/3)^j]

or,

ps(1, j) = 1 − 1/2 (2/3)^j.
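
As a quick check (mine) on these row 1 formulas: they give qs(1, 1) = 1 − 2/3 = 1/3 and ps(1, 1) = 1 − 1/2 (2/3) = 2/3, agreeing with the column 1 calculation, and the recursion qs(1, 2) = 2/3 qs(1, 1) + 1/3 = 5/9 agrees with the closed form 1 − (2/3)^2 = 5/9.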

So, this pre-coding analysis (prompted, you’ll recall, by MATLAB®’s failure to allow zero indexing into matrices) has been just a bit more involved than it was for Ping-Pong! Even so, we are still not quite ready to start writing code; we have one last task to complete. When calculating an entry in either matrix, the code must use only the values of entries already calculated (I do hope this is obvious!); the difference equations for ps and qs as they stand pose a problem on this score because ps(i, j) uses qs(i, j), and qs(i, j) uses ps(i, j). Each matrix uses the other in a way resembling a “pull yourself up by your bootstraps” paradox!

The way out of this chicken-and-egg conundrum is to express one of ps and qs in terms only of already calculated values and to calculate it first—and then calculate the other. Let’s do this for ps. Thus,

ps(i, j) = 1/2 ps(i − 1, j) + 1/2 qs(i, j) = 1/2 ps(i − 1, j) + 1/2[1/2 ps(i, j) + 1/2 qs(i, j − 1)],

and so

3/4 ps(i, j) = 1/2 ps(i − 1, j) + 1/4 qs(i, j − 1).

This gives us

ps(i, j) = 2/3[ps(i − 1, j) + 1/2 qs(i, j − 1)],

where the right-hand side uses only previously determined values of ps and qs to calculate ps(i, j). Then qs(i, j) is calculated.
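
To see the decoupled equation at work, here is a worked check (mine) on the state (2, 2), using the row 1 and column 1 values already found: ps(1, 2) = 1 − 1/2 (2/3)^2 = 7/9 and qs(2, 1) = 1/2 (2/3)^2 = 2/9, so

ps(2, 2) = 2/3[ps(1, 2) + 1/2 qs(2, 1)] = 2/3[7/9 + 1/9] = 16/27,

and then

qs(2, 2) = 1/2[ps(2, 2) + qs(2, 1)] = 1/2[16/27 + 6/27] = 11/27.

Notice that ps(2, 2) + qs(2, 2) = 1, just as we should expect for equally skilled players in a symmetric state; the same consistency check will be applied to ps(9, 9) and qs(9, 9) below.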

The code squash.m uses this last equation for ps(i, j), together with the original qs(i, j) equation (as well as the row 1 and column 1 pre-coding results), to calculate ps(9, 9) for P serving first and qs(9, 9) for Q serving first. When run, the code gave ps(9, 9) = 0.534863456 … and qs(9, 9) = 0.465136543. … (The fact that these two winning probabilities for P add to 1 is an indication that the code is working correctly; do you see why? Remember, in this analysis P and Q are of equal ability.) There is a definite advantage to P in getting the first serve—he wins noticeably more than he loses, whereas if he is the second server he loses noticeably more than he wins. When vastly more powerful analytical methods than I've used here are applied, an exact, closed-form expression for ps(9, 9) can be derived, and its value is precisely what squash.m gives us.

squash.m

r=2/3;
% precalculate column 1 and row 1 of both matrices:
% ps(i,1) = (2/3)^i, qs(i,1) = (1/2)(2/3)^i,
% qs(1,j) = 1 - (2/3)^j, ps(1,j) = 1 - (1/2)(2/3)^j
for i=1:9
    ps(i,1)=r^i;
    qs(i,1)=ps(i,1)/2;
    qs(1,i)=1-ps(i,1);
    ps(1,i)=1-qs(i,1);
end
% fill in all remaining entries: ps from the decoupled equation,
% then qs from its original difference equation
for i=2:9
    for j=2:9
        ps(i,j)=r*(ps(i-1,j)+qs(i,j-1)/2);
        qs(i,j)=0.5*(ps(i,j)+qs(i,j-1));
    end
end
ps(9,9)
qs(9,9)
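
As an independent cross-check (this simulation is mine, not part of the text's method; the sample size of one million games per case is an arbitrary choice), a short Monte Carlo simulation of the squash scoring rule reproduces these two probabilities to within sampling error:

% simulate squash games: each rally is won by either player with
% probability 1/2, but only the server can score; first to 9 wins
N=1e6; winsP=[0 0];               % P's game wins when P serves first, when Q serves first
for firstserver=1:2               % 1 = P serves first, 2 = Q serves first
    for n=1:N
        a=0; b=0;                 % points for P and for Q
        server=firstserver;
        while a<9 && b<9
            rallywinner=1+(rand<0.5);      % 1 = P, 2 = Q
            if rallywinner==server
                if server==1, a=a+1; else b=b+1; end
            else
                server=rallywinner;        % serve changes hands, no point scored
            end
        end
        winsP(firstserver)=winsP(firstserver)+(a==9);
    end
end
winsP/N        % estimates of ps(9,9) and qs(9,9); about 0.535 and 0.465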
