CHAPTER 11

STOCHASTIC CALCULUS

Stochastic calculus plays an essential role in modern mathematical finance and risk management. The objective of this chapter is to develop the conceptual ideas of stochastic calculus and to provide a motivational framework for their use. This chapter presents an informal introduction to martingales, Brownian motion, and stochastic calculus. Martingales were first defined by Paul Lévy (1886–1971), and the mathematical theory of martingales was developed by the American mathematician Joseph Doob (1910–2004). We begin with the basic notions of martingales and their properties.

11.1   MARTINGALES

The martingale is a betting strategy in roulette: whenever the player loses a round, he doubles his bet in the following round, so that a single win recovers all of his previous losses. Since a long losing streak is a rare event, a player who keeps playing will eventually win, and thus the strategy appears sound. However, the player may run out of funds as the game progresses and therefore never recover the losses he has accumulated. One must also take into account the fact that casinos impose betting limits.

Formally, suppose that a player starts a game in which he wins or loses each round with the same probability 1/2. The player starts by betting a single monetary unit. The strategy is progressive: the player doubles his bet after each loss in order to recoup the losses. A possible outcome for the game would be the following:

Bet          1     2     4     8    16     1     1

Outcome      F     F     F     F     W     W     F

Profit      −1    −3    −7   −15     1     2     1

Here W denotes “Win” and F denotes “Failure”. This shows that every time the player wins, he recovers all of his previous losses and increases his wealth by one monetary unit. Moreover, if he loses the first n bets and wins the (n + 1)th, then his wealth after the (n + 1)th bet is equal to:

Xn+1 = 2ⁿ − (1 + 2 + 2² + · · · + 2ⁿ⁻¹) = 2ⁿ − (2ⁿ − 1) = 1.

This would indicate a win for the player. Nevertheless, as we shall see later, to carry out this betting strategy successfully, the player would need on average infinite wealth and he would have to bet infinitely often (Rincón, 2011).
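
To make the difficulty concrete, here is a minimal simulation (a Python/NumPy sketch; the function name martingale_session is ours, not from the text) that plays the doubling strategy against a finite bankroll. The average profit per session stays near zero for a fair game, while rare long losing streaks produce large losses.

    import numpy as np

    rng = np.random.default_rng(42)

    def martingale_session(bankroll, p=0.5, max_rounds=1000):
        """Play the doubling strategy until the first win, until the
        bankroll cannot cover the next bet, or until max_rounds.
        Returns the net profit of the session."""
        bet, lost = 1, 0
        for _ in range(max_rounds):
            if bet > bankroll - lost:   # cannot cover the next bet: ruin
                return -lost
            if rng.random() < p:        # win: recover all losses plus one unit
                return 1
            lost += bet                 # lose this round and double the stake
            bet *= 2
        return -lost

    profits = [martingale_session(bankroll=1000) for _ in range(100_000)]
    print(np.mean(profits))   # close to 0: the game remains fair
    print(min(profits))       # rare but large losses (-511 with this bankroll)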

In probability theory, the notion of a martingale describes a fair game. Suppose that the random variable Xm denotes the wealth of a player in the mth round of the game and the σ-field ℱm contains all the knowledge of the game up to the mth round. The game is fair if the expectation of Xn (with n ≥ m), given the information in ℱm, is equal to the fortune of the player at time m. In probability terms we have, with probability 1:

E(Xn | ℱm) = Xm for all m ≤ n.

A stochastic process {Xn; n ≥ 0} satisfying the above equation is called a discrete-time martingale. Formally, we have the following definitions:

Definition 11.1 Let (Ω, ℱ, P) be a probability space. A filtration is a collection of sub-σ-algebras (ℱn)n≥0 of ℱ such that ℱm ⊆ ℱn for all m ≤ n. We say that the sequence {Xn; n ≥ 0} is adapted to the filtration (ℱn)n≥0 if for each n the random variable Xn is ℱn-measurable, that is, {ω ∈ Ω : Xn(ω) ≤ a} ∈ ℱn for all a ∈ ℝ.

Definition 11.2 Let {Xn; n ≥ 0} be a sequence of random variables defined on the probability space (Ω, ℱ, P) and let (ℱn)n≥0 be a filtration in ℱ. Suppose that {Xn; n ≥ 0} is adapted to the filtration (ℱn)n≥0 and E(Xn) exists for all n. We say that:

(a) {Xn; n ≥ 0} is an (ℱn)n-martingale if and only if E(Xn | ℱm) = Xm a.s. for all m ≤ n.

(b) {Xn; n ≥ 0} is an (ℱn)n-submartingale if and only if E(Xn | ℱm) ≥ Xm a.s. for all m ≤ n.

(c) {Xn; n ≥ 0} is an (ℱn)n-supermartingale if and only if E(Xn | ℱm) ≤ Xm a.s. for all m ≤ n.

Note 11.1 The sequence {Xn; n ≥ 0} is obviously adapted to the canonical filtration or natural filtration, that is, the filtration (ℱn)n≥0 given by ℱn = σ(X1, X2, · · ·, Xn), where σ(X1, X2, · · ·, Xn) is the smallest σ-algebra with respect to which the random variables X1, X2, · · ·, Xn are measurable. When we speak of martingales, supermartingales, and submartingales with respect to the canonical filtration, we will not mention it explicitly. In other words, if we say “(Xn)n is a (sub-, super-) martingale” and do not reference the filtration, it is assumed that the filtration is the canonical one.

Note 11.2 To verify that an adapted, integrable sequence {Xn; n ≥ 0} is an (ℱn)n-martingale, it is enough to check that:

E(Xn+1 | ℱn) = Xn for all n ∈ ℕ.

Note 11.3 If {Xn; n ≥ 0} is an (ℱn)n-submartingale, then {−Xn; n ≥ 0} is an (ℱn)n-supermartingale. Thus, in general, with very few modifications, every proof made for submartingales is also valid for supermartingales and vice versa.

EXAMPLE 11.1

Let {Xn; n ≥ 0} be a martingale with respect to (ℱn)n≥0 and let (𝒢n)n≥0 be a filtration such that 𝒢n ⊆ ℱn for all n. If Xn is 𝒢n-measurable, then {Xn; n ≥ 0} is a martingale with respect to (𝒢n)n. Indeed, for m ≤ n:

E(Xn | 𝒢m) = E(E(Xn | ℱm) | 𝒢m) = E(Xm | 𝒢m) = Xm.

Therefore, every (ℱn)n-martingale is a martingale with respect to the canonical filtration.          ■

EXAMPLE 11.2 Random Walk Martingale

Let Z1, Z2, · · · be a sequence of i.i.d. random variables on a probability space (Ω, ℱ, P) with finite mean μ = E(Z1), and let ℱn = σ(Z1, · · ·, Zn), n ≥ 1. Let Xn = Z1 + · · · + Zn, n ≥ 1. Then, for all n ≥ 1,

E(Xn+1 | ℱn) = E(Xn + Zn+1 | ℱn) = Xn + E(Zn+1) = Xn + μ,

so that:

E(Xn+1 | ℱn) − Xn = μ.

Thus, {Xn; n ≥ 1} is a martingale if μ = 0, a submartingale if μ > 0 and a supermartingale if μ < 0.          ■

EXAMPLE 11.3 Second-Moment Martingale

Let Z1, Z2, · · · be a sequence of i.i.d. random variables on a probability space (Ω, ℱ, P) with finite mean μ = E(Z1) and variance σ² = Var(Z1). Let ℱn = σ(Z1, · · ·, Zn), n ≥ 1. Let Xn = Z1 + · · · + Zn and Yn = (Xn − nμ)². It is easily verified that {Yn; n ≥ 1} is a submartingale and {Yn − nσ²; n ≥ 1} is a martingale. Indeed:

Image

EXAMPLE 11.4

Let X1, X2, · · · be a sequence of independent random variables with E(Xn) = 1 for all n. Let {Yn; n ≥ 1} be:

Yn := X1 X2 · · · Xn.

If ℱn = σ(X1, · · ·, Xn), it is clear that:

E(Yn+1 | ℱn) = E(Yn Xn+1 | ℱn) = Yn E(Xn+1) = Yn.

That is, {Yn; n ≥ 1} is a martingale with respect to (ℱn)n.          ■

EXAMPLE 11.5 Polya Urn Model

Suppose that an urn has one red ball and one black ball. A ball is drawn at random from the urn and is returned along with another ball of the same color. The procedure is repeated many times. Let Xn denote the number of black balls in the urn after n drawings. Then X0 = 1 and {Xn; n ≥ 0} is a Markov chain with transition probabilities

P(Xn+1 = k + 1 | Xn = k) = k/(n + 2)

and

P(Xn+1 = k | Xn = k) = (n + 2 − k)/(n + 2).

Let Mn := Xn/(n + 2) be the proportion of black balls after n drawings. Then {Mn; n ≥ 0} is a martingale with respect to ℱn = σ(X0, · · ·, Xn), since:

E(Mn+1 | ℱn) = (1/(n + 3)) E(Xn+1 | ℱn) = (1/(n + 3)) (Xn + Xn/(n + 2)) = Xn/(n + 2) = Mn.          ■
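
A quick simulation (a Python sketch; the helper name polya_proportions is ours) illustrates the martingale property: the average of Mn over many simulated urns stays at M0 = 1/2 for every n.

    import numpy as np

    rng = np.random.default_rng(0)

    def polya_proportions(n_draws, n_paths):
        """Simulate the Polya urn (1 black, 1 red initially) and return
        an (n_paths, n_draws + 1) array of black-ball proportions Mn."""
        black = np.ones(n_paths)            # black balls in each urn
        total = 2.0                         # total balls (same for all urns)
        M = np.empty((n_paths, n_draws + 1))
        M[:, 0] = black / total
        for n in range(1, n_draws + 1):
            drew_black = rng.random(n_paths) < black / total
            black += drew_black             # add a ball of the drawn color
            total += 1.0
            M[:, n] = black / total
        return M

    M = polya_proportions(n_draws=50, n_paths=100_000)
    print(M.mean(axis=0)[[0, 1, 10, 50]])   # each entry close to 0.5 = M0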

EXAMPLE 11.6 Doob’s Martingale

Let X be a random variable with E(|X|) < ∞, and let (ℱn)n≥1 be a filtration. Define Xn = E(X | ℱn) for n ≥ 1. Then {Xn; n ≥ 1} is a martingale with respect to (ℱn)n≥1:

E(Xn+1 | ℱn) = E(E(X | ℱn+1) | ℱn) = E(X | ℱn) = Xn.

Also,

E(|Xn|) = E(|E(X | ℱn)|) ≤ E(E(|X| | ℱn)) = E(|X|) < ∞.          ■

Since every martingale is both a submartingale and a supermartingale, the following theorem provides a method for obtaining a submartingale from a martingale.

Theorem 11.1 Let {Mn; n ≥ 0} be a martingale with respect to the filtration (ℱn)n≥0. If φ(·) is a convex function with E(|φ(Mn)|) < ∞ for all n, then {φ(Mn); n ≥ 0} is a submartingale.

Proof: By Jensen’s inequality for conditional expectations (Jacod and Protter, 2004):

E(φ(Mn+1) | ℱn) ≥ φ(E(Mn+1 | ℱn)) = φ(Mn).          ■

EXAMPLE 11.7

Let {Mn; n ≥ 0} be a nonnegative martingale with respect to the filtration (ℱn)n≥0. Then {Mn²; n ≥ 0} (provided E(Mn²) < ∞ for all n) and {−log Mn; n ≥ 0} are submartingales, since x ↦ x² and x ↦ −log x are convex.          ■

EXAMPLE 11.8

Let {Yn; n ≥ 1} be an arbitrary collection of random variables with E(|Yn|) < ∞ for all n ≥ 1. Let ℱn = σ(Y1, · · ·, Yn), n ≥ 1. For n ≥ 1, define

Xn := (Y1 − E(Y1 | ℱ0)) + · · · + (Yn − E(Yn | ℱn−1)),

where ℱ0 = {∅, Ω}. Then, for each n ≥ 1, Xn is ℱn-measurable with E(|Xn|) < ∞. Also, for n ≥ 1:

E(Xn+1 | ℱn) = Xn + E(Yn+1 − E(Yn+1 | ℱn) | ℱn) = Xn + E(Yn+1 | ℱn) − E(Yn+1 | ℱn) = Xn.

Hence {Xn; n ≥ 1} is a martingale. Thus, it is possible to construct a martingale sequence starting from any arbitrary sequence of integrable random variables.          ■

EXAMPLE 11.9

Let {Xn; n ≥ 0} be a martingale with respect to the filtration (ℱn)n≥0 and let {Yn; n ≥ 1} be defined by:

Yn+1 := Xn+1 − Xn, n = 0, 1, 2, · · ·.

It is clear that:

E(Yn+1 | ℱn) = E(Xn+1 | ℱn) − Xn = 0.

Suppose that {Cn; n ≥ 1} is a predictable stochastic process, that is, Cn is an ℱn−1-measurable random variable for all n. We define a new process {Zn; n ≥ 0} as:

Z0 := X0,  Zn := X0 + C1 Y1 + · · · + Cn Yn, n ≥ 1.

The process {Zn; n ≥ 0} is a martingale with respect to the filtration (ℱn)n≥0 and is called the martingale transform of Y by C, denoted by Z = C · Y. Martingale transforms are the discrete analogues of stochastic integrals. They play an important role in mathematical finance in discrete time (see Section 12.3).          ■

Note 11.4 Suppose that {Cn; n ≥ 1} represents the amount of money a player bets at time n and Yn := Xn − Xn−1 is the amount of money he can win or lose per unit bet in each round of the game. If the bet is one monetary unit and X0 is the initial wealth of the player, then Xn is the player’s fortune at time n, and Zn represents the player’s fortune under the betting strategy {Cn; n ≥ 1}. The previous example shows that if {Xn; n ≥ 0} is a martingale and the game is fair, it will remain so no matter what strategy the player follows.

EXAMPLE 11.10

Let ξ1, ξ2, · · · be i.i.d. random variables and suppose that, for a fixed t:

m(t) := E(exp(tξ1)) < ∞.

The sequence of random variables {Xn; n ≥ 0} with X0 := 1 and

Xn := exp(t(ξ1 + · · · + ξn)) / m(t)ⁿ, n ≥ 1,

is a martingale.          ■

EXAMPLE 11.11

Let ξ1, ξ2, · · · and Xn(t) be as in the example above. We define the random variables Image as:

Image

We have that Image is a martingale.          ■

Definition 11.3 A random variable τ taking values in {1, 2, · · ·} ∪ {∞} is a stopping time with respect to the filtration (ℱn)n≥1 if {τ ≤ n} ∈ ℱn for each n ≥ 1.

Note 11.5 The condition given in the previous definition is equivalent to requiring that {τ = n} ∈ ℱn for each n ≥ 1.

EXAMPLE 11.12 First Arrival Time

Let X1, X2, · · · be a sequence of random variables adapted to the filtration (ℱn)n≥1. Suppose that A is a Borel subset of ℝ and consider the random variable defined by

τ := min {n ≥ 1 : Xn ∈ A}

with min(∅) := ∞. It is clear that τ is a stopping time since:

{τ ≤ n} = {X1 ∈ A} ∪ · · · ∪ {Xn ∈ A} ∈ ℱn.

In particular we have that, for the gambler’s ruin case, the time τ at which the player reaches the set A = {0, a} for the first time is a stopping time.          ■

EXAMPLE 11.13 Martingale Strategy

Previously we observed that if a player who follows the martingale strategy loses the first n bets and wins the (n + 1)th bet, then his wealth Xn+1 after the (n + 1)th bet is:

Xn+1 = 2ⁿ − (2ⁿ − 1) = 1.

Suppose that τ is the stopping time at which the player wins for the first time. We are interested in his average deficit just before that time; that is, we want to determine the value E(Xτ−1). Since P(τ = n) = (1/2)ⁿ and the wealth after n initial losses is −(2ⁿ − 1), we have:

E(Xτ−1) = −∑_{n=1}^{∞} (2ⁿ⁻¹ − 1)(1/2)ⁿ = −∑_{n=1}^{∞} (1/2 − (1/2)ⁿ) = −∞.

Therefore, on average, a player must have infinite capital to carry out the strategy.          ■

Let {Xn; n ≥ 1} be a martingale with respect to the filtration (ℱn)n≥1. We know that E(Xn) = E(X1) for any n ≥ 1. Nevertheless, if τ is a stopping time, it is not necessarily true that E(Xτ) = E(X1). Our next objective is to determine the conditions under which E(Xτ) = E(X1) holds.

Definition 11.4 Let τ be a stopping time with respect to the filtration (ℱn)n≥0 and let {Xn; n ≥ 0} be a martingale with respect to the same filtration. We define the stopped process {Xτ∧n; n ≥ 0} as follows:

Xτ∧n(ω) := Xn(ω) if n < τ(ω),  and  Xτ∧n(ω) := Xτ(ω)(ω) if n ≥ τ(ω).

Theorem 11.2 If {Xn; n ≥ 0} is a martingale with respect to (ℱn)n≥0, and if τ is a stopping time with respect to (ℱn)n≥0, then {Xτ∧n; n ≥ 0} is a martingale.

Proof: Refer to Jacod and Protter (2004).          ■

Theorem 11.3 (Optional Stopping Theorem) Let {Xn; n ≥ 0} be a martingale with respect to the filtration (ℱn)n≥0 and let τ be a stopping time with respect to (ℱn)n≥0. If

1. τ < ∞ a.s.,

2. E(|Xτ|) < ∞ and

3. E(Xn 1{τ>n}) → 0 as n → ∞,

then E(Xτ) = E(Xn) for all n ≥ 1.

Proof: Since for any n ≥ 1 it is satisfied that

Xτ∧n = Xτ 1{τ≤n} + Xn 1{τ>n}

and since the processes {Xn; n ≥ 0} and {Xτ∧n; n ≥ 0} are both martingales, we have:

E(X1) = E(Xτ∧n) = E(Xτ 1{τ≤n}) + E(Xn 1{τ>n}).     (11.2)

On the other hand, by hypothesis 2,

E(Xτ) = ∑_{k=1}^{∞} E(Xτ 1{τ=k})

and

E(Xτ 1{τ≤n}) = ∑_{k=1}^{n} E(Xτ 1{τ=k}),

it follows that the tail of the series, which is E(Xτ 1{τ>n}), tends to zero as n tends to ∞. Therefore, taking the limit as n → ∞ in (11.2) and using hypothesis 3, we obtain:

E(Xτ) = E(Xn) for all n ≥ 1.              ■

Note 11.6 Suppose that {Xn; n ≥ 0} is a symmetric random walk on ℤ with X0 := 0, let N be a fixed positive integer, and let τ be the stopping time defined by:

τ := min {n ≥ 1 : |Xn| = N}.

It is easy to verify that the processes {Xn; n ≥ 0} and {Xn² − n; n ≥ 0} are martingales. Moreover, it is possible to show that the hypotheses of the stopping theorem are satisfied. Consequently, we get

E(Xτ² − τ) = E(X0² − 0) = 0,

from which we have:

E(τ) = E(Xτ²) = N².

That is, the random walk needs on average N² steps to reach the level N in absolute value.
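
This value of E(τ) is easy to check by simulation; the sketch below (Python, with our own helper name hitting_time) averages the first time |Xn| = N over many independent walks.

    import numpy as np

    rng = np.random.default_rng(1)

    def hitting_time(N):
        """Number of steps until a symmetric random walk first hits +N or -N."""
        x, n = 0, 0
        while abs(x) < N:
            x += rng.choice((-1, 1))
            n += 1
        return n

    N = 10
    times = [hitting_time(N) for _ in range(20_000)]
    print(np.mean(times))   # close to N**2 = 100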

The following results on convergence of martingales, which we state without proof, provide many applications in stochastic calculus and mathematical finance.

Theorem 11.4 Let {Xn; n ≥ 0} be a submartingale with respect to (ℱn)n≥0 such that supn E(|Xn|) < ∞. Then there exists a random variable X having E(|X|) < ∞ such that:

Xn → X almost surely as n → ∞.

Note 11.7 There is a similar result for supermartingales because if {Xn; n ≥ 0} is a supermartingale with respect to (ℱn)n≥0, then {−Xn; n ≥ 0} is a submartingale with respect to (ℱn)n≥0. The previous theorem implies in addition that every nonnegative martingale converges almost surely. The following example shows that, in general, there is no convergence in the mean.

EXAMPLE 11.14

Suppose that {Yn; n ≥ 1} is a sequence of i.i.d. random variables, each having a normal distribution with mean 0 and variance σ². Let:

X0 := 1,  Xn := exp(Y1 + · · · + Yn − nσ²/2), n ≥ 1.

It is easy to prove that {Xn; n ≥ 0} is a nonnegative martingale. By using the strong law of large numbers we obtain that Xn → 0 almost surely. Nevertheless, Xn does not converge to 0 in mean, since E(Xn) = 1 for all n.          ■

Now we present a theorem which gives a sufficient condition ensuring both almost sure convergence and convergence in the r-mean. Its proof is beyond the scope of this text (refer to Williams, 2006).

Theorem 11.5 If {Xn; n ≥ 0} is a martingale with respect to (ℱn)n≥0 such that supn E(|Xn|ʳ) < ∞ for some r > 1, then there is a random variable X such that

Xn → X

almost surely and in the r-mean.

Next, we give a brief account of continuous-time martingales. Many of the properties of martingales in discrete time are also satisfied in the case of martingales in continuous time.

Definition 11.5 Let (Ω, ℱ, P) be a probability space. A filtration is a family of sub-σ-algebras (ℱt)t∈T such that ℱs ⊆ ℱt for all s ≤ t.

Definition 11.6 A stochastic process {Xt; t ∈ T} is said to be adapted to the filtration (ℱt)t∈T if Xt is ℱt-measurable for each t ∈ T.

Definition 11.7 Let T ⊆ [0, ∞). A process {Xt; t ∈ T} is called a martingale with respect to the filtration (ℱt)t∈T if:

1. {Xt; t ∈ T} is adapted to the filtration (ℱt)t∈T.

2. E(|Xt|) < ∞ for all t ∈ T.

3. E(Xt | ℱs) = Xs a.s. for all s ≤ t.

Note 11.8

a. If condition 3 is replaced by E(Xt | ℱs) ≥ Xs a.s. for all s ≤ t, then the process is called a submartingale.

b. If condition 3 is replaced by E(Xt | ℱs) ≤ Xs a.s. for all s ≤ t, then the process is called a supermartingale.

Note 11.9 Condition 3 in the previous definition is equivalent to:

E(Xt − Xs | ℱs) = 0 a.s. for all s ≤ t.

Note 11.10 The process {Xt; t ∈ T} is clearly adapted to the canonical filtration, that is, to the filtration (ℱt)t∈T, where ℱt = σ(Xs, s ≤ t) is the smallest σ-algebra with respect to which the random variables Xs with s ≤ t are measurable.

EXAMPLE 11.15

Let {Xt; t ≥ 0} be a process with stationary and independent increments. Assume ℱt = σ(Xs, s ≤ t) and E(Xt) = 0 for all t ≥ 0. Then, for s ≤ t:

E(Xt | ℱs) = E(Xt − Xs | ℱs) + E(Xs | ℱs) = E(Xt − Xs) + Xs = Xs.

That is, {Xt; t ≥ 0} is a martingale with respect to (ℱt)t≥0.    ■

Note 11.11 If in the above example we replace the condition “E (Xt) = 0 for all t ≥ 0” by “E(Xt) ≥ 0 for all t ≥ 0” [“E (Xt) ≤ 0 for all t ≥ 0”] we find that the process is a submartingale (a supermartingale).

EXAMPLE 11.16

Let {Nt; t ≥ 0} be a Poisson process with parameter λ > 0. The process {Nt;t ≥ 0} has independent and stationary increments and in addition E(Nt) = λt ≥ 0. Hence, {Nt; t ≥ 0} is a submartingale.

However, the process {Nt − λt; t ≥ 0} is a martingale and is called a compensated Poisson process.    ■
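
As a numerical illustration (a sketch, not from the text), the sample mean of the compensated value Nt − λt stays near zero for every t, consistent with the martingale property of the compensated Poisson process.

    import numpy as np

    rng = np.random.default_rng(2)

    lam = 3.0
    t_grid = np.array([0.5, 1.0, 2.0, 5.0])
    n_paths = 100_000

    # For each fixed t, N_t has a Poisson(lam * t) distribution
    counts = rng.poisson(lam * t_grid, size=(n_paths, t_grid.size))
    compensated = counts - lam * t_grid
    print(compensated.mean(axis=0))   # each entry close to 0 = E(N_t - lam*t)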

11.2 BROWNIAN MOTION

The Brownian motion is named after the English botanist Robert Brown (1773–1858), who observed that pollen grains suspended in a liquid moved irregularly. Brown, like his contemporaries, assumed that the movement was due to the life of these grains. However, this idea was soon discarded because the same movement was observed with inert particles. Later it was found that the movement was caused by continuous collisions of the particle with the molecules of the liquid in which it was immersed. The first attempt to mathematically describe the Brownian motion was made by the Danish mathematician and astronomer Thorvald N. Thiele (1838–1910) in 1880. Then in the early twentieth century, Louis Bachelier (1900), Albert Einstein (1905) and Norbert Wiener (1923) independently initiated the development of the mathematical theory of Brownian motion. Louis Bachelier (1870–1946) used this movement to describe the behavior of stock prices on the Paris stock exchange. Albert Einstein (1879–1955) in 1905 published his paper “Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen,” in which he showed that at time t the erratic movement of a particle can be modeled by a normal distribution. The American mathematician Norbert Wiener (1894–1964) was the first to perform a rigorous construction of Einstein’s model of Brownian motion, which led to the definition of the so-called Wiener measure on the space of trajectories. In this section we introduce Brownian motion and present a few of its important properties.

Definition 11.8 The stochastic process B = {Bt,t ≥ 0} is called a standard Brownian motion or simply a Brownian motion if it satisfies the following conditions:

  1. B0 = 0.
  2. B has independent and stationary increments.
  3. For s < t, the increment Bt − Bs is normally distributed with mean 0 and variance t − s.
  4. Sample paths are continuous with probability 1.

Note 11.12

  1. The Brownian motion is a Gaussian process. This is because any random vector of the form (Bt1, Bt2, …, Btn) is a linear transformation of the vector (Bt1, Bt2 − Bt1, …, Btn − Btn−1), which has a multivariate normal distribution.
  2. The Brownian motion is a Markov process with transition probability density function

    p(s, x; t, y) = (1/√(2π(t − s))) exp(−(y − x)²/(2(t − s)))

    for any x, y ∈ ℝ and 0 < s < t.


    Figure 11.1 Sample path of Brownian motion

  3. The probability density function of Bt is given by:

    f(x) = (1/√(2πt)) exp(−x²/(2t)),  x ∈ ℝ.

In the following algorithm, we simulate a sample path of Brownian motion. This involves repeatedly generating independent standard normal random variables.

Algorithm 11.1

Input: T, N where T is the length of the time interval and N is the number of time steps.

Output: BM(k) for k = 0(1)N.

Initialization: BM(0) := 0

Iteration: For k = 0(1)N − 1 do:
               Z(k + 1) = stdnormal(rand(0, 1))
               BM(k + 1) = BM(k) + √(T/N) × Z(k + 1)

where stdnormal(rand(0, 1)) denotes a standard normal variate generated from a uniform random number in the interval (0, 1). Using this algorithm, we obtain the sample path of Brownian motion shown in Figure 11.1 for T = 10 and N = 1000.
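
Algorithm 11.1 translates directly into a few lines of Python (one possible rendering; NumPy’s standard normal generator replaces stdnormal(rand(0, 1))):

    import numpy as np

    def brownian_path(T=10.0, N=1000, seed=None):
        """Simulate BM(k), k = 0..N, on [0, T]:
        BM(k+1) = BM(k) + sqrt(T/N) * Z(k+1), with Z(k+1) ~ N(0, 1)."""
        rng = np.random.default_rng(seed)
        dt = T / N
        increments = np.sqrt(dt) * rng.standard_normal(N)
        return np.concatenate(([0.0], np.cumsum(increments)))

    path = brownian_path(T=10.0, N=1000, seed=3)
    print(path[:5])   # the first few points of one sample path (cf. Figure 11.1)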

Now we will discuss some simple and immediate properties of the Brownian motion:

  1. E (Bt) = 0 for all t ≥ 0.
  2. E(Bt²) = t for all t ≥ 0.
  3. The covariance of Brownian motion is C(s, t) = min(s, t). This is because, if s ≤ t, then:

    C(s, t) = E(Bs Bt) = E(Bs (Bt − Bs)) + E(Bs²) = E(Bs) E(Bt − Bs) + s = s.

    Similarly, if t ≤ s, we get C(s, t) = t. Hence, the covariance of Brownian motion is C(s, t) = min(s, t).

Theorem 11.6 Let {Bt;t ≥ 0} be a Brownian motion. Then the following processes are also Brownian motions:

  1. Shift Property: For any s > 0, Xt := Bt+s − Bs is a Brownian motion.
  2. Symmetry Property: Xt := −Bt is a Brownian motion.
  3. Scaling Property: For any constant c > 0, Xt := (1/√c) Bct is a Brownian motion.
  4. Time Reversal Property: Xt := t B1/t for t > 0, with X0 := 0, is a Brownian motion.

Proof: It is easy to check that each of the four processes above has independent and stationary increments and starts at 0. Also, the increments are normally distributed with mean 0 and variance t − s.          ■

Brownian Motion as a Limit of Random Walks Let {Xt,t ≥ 0} be the stochastic process representing the position of a particle at time t. We assume that the particle performs a random walk such that in a small interval of time of duration Δt the particle moves forward a small distance Δx with probability p or moves backward by a small distance Δx with probability q = 1 − p, where p is independent of x and t. Suppose that the random variable Yk denotes the length of the kth step taken by the particle in a small interval of time Δt and the Yk’s are independent and identically distributed random variables with P(Yk = +Δx) = p = 1 − P(Yk = −Δx).

Suppose that the interval of length t is divided into n equal subintervals of length Δt. Then n · (Δt) = t, and the total displacement Xt of the particle is the sum of n i.i.d. random variables Yk, so that

Xt = Y1 + Y2 + · · · + Yn

with n = [n(t)] and n(t) = t/Δt for each t ≥ 0. As a function of t, for each ω, Xt is a step function where steps occur every Δt units of time and are of magnitude Δx. We have:

E(Yi) = (p − q)Δx  and  Var(Yi) = 4pq(Δx)².

Then:

E(Xt) = n(p − q)Δx  and  Var(Xt) = 4npq(Δx)².

Substituting n = t/Δt, we have:

E(Xt) = t(p − q)Δx/Δt  and  Var(Xt) = 4pq t (Δx)²/Δt.

When we allow Δx → 0 and Δt → 0, the corresponding number of steps n tends to ∞. We assume that the following expressions have finite limits:

(p − q)Δx/Δt → μ

and

4pq(Δx)²/Δt → σ²,

where μ and σ are constants. Since the Yk’s are i.i.d. random variables, using the central limit theorem, for large n = n(t) the sum Xt = Y1 + · · · + Yn is asymptotically normal with mean μt and variance σ²t. That is,

Xt ≈ μt + σ√t Z,

where Z is a standard normal random variable.

Various Gaussian and non-Gaussian stochastic processes of practical relevance can be derived from Brownian motion. We introduce some of those processes which will find interesting applications in finance.

EXAMPLE 11.17

Let {Bt; t ≥ 0} be a Brownian motion. The stochastic process {Rt; t ≥ 0} defined by

Rt := |Bt|

is called a Brownian motion reflected at the origin. The mean and variance of Rt are given by:

E(Rt) = √(2t/π)  and  Var(Rt) = (1 − 2/π)t.          ■

EXAMPLE 11.18

Let {Bt; t ≥ 0} be a Brownian motion. The stochastic process {At; t ≥ 0} is defined by

At := Bt∧T0, that is, At = Bt for t < T0 and At = 0 for t ≥ T0,

where T0 = inf{t ≥ 0 : Bt = 0} is the hitting time at 0. Then At is called the absorbed Brownian motion.    ■

EXAMPLE 11.19

The stochastic process {Ut; 0 ≤ t ≤ 1}, defined as

Ut = Bt − tB1,

is called a Brownian bridge or the tied-down Brownian motion.

The name Brownian bridge comes from the fact that the process is tied down at both ends t = 0 and t = 1, since U0 = U1 = 0. In fact, the Brownian bridge {Ut; 0 ≤ t ≤ 1} is characterized as the Gaussian process with continuous sample paths, zero mean, and covariance function

Cov(Us, Ut) = s(1 − t),     0 ≤ s ≤ t ≤ 1.

If {Ut; 0 ≤ t ≤ 1} is a Brownian bridge, then it can be shown that the stochastic process

Wt := (1 + t) Ut/(1+t),  t ≥ 0,

is a standard Brownian motion.    ■
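
Both properties are easy to see in simulation (a Python sketch; the variable names are ours): every generated path of Ut = Bt − tB1 is pinned to 0 at t = 0 and t = 1, and the sample covariance agrees with s(1 − t).

    import numpy as np

    rng = np.random.default_rng(4)

    N, n_paths = 1000, 50_000
    t = np.linspace(0.0, 1.0, N + 1)
    dB = np.sqrt(1.0 / N) * rng.standard_normal((n_paths, N))
    B = np.concatenate((np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)), axis=1)
    U = B - t * B[:, [-1]]                 # Brownian bridge: U_t = B_t - t * B_1

    print(np.abs(U[:, -1]).max())          # exactly 0: pinned at t = 1
    s_idx, t_idx = N // 4, N // 2          # s = 0.25, t = 0.50
    print((U[:, s_idx] * U[:, t_idx]).mean())   # close to s*(1 - t) = 0.125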

EXAMPLE 11.20

Let {Bt; t ≥ 0} be a Brownian motion. For μ ∈ ℝ and σ > 0, the process

Xt := μt + σBt,  t ≥ 0,

is called a Brownian motion with drift μ. It is easy to check that {Xt; t ≥ 0} is a Gaussian process with mean μt and covariance C(s, t) = σ² min(s, t).    ■

EXAMPLE 11.21

Let {Bt; t ≥ 0} be a Brownian motion. For μ ∈ ℝ and σ > 0, the process

Xt = exp(μt + σBt),     t ≥ 0,

is called a geometric Brownian motion.    ■

This process has been used to describe stock price fluctuations (see the next chapter for more details). It should be noted that Xt is not a Gaussian process. We now give the mean and covariance of the geometric Brownian motion.

Using the moment generating function of the normal random variable (4.2), we get:

E(Xt) = exp(μt) E(exp(σBt)) = exp(μt) exp(σ²t/2) = exp((μ + σ²/2)t).

Similarly we obtain the covariance of the geometric Brownian motion for s < t,

C(s, t) = exp((μ + σ²/2)(s + t)) (exp(σ²s) − 1),

and the variance is given by:

Var(Xt) = exp((2μ + σ²)t) (exp(σ²t) − 1).
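
These moment formulas can be verified numerically (a sketch with arbitrary illustrative parameter values): sampling Bt ~ N(0, t) and averaging exp(μt + σBt) reproduces exp((μ + σ²/2)t).

    import numpy as np

    rng = np.random.default_rng(5)

    mu, sigma, t = 0.1, 0.3, 2.0
    B_t = np.sqrt(t) * rng.standard_normal(1_000_000)   # B_t ~ N(0, t)
    X_t = np.exp(mu * t + sigma * B_t)                  # geometric BM at time t

    print(X_t.mean())                          # sample mean of X_t
    print(np.exp((mu + sigma**2 / 2) * t))     # exact value exp((mu + sigma^2/2) t)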

The previous section discussed continuous-time martingales. We now present Brownian motion as an example of a continuous-time martingale.

Theorem 11.7 Suppose that {Bt; t ≥ 0} is a Brownian motion with respect to the filtration (ℱt)t≥0, where ℱt := σ(Bs; s ≤ t). Then

  1. {Bt} is a martingale,
  2. {Bt² − t} is a martingale and
  3. for σ ∈ ℝ, {exp(σBt − (σ²/2)t)} is a martingale (called an exponential martingale).

Proof:

  1. It is clear that, for every t ≥ 0, Bt is adapted to the filtration ℱt and E(|Bt|) < ∞. For any s, t ≥ 0 such that s < t:

    E(Bt | ℱs) = E(Bt − Bs | ℱs) + E(Bs | ℱs) = E(Bt − Bs) + Bs = Bs.

  2. Writing Bt² = (Bt − Bs)² + 2Bs(Bt − Bs) + Bs², we get:

    E(Bt² | ℱs) = E((Bt − Bs)²) + 2Bs E(Bt − Bs) + Bs² = (t − s) + Bs².

    Thus:

    E(Bt² − t | ℱs) = Bs² − s.

  3. The moment generating function of Bt is given by:

    E(exp(σBt)) = exp(σ²t/2).

    Therefore E(exp(σBt − (σ²/2)t)) = 1 and exp(σBt − (σ²/2)t) is integrable. Now:

    E(exp(σBt − σ²t/2) | ℱs) = exp(σBs − σ²t/2) E(exp(σ(Bt − Bs)) | ℱs) = exp(σBs − σ²t/2) exp(σ²(t − s)/2) = exp(σBs − σ²s/2).          ■

Note 11.13 Let {Xt; t ≥ 0} be a stochastic process adapted to the filtration (ℱt)t≥0. Then {Xt; t ≥ 0} is a Brownian motion if and only if it satisfies the following conditions:

  1. X0 = 0 a.s.
  2. {Xt; t ≥ 0} is a martingale with respect to the filtration (ℱt)t≥0.
  3. {Xt² − t; t ≥ 0} is a martingale with respect to the filtration (ℱt)t≥0.
  4. With probability 1, the sample paths are continuous.

The above result is known as Lévy’s characterization of Brownian motion (see Mikosch, 1998).

The structure of the possible sample paths and their properties play a crucial role and are the subject of deep study. Brownian motion has continuous sample paths by definition. Another important property is that its paths are nowhere differentiable with probability 1. The mathematical proof of this property is beyond the scope of this text; for a rigorous proof, the reader may refer to Karatzas and Shreve (1991) or Breiman (1992).

Now we will see an important and interesting property of a Brownian motion called quadratic variation. In the following, we define the notion of quadratic variation for a real-valued function.

Definition 11.9 Let f(t) be a function defined on the interval [0, T]. The quadratic variation of the function f is

⟨f⟩T := lim ‖Πn‖→0 ∑_{k=1}^{n} |f(tk) − f(tk−1)|²,

where Πn = {t0, t1, · · ·, tn} is a partition of the interval [0, T],

0 = t0 < t1 < · · · < tn = T,

with:

‖Πn‖ := max 1≤k≤n (tk − tk−1).

Theorem 11.8 The quadratic variation of the sample path of a Brownian motion over the interval [0, T] converges in mean square to T.

Proof: Let Πn = {t0, t1, · · ·, tn} be a partition of the interval [0, T]:

0 = t0 < t1 < · · · < tn = T.

Let

Qn := ∑_{k=1}^{n} (Btk − Btk−1)².

Then for each n we have:

E(Qn) = ∑_{k=1}^{n} E((Btk − Btk−1)²) = ∑_{k=1}^{n} (tk − tk−1) = T.

Also:

Var(Qn) = ∑_{k=1}^{n} Var((Btk − Btk−1)²) = ∑_{k=1}^{n} 2(tk − tk−1)² ≤ 2‖Πn‖ T → 0 as ‖Πn‖ → 0.

We conclude that:

E((Qn − T)²) = Var(Qn) → 0.

Thus we have proved that Qn converges to T in mean square.     ■

We can also prove that Qn converges to T with probability 1. This proof can be found in Breiman (1992) and Karatzas and Shreve (1991) (see Chapter 8 for the different types of convergence of random variables).
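
The mean square convergence is visible numerically (a sketch): as the partition is refined, the sums Qn concentrate around T.

    import numpy as np

    rng = np.random.default_rng(6)

    T, n_paths = 2.0, 5000
    for N in (10, 100, 1000, 10_000):
        dB = np.sqrt(T / N) * rng.standard_normal((n_paths, N))  # BM increments
        Q_n = (dB**2).sum(axis=1)            # quadratic variation sum over [0, T]
        print(N, Q_n.mean(), Q_n.var())      # mean stays at T; variance -> 0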

As we have seen in this section, the sample path of Brownian motion is nowhere differentiable. Because stochastic processes driven by Brownian motion are also not differentiable, we cannot apply classical calculus to them. In the following section we introduce the stochastic integral, or Itô integral, with respect to Brownian motion and its basic rules. We will do so using an intuitive approach based on classical calculus. For a mathematically rigorous treatment of this integral see Karatzas and Shreve (1991) or Oksendal (2006).

11.3 ITÔ CALCULUS

The stochastic calculus, or Itô calculus, was developed during the 1940s by the Japanese mathematician K. Itô and is the stochastic analogue of the classical calculus of Newton, which involves differentials and integrals of deterministic functions. In this section, we will study the stochastic integral of a process {Xt; t ≥ 0} with respect to a Brownian motion; that is, we adequately define the following expression:

∫₀ᵀ Xt dBt.

In classical calculus, equations involving expressions of the form dx are known as differential equations. If we replace the term dx by an expression of the form dXt, the equations are known as stochastic differential equations. Formally, a stochastic differential equation has the form

dXt = μ(Xt, t) dt + σ(Xt, t) dBt,     (11.10)

where μ(x, t) and σ(x, t) are given functions. Equation (11.10) can be written in integral form:

Xt = X0 + ∫₀ᵗ μ(Xs, s) ds + ∫₀ᵗ σ(Xs, s) dBs.

The first integral is a Riemann integral. How can we interpret the second integral? Initially we could take our inspiration from ordinary calculus in defining this integral as a limit of partial sums, such as

∑_{k=1}^{n} Xt*k (Btk − Btk−1),  t*k ∈ [tk−1, tk],

provided the limit exists. Unlike the Riemann sums, the value of the sum here depends on the choice of the evaluation points t*k. In the case of stochastic integrals, the key idea is to consider the sums where the integrand is evaluated at the left endpoints of the subintervals. That is:

∑_{k=1}^{n} Xtk−1 (Btk − Btk−1).

Observing that a sum of random variables is again a random variable, the problem is to show that the limit of the above sums exists in some suitable sense. Mean square convergence (see Chapter 8 for the definition) is used to define the stochastic integral. We now establish the family of stochastic processes for which the Itô integral can be defined.

Definition 11.10 Let L² be the set of all stochastic processes {Xt; t ≥ 0} such that:

(a.) The process X = {Xt; t ≥ 0} is progressively measurable with respect to the given filtration (ℱt)t≥0. This means that, for every t, the mapping (s, ω) ↦ Xs(ω) on the set [0, t] × Ω is measurable.

(b.) E(∫₀ᵀ Xt² dt) < ∞ for all T > 0.

Now we give the definition of the Itô integral for any process {Xt; t ≥ 0} ∈ L².

Definition 11.11 Let {Xt; t ≥ 0} be a stochastic process in L² and T > 0 be fixed. We define the stochastic integral or Itô integral of Xt with respect to Brownian motion Bt over the interval [0, T] as the mean square limit

∫₀ᵀ Xt dBt := lim n→∞ ∑_{k=1}^{n} Xtk−1 (Btk − Btk−1),

where Πn = {t0, t1, · · ·, tn} is a partition of the interval [0, T] such that

0 = t0 < t1 < · · · < tn = T,

with:

‖Πn‖ := max 1≤k≤n (tk − tk−1) → 0 as n → ∞.

Notation:

IT(X) := ∫₀ᵀ Xt dBt.

EXAMPLE 11.22

Consider the stochastic integral

∫₀ᵀ Bt dBt,

where Bt is a Brownian motion. Let 0 = t0 < t1 < t2 < … < tn = T be a partition of the interval [0, T]. From the definition of the stochastic integral, we have:

∫₀ᵀ Bt dBt = lim n→∞ ∑_{k=1}^{n} Btk−1 (Btk − Btk−1).

By the use of the identity

Btk−1 (Btk − Btk−1) = ½(Btk² − Btk−1²) − ½(Btk − Btk−1)²,

we get:

∫₀ᵀ Bt dBt = ½BT² − ½ lim n→∞ ∑_{k=1}^{n} (Btk − Btk−1)² = ½BT² − ½T,

where the last limit is the quadratic variation of Brownian motion over [0, T].
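
The left-endpoint sums converge quickly in practice. The sketch below (our own illustration, not from the text) compares the discrete Itô sum with the closed form ½BT² − ½T just derived.

    import numpy as np

    rng = np.random.default_rng(7)

    T, N = 1.0, 100_000
    dB = np.sqrt(T / N) * rng.standard_normal(N)
    B = np.concatenate(([0.0], np.cumsum(dB)))

    ito_sum = np.sum(B[:-1] * dB)       # left endpoints: sum of B_{t_{k-1}} (B_{t_k} - B_{t_{k-1}})
    closed_form = 0.5 * B[-1]**2 - 0.5 * T
    print(ito_sum, closed_form)         # agree up to discretization error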

The stochastic integral (11.12) satisfies, for all T > 0, the following properties:

  1. Zero mean:

    E(∫₀ᵀ Xt dBt) = 0.

  2. Itô isometry:

    E((∫₀ᵀ Xt dBt)²) = E(∫₀ᵀ Xt² dt).

  3. Martingale: For t ≤ T,

    E(∫₀ᵀ Xs dBs | ℱt) = ∫₀ᵗ Xs dBs.

  4. Linearity: For {Xt; t ≥ 0}, {Yt; t ≥ 0} ∈ L² and a, b ∈ ℝ,

∫₀ᵀ (aXt + bYt) dBt = a ∫₀ᵀ Xt dBt + b ∫₀ᵀ Yt dBt.

Proof: We now prove only the martingale property of the Itô integral. For proofs of the remaining properties, the reader may refer to Karatzas and Shreve (1991). Consider, for t ≤ T,

E(∫₀ᵀ Xs dBs | ℱt) = ∫₀ᵗ Xs dBs + E(∫ₜᵀ Xs dBs | ℱt) = ∫₀ᵗ Xs dBs,

where the last equality follows by the zero mean property applied to the integral over [t, T].     ■

EXAMPLE 11.23

Let Xt := ∫₀ᵗ exp(Bs) dBs be an Itô integral. We have E(Xt) = 0 by the zero mean property. The variance is calculated by use of the mgf of Brownian motion and the Itô isometry. We have:

Var(Xt) = E(Xt²) = E(∫₀ᵗ exp(2Bs) ds) = ∫₀ᵗ exp(2s) ds = (exp(2t) − 1)/2.          ■

The Itô formula is the change-of-variable formula, or chain rule, of stochastic calculus, playing the role that the chain rule plays in ordinary calculus.

Theorem 11.9 (Itô’s Formula) Let f : ℝ → ℝ be a twice continuously differentiable function and let B = {Bt; t ≥ 0} be a Brownian motion that starts at x0, that is, B0 = x0. Then

f(Bt) = f(x0) + ∫₀ᵗ f′(Bs) dBs + ½ ∫₀ᵗ f″(Bs) ds,

or in the differential form:

df(Bt) = f′(Bt) dBt + ½ f″(Bt) dt.

Proof: Fix t > 0. Let Πn = {t0, t1, · · ·, tn} be a partition of [0, t]. By Taylor’s theorem, we have:

f(Bt) − f(x0) = ∑_{k=1}^{n} (f(Btk) − f(Btk−1)) = ∑_{k=1}^{n} f′(Btk−1)(Btk − Btk−1) + ½ ∑_{k=1}^{n} f″(ξk)(Btk − Btk−1)²,

where ξk lies between Btk−1 and Btk. Taking the limit as n → ∞ with ‖Πn‖ → 0, the first sum on the right-hand side converges to the Itô integral and the second sum on the right-hand side converges to ∫₀ᵗ f″(Bs) ds because of the mean square convergence of the quadratic variation. We get:

f(Bt) − f(x0) = ∫₀ᵗ f′(Bs) dBs + ½ ∫₀ᵗ f″(Bs) ds.

Thus:

df(Bt) = f′(Bt) dBt + ½ f″(Bt) dt.          ■

EXAMPLE 11.24

Let f(x) = x² and B = {Bt; t ≥ 0} be a standard Brownian motion. The Itô formula establishes that:

Bt² = 2 ∫₀ᵗ Bs dBs + ½ ∫₀ᵗ 2 ds = 2 ∫₀ᵗ Bs dBs + t.

That is:

∫₀ᵗ Bs dBs = ½Bt² − ½t.          ■

EXAMPLE 11.25

Let f(x) = x³ and B = {Bt; t ≥ 0} be a standard Brownian motion. The Itô formula establishes that:

Bt³ = 3 ∫₀ᵗ Bs² dBs + 3 ∫₀ᵗ Bs ds.

That is:

∫₀ᵗ Bs² dBs = ⅓Bt³ − ∫₀ᵗ Bs ds.          ■

EXAMPLE 11.26

Let βn(t) := E(Btⁿ) for a Brownian motion {Bt; t ≥ 0} with B0 = 0. Prove that

βn(t) = ½ n(n − 1) ∫₀ᵗ βn−2(s) ds,  n ≥ 2,

and hence find β4(t) and β6(t).

Solution. By Itô’s formula, we have:

Btⁿ = n ∫₀ᵗ Bsⁿ⁻¹ dBs + ½ n(n − 1) ∫₀ᵗ Bsⁿ⁻² ds.

Taking expectations we have:

βn(t) = ½ n(n − 1) ∫₀ᵗ βn−2(s) ds.

Since β2(t) = t, we get:

β4(t) = 6 ∫₀ᵗ s ds = 3t²  and  β6(t) = 15 ∫₀ᵗ 3s² ds = 15t³.          ■

Definition 11.12 For a fixed T > 0, the stochastic process {Xt; 0 ≤ t ≤ T} is called an Itô process if it has the form

Xt = X0 + ∫₀ᵗ Ys ds + ∫₀ᵗ Zs dBs,     (11.14)

where X0 is ℱ0-measurable and the processes Yt and Zt are ℱt-adapted and such that, for all t ≥ 0, E(|Yt|) < ∞ and E(|Zt|²) < ∞. An Itô process has the differential form

dXt = Yt dt + Zt dBt.

We now give the Itô formula for an Itô process.

Theorem 11.10 (Itô’s Formula for the General Case) Let {Xt; t ≥ 0} be an Itô process given in (11.14). Suppose that f(t, x) is a twice continuously differentiable function with respect to x and continuously differentiable with respect to t. Then f(t, Xt) is also an Itô process and:

f(t, Xt) = f(0, X0) + ∫₀ᵗ (∂f/∂s (s, Xs) + Ys ∂f/∂x (s, Xs) + ½ Zs² ∂²f/∂x² (s, Xs)) ds + ∫₀ᵗ Zs ∂f/∂x (s, Xs) dBs.

Proof: See Oksendal (2006).     ■

Note 11.14 We introduce the notation

(dXt)² := dXt · dXt,

which is computed using the following multiplication rules:

dt · dt = 0,  dt · dBt = 0,  dBt · dBt = dt.

The Itô formula then can be expressed in the following form:

df(t, Xt) = ∂f/∂t (t, Xt) dt + ∂f/∂x (t, Xt) dXt + ½ ∂²f/∂x² (t, Xt) (dXt)².

Note 11.15 Itô’s formula can also be expressed in differentials as:

df(t, Xt) = (∂f/∂t + Yt ∂f/∂x + ½ Zt² ∂²f/∂x²) dt + Zt ∂f/∂x dBt.

EXAMPLE 11.27

Let Xt = t and f(t, x) = g(x) be a twice-differentiable function. It is easy to see that:

dXt = dt  and  (dXt)² = 0.

Thus, applying Itô’s formula, we get:

g(t) = g(0) + ∫₀ᵗ g′(s) ds.

That is, the fundamental theorem of calculus is a particular case of Itô’s formula.     ■

EXAMPLE 11.28

Let Xt = h(t), where h is a differentiable function, and let f(t, x) = g(x) be a twice-differentiable function. It is easy to check that:

dXt = h′(t) dt  and  (dXt)² = 0.

Applying Itô’s formula, we obtain:

g(h(t)) = g(h(0)) + ∫₀ᵗ g′(h(s)) h′(s) ds.

In this case also, the substitution rule of calculus is a particular case of Itô’s formula.     ■

EXAMPLE 11.29

Let {Bt; t ≥ 0} be a Brownian motion and consider the following stochastic differential equation:

dYt = μYt dt + σYt dBt,  Y0 = y0 > 0.     (11.15)

Let Zt = log(Yt). Then, by Itô’s formula, we have:

dZt = (1/Yt) dYt − ½ (1/Yt²) (dYt)² = (μ dt + σ dBt) − ½ σ² dt.

Thus:

dZt = (μ − σ²/2) dt + σ dBt.

Integrating we get

Zt = Z0 + (μ − σ²/2)t + σBt,

so that the solution of equation (11.15) is:

Yt = y0 exp((μ − σ²/2)t + σBt).          ■

EXAMPLE 11.30

Consider the Langevin equation

dXt = −βXt dt + α dBt,

where α ∈ ℝ and β > 0. The process {Xt; t ≥ 0} with X0 = x0 can be written as:

Xt = x0 exp(−βt) + α ∫₀ᵗ exp(−β(t − s)) dBs.

Let f(t, x) = exp(βt) x. Applying Itô’s formula, we get:

d(exp(βt) Xt) = β exp(βt) Xt dt + exp(βt) dXt = α exp(βt) dBt.

Integration of the above equation gives, for s ≤ t:

exp(βt) Xt = exp(βs) Xs + α ∫ₛᵗ exp(βu) dBu.

The solution of the Langevin equation with initial condition X0 = x0 is called an Ornstein-Uhlenbeck process.     ■
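
A standard way to simulate the Langevin equation is the Euler-Maruyama scheme, sketched below in Python with illustrative parameter values of our choosing; the sample mean of XT matches the exact decay x0 exp(−βT), and for large T the variance approaches the stationary value α²/(2β).

    import numpy as np

    rng = np.random.default_rng(8)

    beta, alpha, x0 = 1.5, 0.4, 2.0
    T, N, n_paths = 3.0, 3000, 20_000
    dt = T / N

    X = np.full(n_paths, x0)
    for _ in range(N):
        # Euler-Maruyama step: X <- X - beta*X*dt + alpha*dB
        X += -beta * X * dt + alpha * np.sqrt(dt) * rng.standard_normal(n_paths)

    print(X.mean(), x0 * np.exp(-beta * T))   # E(X_T) = x0 exp(-beta T)
    print(X.var(), alpha**2 / (2 * beta))     # near-stationary variance alpha^2/(2 beta)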

We complete this chapter with the Itô formula for functions of two or more variables.

Multidimensional Itô Formula

We now give the Itô formula for functions of two variables. Consider a two-dimensional process

dXt = a1(t) dt + b1(t) dBt(1),

dYt = a2(t) dt + b2(t) dBt(2),

where Bt(1) and Bt(2) are two Brownian motions with their covariances given by

Cov(Bs(1), Bt(2)) = ρ min(s, t),

where ρ is the correlation coefficient of the two Brownian motions. Let g(t, x, y) be a twice-differentiable function and let Zt = g(t, Xt, Yt). Then Zt is also an Itô process and satisfies:

dZt = ∂g/∂t dt + ∂g/∂x dXt + ∂g/∂y dYt + ½ ∂²g/∂x² (dXt)² + ½ ∂²g/∂y² (dYt)² + ∂²g/∂x∂y dXt · dYt,

with dBt(1) · dBt(2) = ρ dt.

For the proof, the reader may refer to Karatzas and Shreve (1991).

Note 11.16 For any two Itô processes {Xt; t ≥ 0} and {Yt; t ≥ 0}, we have the following product rule for differentiation:

d(Xt Yt) = Xt dYt + Yt dXt + dXt · dYt.     (11.19)

Theorem 11.11 Let Xt and Yt be two Itô integrals such that Xt = ∫₀ᵗ Us dBs and Yt = ∫₀ᵗ Vs dBs with U, V ∈ L². Then:

E(Xt Yt) = E(∫₀ᵗ Us Vs ds).

Proof: Let Xt = ∫₀ᵗ Us dBs and Yt = ∫₀ᵗ Vs dBs.

By using the identity

Xt Yt = ¼ ((Xt + Yt)² − (Xt − Yt)²)

and taking expectation, we get:

E(Xt Yt) = ¼ (E((∫₀ᵗ (Us + Vs) dBs)²) − E((∫₀ᵗ (Us − Vs) dBs)²)).

By use of Itô’s isometry property we get the desired result.     ■

EXAMPLE 11.31

Suppose that Xt = tBt. Use of the product rule (11.19) gives us:

dXt = t dBt + Bt dt.     ■

EXAMPLE 11.32

Suppose that Xt = tBt and Yt satisfies the stochastic differential equation

dYt = ½ Yt dt + Yt dBt.

We know that Yt = exp(Bt) is a geometric Brownian motion satisfying this equation. Then the use of product rule (11.19) gives us:

d(XtYt) = Xt dYt + Yt dXt + tYt dt.

This is because:

dXt · dYt = (t dBt + Bt dt) · (½ Yt dt + Yt dBt) = t Yt dt.          ■

EXAMPLE 11.33

Suppose that

dXt = α dBt + β dWt

with X0 = 0, α, β ∈ ℝ, and {Bt; t ≥ 0} and {Wt; t ≥ 0} are two Brownian motions. Let f(t, x) = x². Then, from Itô’s formula,

d(Xt²) = 2Xt dXt + (dXt)²     (11.21)

with (dXt)² computed by the multiplication rules. Note that Xt = αBt + βWt and:

(dXt)² = (α² + β²) dt + 2αβ dBt · dWt.     (11.22)

From equations (11.21) and (11.22), we get:

d(Xt²) = 2Xt dXt + (α² + β²) dt + 2αβ dBt · dWt.

Using the relation

dBt · dWt = ρ dt,

where ρ is the correlation coefficient of the two Brownian motions, we have the following interesting result:

d(Xt²) = 2Xt dXt + (α² + 2ραβ + β²) dt.          ■

Without recourse to measure theory, we have presented various tools necessary for dealing with financial models by use of stochastic calculus. This chapter does not offer a full-fledged analysis and is intended as motivation for further study. For a more rigorous treatment, the reader may refer to Grimmett and Stirzaker (2001), Oksendal (2006), Mikosch (2002), Shreve (2004), and Karatzas and Shreve (1991).

EXERCISES

11.1   In Example 11.11 verify

Image

and

Image

11.2   Let {Xn; n ≥ 0} be a martingale (supermartingale) with respect to the filtration Image. Prove that

E(Xn+k | ℱn) = Xn a.s.   (≤ for a supermartingale)

for all k ≥ 0.

11.3   Let {Xn; n ≥ 0} be a martingale (supermartingale) with respect to the filtration Image. Prove that:

E(Xn) = E(Xk)   (≤ for a supermartingale)

for all 0 ≤ k ≤ n.

11.4   Let {Xn; n ≥ 0} be a martingale with respect to the filtration Image and assume f to be a convex function. Prove that {f(Xn); n ≥ 0} is a submartingale with respect to the filtration Image.

11.5   If {Xt; t ≥ 0} is a martingale with respect to (ℱt)t≥0 and h is a convex function such that E(|h(Xt)|) < ∞ for all t ≥ 0, show that {h(Xt); t ≥ 0} is a submartingale with respect to (ℱt)t≥0.

11.6   Let ξ1, ξ2, … be i.i.d. random variables such that P(ξn = 1) = p and P(ξn = −1) = 1 − p for some p in (0, 1). Prove that {Mn; n ≥ 0} with

Image

is a martingale with respect to Image, where Image and Image for n ≥ 1.

11.7   Let X1, X2, … be a sequence of i.i.d. random variables satisfying

Image

Let M0 := 0, Mn := X1 X2 ⋯ Xn and Image. Is {Mn; n ≥ 0} a martingale with respect to Image? Explain.

11.8   Let X1, X2, … be a sequence of random variables such that E(Xn) = 0 for all n = 1, 2, … and suppose that E(exp(Xn)) exists for all n = 1, 2, … .

a) Is the sequence {Yn; n ≥ 1} with Image a submartingale with respect to Image, where Image for n ≥ 1? Explain.

b) Find (if possible) constants αn such that the sequence {Zn; n ≥ 1} with Image is a martingale with respect to Image, where Image for n ≥ 1.

11.9   (Doob’s decomposition) Let {Yn; n ≥ 0} be a submartingale with respect to the filtration (ℱn)n≥0. Show that

Image

for n = 1, 2, … is a martingale with respect to (ℱn)n≥0 and that the sequence An := Yn − Mn, n = 1, 2, …, satisfies 0 ≤ A1 ≤ A2 ≤ ⋯. Is An measurable with respect to ℱn−1? Explain.

11.10   Let X1, X2, … be a sequence of independent random variables such that E(Xn²) exists for all n = 1, 2, … and suppose Sn := X1 + … + Xn, n = 1, 2, …. Is {Sn²; n ≥ 1} a submartingale? If it is so, then determine the process {An; n ≥ 1} as in the exercise above.

11.11   Let {Xn; n ≥ 1} be a sequence of random variables adapted to the filtration (ℱn)n≥1. Suppose that Image is the time at which the process {Xn; n ≥ 1} reaches the set A for the first time and let:

Image

Show that Image is a stopping time. What does Image represent?

11.12   Let τ be a stopping time with respect to the filtration (ℱn)n≥1 and let k be a fixed positive integer. Show that the following random variables are stopping times: τ ∧ k, τ ∨ k, and τ + k.

11.13   Let {Xn; n ≥ 1} be independent random variables with E(Xn) = 0 and Var(Xn) = σ² for all n ≥ 1. Set M0 = 0 and Mn := Sn² − nσ², where Sn = X1 + X2 + … + Xn. Is {Mn; n ≥ 1} a martingale with respect to the sequence Xn?

11.14   Let {Nt; t ≥ 0} be a Poisson process with rate λ and let (ℱt)t≥0 be the filtration associated with Nt. Write down the conditional distribution of Nt+s − Nt given ℱt, where s > 0, and use your answer to find E(Nt+s | ℱt).

11.15   (Lawler, 1996) Consider the simple symmetric random walk model Yn = X1 + X2 + … + Xn with Y0 = 0, where the steps Xi are independent and identically distributed with P[Xk = 1] = 1/2 and P[Xk = −1] = 1/2 for all k. Let T := inf{n : Yn = −1} denote the hitting time of −1. We know that P[T < ∞] = 1. Show that if s > 0, then Mn := exp(sYn)/(cosh s)ⁿ with M0 = 1 is a martingale, where cosh s := (exp(s) + exp(−s))/2.

11.16   Let X1, X2, … be independent random variables such that

Image

where a1 = 2 and Image. Is Image a martingale?

11.17   Let Bt be a Brownian motion. Find E((Bt − Bs)⁴).

11.18   Let {Bt; t ≥ 0} and Image be two independent Brownian motions. Show that

Image

is also a Brownian motion. Find the correlation between Bt and Xt.

11.19   Let Bt be a Brownian motion. Find the distribution of B1 + B2 + B3 + B4.

11.20   Let {Bt; t ≥ 0} be a Brownian motion. Show that Xt := exp(−αt) B(exp(2αt)) is a Gaussian process. Find its mean and covariance functions.

11.21   Let {Bt; t ≥ 0} be a Brownian motion. Find the distribution for the integral

Image

11.22   Suppose that St satisfies the following stochastic differential equation:

dSt = μSt dt + σSt dBt.

Find the equation for the process Image.

11.23   Use Itô’s formula to write down the stochastic differential equations for the following processes, where {Bt; t ≥ 0} is a Brownian motion:

a) Image.

b) Yt = tBt.

c) Zt = exp(ct + αBt).

11.24   Let:

Image

Find E(It(B)) and E((It(B))²).

11.25   Evaluate the integral

Image

11.26   Evaluate the integral

Image

11.27   Suppose that Xt satisfies:

Image

Let Yt = f(t, Xt) = (2t + 3)Xt + 4t². Find Yt.

11.28   Use Itô’s formula to show that:

Image

11.29   Consider the stochastic differential equation

Image

with X0 = 0.

a) Find Xt.

b) Let Zt = exp(Xt). Find the stochastic differential equation for Zt using Itô’s formula.

11.30   Find the solution of the stochastic differential equation

dZt = Ztdt + 2ZtdBt.

11.31   Solve the following stochastic differential equation for the spot rate of interest:

drt = (b − rt) dt + σ dBt,

where rt is the interest rate, b ∈ ℝ, and σ ≥ 0.

11.32   Suppose that Xt follows the process dXt = 0.05Xt dt + 0.25Xt dBt. Using Itô’s lemma, find the equation for the process Image.
