Chapter 11

Introduction to Continuous-Time Stochastic Calculus

11.1 The Riemann Integral of Brownian Motion

11.1.1 The Riemann Integral

Let f be a real-valued function defined on [0, T]. We now recall the precise definition of the Riemann integral of f on [0, T] as follows.

  • For n ∈ ℕ, consider a partition Pn of the interval [0, T]:

    \[
    P_n = \{t_0, t_1, \ldots, t_n\}, \qquad 0 = t_0 < t_1 < \cdots < t_n = T.
    \]

    Define $\Delta t_i = t_i - t_{i-1}$, $i = 1, 2, \ldots, n$.

  • Introduce an intermediate partition Qn for the partition Pn:

    \[
    Q_n = \{s_1, s_2, \ldots, s_n\}, \qquad t_{i-1} \le s_i \le t_i, \quad i = 1, 2, \ldots, n.
    \]

  • Define the Riemann (nth partial) sum as a weighted average of the values of f:

    \[
    S_n = S_n(f, P_n, Q_n) = \sum_{i=1}^n f(s_i)\,\Delta t_i.
    \]

  • Suppose that the mesh size $\delta(P_n) := \max_{1\le i\le n} \Delta t_i$ goes to zero as $n \to \infty$. If the limit $\lim_{n\to\infty} S_n$ exists and does not depend on the choice of partitions $P_n$ and $Q_n$, then this limit is called the Riemann integral of f on [0, T], denoted as usual by $\int_0^T f(t)\,dt$. The function f is called the integrand. If the Riemann integral exists, then f is said to be Riemann integrable on [0, T]. For instance, if f is continuous on [0, T] (or the set of discontinuities of f is finite), then f is Riemann integrable on [0, T]. In fact, the Riemann integral exists if f is m–a.e. (i.e., Lebesgue almost everywhere) continuous on [0, T].

11.1.2 The Integral of a Brownian Path

Our goal is now to consider computing the Riemann integral of a Brownian sample path w.r.t. time t ∈ [0, T], i.e., for an outcome ω∈Ω:

\[
I(T,\omega) \equiv \int_0^T W(t,\omega)\,dt. \tag{11.1}
\]

Recall that, with probability one (i.e., for almost all ω ∈ Ω), Brownian paths are continuous functions of time t. Hence, almost any sample path (t, W(t, ω)), 0 ≤ tT, is continuous. The Riemann integral (11.1) of such a sample path hence exists and is given by

\[
\int_0^T W(t,\omega)\,dt = \lim_{\delta(P_n)\to 0} \sum_{i=1}^n W(s_i,\omega)\,(t_i - t_{i-1}).
\]

It hence suffices to consider a uniform partition $P_n$ of [0, T] with step size $\Delta t = \frac{T}{n}$ and time points $t_i = i\,\Delta t$ for $0 \le i \le n$. Let $Q_n$ be chosen so that $s_i = t_i$, $1 \le i \le n$. We then have the nth Riemann sum $S_n(\omega) \equiv \Delta t \sum_{i=1}^n W(i\,\Delta t, \omega)$ and its limit converging to the Riemann integral of the Brownian path:

\[
I(T,\omega) = \lim_{n\to\infty} S_n(\omega) = \lim_{n\to\infty} \frac{T}{n}\sum_{i=1}^n W\Big(\frac{iT}{n},\omega\Big)
\]

for almost all ω ∈ Ω. Hence, as a random variable, the Riemann integral of Brownian motion is (a.s.) uniquely given by

\[
I(T) \equiv \int_0^T W(t)\,dt = \lim_{n\to\infty} S_n = \lim_{n\to\infty} \frac{T}{n}\sum_{i=1}^n W\Big(\frac{iT}{n}\Big).
\]

We now show that the nth Riemann sum Sn is a normally distributed random variable.

Proposition 11.1.

The Riemann sum Sn is normally distributed with mean and variance

\[
\mathrm{E}[S_n] = 0 \quad\text{and}\quad \mathrm{E}[(S_n)^2] = \frac{T\,(T+\Delta t)\,(T+\Delta t/2)}{3}.
\]

Proof. Since Sn is a linear combination of jointly normal random variables, it is normally distributed. The expected value is

\[
\mathrm{E}[S_n] = \mathrm{E}\Big[\Delta t \sum_{i=1}^n W(t_i)\Big] = \Delta t \sum_{i=1}^n \mathrm{E}[W(t_i)] = 0.
\]

Then, Var(Sn) = E[(Sn)²] is given by [note: $t_i \wedge t_j = (\Delta t)\,(i \wedge j)$]

\[
\begin{aligned}
\mathrm{E}[(S_n)^2] &= \mathrm{E}\Big[\Big(\Delta t \sum_{i=1}^n W(t_i)\Big)^2\Big] = (\Delta t)^2\,\mathrm{E}\Big[\Big(\sum_{i=1}^n W(t_i)\Big)\Big(\sum_{j=1}^n W(t_j)\Big)\Big] \\
&= (\Delta t)^2 \sum_{i=1}^n \sum_{j=1}^n \mathrm{E}[W(t_i)W(t_j)] = (\Delta t)^2 \sum_{i=1}^n \sum_{j=1}^n t_i \wedge t_j = (\Delta t)^3 \sum_{i=1}^n \sum_{j=1}^n i \wedge j \\
&= (\Delta t)^3 \sum_{k=1}^n k^2 = (\Delta t)^3\,\frac{n(n+1)(2n+1)}{6} = \frac{(\Delta t\,n)(\Delta t\,n + \Delta t)(\Delta t\,n + \Delta t/2)}{3} \\
&= \frac{T\,(T+\Delta t)\,(T+\Delta t/2)}{3}
\end{aligned}
\]

since Δt · n = T.

The Riemann sums {Sn}n≥1 hence form a sequence of normally distributed random variables. The limit of such a sequence is again normally distributed, and this gives us that I(T) = limn→∞ Sn is a normal random variable. This is true for all time values T > 0. As n → ∞ (and hence Δt = T/n → 0), we obtain the mean and variance of I(T):

\[
\mathrm{E}[S_n] \to \mathrm{E}[I(T)] = 0, \qquad \mathrm{E}[S_n^2] \to \mathrm{E}[I^2(T)] = \frac{T^3}{3}.
\]

Thus, the stochastic process {I(t)}t≥0 is a Gaussian process where $I(t) \sim \mathrm{Norm}(0, t^3/3)$.
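These limiting moments can be checked numerically. Below is a minimal Monte Carlo sketch of ours (not from the text; the uniform partition, sample sizes, and seed are arbitrary choices) that forms the Riemann sums $S_n$ above over many simulated Brownian paths and compares their sample mean and variance with 0 and $T^3/3$:

```python
import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed
T, n, paths = 1.0, 1000, 20000        # illustrative sizes
dt = T / n

# Brownian increments; cumulative sums give W(t_1), ..., W(t_n) per path
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1)
S_n = dt * W.sum(axis=1)              # Riemann sum with s_i = t_i

print(S_n.mean())                     # ~ 0
print(S_n.var(), T**3 / 3)            # ~ T(T+dt)(T+dt/2)/3, close to T^3/3
```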

Alternatively, to obtain the moments of I(t) it is instructive to apply the Fubini Theorem that allows for changing the order of the time integral and expectation integral. For all t and s with 0 ≤ st, we have

\[
\begin{aligned}
\mathrm{E}[I(s)I(t)] &= \mathrm{E}\Big[\Big(\int_0^s W(u)\,du\Big)\Big(\int_0^t W(v)\,dv\Big)\Big] = \int_0^t \int_0^s \mathrm{E}[W(u)W(v)]\,du\,dv \\
&= \int_0^t \int_0^s \min\{u,v\}\,du\,dv = \int_0^s\!\int_0^s \min\{u,v\}\,du\,dv + \int_s^t \Big(\int_0^s \min\{u,v\}\,du\Big)dv \\
&= \int_0^s\!\int_0^s \min\{u,v\}\,du\,dv + \int_s^t \Big(\int_0^s u\,du\Big)dv = \frac{s^3}{3} + (t-s)\,\frac{s^2}{2}.
\end{aligned}
\]

The last line follows by computing each integral separately. The second integral follows trivially. The first integral is readily computed by writing $\min\{u,v\} = \min\{u,v\}\,\mathbb{I}_{\{u\le v\}} + \min\{u,v\}\,\mathbb{I}_{\{v<u\}}$ and by symmetry:

\[
\begin{aligned}
\int_0^s\!\int_0^s \min\{u,v\}\,du\,dv &= \int_0^s\!\int_0^s \min\{u,v\}\,\mathbb{I}_{\{u\le v\}}\,du\,dv + \int_0^s\!\int_0^s \min\{u,v\}\,\mathbb{I}_{\{v<u\}}\,dv\,du \\
&= \int_0^s\Big(\int_0^v u\,du\Big)dv + \int_0^s\Big(\int_0^u v\,dv\Big)du = 2\int_0^s\Big(\int_0^v u\,du\Big)dv \\
&= \int_0^s v^2\,dv = s^3/3.
\end{aligned}
\]

Hence, applying the above formula for s = t gives the variance $\mathrm{E}[I^2(t)] = \mathrm{E}[I(t)I(t)] = \frac{t^3}{3}$. The mean function of the integral process I is zero, $m_I(t) = \mathrm{E}[I(t)] = 0$, and the covariance function is

\[
c_I(s,t) = \mathrm{E}[I(s)I(t)] = \frac{(s\wedge t)^3}{3} + |t-s|\,\frac{(s\wedge t)^2}{2}.
\]
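As a quick numerical sanity check of this covariance function, the following sketch (our own illustration; the choices of s, t, and all sizes are arbitrary) estimates E[I(s)I(t)] by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, paths = 1.0, 1000, 50000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1)
I = dt * np.cumsum(W, axis=1)            # I(t_k) ~ integral of W on [0, t_k]

s, t = 0.4, 0.9                          # arbitrary times with s <= t
ks, kt = int(s / dt) - 1, int(t / dt) - 1
print(np.mean(I[:, ks] * I[:, kt]))      # sample E[I(s)I(t)]
print(s**3 / 3 + (t - s) * s**2 / 2)     # exact c_I(s,t), about 0.0613
```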

Example 11.1

Show that $Y(t) \equiv W^3(t) - 3\int_0^t W(u)\,du$, $t \ge 0$, is a martingale w.r.t. any filtration for Brownian motion.

Solution. First we note that Y(t) ≔ W³(t) − 3I(t), where the integral $I(t) \equiv \int_0^t W(u)\,du$ is a function of the history of the Brownian motion up to time t and is hence ℱt-measurable. That is, the integral process {I(t)}t≥0 is adapted to the filtration and hence so is the process {Y(t)}t≥0. The process is also integrable since $\mathrm{E}[|Y(t)|] \le \mathrm{E}[|W^3(t)|] + 3\,\mathrm{E}[|I(t)|] \le \mathrm{E}[|W^3(t)|] + 3\int_0^t \mathrm{E}[|W(u)|]\,du < \infty$. Now, for times t, s ≥ 0 we consider $\mathrm{E}[I(t+s)\,|\,\mathcal{F}_t] \equiv \mathrm{E}_t[I(t+s)]$:

\[
\begin{aligned}
\mathrm{E}_t[I(t+s)] &= \mathrm{E}_t\Big[I(t) + \int_t^{t+s} W(u)\,du\Big] = I(t) + \mathrm{E}_t\Big[\int_t^{t+s} W(u)\,du\Big] = I(t) + \int_t^{t+s} \mathrm{E}_t[W(u)]\,du \\
&= I(t) + \int_t^{t+s} W(t)\,du = I(t) + W(t)\int_t^{t+s} du = I(t) + s\,W(t),
\end{aligned}
\]

where we used the martingale property of Brownian motion and Fubini's theorem in one of the terms. Note that in explicit integral form we have shown

\[
\mathrm{E}_t\Big[\int_0^{t+s} W(u)\,du\Big] = \int_0^t W(u)\,du + s\,W(t).
\]

Using the fact that {W3(t) − 3tW(t)}t≥0 and {W(t)}t≥0 are martingales, we obtain

\[
\begin{aligned}
\mathrm{E}_t[W^3(t+s)] &= \mathrm{E}_t\big[W^3(t+s) - 3(t+s)W(t+s)\big] + \mathrm{E}_t\big[3(t+s)W(t+s)\big] \\
&= W^3(t) - 3tW(t) + 3(t+s)W(t) = W^3(t) + 3s\,W(t).
\end{aligned}
\]

Therefore,

\[
\mathrm{E}_t[Y(t+s)] = \mathrm{E}_t[W^3(t+s)] - 3\,\mathrm{E}_t[I(t+s)] = W^3(t) + 3sW(t) - 3I(t) - 3sW(t) = W^3(t) - 3I(t) = Y(t).
\]

11.2 The Riemann–Stieltjes Integral of Brownian Motion

Since it is possible to integrate Brownian paths (and functions of Brownian motion) w.r.t. time, it is interesting to find out what other integrals can be calculated for Brownian motion. The Riemann–Stieltjes integral generalizes the Riemann integral. It provides an integral of one function w.r.t. another appropriate one. So our goal is to define the integral of one stochastic process w.r.t. another one (say, w.r.t. Brownian motion).

11.2.1 The Riemann–Stieltjes Integral

The construction of the Riemann–Stieltjes integral goes as follows. Let f be a bounded function and g be a monotonically increasing function, both defined on [0, T].

  • For n ∈ ℕ, introduce partitions Pn and Qn in the same manner as is done for the Riemann integral.
  • If the limit of the partial sum over any (shrinking) partition,

    \[
    \lim_{n\to\infty,\ \delta(P_n)\to 0} \sum_{i=1}^n f(s_i)\,\big(g(t_i) - g(t_{i-1})\big),
    \]

    exists and is independent of the choice of Pn and Qn, then it is called the Riemann–Stieltjes integral of f w.r.t. g on [0, T] and is denoted by

    \[
    \int_0^T f(t)\,dg(t).
    \]

    The function g is called the integrator.

  • By taking g(x) = x, the Riemann integral is simply seen to be a special case of the Riemann–Stieltjes integral.

Let us take a look at some important examples, as follows.

  1. (1) Consider the Heaviside (unit) step function, H, defined by

    \[
    H(x) = \mathbb{I}_{[0,\infty)}(x) = \begin{cases} 0 & \text{if } x < 0, \\ 1 & \text{if } x \ge 0. \end{cases}
    \]

    Let f be continuous at an interior point s ∈ (0, T), c be a nonnegative constant, and let g(x) = cH(xs). Then,

    \[
    \int_0^T f(t)\,dg(t) = c\,f(s).
    \]

    Hence, when integrator g is a simple step function, the integral simply picks out one value of the continuous function f and this value corresponds to the value of f at the point of discontinuity of g. This is a sifting property of the step function integrator g.

  2. (2) The first example extends into the more general case of a step function g(x) assumed as a mixture of Heaviside unit step functions: $g(x) = \sum_{n=1}^\infty c_n H(x - s_n)$, where cn ≥ 0 for n = 1, 2, 3, . . . are chosen such that $\sum_{n=1}^\infty c_n$ converges and {sn}n≥1 is a sequence of distinct points in (0, T). If f is continuous on [0, T], then

    \[
    \int_0^T f(t)\,dg(t) = \sum_{n=1}^\infty c_n f(s_n).
    \]

    We see that the integral is a sum over f evaluated at all points of discontinuity of g within the integration interval [0, T]. This extends the sifting property in the above first example.

  3. (3) Suppose that f and g′ are Riemann integrable on [0, T]. In that case

    \[
    \int_0^T f(t)\,dg(t) = \int_0^T f(t)\,g'(t)\,dt.
    \]

    Hence, when the integrator is differentiable, the Riemann–Stieltjes integral is simply the Riemann integral of fg′, i.e., we formally have the differential dg(t) = g'(t) dt.

  4. (4) Consider a CDF F that is a mixture of a discrete CDF F1 and a continuous CDF F2:

    \[
    F(x) = w_1 F_1(x) + w_2 F_2(x), \qquad F_1(x) = \sum_{n=1}^\infty p_n H(x - x_n), \qquad F_2(x) = \int_{-\infty}^x p(t)\,dt,
    \]

    where w1 and w2 are nonnegative weights summing to one, {pn}n≥1 and {xn}n≥1 are, respectively, the mass probabilities and mass points of the discrete distribution, and $p(x) = F_2'(x)$ is the PDF of the continuous distribution. Then, for a bounded f:

    \[
    \int_{-\infty}^\infty f(x)\,dF(x) = w_1 \int_{-\infty}^\infty f(x)\,dF_1(x) + w_2 \int_{-\infty}^\infty f(x)\,dF_2(x) = w_1 \sum_{n=1}^\infty p_n f(x_n) + w_2 \int_{-\infty}^\infty f(x)\,p(x)\,dx.
    \]

In the first integral, with F1 as integrator, we used the result in example (2). This is an example in which the Riemann–Stieltjes integral gives us the expected value of a function f(X) of a random variable X having a mixture distribution given by F1 and F2 with respective mixture probabilities (weights) w1 and w2. In particular, the above equation can be read as E[f(X)] = w1E(1)[f(X)] + w2E(2)[f(X)].

The Riemann–Stieltjes integral $\int_0^T f(t)\,dg(t)$ can be extended to a larger class of functions. Recall that the p-variation of f: [0, T] → ℝ is

\[
V^{(p)}_{[0,T]}(f) = \limsup_{\delta(P_n)\to 0}\, \sum_{i=1}^n |f(t_i) - f(t_{i-1})|^p,
\]

where the limit is taken over all possible partitions 0 = t0 < t1 < · · · < tn = T, shrinking as n → ∞. The following result (stated without proof) shows that we can consider the Riemann–Stieltjes integral on a fairly extensive combination of functions f and integrator g whose combined variational properties satisfy a certain condition.

Proposition 11.2.

Assume that f and g do not have discontinuities at the same points within the integration interval [0, T]. Let the p-variation of f and the q-variation of g be finite for some p, q > 0, such that $\frac{1}{p} + \frac{1}{q} > 1$. Then, the Riemann–Stieltjes integral $\int_0^T f(t)\,dg(t)$ exists and is finite.

For example, if the integrator g is a function of bounded variation on [0, T], and f is a continuous function, then both functions have finite (first) variation. Hence, we may use p = q = 1 in the above proposition and this confirms that the Riemann–Stieltjes integral of f w.r.t. g is defined.

11.2.2 Integrals w.r.t. Brownian Motion

It is known that (a.s.) the p-variation of a Brownian sample path on [0, T] is finite for p ≥ 2 and infinite for p < 2. In particular, we proved that the quadratic variation of Brownian motion is bounded but the first variation is unbounded. Applying Proposition 11.2 for q = 2 to the Riemann–Stieltjes integral $\int_0^T f(t)\,dW(t)$ gives us that such an integral w.r.t. Brownian motion is well-defined if the p-variation of f is finite for some p ∈ (0, 2). For example, the integral exists if f is a function of bounded variation (p = 1) such as a monotone function or a continuously differentiable function. Thus, for example, the integrals

\[
\int_0^T e^t\,dW(t), \qquad \int_0^T t^\alpha\,dW(t) \quad (\alpha \ge 1)
\]

exist as Riemann–Stieltjes integrals. However, the integral

\[
\int_0^T W(t)\,dW(t) \tag{11.2}
\]

does not (a.s.) exist as a Riemann–Stieltjes integral. First, note that Proposition 11.2 is not applicable to the integral in (11.2) since $V^{(p)}_{[0,T]}(W)$ is finite iff p ≥ 2. Hence, for f(t) = g(t) = W(t) we have p = q and $\frac{1}{p} + \frac{1}{p} \le 1$ for p ≥ 2. Second, let us show we can obtain different values of the integral in (11.2) for different intermediate partitions Qn.

Consider the Riemann–Stieltjes sum $S_n = \sum_{i=1}^n W(t_{i-1})\,\big(W(t_i) - W(t_{i-1})\big)$, i.e., with the intermediate nodes si = ti−1 for i = 1, 2, . . . , n. We rewrite Sn as follows, upon using the algebraic identity $a(b-a) = -\frac12(a-b)^2 + \frac12(b^2 - a^2)$:

\[
S_n = \frac12 \sum_{i=1}^n \Big\{ -\big(W(t_i) - W(t_{i-1})\big)^2 + \big(W^2(t_i) - W^2(t_{i-1})\big) \Big\} = -\frac12 \sum_{i=1}^n \big(W(t_i) - W(t_{i-1})\big)^2 + \frac12 \underbrace{\sum_{i=1}^n \big(W^2(t_i) - W^2(t_{i-1})\big)}_{=\,W^2(t_n) - W^2(t_0)\,=\,W^2(T) - W^2(0)}.
\]

As n → ∞ and δ(Pn) → 0, we have $\sum_{i=1}^n \big(W(t_i) - W(t_{i-1})\big)^2 \to [W,W](T) = T$, i.e., the quadratic variation of Brownian motion. Therefore,

\[
\lim_{n\to\infty} S_n = \frac12\big(W^2(T) - W^2(0)\big) - \frac{T}{2}.
\]

This limit is called the Itô integral of BM:

\[
\int_0^T W(t)\,dW(t) = \frac12\big(W^2(T) - W^2(0)\big) - \frac{T}{2} = \frac12\big(W^2(T) - T\big).
\]

Recall that for a differentiable function f we would have obtained (the ordinary calculus result)

\[
\int_0^T f(t)\,df(t) = \frac{f^2(T) - f^2(0)}{2}.
\]

Consider another choice of the intermediate partition with upper endpoint si = ti for every i = 1, 2, . . . , n. The Riemann–Stieltjes sum is then

\[
S_n^* = \sum_{i=1}^n W(t_i)\,\big(W(t_i) - W(t_{i-1})\big) = \frac12 \underbrace{\sum_{i=1}^n \big(W(t_i) - W(t_{i-1})\big)^2}_{\to\,[W,W](T)\,=\,T,\ \text{as}\ n\to\infty} + \frac12 \underbrace{\sum_{i=1}^n \big(W^2(t_i) - W^2(t_{i-1})\big)}_{=\,W^2(T) - W^2(0)} \to \frac12\big(W^2(T) - W^2(0)\big) + \frac{T}{2}, \quad\text{as } n \to \infty.
\]

For 0 ≤ α ≤ 1, consider a weighted average of Sn and Sn*:

\[
\alpha S_n + (1-\alpha) S_n^* = \sum_{i=1}^n \big(\alpha W(t_{i-1}) + (1-\alpha)W(t_i)\big)\big(W(t_i) - W(t_{i-1})\big) \to \frac12\big(W^2(T) - W^2(0)\big) + \frac{T}{2} - \alpha T, \quad\text{as } n \to \infty.
\]

An interesting case is when the midpoint is used, i.e., $\alpha = \frac12$. The respective limit is called the Stratonovich integral of BM:

\[
\int_0^T W(t)\circ dW(t) = \frac12\big(W^2(T) - W^2(0)\big).
\]

The Stratonovich integral satisfies the usual rules of (nonstochastic) ordinary calculus such as the chain rule and integration by parts. The two types of stochastic integrals are related as

\[
\int_0^T W(t)\circ dW(t) = \int_0^T W(t)\,dW(t) + \frac{T}{2}.
\]
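The dependence on the choice of intermediate points is easy to see numerically. Here is a one-path sketch of ours (T, n, and the seed are arbitrary) comparing the left-endpoint (Itô) and averaged-endpoint (Stratonovich, α = 1/2) sums:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))           # W(t_0)=0, ..., W(t_n)

ito = np.sum(W[:-1] * np.diff(W))                    # s_i = t_{i-1}
strat = np.sum(0.5 * (W[:-1] + W[1:]) * np.diff(W))  # alpha = 1/2

print(ito, 0.5 * (W[-1]**2 - T))                     # Ito limit: (W^2(T) - T)/2
print(strat, 0.5 * W[-1]**2)                         # Stratonovich: W^2(T)/2
```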

For a continuously differentiable function f : ℝ → ℝ, it can be shown that the following conversion formula applies:

\[
\int_0^T f(W(t))\circ dW(t) = \int_0^T f(W(t))\,dW(t) + \frac12\int_0^T f'(W(t))\,dt,
\]

where the respective integrals correspond to the Stratonovich and Itô integrals of a differentiable function f(W(t)) of BM. We note, however, that we have yet to give a precise general definition of such stochastic integrals. This is the topic of the next section.

11.3 The Itô Integral and Its Basic Properties

11.3.1 The Itô Integral for Simple Processes

Our goal is to give a construction of the Itô stochastic integral w.r.t. standard Brownian motion. Generally, we shall assume that the integrand is some other stochastic process that is adapted to a chosen filtration 𝔽 = {ℱt}t≥0 for Brownian motion. We can, for instance, choose 𝔽 as the natural filtration for Brownian motion, 𝔽W = {ℱtW}t≥0. In what follows, we shall consider all processes defined on some time interval [0, T], for any T > 0, and we begin by considering a simple case with a piecewise-constant integrand.

Definition 11.1.

A continuous-time stochastic process {C(t)}0≤t≤T defined on the filtered probability space (Ω, ℱ, ℙ, 𝔽) is said to be a simple process (or step-stochastic process) if there exists a time partition Pn = {t0, t1, . . . , tn} of [0, T], where t0 = 0 and tn = T, such that the process C is constant on each subinterval [ti, ti+1), 0 ≤ i ≤ n − 1. In other words, there exist random variables ξ0, ξ1, . . . , ξn−1 such that:

  1. ξi is ℱti-measurable for i = 0, 1, . . . , n − 1 (i.e., the process C is adapted to 𝔽);
  2. $C(t) = \sum_{i=0}^{n-1} \xi_i\,\mathbb{I}_{[t_i, t_{i+1})}(t)$, i.e., $C(t) = \xi_i$ for $t \in [t_i, t_{i+1})$.

The simple process C is said to be square integrable if

\[
\mathrm{E}\Big[\int_0^T C^2(s)\,ds\Big] < \infty \iff \mathrm{E}[\xi_i^2] < \infty \quad\text{for } i = 0, 1, \ldots, n-1.
\]

The process {C(t)}0≤t≤T is defined as a right-continuous step process. For instance, a piecewise-constant approximation of Brownian motion is the simple process:

\[
C(t) = \xi_i \equiv W(t_i) \quad\text{for } t \in [t_i, t_{i+1}).
\]

Note that the process C(t) on the interval t ∈ [ti, ti+1) is fixed to BM at time ti (it is a Norm(0, ti) random variable). For any path ω, the graph of C(t, ω) as function of time t ∈ [0, T] is piecewise constant (step function) with fixed value C(t, ω) = W(ti, ω) ≡ ξi (ω) on every time interval t ∈ [ti, ti+1). Figure 11.1 depicts a sample path of BM and an approximation to it by the path of a simple process on the interval [0, 1].

Figure 11.1: A Brownian sample path and its approximation by a simple process.

The Itô integral of a simple process can be defined as a Riemann–Stieltjes sum evaluated at the left endpoint of subintervals [ti, ti+1). Hence, the simplest case of an Itô integral of an indicator function $\mathbb{I}_{[a,b]}(t)$, 0 ≤ a < b ≤ T, is the Riemann–Stieltjes integral w.r.t. W:

\[
\int_0^T \mathbb{I}_{[a,b]}(t)\,dW(t) = \int_a^b dW(t) = W(b) - W(a).
\]

The Riemann–Stieltjes integral of a step function (i.e., a linear combination of indicator functions) gives us the working definition of the Itô integral of a simple process as follows.

Definition 11.2.

The Itô integral I(t) of a simple process $C(s) = \sum_{i=0}^{n-1}\xi_i\,\mathbb{I}_{[t_i,t_{i+1})}(s)$ on any interval [0, t], 0 ≤ t ≤ T, is

\[
I(t) \equiv \int_0^t C(s)\,dW(s) = \sum_{i=0}^{k-1} \xi_i\,\big(W(t_{i+1}) - W(t_i)\big) + \xi_k\,\big(W(t) - W(t_k)\big), \tag{11.3}
\]

for $t_k \le t \le t_{k+1}$. For the integration interval [0, T], we obtain

\[
I(T) = \int_0^T C(s)\,dW(s) = \sum_{i=0}^{n-1} \xi_i\,\big(W(t_{i+1}) - W(t_i)\big).
\]
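As an illustration of Definition 11.2 (a sketch of our own, not from the text), the following computes the Itô integral of the simple process with levels ξi = W(ti) (the approximation in Figure 11.1) along one simulated path; by the earlier computation its value should be close to (W²(T) − T)/2:

```python
import numpy as np

def ito_integral_simple(xi, W_nodes):
    """Ito integral of a simple process: sum_i xi_i * (W(t_{i+1}) - W(t_i))."""
    return np.sum(xi * np.diff(W_nodes))

rng = np.random.default_rng(3)
T, n = 1.0, 1000
dt = T / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

xi = W[:-1]                              # xi_i = W(t_i), F_{t_i}-measurable
print(ito_integral_simple(xi, W))        # ~ (W^2(T) - T)/2 on this path
print(0.5 * (W[-1]**2 - T))
```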

The Itô integral of a general process X ≡ {X(t), t ≥ 0} adapted to a filtration 𝔽 for BM is defined as the mean-square limit of integrals of simple processes that approximate X. Consider a square-integrable continuous-time process {X(t)}0≤t≤T adapted to 𝔽:

\[
\mathrm{E}\Big[\int_0^T X^2(t)\,dt\Big] < \infty \quad\text{and}\quad X(t) \text{ is } \mathcal{F}_t\text{-measurable for } 0 \le t \le T.
\]

The process X on [0, T] can be approximated by a sequence of simple processes as follows:

  • select a partition Pn = {t0, t1, . . . , tn} of [0, T], e.g., $t_i = \frac{iT}{n}$ for i = 0, 1, . . . , n;
  • set ξi = X(ti) for i = 0, 1, . . . , n − 1;
  • set $C^{(n)}(t) = \sum_{i=0}^{n-1} \xi_i\,\mathbb{I}_{[t_i,t_{i+1})}(t)$.

As the maximum step size δ(Pn) goes to 0 as n → ∞, the sequence of simple processes {C(n)(t)}n≥1 gives a better and better approximation of the continuously varying process X. The precise convergence condition is specified by requiring that

\[
\lim_{n\to\infty} \mathrm{E}\Big[\int_0^T \big(X(t) - C^{(n)}(t)\big)^2\,dt\Big] = 0. \tag{11.4}
\]

Given an adapted process X satisfying the above square integrability condition, it can be proven that there exists such a sequence of square-integrable and adapted simple processes such that (11.4) holds. The corresponding sequence is said to approximate the process X. Then, the Itô integral I(t), 0 ≤ tT, of a general process X is defined as the mean-square limit of integrals of an approximating sequence of simple processes:

\[
I(t) \equiv \int_0^t X(s)\,dW(s) \equiv \lim_{n\to\infty} \int_0^t C^{(n)}(s)\,dW(s) \equiv \lim_{n\to\infty} I^{(n)}(t). \tag{11.5}
\]

The above mean-square limit really means that the sequence of Itô integral random variables {I(n)(t)}n≥1 converges to the random variable I(t) in the sense of L2(Ω), i.e., for each t ≥ 0 we have

\[
\lim_{n\to\infty} \mathrm{E}\big[\big(I(t) - I^{(n)}(t)\big)^2\big] = 0. \tag{11.6}
\]

The assumed square integrability condition on X ensures that I(t) exists and is given uniquely (a.s.). That is, for any approximating sequence satisfying the condition in (11.4), it can be shown that $\mathrm{E}\big[(I^{(m)}(t) - I^{(n)}(t))^2\big] \to 0$, as m, n → ∞. This implies that $\{I^{(n)}(t)\}_{n\ge 1}$ is a Cauchy sequence in L²(Ω) and therefore has a unique limit in the L² sense.

11.3.2 Properties of the Itô Integral

The Itô integral $I(t) \equiv I_X(t) \equiv \int_0^t X(s)\,dW(s)$ of a continuous-time stochastic process {X(t)}t≥0, which is adapted to a filtration 𝔽 = {ℱt}t≥0 for Brownian motion and assumed to satisfy the square-integrability condition $\mathrm{E}\big[\int_0^T X^2(t)\,dt\big] < \infty$, has the following properties.

  1. (1) Continuity. Sample paths {I(t; ω)}0≤t≤T are continuous functions of time t (a.s.).
  2. (2) Adaptivity. I(t) is ℱt-measurable for all t ∈ [0, T].
  3. (3) Linearity. Let $I_1(t) = \int_0^t X_1(s)\,dW(s)$ and $I_2(t) = \int_0^t X_2(s)\,dW(s)$ and assume that processes X1 and X2 meet the same requirements as those specified for process X above. Then, $c_1 I_1(t) + c_2 I_2(t) = \int_0^t \big(c_1 X_1(s) + c_2 X_2(s)\big)\,dW(s)$ for constants c1, c2 ∈ ℝ.
  4. (4) Martingale. {I(t)}0≤t≤T is a martingale w.r.t. filtration 𝔽.
  5. (5) Zero mean. E[I(t)] = 0 for 0 ≤ tT.
  6. (6) Itô isometry. $\mathrm{Var}(I(t)) = \mathrm{E}[I^2(t)] = \mathrm{E}\Big[\Big(\int_0^t X(s)\,dW(s)\Big)^2\Big] = \int_0^t \mathrm{E}[X^2(s)]\,ds$, i.e., the variance of an Itô integral on [0, t] is equal to a Riemann integral (w.r.t. time variable s) of the second moment of the integrand process as a function of time s ∈ [0, t].

Proof.

To simplify the proof, we suppose that {X(t), t ≥ 0} is a simple process having the form $X(t) = \sum_{i=0}^{n-1} \xi_i\,\mathbb{I}_{[t_i,t_{i+1})}(t)$ with each ξi being ℱti-measurable.

  1. (1) Fix ω ∈ Ω. Then I(t, ω) is a Riemann–Stieltjes integral of a piecewise-constant step function X(s, ω) with respect to the continuous integrator function W(s, ω) on the interval s ∈ [0, t]. Such an integral is a continuous function of the upper limit t.
  2. (2) Let t ∈ [tk, tk+1]. Then, it is clear from the expression in (11.3) that I(t) is ℱt-measurable since it is a function of only the Brownian motion values up to time t, and all the random variables ξi, 0 ≤ i ≤ k, are ℱti-measurable and hence ℱt-measurable.
  3. (3) Suppose that X1 and X2 are defined on the same partition Pn (otherwise we combine the partitions for X1 and X2) and are given by:

    \[
    X_j(t) = \sum_{i=0}^{n-1} \xi_i^{(j)}\,\mathbb{I}_{[t_i,t_{i+1})}(t), \qquad j = 1, 2.
    \]

    Then, $c_1 X_1(t) + c_2 X_2(t) = \sum_{i=0}^{n-1}\big(c_1\xi_i^{(1)} + c_2\xi_i^{(2)}\big)\,\mathbb{I}_{[t_i,t_{i+1})}(t)$ is a simple process. Integrate it on [0, T] to obtain

    \[
    \begin{aligned}
    \int_0^T \big(c_1 X_1(s) + c_2 X_2(s)\big)\,dW(s) &= \sum_{i=1}^n \big(c_1\xi_{i-1}^{(1)} + c_2\xi_{i-1}^{(2)}\big)\big(W(t_i) - W(t_{i-1})\big) \\
    &= c_1\sum_{i=1}^n \xi_{i-1}^{(1)}\big(W(t_i) - W(t_{i-1})\big) + c_2\sum_{i=1}^n \xi_{i-1}^{(2)}\big(W(t_i) - W(t_{i-1})\big) \\
    &= c_1\int_0^T X_1(s)\,dW(s) + c_2\int_0^T X_2(s)\,dW(s).
    \end{aligned}
    \]

  4. (4) Fix s and t such that 0 ≤ stT. Let us show that E[I(t) | ℱs] = I(s). Suppose that s ∈ [tm−1, tm] and t ∈ [tk−1, tk] for some 1 ≤ mkn. Represent the integral of X on [0, t] as follows:

    \[
    I(t) = \underbrace{\sum_{i=0}^{m-2}\xi_i\big(W(t_{i+1}) - W(t_i)\big) + \xi_{m-1}\big(W(s) - W(t_{m-1})\big)}_{=\,I(s)\ \text{is}\ \mathcal{F}_s\text{-measurable}} + \xi_{m-1}\big(W(t_m) - W(s)\big) + \sum_{j=m}^{k-2}\xi_j\big(W(t_{j+1}) - W(t_j)\big) + \xi_{k-1}\big(W(t) - W(t_{k-1})\big).
    \]

    By taking the expectation of I(t) conditional on ℱs and applying properties of conditional expectations, we obtain

    \[
    \mathrm{E}_s[I(t)] = I(s) + \xi_{m-1}\,\mathrm{E}_s[W(t_m) - W(s)] \ \ (\text{since } \xi_{m-1} \text{ is } \mathcal{F}_s\text{-measurable}) \ + \sum_{j=m}^{k-2}\mathrm{E}_s\big[\xi_j\big(W(t_{j+1}) - W(t_j)\big)\big] + \mathrm{E}_s\big[\xi_{k-1}\big(W(t) - W(t_{k-1})\big)\big].
    \]

    Since W(tm) − W(s) is independent of ℱs, we have

    \[
    \mathrm{E}_s[W(t_m) - W(s)] = \mathrm{E}[W(t_m) - W(s)] = 0.
    \]

    By using the tower property and the independence property, we obtain

    \[
    \mathrm{E}_s\big[\xi_j\big(W(t_{j+1}) - W(t_j)\big)\big] = \mathrm{E}_s\big[\xi_j\,\mathrm{E}_{t_j}[W(t_{j+1}) - W(t_j)]\big] = \mathrm{E}_s\big[\xi_j\,\mathrm{E}[W(t_{j+1}) - W(t_j)]\big] = 0
    \]

    for j = m, . . . , k − 2. A similar step can be applied to the last expectation: $\mathrm{E}_s\big[\xi_{k-1}\,\mathrm{E}_{t_{k-1}}[W(t) - W(t_{k-1})]\big] = \mathrm{E}_s\big[\xi_{k-1}\,\mathrm{E}[W(t) - W(t_{k-1})]\big] = 0$. Therefore, we have Es[I(t)] = I(s) for 0 ≤ s ≤ t ≤ T. Since the Itô integral is adapted to 𝔽 and is assumed integrable, E[|I(t)|] < ∞, it is a martingale w.r.t. 𝔽.

  5. (5) Since the integral process I is a martingale, the expected value E[I(t)] is constant and equal to E[I(0)] = 0 for all 0 ≤ tT. We note that the martingale property also gives us the identity

    \[
    \mathrm{E}\Big[\int_s^t X(u)\,dW(u)\,\Big|\,\mathcal{F}_s\Big] = \mathrm{E}[I(t) - I(s)\,|\,\mathcal{F}_s] = \mathrm{E}[I(t)\,|\,\mathcal{F}_s] - I(s) = I(s) - I(s) = 0.
    \]

  6. (6) For ease of presentation, we assume that t = tk, for some k = 1, . . . , n, since the proof follows similarly for any value of t ≥ 0. Then,

    \[
    I(t_k) = \sum_{i=0}^{k-1}\xi_i\big(W(t_{i+1}) - W(t_i)\big) = \sum_{i=0}^{k-1}\xi_i Z_i,
    \]

    where $Z_i \equiv W(t_{i+1}) - W(t_i) \sim \mathrm{Norm}(0, t_{i+1} - t_i)$, i = 0, . . . , k − 1, are mutually independent random variables. Note that each Zi is independent of ξi since ξi is ℱti-measurable and the Brownian increment W(ti+1) − W(ti) is independent of ℱti. Squaring I(tk) and taking its expectation gives

    \[
    \mathrm{E}[I^2(t_k)] = \sum_{i=0}^{k-1}\mathrm{E}[\xi_i^2 Z_i^2] + 2\sum_{0\le i<j\le k-1}\mathrm{E}[\xi_i\xi_j Z_i Z_j].
    \]

    The second summation involves expectations of products with i < j, i.e., j ≥ i + 1, and hence ξi, ξj and Zi are ℱtj-measurable. Applying the tower property by conditioning on ℱtj then gives

    \[
    \mathrm{E}[\xi_i\xi_j Z_i Z_j] = \mathrm{E}\big[\mathrm{E}[\xi_i\xi_j Z_i Z_j\,|\,\mathcal{F}_{t_j}]\big] \ (\xi_i\xi_j Z_i\ \text{is}\ \mathcal{F}_{t_j}\text{-measurable}) = \mathrm{E}\big[\xi_i\xi_j Z_i\,\mathrm{E}[Z_j\,|\,\mathcal{F}_{t_j}]\big] = \mathrm{E}\big[\xi_i\xi_j Z_i\,\mathrm{E}[Z_j]\big] = 0.
    \]

    Here we used the fact that Zj ≡ W(tj+1) − W(tj) is independent of ℱtj and E[Zj] = 0. [We note that the result is also more simply derived by using the independence of Zj and ξiξjZi.] Therefore, the above second summation is zero and we have from the first sum (upon using the independence of ξi and Zi):

    \[
    \mathrm{E}[I^2(t_k)] = \sum_{i=0}^{k-1}\mathrm{E}[\xi_i^2 Z_i^2] = \sum_{i=0}^{k-1}\mathrm{E}[\xi_i^2]\,\mathrm{E}[Z_i^2] = \sum_{i=0}^{k-1}\mathrm{E}[\xi_i^2]\,(t_{i+1} - t_i) = \int_0^{t_k}\mathrm{E}[X^2(s)]\,ds = \mathrm{E}\Big[\int_0^{t_k}X^2(s)\,ds\Big],
    \]

    where we used the step function $\mathrm{E}[X^2(s)] = \sum_{i=0}^{k-1}\mathrm{E}[\xi_i^2]\,\mathbb{I}_{[t_i,t_{i+1})}(s)$, for $0 \le s \le t_k$.

It is important to note that not all properties that are valid for Riemann integrals are necessarily true for Itô integrals. For example, suppose that two processes X and Y satisfy X(t) ≤ Y(t) (a.s.), i.e., ℙ(X(t) ≤ Y(t)) = 1, for 0 ≤ t ≤ T. Then, it is true that $\int_0^t X(s)\,ds \le \int_0^t Y(s)\,ds$ (a.s.) for 0 ≤ t ≤ T. However, this type of integral inequality property is not generally valid for Itô integrals $I_X(t) = \int_0^t X(s)\,dW(s)$ and $I_Y(t) = \int_0^t Y(s)\,dW(s)$. For example, consider the trivial case of constant processes X(t) ≡ 0 and Y(t) ≡ 1. Clearly, ℙ(X(t) ≤ Y(t)) = ℙ(0 ≤ 1) = 1. However, IX(t) ≡ 0 and $I_Y(t) = \int_0^t dW(s) = W(t)$ so that ℙ(IX(t) ≤ IY(t)) = ℙ(0 ≤ W(t)) = 1/2 ≠ 1.
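The zero-mean and isometry properties above are also easy to check by simulation. A minimal sketch of ours (parameters arbitrary) for the integrand X(t) = W(t), for which $\int_0^t \mathrm{E}[W^2(s)]\,ds = t^2/2$:

```python
import numpy as np

rng = np.random.default_rng(4)
t, n, paths = 1.0, 500, 40000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.hstack([np.zeros((paths, 1)), np.cumsum(dW, axis=1)])
I = np.sum(W[:, :-1] * np.diff(W, axis=1), axis=1)   # left-endpoint Ito sums

print(I.mean())              # ~ 0   (zero-mean / martingale property)
print(I.var(), t**2 / 2)     # ~ 0.5 (Ito isometry)
```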

Example 11.2

Show whether or not the following integrals are well-defined.

  1. (a) $\int_0^1 e^{W(t)}\,dW(t)$.
  2. (b) $\int_0^1 W(t+1)\,dW(t)$.
  3. (c) $\int_0^t e^{W^2(s)}\,dW(s)$ for t ≥ 0.
  4. (d) $\int_0^1 (1-t)^{-\alpha}\,dW(t)$ for α ∈ ℝ.

Solution.

  1. (a) For 0 ≤ t ≤ 1, the integrand $e^{W(t)}$ is ℱt-measurable. So the integrand process, $X(t) := e^{W(t)}$, is adapted to a filtration for Brownian motion. Now, we check if the integrand is square integrable:

    \[
    \int_0^1 \mathrm{E}\big[e^{2W(t)}\big]\,dt = \int_0^1 e^{2t}\,dt = \frac{e^2 - 1}{2} < \infty.
    \]

    Therefore, the integral is defined.

  2. (b) Note, in this case the integrand process X(t) ≔ W(t + 1) is ℱt+1-measurable, but not ℱt-measurable and hence the Itô integral is not defined.
  3. (c) First, find the second moment of the integrand:

    \[
    \mathrm{E}\big[e^{2W^2(s)}\big] = \int_{-\infty}^\infty e^{2sz^2}\,n(z)\,dz = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-(\frac12 - 2s)z^2}\,dz
    \]

    and this has finite value $\frac{1}{\sqrt{1-4s}}$ iff $s < \frac14$. Now, the integral of $\mathrm{E}[e^{2W^2(s)}]$ on s ∈ [0, t] is finite iff $0 \le t \le \frac14$. So the Itô integral is defined for $t \in [0, \frac14]$.

  4. (d) Note that the integrand $X(t) = (1-t)^{-\alpha}$ is just an ordinary function of time t, and $\int_0^1 \mathrm{E}[X^2(t)]\,dt = \int_0^1 (1-t)^{-2\alpha}\,dt < \infty$ iff $\alpha < \frac12$. So the Itô integral is defined iff $\alpha < \frac12$.

Before discussing further properties of the Itô integral, we now present a useful formula, which follows from the Itô isometry, for computing the covariance between two Itô integrals (w.r.t. the same Brownian motion). In particular, let X and Y be two adapted processes such that each satisfies the square integrability condition, i.e., assume $\int_0^t \mathrm{E}[X^2(s)]\,ds < \infty$ and $\int_0^t \mathrm{E}[Y^2(s)]\,ds < \infty$. Then, $I_X(t) \equiv \int_0^t X(s)\,dW(s)$ and $I_Y(t) \equiv \int_0^t Y(s)\,dW(s)$ have covariance

\[
\mathrm{E}[I_X(t) I_Y(t)] \equiv \mathrm{E}\Big[\int_0^t X(s)\,dW(s)\int_0^t Y(s)\,dW(s)\Big] = \int_0^t \mathrm{E}[X(s)Y(s)]\,ds. \tag{11.7}
\]

Note that the Itô integrals have zero mean, E[IX(t)] = E[IY(t)] = 0. Hence, their covariance Cov(IX(t), IY(t)) = E[IX(t)IY(t)]. The formula in (11.7) is readily proven by writing the product $I_X I_Y = \frac12(I_X + I_Y)^2 - \frac12 I_X^2 - \frac12 I_Y^2 = \frac12 I_{X+Y}^2 - \frac12 I_X^2 - \frac12 I_Y^2$. Using linearity of expectations and applying the Itô isometry three times gives the result:

\[
\begin{aligned}
\mathrm{E}[I_X(t) I_Y(t)] &= \frac12\Big(\mathrm{E}[I_{X+Y}^2(t)] - \mathrm{E}[I_X^2(t)] - \mathrm{E}[I_Y^2(t)]\Big) \\
&= \frac12\Big(\int_0^t\mathrm{E}\big[(X(s) + Y(s))^2\big]\,ds - \int_0^t\mathrm{E}[X^2(s)]\,ds - \int_0^t\mathrm{E}[Y^2(s)]\,ds\Big) \\
&= \int_0^t\mathrm{E}\Big[\frac12\big(X(s) + Y(s)\big)^2 - \frac12 X^2(s) - \frac12 Y^2(s)\Big]\,ds = \int_0^t\mathrm{E}[X(s)Y(s)]\,ds.
\end{aligned}
\]

The result in (11.7) also leads to a formula for the covariance, Cov(IX(t), IY(u)) = E[IX(t)IY(u)], between two Itô integrals at different times, 0 ≤ tu:

\[
\mathrm{E}[I_X(t) I_Y(u)] \equiv \mathrm{E}\Big[\int_0^t X(s)\,dW(s)\int_0^u Y(s)\,dW(s)\Big] = \int_0^t \mathrm{E}[X(s)Y(s)]\,ds. \tag{11.8}
\]

This follows from the martingale property of an Itô integral and by conditioning on ℱt, with IX(t) as ℱt-measurable, while using the tower property:

\[
\mathrm{E}[I_X(t) I_Y(u)] = \mathrm{E}\big[\mathrm{E}[I_X(t) I_Y(u)\,|\,\mathcal{F}_t]\big] = \mathrm{E}\big[I_X(t)\,\mathrm{E}[I_Y(u)\,|\,\mathcal{F}_t]\big] = \mathrm{E}[I_X(t) I_Y(t)].
\]

11.4 Itô Processes and Their Properties

11.4.1 Gaussian Processes Generated by Itô Integrals

The Itô integral of a nonrandom (ordinary) differentiable function f can be considered as a Riemann–Stieltjes integral with any path of Brownian motion acting as the integrator function w.r.t. time. Thus it can be reduced to a Riemann integral by using the integration by parts formula:

\[
I(t) = \int_0^t f(s)\,dW(s) = f(t)W(t) - f(0)W(0) - \int_0^t f'(s)W(s)\,ds.
\]

We know that the Riemann integral of Brownian motion is a Gaussian process. In a similar way, one can prove that the integral $\int_0^t f'(s)W(s)\,ds$ is a Gaussian process as well (being considered as a function of the upper limit t). Thus, I(t) is a Gaussian process as well. This result is proved below for a more general case.

Theorem 11.3.

Let f: [0, ∞) → ℝ be a nonrandom function such that $\int_0^T f^2(t)\,dt < \infty$ for some T > 0. Then, the Itô integral $I(t) = \int_0^t f(s)\,dW(s)$, $0 \le t \le T$, is a Gaussian process with mean zero and covariance function given by

\[
c_I(t,s) \equiv \mathrm{Cov}(I(t), I(s)) = \int_0^{t\wedge s} f^2(u)\,du, \qquad 0 \le t, s \le T.
\]

Proof. It suffices to show the property for 0 ≤ s ≤ t ≤ T where t ∧ s = s. By the zero mean property of Itô integrals, we have mI(t) = E[I(t)] ≡ 0. For the covariance function cI(t, s) we have, upon using the tower property and martingale property of the Itô integral,

\[
c_I(t,s) \equiv \mathrm{E}\Big[\int_0^t f(u)\,dW(u)\int_0^s f(u)\,dW(u)\Big] = \mathrm{E}[I(t)I(s)] = \mathrm{E}\big[I(s)\,\mathrm{E}[I(t)\,|\,\mathcal{F}_s]\big] = \mathrm{E}[I^2(s)].
\]

This expectation is evaluated by the Itô isometry formula:

\[
\mathrm{E}[I^2(s)] \equiv \mathrm{E}\Big[\Big(\int_0^s f(u)\,dW(u)\Big)^2\Big] = \int_0^s \mathrm{E}[f^2(u)]\,du = \int_0^s f^2(u)\,du
\]

where f(u) is nonrandom. Finally, by using the Itô formula presented in the next subsection, we can obtain the moment generating function of I(t):

\[
M_{I(t)}(\alpha) \equiv \mathrm{E}\big[e^{\alpha I(t)}\big] = e^{\frac12\alpha^2\int_0^t f^2(s)\,ds}, \qquad \alpha \in \mathbb{R}.
\]

This is the unique moment generating function of a normal random variable with mean zero and variance $\int_0^t f^2(s)\,ds$. Therefore, $I(t) \sim \mathrm{Norm}\big(0, \int_0^t f^2(s)\,ds\big)$.

It should be remarked that the above result can be stated as

\[
\int_0^t f(s)\,dW(s) \sim \mathrm{Norm}\Big(0, \int_0^t f^2(s)\,ds\Big) \overset{d}{=} W(g(t))
\]

where g is a function of time t as defined by $g(t) \equiv \int_0^t f^2(s)\,ds$. That is, the Itô integral of the ordinary function f on [0, t] has the same distribution as standard Brownian motion at a time given by g(t). This is a simple type of time-changed Brownian motion where in this case the time change, g(t), is an ordinary function of time t.

Example 11.3

The process $X(t) = \int_0^t s\,dW(s)$ is a Gaussian process with mean zero and variance $\mathrm{Var}(X(t)) = \int_0^t s^2\,ds = t^3/3$, i.e., $X(t) \overset{d}{=} W(g(t))$ where $g(t) = t^3/3$.
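A short simulation sketch of ours (sizes and seed arbitrary) illustrating this example: the left-endpoint sums for $\int_0^t s\,dW(s)$ have sample variance close to $t^3/3$, matching the time-changed BM $W(t^3/3)$ in distribution:

```python
import numpy as np

rng = np.random.default_rng(5)
t, n, paths = 1.0, 1000, 30000
dt = t / n
left = np.arange(n) * dt                          # left endpoints s_0, ..., s_{n-1}
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
X = dW @ left                                     # sum_i s_{i-1} (W(s_i) - W(s_{i-1}))

print(X.mean())                                   # ~ 0
print(X.var(), t**3 / 3)                          # both ~ 1/3
```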

11.4.2 Itô Processes

The sum of an Itô integral of a stochastic process and an ordinary (Riemann) integral generates another stochastic process called an Itô process.

Definition 11.3.

Let {µ(t)}t≥0 and {σ(t)}t≥0 be adapted to a filtration {ℱt}t≥0 for standard Brownian motion and satisfy

\[
\int_0^T \mathrm{E}[|\mu(t)|]\,dt < \infty \quad\text{and}\quad \int_0^T \mathrm{E}[\sigma^2(t)]\,dt < \infty.
\]

Then, the process

\[
X(t) = X_0 + \int_0^t \mu(s)\,ds + \int_0^t \sigma(s)\,dW(s) \tag{11.9}
\]

is well-defined for 0 ≤ t ≤ T. It is called an Itô process. The processes {µ(t)}t≥0 and {σ(t)}t≥0 are respectively called the drift coefficient process and the diffusion or volatility coefficient process.

The Itô process X can also be described by its so-called stochastic differential equation (SDE) which is obtained by “formally differentiating” (11.9) w.r.t. the time parameter t:

\[
dX(t) = \mu(t)\,dt + \sigma(t)\,dW(t). \tag{11.10}
\]

We note that this SDE, along with the initial condition X(0) = X0, is a shorthand way of writing the stochastic integral equation in (11.9). We interpret (11.10) through (11.9), where the latter has proper mathematical meaning as a sum of a Riemann integral and an Itô stochastic integral. That is, the Itô process X ≡ {X(t)}t≥0 can be viewed as a solution to the SDE in (11.10) with the initial condition X(0) = X0. The differential representation in (11.10) only has rigorous mathematical meaning by way of the respective integral representations in (11.9).

Some examples of Itô processes are as follows.

  1. (a) Let X(0) = x0, µ(t) ≡ µ and σ(t) ≡ σ be constants. Then, we obtain a drifted BM (i.e., BM with constant drift µ):

    \[
    X(t) = x_0 + \int_0^t \mu\,ds + \int_0^t \sigma\,dW(s) = x_0 + \mu t + \sigma W(t).
    \]

  2. (b) Let µ = µ(t, X(t)) and σ = σ(t, X(t)) be functions of both time t and the process value X(t) at time t. The Itô process implicitly defined by the stochastic integral equation

    \[
    X(t) = X_0 + \int_0^t \mu(s, X(s))\,ds + \int_0^t \sigma(s, X(s))\,dW(s)
    \]

    is called a diffusion process.

  3. (c) Let µ(t) and σ(t) be nonrandom (ordinary) functions of time t. Then,

    \[
    X(t) = X_0 + \int_0^t \mu(s)\,ds + \int_0^t \sigma(s)\,dW(s), \qquad t \ge 0,
    \]

    with constant X0 ∈ ℝ, is a Gaussian process with mean and covariance functions

    \[
    m_X(t) = X_0 + \int_0^t \mu(u)\,du \quad\text{and}\quad c_X(t,s) = \int_0^{t\wedge s} \sigma^2(u)\,du.
    \]
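For instance, the Gaussian case (c) can be checked by direct simulation. A minimal sketch of our own, with hypothetical coefficient choices µ(t) = sin t, σ(t) = e^{−t} and X0 = 1:

```python
import numpy as np

rng = np.random.default_rng(6)
X0, t, n, paths = 1.0, 1.0, 1000, 30000
dt = t / n
u = np.arange(n) * dt                                 # left endpoints
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
X_t = X0 + np.sum(np.sin(u)) * dt + dW @ np.exp(-u)   # Riemann + Ito parts

print(X_t.mean(), X0 + (1 - np.cos(t)))               # mean:     X0 + int sin(u) du
print(X_t.var(), (1 - np.exp(-2 * t)) / 2)            # variance: int e^{-2u} du
```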

The Itô process defined in (11.9) is given by a sum of a Riemann integral of µ and an Itô integral of σ. Both integrals, considered as functions of the upper limit t, have continuous sample paths. Therefore, the Itô process has continuous sample paths as well.

So far, we have defined the Itô process as a stochastic integral w.r.t. Brownian motion. More generally, we can also define a stochastic integral w.r.t. an Itô process. Let the process {Y(t)}t≥0 be adapted to a filtration for BM. We define the stochastic integral of Y w.r.t. the Itô process X, defined in (11.9), as follows:

\[
\int_0^t Y(s)\,dX(s) \equiv \int_0^t Y(s)\mu(s)\,ds + \int_0^t Y(s)\sigma(s)\,dW(s), \qquad t \ge 0.
\]

Note that this is like substituting the stochastic differential dX(s) = µ(s) ds + σ(s) dW(s) (given by (11.10)) into the left-hand integral and writing it as a sum of a Riemann and Itô integral. Note that in case the process is standard Brownian motion, i.e., X(t) = W(t) with µ ≡ 0, σ ≡ 1, we simply recover $\int_0^t Y(s)\,dW(s)$, the stochastic integral of Y w.r.t. Brownian motion.

11.4.3 Quadratic (Co-) Variation

An important characteristic of a stochastic process is the quadratic variation that measures the accumulated variability of the process along its path. The quadratic variation is a path-dependent quantity. Recall that for Brownian motion we derived its quadratic variation on a time interval [0, t] as [W, W](t) = t. So Brownian motion accumulates quadratic variation at rate one per unit time. This gives us a simple differential “rule”:

\[
d[W,W](t) \equiv dW(t)\,dW(t) \equiv (dW(t))^2 = dt.
\]

A practical way of thinking about this result is to say that a Brownian increment is of order O((dt)^{1/2}) as dt → 0. We essentially already used this fact in showing the non-differentiability of Brownian paths. We also saw that, formally, the quadratic variation of a continuously differentiable function f is zero. This fact is also realized by noting that $d[f,f](t) = (df(t))^2 = (f'(t))^2 (dt)^2 = O((dt)^2)$ is negligible as dt → 0.

One can prove that the quadratic variation of the Itô integral $I_X(t) = \int_0^t X(s)\,dW(s)$ is

\[
[I_X, I_X](t) = \int_0^t X^2(s)\,ds, \qquad t \ge 0. \tag{11.11}
\]

So, the integral IX(t) accumulates quadratic variation at the (generally random) rate of X2(t) per unit time at every time t ≥ 0. That is, in differential form (11.11) gives us the “rule”:

\[
d[I_X, I_X](t) \equiv dI_X(t)\,dI_X(t) \equiv (dI_X(t))^2 = X^2(t)\,dt.
\]

Similarly, we can define the quadratic covariation of two processes:

\[
[X,Y](t) = \lim_{\delta(P_n)\to 0} \sum_{i=1}^n \big(X(t_i) - X(t_{i-1})\big)\big(Y(t_i) - Y(t_{i-1})\big). \tag{11.12}
\]

Clearly, the quadratic covariation is a bilinear functional. Let us consider several examples.

  1. Let X(t) be a continuously differentiable (C¹(ℝ)) function of time that satisfies $dX(t) = \mu_X(t)\,dt$ and let Y(t) be an Itô process. Then, [X, Y](t) = 0 for t ≥ 0. In differential form, this fact reads as dX(t) dY(t) = 0. Since Brownian motion is itself an Itô process and the function X(t) = t belongs to C¹(ℝ), we have [t, W](t) = 0. This last fact is recorded in differential form as the “rule”: dt dW(t) = 0, i.e., an infinitesimal time increment times an infinitesimal Brownian increment gives zero.

    Proof.

    For t ≥ 0,

    \[
    |[X,Y](t)| \le \lim_{\delta(P_n)\to 0}\Big|\sum_{i=1}^n \big(X(t_i) - X(t_{i-1})\big)\big(Y(t_i) - Y(t_{i-1})\big)\Big| \le \lim_{\delta(P_n)\to 0} \underbrace{\max_{1\le i\le n}\big|Y(t_i) - Y(t_{i-1})\big|}_{\to\,0\ \text{a.s.}} \cdot \underbrace{\sum_{i=1}^n \big|X(t_i) - X(t_{i-1})\big|}_{\to\,V^{(1)}_X(t)\,<\,\infty} = 0.
    \]

    Here we applied the Heine–Cantor theorem, which states that a continuous function (in this case Y is a.s. continuous) on a finite interval is uniformly continuous, and the fact that sample paths of X have finite first variation.

  2. The covariation of two Itô processes X and Y defined by

    \[
    X(t) = X(0) + \int_0^t \mu_X(s)\,ds + \int_0^t \sigma_X(s)\,dW(s)
    \]

    and

    \[
    Y(t) = Y(0) + \int_0^t \mu_Y(s)\,ds + \int_0^t \sigma_Y(s)\,dW(s)
    \]

    is given by

    \[
    [X,Y](t) = \int_0^t \sigma_X(s)\sigma_Y(s)\,ds. \tag{11.13}
    \]

A simple (heuristic) way to arrive at this result is to make recourse to the simple differential rules. In particular, the two processes satisfy

\[
dX(t) = \mu_X(t)\,dt + \sigma_X(t)\,dW(t) \quad\text{and}\quad dY(t) = \mu_Y(t)\,dt + \sigma_Y(t)\,dW(t).
\]

Hence, by multiplying out all terms in the differentials, we have

\[
dX(t)\,dY(t) = \big\{\mu_X(t)\,dt + \sigma_X(t)\,dW(t)\big\}\big\{\mu_Y(t)\,dt + \sigma_Y(t)\,dW(t)\big\} = \mu_X(t)\mu_Y(t)\,(dt)^2 + \big(\mu_X(t)\sigma_Y(t) + \mu_Y(t)\sigma_X(t)\big)\,dt\,dW(t) + \sigma_X(t)\sigma_Y(t)\,(dW(t))^2.
\]

Now, using the rules (dt)² ≡ 0, dt dW(t) ≡ 0 and (dW(t))² = dt gives the differential form of the quadratic covariation in (11.13):

\[
d[X,Y](t) \equiv dX(t)\,dY(t) = \sigma_X(t)\sigma_Y(t)\,dt.
\]

An important application of quadratic covariation is the integration by parts formula given just below. Consider a sequence of partitions {Pn}n≥1 of [0, t] (such that δ(Pn) → 0, as n → ∞) and rewrite the sum of products of increments of X and Y on the partition Pn as follows:

\[
\sum_{i=1}^n \big(X(t_i) - X(t_{i-1})\big)\big(Y(t_i) - Y(t_{i-1})\big) = \underbrace{\sum_{i=1}^n \big(X(t_i)Y(t_i) - X(t_{i-1})Y(t_{i-1})\big)}_{=\,X(t)Y(t) - X(0)Y(0)} - \underbrace{\sum_{i=1}^n X(t_{i-1})\big(Y(t_i) - Y(t_{i-1})\big)}_{\to\,\int_0^t X(s)\,dY(s),\ \text{as}\ n\to\infty} - \underbrace{\sum_{i=1}^n Y(t_{i-1})\big(X(t_i) - X(t_{i-1})\big)}_{\to\,\int_0^t Y(s)\,dX(s),\ \text{as}\ n\to\infty}.
\]

Thus, the quadratic covariation of two Itô processes is

\[
[X,Y](t) = X(t)Y(t) - X(0)Y(0) - \int_0^t X(s)\,dY(s) - \int_0^t Y(s)\,dX(s).
\]

Alternatively, we write

\[
X(t)Y(t) - X(0)Y(0) = \int_0^t X(s)\,dY(s) + \int_0^t Y(s)\,dX(s) + [X,Y](t). \tag{11.14}
\]

In differential form this gives us the important Itô product rule:

\[
d\big(X(t)Y(t)\big) = X(t)\,dY(t) + Y(t)\,dX(t) + dX(t)\,dY(t). \tag{11.15}
\]

The reader should observe that the stochastic differential of a product of two processes does not obey the same differential product rule as in ordinary calculus. The extra term dX(t) dY(t) = d[X, Y](t) is the product of the two differentials, which is generally nonzero. In particular, if both processes X and Y are driven by a Brownian increment dW(t) then their paths are nondifferentiable and hence the quadratic covariation [X, Y](t) is nonzero. Later we shall see that the above product rule also follows as a special case of the Itô formula derived for smooth functions of two processes.
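A one-path numerical sketch of ours (parameters arbitrary) of formula (11.13): for X(t) = W(t) (so σ_X ≡ 1) and $Y(t) = \int_0^t s\,dW(s)$ (so σ_Y(s) = s), the sum of increment products should approach $\int_0^t s\,ds = t^2/2$:

```python
import numpy as np

rng = np.random.default_rng(7)
t, n = 1.0, 200_000
dt = t / n
s = np.arange(n) * dt                   # left endpoints
dW = rng.normal(0.0, np.sqrt(dt), size=n)
dX = dW                                 # sigma_X == 1
dY = s * dW                             # sigma_Y(s) = s

print(np.sum(dX * dY), t**2 / 2)        # covariation [X,Y](t) ~ 0.5
```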

11.5 Itô's Formula for Functions of BM and Itô Processes

11.5.1 Itô's Formula for Functions of BM

The Itô formula is a stochastic chain rule that allows us to find stochastic differentials of functions of Brownian motion as well as functions of an Itô process. The ordinary chain rule written for two differentiable functions f and g is as follows:

  • $\frac{d}{dt}f(g(t)) = f'(g(t))\,g'(t)$ (derivative form);
  • $df(g(t)) = f'(g(t))\,dg(t)$ (differential form);
  • $f(g(t)) - f(g(0)) = \int_0^t f'(g(s))\,dg(s)$ (integral form).

However, we cannot immediately apply this rule to f(W(t)) since Brownian motion W has nondifferentiable sample paths. Assume that f has continuous derivatives of first, second, and higher orders. Consider the Taylor series expansion for a smooth function f about the value W(t):

\[
f(W(t+\delta t)) - f(W(t)) = \underbrace{f'(W(t))\,\big(W(t+\delta t) - W(t)\big)}_{\text{of order}\ (\delta t)^{1/2}} + \underbrace{\tfrac12 f''(W(t))\,\big(W(t+\delta t) - W(t)\big)^2}_{\text{of order}\ \delta t} + \underbrace{\tfrac16 f'''(W(t))\,\big(W(t+\delta t) - W(t)\big)^3}_{\text{of order}\ (\delta t)^{3/2}} + \cdots,
\]

where δt is a small time increment. A heuristic argument that leads us to the simplest version of the Itô formula goes as follows. In the infinitesimal limit, we take δt → dt and W(t + δt) − W(t) → dW(t), and we neglect all terms of order (δt)^{3/2} and smaller (i.e., of power 3/2 or higher in δt) to obtain

\[
df(W(t)) = f'(W(t))\,dW(t) + \tfrac12 f''(W(t))\,(dW(t))^2.
\]

By applying the simple rule (dW(t))² = dt, we obtain the Itô formula for f(W(t)), which can be stated in the respective differential and integral forms:

\[
df(W(t)) = \tfrac12 f''(W(t))\,dt + f'(W(t))\,dW(t), \tag{11.16}
\]

\[
\int_0^t df(W(s)) \equiv f(W(t)) - f(W(0)) = \frac12\int_0^t f''(W(s))\,ds + \int_0^t f'(W(s))\,dW(s). \tag{11.17}
\]

This formula holds for any twice continuously differentiable function f ∈ C²(ℝ). The expression (11.17) tells us that f(W) ≔ {f(W(t))}t≥0 is an Itô process.

Only the skeleton of a proof of the Itô formula (11.17) is outlined below:

Proof.

Let Pn = {ti}0≤i≤n be a partition of [0, t]. Write f(W(t)) − f(W(0)) as a telescopic sum over points of Pn:

\[
f(W(t)) - f(W(0)) = \sum_{i=1}^n \big(f(W(t_i)) - f(W(t_{i-1}))\big).
\]

Now, apply Taylor's expansion formula to each term of the above sum:

\[
f(W(t_i)) - f(W(t_{i-1})) = f'(W(t_{i-1}))\,\big(W(t_i) - W(t_{i-1})\big) + \tfrac12 f''(\theta_i)\,\big(W(t_i) - W(t_{i-1})\big)^2,
\]

where θi lies between W(ti-1) and W(ti) for i = 1, 2, . . . , n. By taking limits as n → ∞ and δ(Pn) → 0, the partial sums converge (in L2(Ω)) to the respective integrals:

\[
\sum_{i=1}^n f'(W(t_{i-1}))\,\big(W(t_i) - W(t_{i-1})\big) \to \int_0^t f'(W(s))\,dW(s) \quad (\text{an Itô integral}),
\]
\[
\sum_{i=1}^n f''(\theta_i)\,\big(W(t_i) - W(t_{i-1})\big)^2 \to \int_0^t f''(W(s))\,ds \quad (\text{a Riemann integral}).
\]

Example 11.4

Find the stochastic differential df(W(t)) for functions:

  1. (a) $f(x) = x^n$, n ∈ ℕ;
  2. (b) $f(x) = e^{\alpha x}$, α ∈ ℝ.

Solution.

  1. (a) Differentiating gives $f'(x) = nx^{n-1}$, $f''(x) = n(n-1)x^{n-2}$. Thus, (11.17) with $f(W(t)) = W^n(t)$, $f(W(0)) = W^n(0) = 0$ reads

    \[
    W^n(t) = \frac{n(n-1)}{2}\int_0^t W^{n-2}(s)\,ds + n\int_0^t W^{n-1}(s)\,dW(s).
    \]

    The differential form of the above representation is

    \[
    dW^n(t) = \frac{n(n-1)}{2}\,W^{n-2}(t)\,dt + n\,W^{n-1}(t)\,dW(t).
    \]

    By taking n = 2, we now also recover the well-known formula of the Itô integral of Brownian motion that we derived previously:

    \[
    W^2(t) = \int_0^t ds + 2\int_0^t W(s)\,dW(s) \implies \int_0^t W(s)\,dW(s) = \frac12 W^2(t) - \frac{t}{2}.
    \]

    For n = 3, we have

    \[
    W^3(t) = 3\int_0^t W(s)\,ds + 3\int_0^t W^2(s)\,dW(s).
    \]

    Note that the Itô integral $\int_0^t W^2(s)\,dW(s)$ is a (square-integrable) martingale with the property $\int_0^t \mathrm{E}[W^4(s)]\,ds < \infty$. Hence, the process defined by $Y(t) \equiv W^3(t) - 3\int_0^t W(s)\,ds$, $t \ge 0$, is a martingale w.r.t. any Brownian filtration. We have proven this fact earlier in Example 11.1, but now it follows simply by applying an appropriate Itô formula and from the martingale property of the Itô integral.

  2. (b) Differentiating gives f′(x) = αf(x) and f″(x) = α²f(x). Denote $X(t) = f(W(t)) = e^{\alpha W(t)}$. Recall that X is a geometric Brownian motion (GBM). Now, by applying the Itô formula in (11.16) we have the stochastic differential of GBM:

    \[
    dX(t) = \frac{\alpha^2}{2}X(t)\,dt + \alpha X(t)\,dW(t).
    \]
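The integral form (11.17) can also be verified path-wise by simulation. A sketch of ours (one path; sizes and seed arbitrary) for the case n = 3 above, where the two discretized integrals should reproduce W³(t):

```python
import numpy as np

rng = np.random.default_rng(8)
t, n = 1.0, 100_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

ito_term = 3.0 * np.sum(W[:-1]**2 * np.diff(W))   # 3 * int W^2 dW, left endpoints
riemann_term = 3.0 * np.sum(W[:-1]) * dt          # 3 * int W ds
print(ito_term + riemann_term, W[-1]**3)          # approximately equal
```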

There are various important extensions of the Itô formula. In particular, consider the case of a stochastic process defined by X(t) ≔ f(t, W(t)), t ≥ 0, where the function f(t, x) ∈ C1,2, i.e., we assume that the functions $f_t(t,x) \equiv \frac{\partial f}{\partial t}(t,x)$, $f_x(t,x) \equiv \frac{\partial f}{\partial x}(t,x)$, $f_{tx}(t,x) \equiv \frac{\partial^2 f}{\partial t\,\partial x}(t,x)$, and $f_{xx}(t,x) \equiv \frac{\partial^2 f}{\partial x^2}(t,x)$ are continuous. Let us heuristically apply a Taylor expansion to the differential df(t, W(t)) = f(t + dt, W(t) + dW(t)) − f(t, W(t)) and keep only terms up to second order in the Brownian increment dW(t) and first order in the time increment dt:

\[
df(t, W(t)) = f_t(t,W(t))\,dt + f_x(t,W(t))\,dW(t) + f_{tx}(t,W(t))\,dt\,dW(t) + \tfrac12 f_{xx}(t,W(t))\,(dW(t))^2 + \cdots
\]

By the simple rules we have

\[
(dW(t))^2 \equiv dt, \qquad (dt)^2 \equiv 0, \qquad dt\,dW(t) \equiv 0.
\]

Collecting the coefficient terms in dt and dW(t), the differential and integral forms of the Itô formula for f(t, W(t)) are then respectively given by

\[
df(t,W(t)) = \Big(f_t(t,W(t)) + \tfrac12 f_{xx}(t,W(t))\Big)\,dt + f_x(t,W(t))\,dW(t), \tag{11.18}
\]

\[
f(t,W(t)) - f(0,W(0)) = \int_0^t \Big(f_u(u,W(u)) + \tfrac12 f_{xx}(u,W(u))\Big)\,du + \int_0^t f_x(u,W(u))\,dW(u), \tag{11.19}
\]

for all 0 ≤ t ≤ T.

Example 11.5

Find the stochastic differential of the GBM process $S(t) = S_0 e^{\alpha t + \sigma W(t)}$, t ≥ 0, with constants S0 > 0, α, σ ∈ ℝ.

Solution. We represent S(t) = f(t, W(t)), where $f(t,x) := S_0 e^{\alpha t + \sigma x}$. Hence,

\[
f_t(t,x) = \alpha f(t,x), \qquad f_x(t,x) = \sigma f(t,x), \qquad f_{xx}(t,x) = \sigma^2 f(t,x).
\]

Substituting these partial derivatives into the Itô formula (11.18) gives

\[
dS(t) = \Big(\alpha f(t,W(t)) + \frac{\sigma^2}{2} f(t,W(t))\Big)\,dt + \sigma f(t,W(t))\,dW(t) = \Big(\alpha + \frac{\sigma^2}{2}\Big)S(t)\,dt + \sigma S(t)\,dW(t).
\]

Note that, in the above example, if we put $\alpha = \mu - \sigma^2/2$, with parameter µ ∈ ℝ, then $S(t) = S_0 e^{(\mu - \sigma^2/2)t + \sigma W(t)}$ is a GBM satisfying the SDE

\[
dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t)
\]

with initial condition S(0) = S0 and it is an Itô process where

\[
S(t) = S_0 + \mu\int_0^t S(u)\,du + \sigma\int_0^t S(u)\,dW(u).
\]

Hence, $S(t) = S_0 e^{(\mu - \sigma^2/2)t + \sigma W(t)}$, for t ≥ 0, is an explicit solution to the above stochastic integral (or differential) equation whereby S(t) is explicitly given as (an exponential) function in the BM W(t) at time t. It is a martingale iff the drift coefficient µ = 0. In fact, in the previous chapter, we already proved (using different methods) that $M(t) \equiv e^{-\frac{\sigma^2}{2}t + \sigma W(t)}$ is a martingale. It follows that the discounted process $e^{-\mu t}S(t) = S_0 M(t)$, t ≥ 0, is a martingale w.r.t. a filtration for BM.
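Since the exact solution is an explicit function of W(t), it can be compared against a direct discretization of the SDE driven by the same noise. A minimal Euler-type sketch of ours (µ, σ, S0, and step counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
S0, mu, sigma, t, n = 1.0, 0.1, 0.3, 1.0, 100_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)

S_exact = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * dW.sum())

S = S0
for dw in dW:                          # Euler step: dS = mu*S*dt + sigma*S*dW
    S += mu * S * dt + sigma * S * dw

print(S, S_exact)                      # close for small dt
```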

11.5.2 Itô's Formula for Itô Processes

We are now ready to extend the Itô formula to the case of a process defined in terms of a smooth function of an Itô process and time t. Consider an Itô process {X(t)}0≤t≤T with the stochastic differential

\[
dX(t) = \mu(t)\,dt + \sigma(t)\,dW(t).
\]

As in the previous version of the Itô formula obtained above, assume that f(t, x) ∈ C1,2. Then, Y(t) ≔ f(t, X(t)), 0 ≤ t ≤ T, is also an Itô process. To obtain its stochastic differential we apply a Taylor expansion and keep only terms up to second order in the increment dX(t) and first order in the time increment dt:

\[
dY(t) = f_t(t,X(t))\,dt + f_x(t,X(t))\,dX(t) + \tfrac12 f_{xx}(t,X(t))\,(dX(t))^2.
\]

Note that the mixed partial derivative term ftx(t, X(t)) dt dX(t) = 0 since dt dX(t) = µ(t)(dt)² + σ(t) dt dW(t) = 0 upon using the simple rules (dt)² = 0, dt dW(t) = 0. Also, (dX(t))² = σ²(t) dt, and inserting the differential for dX(t) into the above equation and combining all coefficients multiplying dt and dW(t) finally gives us the Itô formula in differential form:

\[
dY(t) \equiv df(t,X(t)) = \Big(f_t(t,X(t)) + \mu(t)\,f_x(t,X(t)) + \tfrac12\sigma^2(t)\,f_{xx}(t,X(t))\Big)\,dt + \sigma(t)\,f_x(t,X(t))\,dW(t). \tag{11.20}
\]

The integral form of this is

\[
f(t,X(t)) - f(0,X(0)) = \int_0^t \Big(f_s(s,X(s)) + \mu(s)\,f_x(s,X(s)) + \tfrac12\sigma^2(s)\,f_{xx}(s,X(s))\Big)\,ds + \int_0^t \sigma(s)\,f_x(s,X(s))\,dW(s), \tag{11.21}
\]

for 0 ≤ t ≤ T.

Note that in case Y(t) = f(X(t)), i.e., f(t, x) = f(x) is not an explicit function of the time variable, then ft(t, x) ≡ 0 and all partial derivatives are simply ordinary derivatives: fx(t, x) = f′(x), fxx(t, x) = f″(x). The differential form of the Itô formula is $df(X(t)) = f'(X(t))\,dX(t) + \tfrac12 f''(X(t))\,(dX(t))^2$, i.e.,

\[
df(X(t)) = \Big(\mu(t)\,f'(X(t)) + \tfrac12\sigma^2(t)\,f''(X(t))\Big)\,dt + \sigma(t)\,f'(X(t))\,dW(t) \tag{11.22}
\]

and in integral form

\[
f(X(t)) - f(X(0)) = \int_0^t \Big(\mu(s)\,f'(X(s)) + \tfrac12\sigma^2(s)\,f''(X(s))\Big)\,ds + \int_0^t \sigma(s)\,f'(X(s))\,dW(s). \tag{11.23}
\]

Observe that (11.16) and (11.17) are recovered by (11.22) and (11.23) in the special case where the Itô process is Brownian motion: X = W where µ ≡ 0, σ ≡ 1. Similarly, (11.18) and (11.19) are special cases of (11.20) and (11.21).

Example 11.6

Let Y(t) ≔ ln X(t), t ≥ 0, where {X(t)}t≥0 is an Itô process with stochastic differential

\[
dX(t) = aX(t)\,dt + bX(t)\,dW(t).
\]

Find the SDE for the process Y and then find explicit representations for Y(t) and X(t) in terms of W(t).

Solution. In this case we define f(x) ≔ ln x, where Y(t) = f(X(t)). Differentiating gives $f'(x) = \frac{1}{x}$, $f''(x) = -\frac{1}{x^2}$. Applying (11.22) with µ(t) = aX(t), σ(t) = bX(t) gives

\[
dY(t) = \Big(aX(t)\,\frac{1}{X(t)} + \frac12\big(bX(t)\big)^2\Big(-\frac{1}{X^2(t)}\Big)\Big)\,dt + bX(t)\,\frac{1}{X(t)}\,dW(t) = \Big(a - \frac{b^2}{2}\Big)\,dt + b\,dW(t).
\]

Integrating this equation therefore shows that the process Y is a drifted Brownian motion starting at Y(0) = ln X(0):

\[
Y(t) = \ln X(0) + \Big(a - \frac{b^2}{2}\Big)t + b\,W(t).
\]

By inverting the transformation, we find the original process X(t) as a closed-form expression in W(t):

\[
X(t) = e^{Y(t)} = e^{\ln X(0) + (a - \frac{b^2}{2})t + bW(t)} = X(0)\,e^{(a - \frac{b^2}{2})t + bW(t)}.
\]

An Itô integral $I(t) = \int_0^t \sigma(s)\,dW(s)$, t ≥ 0, is a martingale (provided that the stochastic integral is well-defined). However, a time integral $\int_0^t \mu(s)\,ds$ is generally not a martingale. Thus, the Itô formula can be used to verify whether or not a stochastic process that is a function of an Itô process is a martingale.

Example 11.7

Verify whether or not the following processes are martingales w.r.t. a filtration for BM:

  1. (a) $X(t) = Z^2(t) - \int_0^t f^2(u)\,du$, where $Z(t) = \int_0^t f(u)\,dW(u)$ and f is an ordinary continuous function for t ≥ 0;
  2. (b) $Y(t) = V^2(t) - \frac{t^2}{2}$, where $V(t) = \int_0^t W(u)\,dW(u)$.

Solution.

  1. (a) The process Z is Gaussian with the stochastic differential dZ(t) = f(t) dW(t) = µ(t) dt + σ(t) dW(t) where µ(t) ≡ 0 and σ(t) ≡ f(t). The process X is given by X(t) = g(t, Z(t)) with $g(t,x) \equiv x^2 - \int_0^t f^2(u)\,du$. Taking derivatives of g:

    \[
    g_t(t,x) = -f^2(t), \qquad g_x(t,x) = 2x, \qquad g_{xx}(t,x) \equiv 2.
    \]

    Applying (11.20) gives a stochastic differential with zero drift,

    \[
    \begin{aligned}
    dX(t) &= \Big[g_t(t,Z(t)) + 0\cdot g_x(t,Z(t)) + \tfrac12 f^2(t)\,g_{xx}(t,Z(t))\Big]\,dt + f(t)\,g_x(t,Z(t))\,dW(t) \\
    &= \Big(-f^2(t) + \tfrac12\big(2f^2(t)\big)\Big)\,dt + 2f(t)Z(t)\,dW(t) = 2f(t)Z(t)\,dW(t).
    \end{aligned}
    \]

    In integral form, $X(t) = 2\int_0^t f(u)Z(u)\,dW(u)$ since X(0) = 0. Thus, X(t) is an Itô integral. It satisfies the square-integrability condition

    \[
    \mathrm{E}\Big[\int_0^t \big(f(u)Z(u)\big)^2\,du\Big] = \int_0^t f^2(u)\,\mathrm{E}[Z^2(u)]\,du < \infty
    \]

    since $\mathrm{E}[Z^2(u)] = \int_0^u f^2(s)\,ds$ is a continuous function of u ≥ 0. Hence the process X is a martingale.

  2. (b) First, find the stochastic differential of Y:

    \[
    dY(t) = 2V(t)\,dV(t) + (dV(t))^2 - t\,dt = \big(W^2(t) - t\big)\,dt + 2V(t)W(t)\,dW(t).
    \]

    Thus, Y (t) is a sum of an Itô integral (which is a martingale) and a Riemann integral of a function of Brownian motion:

    \[
    Y(t) = 2\int_0^t V(u)W(u)\,dW(u) + \int_0^t \big(W^2(u) - u\big)\,du.
    \]

    Note that Y(0) = 0. Let us show that the Riemann integral above is not a martingale. As a first simple check, we can try to verify whether the expected value of $I(t) \equiv \int_0^t \big(W^2(u) - u\big)\,du$ is nonconstant over time:

    \[
    \mathrm{E}[I(t)] = \int_0^t \big(\mathrm{E}[W^2(u)] - u\big)\,du = \int_0^t (u - u)\,du = 0
    \]

    for t ≥ 0. So the expectation is constant and we cannot yet conclude whether or not the process is a martingale. We therefore need to calculate the conditional expectation to verify whether the process satisfies the martingale property. For t, s > 0, we have, upon using the martingale property of {W²(t) − t}t≥0:

    \[
    \begin{aligned}
    \mathrm{E}_t[I(t+s)] &= I(t) + \mathrm{E}_t\Big[\int_t^{t+s}\big(W^2(u) - u\big)\,du\Big] = I(t) + \int_t^{t+s}\mathrm{E}_t\big[W^2(u) - u\big]\,du \\
    &= I(t) + \int_t^{t+s}\big(W^2(t) - t\big)\,du = I(t) + \big(W^2(t) - t\big)\int_t^{t+s}du = I(t) + s\big(W^2(t) - t\big) \ne I(t).
    \end{aligned}
    \]

    In conclusion, Et [Y (t + s)] ≠ Y (t) and hence the process Y is not a martingale.

11.6 Stochastic Differential Equations

An equation of the form of an Itô stochastic differential

\[
dX(t) = \mu(t, X(t))\,dt + \sigma(t, X(t))\,dW(t) \tag{11.24}
\]

where the drift coefficient µ(t, x) and volatility coefficient σ(t, x) are given (known) functions and X(t) is the unknown process is called a stochastic differential equation (SDE). Equations of this form are of great importance in financial modelling. In practice, (11.24) is subject to an initial condition X(0) = X0 where X0 is either a random variable or simply a constant X0 = x ∈ ℝ. As was mentioned in a previous section, an SDE of the type in (11.24), with constant X0, is also called a diffusion. We will study diffusions in some depth a little later in the text.

A process X is a so-called strong solution to the SDE in (11.24) if, for all t ≥ 0 (or t ∈ [0, T] if time is restricted to some finite interval [0, T]), the process satisfies

\[
X(t) = X(0) + \int_0^t \mu(s, X(s))\,ds + \int_0^t \sigma(s, X(s))\,dW(s)
\]

where both integrals are assumed to exist. The randomness is completely driven by the underlying Brownian motion. So, in case σ ≡ 0 the equation is simply an ordinary first order ODE. It is important to note that a solution X(t) is an adapted process that is some representation or functional written in terms of the Brownian motion up to time t, i.e., X(t) = F(t, {W(s); 0 ≤ st}). We have in fact already seen some cases (see Examples 11.5 and 11.6) where the solution X(t) = F(t, W(t)) is just a function of the Brownian motion at the endpoint time t. A strong solution hence also gives a path-wise representation of the process {X(t)}t≥0. In most cases strong solutions to SDEs cannot be found explicitly, although we can still compute a number of important properties of the process. An alternative and important type of solution is a so-called weak solution, which is a solution in distribution. We now turn our attention to so-called linear SDEs, as these form the simplest class of SDEs that have some applications in finance and for which a unique strong solution can be found explicitly.

11.6.1 Solutions to Linear SDEs

A linear SDE is an equation of the form

\[
dX(t) = \big(\alpha(t) + \beta(t)X(t)\big)\,dt + \big(\gamma(t) + \delta(t)X(t)\big)\,dW(t) \tag{11.25}
\]

where the coefficients α(t), β(t), γ(t), δ(t) are given adapted processes. These are assumed to be continuous functions of time t. We note that they can simply be ordinary (nonrandom) functions of t or may also be random but not functions of the process X(t). When α(t), β(t), γ(t), δ(t) are non-random functions of time, then the process is a diffusion with linear SDE of the form dX(t) = a(t, X(t)) dt + b(t, X(t)) dW(t), with both coefficient functions being linear in the state variable: a(t, x) = α(t) + β(t)x and b(t, x) = γ(t) + δ(t)x. The stochastic equations considered in Examples 11.5 and 11.6 are simple linear SDEs. The nice thing about an SDE of the form (11.25) is that we have explicit solutions, as we now derive.

Equation (11.25) is readily solved by first considering the simpler case when α(t) ≡ γ(t) ≡ 0. Denoting the simpler process by U, the SDE in (11.25) takes the form

\[
dU(t) = \beta(t)U(t)\,dt + \delta(t)U(t)\,dW(t). \tag{11.26}
\]

This SDE is now solved by considering the logarithm of the process, Y(t) ≔ ln U(t), and applying Itô's formula (see Example 11.6):

\[
dY(t) = d\ln U(t) = \frac{dU(t)}{U(t)} - \frac12\Big(\frac{dU(t)}{U(t)}\Big)^2 = \Big(\beta(t) - \tfrac12\delta^2(t)\Big)\,dt + \delta(t)\,dW(t).
\]

Putting this SDE in integral form and using $U(t) = e^{Y(t)}$ gives

\[
U(t) = U(0)\exp\Big[\int_0^t\Big(\beta(s) - \tfrac12\delta^2(s)\Big)\,ds + \int_0^t\delta(s)\,dW(s)\Big]. \tag{11.27}
\]

This solution is compactly written as a product: $U(t) = U(0)\,e^{\int_0^t\beta(s)\,ds}\,\mathcal{E}_t(\delta\cdot W)$, where we denote the stochastic exponential of an adapted process {δ(s), 0 ≤ s ≤ t}, w.r.t. BM on the time interval [0, t], by

\[
\mathcal{E}_t(\delta\cdot W) \equiv \exp\Big[-\frac12\int_0^t\delta^2(s)\,ds + \int_0^t\delta(s)\,dW(s)\Big]. \tag{11.28}
\]

Note that, by setting β(t) ≡ 0 in (11.26), the solution to the SDE

\[
dU(t) = \delta(t)U(t)\,dW(t) \quad\text{subject to } U(0) = 1
\]

is the stochastic exponential in (11.28). This type of process plays an important role in derivative pricing theory so we shall revisit it in Section 11.8, as well as its multidimensional version in Section 11.10. Note that when the coefficients β(t) and δ(t) are nonrandom functions of time, and U(0) is taken as a positive constant, the Itô integral $\int_0^t\delta(s)\,dW(s) \sim \mathrm{Norm}\big(0, \int_0^t\delta^2(s)\,ds\big)$, i.e., is a Gaussian process, and $\ln\frac{U(t)}{U(0)} \sim \mathrm{Norm}(\mu_t, v_t)$ with mean $\mu_t = \int_0^t\big(\beta(s) - \frac12\delta^2(s)\big)\,ds$ and variance $v_t = \int_0^t\delta^2(s)\,ds$. That is, U(t) is a lognormal random variable and hence {U(t)}t≥0 is a GBM process. In particular, for the case of constant coefficients we recover a GBM process as in Example 11.6. In more complicated general cases where δ(t) and β(t) are random variables (for example, functionals of BM up to time t) then the exponent in (11.27) is not a normal random variable and hence the process {U(t)}t≥0 is not a GBM.

Finally, the solution X(t) for the general linear SDE in (11.25) is now readily derived based on the solution to (11.26). The trick is to write it as a product, X(t) = U(t)V(t) where U(t) is given by (11.27) and hence satisfies the SDE in (11.26), and where V(t) satisfies the SDE:

\[
dV(t) = \Big[\frac{\alpha(t) - \gamma(t)\delta(t)}{U(t)}\Big]\,dt + \frac{\gamma(t)}{U(t)}\,dW(t) \tag{11.29}
\]

with initial conditions chosen as U(0) = 1 and V(0) = X(0). By the Itô product formula (11.15) X(t) = U(t)V(t) satisfies dX(t) = U(t)dV(t) + V(t)dU(t) + dU(t) dV(t). Using (11.26) and (11.29) and by the usual rules we have that X(t) satisfies the SDE in (11.25) with initial condition U(0)V(0) = X(0), i.e., X(t) solves (11.25) with arbitrary initial condition X(0). An explicit representation for V(t) is obtained simply from the integral form of (11.29) with V(0) = X(0):

\[
V(t) = X(0) + \int_0^t \frac{\alpha(s) - \gamma(s)\delta(s)}{U(s)}\,ds + \int_0^t \frac{\gamma(s)}{U(s)}\,dW(s). \tag{11.30}
\]

Hence, the solution to the general linear SDE (11.25) is given by X(t) = U(t)V(t) where U(t) and V(t) are respectively given by (11.27) and (11.30) with U(0) = 1. That is,

\[
X(t) = U(t)\Big(X(0) + \int_0^t\big[\alpha(s) - \gamma(s)\delta(s)\big]\,U^{-1}(s)\,ds + \int_0^t\gamma(s)\,U^{-1}(s)\,dW(s)\Big) \tag{11.31}
\]

where $U(t) = e^{\int_0^t(\beta(s) - \frac12\delta^2(s))\,ds + \int_0^t\delta(s)\,dW(s)} = e^{\int_0^t\beta(s)\,ds}\,\mathcal{E}_t(\delta\cdot W)$ and $U^{-1}(s) \equiv 1/U(s) = e^{-\int_0^s\beta(u)\,du}\,\mathcal{E}_s^{-1}(\delta\cdot W)$, $0 \le s \le t$.

Example 11.8

Solve the SDE

\[
dX(t) = (\alpha - \beta X(t))\,dt + \sigma\,dW(t)
\]

for all t ≥ 0, subject to X(0) = x with constants x, α, β, σ ∈ ℝ.

Solution. Note that the SDE is of the form in (11.25) with constant coefficients α(t) ≡ α, β(t) ≡ −β, γ(t) ≡ σ, δ(t) ≡ 0. The expression in (11.28) simplifies to $\mathcal{E}_t(\delta\cdot W) = \mathcal{E}_t(0) = 1$ and $U(t) = e^{-\beta t}$, $U^{-1}(s) = e^{\beta s}$. Substituting into (11.31) gives the solution

\[
X(t) = e^{-\beta t}\Big(x + \alpha\int_0^t e^{\beta s}\,ds + \sigma\int_0^t e^{\beta s}\,dW(s)\Big) = e^{-\beta t}x + \frac{\alpha}{\beta}\big(1 - e^{-\beta t}\big) + \sigma e^{-\beta t}\int_0^t e^{\beta s}\,dW(s). \tag{11.32}
\]

By letting $X(t) \equiv f(t, Y(t))$, $Y(t) = \int_0^t e^{\beta s}\,dW(s)$, where $f(t,y) \equiv e^{-\beta t}x + \frac{\alpha}{\beta}(1 - e^{-\beta t}) + \sigma e^{-\beta t}y$, and applying an Itô formula, the reader should verify that X(t) satisfies the above SDE. We note that another simple alternative way to arrive at this solution (without the use of (11.25)) is to define $\tilde X(t) := e^{\beta t}X(t)$, which has the effect of eliminating the state variable dependence in the SDE. Indeed, by applying an Itô formula we have $d\tilde X(t) = \alpha e^{\beta t}\,dt + \sigma e^{\beta t}\,dW(t)$, with coefficients independent of $\tilde X(t)$. Integrating, with $\tilde X(0) = X(0) = x$, gives the solution for $\tilde X(t)$:

\[
\tilde X(t) = x + \frac{\alpha}{\beta}\big(e^{\beta t} - 1\big) + \sigma\int_0^t e^{\beta s}\,dW(s). \tag{11.33}
\]

The solution in (11.32) follows by $X(t) = e^{-\beta t}\tilde X(t)$.

In Example 11.8, the solution represented in (11.32) is a Gaussian process involving an Itô integral that is a normal random variable, i.e., $X(t) = \mu_t + \sigma e^{-\beta t}Y(t)$ where $\mu_t = \mathrm{E}[X(t)] = e^{-\beta t}x + \frac{\alpha}{\beta}(1 - e^{-\beta t})$ and $Y(t) \equiv \int_0^t e^{\beta s}\,dW(s)$ is a zero-mean normal random variable:

\[
Y(t) \sim \mathrm{Norm}\big(0, \mathrm{Var}(Y(t))\big) = \mathrm{Norm}\Big(0, \int_0^t e^{2\beta s}\,ds\Big) = \mathrm{Norm}\Big(0, \frac{1}{2\beta}\big(e^{2\beta t} - 1\big)\Big).
\]

Hence, $X(t) \sim \mathrm{Norm}\big(\mu_t, \sigma^2 e^{-2\beta t}\,\mathrm{Var}(Y(t))\big) = \mathrm{Norm}\Big(e^{-\beta t}x + \frac{\alpha}{\beta}(1 - e^{-\beta t}),\ \frac{\sigma^2}{2\beta}(1 - e^{-2\beta t})\Big)$. Applying the formulae in (11.8) to (11.32) also gives us the covariance function $c_X(t,v) := \mathrm{Cov}(X(t), X(v))$, 0 ≤ t ≤ v:

\[
c_X(t,v) = \sigma^2 e^{-\beta(t+v)}\,\mathrm{Cov}\Big(\int_0^t e^{\beta s}\,dW(s), \int_0^v e^{\beta s}\,dW(s)\Big) = \sigma^2 e^{-\beta(t+v)}\int_0^t e^{2\beta s}\,ds = \frac{\sigma^2}{2\beta}\,e^{-\beta(t+v)}\big(e^{2\beta t} - 1\big) = \frac{\sigma^2}{2\beta}\big(e^{-\beta(v-t)} - e^{-\beta(v+t)}\big).
\]

We can also represent the solution as a functional of the Brownian motion up to time t by making use of the Itô product rule: $d(e^{\beta t}W(t)) = \beta e^{\beta t}W(t)\,dt + e^{\beta t}\,dW(t)$, so that $e^{\beta t}W(t) = \beta\int_0^t e^{\beta s}W(s)\,ds + \int_0^t e^{\beta s}\,dW(s)$, i.e., $\int_0^t e^{\beta s}\,dW(s) = e^{\beta t}W(t) - \beta\int_0^t e^{\beta s}W(s)\,ds$. Putting this last expression into (11.32) gives X(t) = F(t, {W(s); 0 ≤ s ≤ t}) with functional F of the Brownian path defined by

\[
F\big(t, \{W(s); 0\le s\le t\}\big) \equiv e^{-\beta t}x + \frac{\alpha}{\beta}\big(1 - e^{-\beta t}\big) + \sigma W(t) - \sigma\beta\int_0^t e^{-\beta(t-s)}W(s)\,ds. \tag{11.34}
\]

In the chapter on interest rate modelling we shall see that the above process, referred to as the Vasicek model for α, β, σ > 0, is among the simplest models used for the instantaneous (short) interest rate. A natural extension of this model is to allow the coefficients to be time-dependent functions. The resulting linear SDE is explicitly solved in the following example.
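Before generalizing, here is a simulation sketch of ours (the numerical values of x, α, β, σ are hypothetical) of the Vasicek solution (11.32), checking the sample mean and variance against $\mu_t$ and $\frac{\sigma^2}{2\beta}(1 - e^{-2\beta t})$:

```python
import numpy as np

rng = np.random.default_rng(10)
x, alpha, beta, sigma = 0.05, 0.06, 0.8, 0.02   # hypothetical constants
t, n, paths = 1.0, 1000, 30000
dt = t / n
s = np.arange(n) * dt                           # left endpoints
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
Y = dW @ np.exp(beta * s)                       # integral of e^{beta s} dW(s)
X_t = (np.exp(-beta * t) * x
       + alpha / beta * (1 - np.exp(-beta * t))
       + sigma * np.exp(-beta * t) * Y)

print(X_t.mean(), np.exp(-beta * t) * x + alpha / beta * (1 - np.exp(-beta * t)))
print(X_t.var(), sigma**2 / (2 * beta) * (1 - np.exp(-2 * beta * t)))
```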

Example 11.9

Solve the SDE

\[
dX(t) = \big(a(t) - b(t)X(t)\big)\,dt + \sigma(t)\,dW(t)
\]

subject to X(0) = x ∈ ℝ, where a(t), b(t), σ(t) are nonrandom continuous functions of time t ≥ 0.

Solution. The SDE is of the form in (11.25) with coefficient functions α(t) ≡ a(t), β(t) ≡ −b(t), γ(t) ≡ σ(t), δ(t) ≡ 0. Hence, $\mathcal{E}_t(\delta\cdot W) = \mathcal{E}_t(0) = 1$ and $U(t) = e^{-\int_0^t b(s)\,ds}$, $U^{-1}(s) = e^{\int_0^s b(u)\,du}$. Substituting into (11.31) gives us the explicit solution

\[
X(t) = e^{-\int_0^t b(u)\,du}\Big(x + \int_0^t e^{\int_0^s b(u)\,du}\,a(s)\,ds + \int_0^t e^{\int_0^s b(u)\,du}\,\sigma(s)\,dW(s)\Big) = x\,e^{-\int_0^t b(u)\,du} + \int_0^t e^{-\int_s^t b(u)\,du}\,a(s)\,ds + \int_0^t e^{-\int_s^t b(u)\,du}\,\sigma(s)\,dW(s). \tag{11.35}
\]

The above process is Gaussian since the integrand in the Itô integral is an ordinary (nonrandom) function of time. By applying Itô isometry on the Itô integral in (11.35) we have that X(t) is normally distributed with mean and variance:

\[
\mathrm{E}[X(t)] = x\,e^{-\int_0^t b(u)\,du} + \int_0^t e^{-\int_s^t b(u)\,du}\,a(s)\,ds, \tag{11.36}
\]

\[
\mathrm{Var}(X(t)) = \mathrm{E}\Big[\Big(\int_0^t e^{-\int_s^t b(u)\,du}\,\sigma(s)\,dW(s)\Big)^2\Big] = \int_0^t e^{-2\int_s^t b(u)\,du}\,\sigma^2(s)\,ds. \tag{11.37}
\]

The covariance function for this process follows by using (11.8) on the first line in (11.35) after subtracting the mean (for t ≤ v):

\[
\begin{aligned}
c_X(t,v) &= e^{-\int_0^t b(u)\,du}\,e^{-\int_0^v b(u)\,du}\,\mathrm{Cov}\Big(\int_0^t e^{\int_0^s b(u)\,du}\,\sigma(s)\,dW(s), \int_0^v e^{\int_0^s b(u)\,du}\,\sigma(s)\,dW(s)\Big) \\
&= e^{-2\int_0^t b(u)\,du}\,e^{-\int_t^v b(u)\,du}\int_0^t e^{2\int_0^s b(u)\,du}\,\sigma^2(s)\,ds = e^{-\int_t^v b(u)\,du}\int_0^t e^{-2\int_s^t b(u)\,du}\,\sigma^2(s)\,ds.
\end{aligned}
\]

11.6.2 Existence and Uniqueness of a Strong Solution of an SDE

An important question when finding a strong solution to an SDE of the form given by (11.24) is whether such a solution exists and, if so, whether the solution is unique. The following theorem gives sufficient conditions for the existence and uniqueness of a strong solution to the SDE in (11.24). We omit the proof of this theorem as the technical details can be found in other more specialized textbooks on stochastic analysis. The conditions in the theorem are not necessary, but are rather mild sufficient conditions that guarantee the existence of a unique strong solution, i.e., that there is a unique process {X(t)}t≥0 satisfying (11.24).

Theorem 11.4.

Assume the following conditions are satisfied:

  • The coefficient functions µ(t, x) and σ(t, x) are locally Lipschitz in x, uniformly in t. That is, for arbitrary positive constants T and N, there exists a constant K depending possibly only on T and N such that

    |μ(t,x)μ(t,y)|+|σ(t,x)σ(t,y)|<K|xy|,

    whenever |x|, |y| ≤ N and 0 ≤ tT.

  • The coefficient functions µ (t, x) and σ(t, x) satisfy the linear growth condition in the variable x, i.e.,

    |μ(t,x)|+|σ(t,x)|K(1+|x|).

  • The initial value X(0) is independent of the Brownian motion up to arbitrary time T and has a finite second moment, i.e., X(0) is independent of TWσ(W(t);0tT) and E[X2(0)] < ∞.

Then, the SDE in (11.24) has a unique strong solution {X(t)}t≥0 with continuous paths X(t, ω), t ≥ 0.

For a given SDE, the conditions in the above theorem can be readily checked. For example, the first (Lipschitz) condition holds if the coefficient functions have continuous first partial derivatives μx(t,x) and σx(t,x). The second condition is satisfied when the coefficient functions µ(t, x) and σ(t, x) have at most a linear growth in x for large values of x and are also bounded for arbitrarily small values of x. In most cases the SDE is subject to a constant initial condition X(0) ∈ ℝ so that the above third condition is automatically satisfied. In the case of a general linear SDE we have already shown that the solution is given by (11.31). This includes cases when the coefficients α(t), β(t), γ(t), δ(t) are bounded nonrandom functions of time. In these cases, µ(t, x) and σ(t, x) are linear functions of x, for all t, and hence the conditions in the above theorem are indeed satisfied and so a unique strong solution exists. In Examples 11.8 and 11.9 we have solved the SDE and found the unique strong solution as given by (11.32) and (11.35), respectively. We note that unique strong solutions also exist when the coefficient functions in the SDE satisfy milder conditions than those listed in the above theorem. For instance, for a time-homogeneous SDE with (time-independent) drift and volatility coefficients µ(x) and σ(x), it can be shown that there exists a unique strong solution when µ(x) satisfies a Lipschitz condition, |µ(x) − µ(y)| < K|x − y|, and σ(x) satisfies a Hölder condition, |σ(x) − σ(y)| < K|x − y|α, with order α ≥ 1/2, for some constant K.

11.7 The Markov Property, Feynman–Kac Formulae, and Transition CDFs and PDFs

We have already shown that Brownian motion is a Markov process. Generally, a process {X(t)}t≥0 has the Markov property if the probability of the event {X(t) ≤ y}, y ∈ ℝ, conditional on all the past information Fs (i.e., the information about the complete history of the process up to a prior time s) is the same as its probability conditional on only knowing the process endpoint value X(s), for all 0 ≤ st. That is, the Markov property can be formally stated equivalently as

(X(t)y|s)=(X(t)y|X(s))(11.38)

or, for any Borel function h : ℝ → ℝ,

E[h(X(t))|s]=E[h(X(t))|X(s)],(11.39)

for all 0 ≤ s ≤ t. Note that the specific choice of h(x) ≔ I{x≤y} recovers (11.38). Hence, when the Markov property holds, conditioning on the natural filtration Fs = σ(X(u); 0 ≤ u ≤ s) is equal to conditioning on σ(X(s)). In particular, for the case of a discrete-time process X(t0), X(t1), . . . , X(tn−1), X(tn), . . ., with times t0 < t1 < . . . < tn−1 < tn < . . ., the above property takes the form that is familiar in the theory of discrete-time Markov chains:

(X(tm)y|X(t0),X(t1),...,X(tn1),X(tn))=(X(tm)y|X(tn))(11.40)

for all mn, i.e., when conditioning on the values of the process for a set {ti}0≤in of previous times, the only conditioning that is relevant is the value of the process at the most recent time tn. We note that this follows from (11.38) by setting s = tn, t = tm, i.e., FsFtn = σ(X(t0), X(t1), . . ., X(tn−1), X(tn)) and σ(X(s)) = σ(X(tn)), and then using the usual shorthand notation ℙ(A | σ(Y1,. . . , Yn)) ≡ ℙ(A | Y1, . . . , Yn) for expressing the probability of an event A conditional on a σ-algebra generated by a set of random variables Y1,. . . , Yn.

We are interested in computing conditional expectations involving functions of a process X = {X(t)}t≥0 ∈ ℝ that solves a given SDE as in (11.24) subject to some initial condition X(0) = x. In particular, we will need to compute expectations as in (11.39) for the case that the process X is Markov. Let us now fix some time T ≥ 0. Then, we shall denote the conditional expectation of a function h(X(T)), conditioned on the process having a given value X(t) = x ∈ ℝ at a time tT, by

Et,x[h(X(T))]E[h(X(T))|X(t)=x].

So the subscript t, x is shorthand notation for conditioning on a given value X(t) = x of the process at time t. It should be clear that this conditional expectation is an ordinary (nonrandom) function of the ordinary variables x and t, i.e., Et, x[h(X(T))] = g(t, x) for any fixed T. Therefore, the Markov property in (11.39) is expressible as E[h(X(T)) | Ft] = Et, X(t)[h(X(T))] = g(t, X(t)), for all 0 ≤ t ≤ T. This expectation is now the random variable given by the function g(t, X(t)) of the random variable X(t) and evaluates to g(t, x) upon setting X(t) = x. Hence, if we know the conditional probability distribution of the random variable X(T), given X(t) = x, then we can compute g(t, x). That is, assume the conditional probability density function (PDF) of X(T), given X(t), exists. Recall from the previous chapter that this PDF is the transition PDF for the process X. From Definition 10.2, we have

Et,x[h(X(T))]=h(y)fX(T)|X(t)(y|x)dy=h(y)p(t,T;x,y)dy.(11.41)

In some cases this integral can be computed analytically. Otherwise, we need to employ a numerical method. For example, if the expression for the PDF is known, then we can compute the integral using an appropriate numerical quadrature algorithm. Monte Carlo methods can generally be used to compute the above integral by sampling (i.e., simulating) the paths of the process at time T given their fixed value x at time t. Different simulation approaches may be applied. One approach is to use a time-stepping algorithm for simulating the paths according to the SDE. Alternatively, if the transition PDF is known, then we can sample the (path endpoint) value X(T) according to its distribution. For details on these techniques, we refer the reader to the chapter on Monte Carlo methods.
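As a minimal illustration of these two simulation approaches, the sketch below (in Python with NumPy, assumed available) estimates Et,x[h(X(T))] for the concrete case of GBM, where both an Euler time-stepping scheme and exact sampling from the known lognormal transition law are possible; the function h and all parameter values are arbitrary choices.

```python
import numpy as np

# Illustrative Monte Carlo evaluation of E[h(X(T)) | X(t) = x] for GBM,
# dX = mu*X dt + sigma*X dW, in the two ways just described.
mu, sigma, x, tau = 0.05, 0.20, 100.0, 1.0        # tau = T - t (arbitrary)
h = lambda y: np.maximum(y - 100.0, 0.0)          # an arbitrary Borel function h
n_paths, n_steps = 200_000, 500
rng = np.random.default_rng(1)

# (i) time-stepping (Euler) simulation of the SDE paths from t to T
dt = tau / n_steps
X = np.full(n_paths, x)
for _ in range(n_steps):
    X += mu * X * dt + sigma * X * rng.normal(0.0, np.sqrt(dt), n_paths)
est_euler = h(X).mean()

# (ii) exact sampling of the endpoint X(T) from the lognormal transition PDF
Z = rng.standard_normal(n_paths)
XT = x * np.exp((mu - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)
est_exact = h(XT).mean()
print(est_euler, est_exact)  # agree up to Monte Carlo and time-stepping error
```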

The next theorem tells us that solutions to an SDE are Markov processes.

Theorem 11.5.

Let {X(t)}t≥0 be a solution to the SDE in (11.24) with some given initial condition. Then,

E[h(X(T))​ ​ ​ ​ |t]=g(t,X(t))(11.42)

where g(t, x) = Et,x[h(X(T))], for all 0 ≤ t ≤ T and any Borel function h.

A rigorous proof of this result is beyond the scope of this text. The important content of this result is that the expectation of any function of the process at a future time Tt, conditional on the filtration (or path history) at time t, is given simply by its expectation conditional only on the path value at time t. In practice, we can apply this theorem by first computing Et, x[h(X(T))], and then putting X(t) in the place of variable x.

A simple nonrigorous, yet instructive, argument that leads us to the fact that the solution to an SDE has the Markov property is to let T = t + Δt, for a small time step Δt ≈ 0. Then, by the integral form of the SDE:

X(t+Δt)=X(t)+tt+Δtμ(s,X(s))ds+tt+Δtσ(s,X(s))dW(s).

This expresses the value of the process at future time t + Δt in terms of its value at any current time t plus an ordinary integral and an Itô integral. For small Δt, the integrals are well approximated by holding the integrand coefficient functions constant and evaluated at the left endpoint s = t of the time interval [t, t + Δt]. This gives us the approximation X(t + Δt) ≈ X(t) + µ(t, X(t))Δt + σ(t, X(t))ΔW(t), where ΔW (t) ≡ W(t + Δt) − W(t). Hence, the left-hand side of (11.38), for T = t + Δt, is approximated by

(X(t+Δt)y|t)(X(t)+μ(t,X(t))Δt+σ(t,X(t))ΔW(t)y|t)=g(X(t))

where g(x) = ℙ(x + µ(t, x)Δt + σ(t, x)ΔW(t) ≤ y). Here we used the independence proposition where the Brownian increment ΔW(t) is independent of Ft and X(t) is Ft-measurable. We can now put back the conditioning on X(t) in the unconditional probability since ΔW(t) is independent of X(t), so the function g(x) is equally given by the conditional probability: g(x) = ℙ(x + µ(t, x)Δt + σ(t, x)ΔW(t) ≤ y | X(t)) ≈ ℙ(X(t + Δt) ≤ y | X(t)). Hence, we recover (approximately) the Markov property in (11.38), ℙ(X(t + Δt) ≤ y | Ft) ≈ ℙ(X(t + Δt) ≤ y | X(t)). In the above theorem this relation holds exactly.

By the Markov property of the process X, we also have the following martingale property for a process defined via an expectation of some function of X(T) conditional on X(t).

Proposition 11.6.

Let {X(t)}t≥0 satisfy the SDE in (11.24) subject to some initial condition. Let ϕ : ℝ → ℝ be a Borel function and define f(t, x) ≔ Et,x[ϕ(X(T))] for fixed T > 0, assuming Et,x[|ϕ(X(T))|] < ∞. Then, the stochastic process

Ytf(t,X(t)),0tT,

is a martingale w.r.t. any filtration {Ft}t≥0 for Brownian motion.

Proof. Let 0 ≤ stT. Note that, based on Theorem 11.5,

E[ϕ(X(T))|t]=Et,X(t)[ϕ(X(T))]=f(t,X(t))=Yt

for any 0 ≤ tT. Using this relation and applying the tower property gives the martingale expectation property:

E[Yt|s]=E[E[ϕ(X(T))|t]|s]=E[ϕ(X(T))|s]=f(s,X(s))=Ys.

Based on the Markov property of any solution to an SDE, we are now ready to discuss the very important connection that exists between an SDE and a PDE. In what follows we will find it very convenient to make use of the differential operator G defined by

Gt,xf(t,x)12σ2(t,x)2fx2(t,x)+μ(t,x)fx(t,x).(11.43)

This differential operator in the variables (t, x) is the so-called generator for the process X and it acts on all functions f ∈ C1, 2, i.e., having continuous partial derivatives ∂f∂t, ∂f∂x and ∂2f∂x2. Using this operator we can now rewrite the differential and integral forms of the Itô formula in (11.20) and (11.21) as

df(t,X(t))=(t+Gt,x)f(t,X(t))dt+σ(t,X(t))fx(t,X(t))dW(t).(11.44)

and

f(t,X(t))=f(0,X(0))+0t(s+Gs,x)f(s,X(s))ds+0tσ(s,X(s))fx(s,X(s))dW(s).(11.45)

Fix a time T > 0. Assume that the Itô integral in (11.45) is a martingale for 0 ≤ t ≤ T; this holds provided the square integrability condition is satisfied, i.e., assuming

0TE[(σ(s,X(s))fx(s,X(s)))2]ds<,(11.46)

we have a useful representation of a martingale Markov process. In particular, the process {Mf(t), 0 ≤ tT} defined by

Mf(t)f(t,X(t))0t(s+Gs,x)f(s,X(s))ds(11.47)

is a martingale. To see this, first note that

E[tTσ(s,X(s))fx(s,X(s))dW(s)|t]=0,

since the Itô integral is a martingale. Using the Itô formula (11.45) for times t and T:

f(T,X(T))=f(t,X(t))+tT(s+Gs,x)f(s,X(s))ds+tTσ(s,X(s))fx(s,X(s))dW(s).

Taking expectations, conditional on Ft, on both sides of this equation while using the above (zero expectation) relation gives

E[f(T,X(T))|t]=f(t,X(t))+E[tT(s+Gs,x)f(s,X(s))ds|t].

Now, writing the integral appearing inside the last expectation as the difference

0T(s+Gs,x)f(s,X(s))ds0t(s+Gs,x)f(s,X(s))ds

and using the fact that the [0, t]-integral is Ft-measurable and rearranging terms we obtain

E[f(T,X(T))0T(s+Gs,x)f(s,X(s))ds|t]=f(t,X(t))0t(s+Gs,x)f(s,X(s))ds.

By the definition in (11.47) we therefore have shown the martingale property:

E[Mf(T)|t]=Mf(t),0tT.(11.48)

One important consequence of this martingale property is the following theorem, which shows that a solution to certain parabolic PDEs (which we will later see are closely related to the Black–Scholes PDE) can be represented as a conditional expectation.

Theorem 11.7

(Feynman–Kac). Given a fixed T > 0, let {X(t)}t≥0 satisfy the SDE in (11.24) and let ϕ : ℝ → ℝ be a Borel function. Moreover, assume the square integrability condition (11.46) holds. Let f(t, x) be a C1, 2 function solving the PDE ft+Gt,xf=0, i.e.,

ft(t,x)+12σ2(t,x)2fx2(t,x)+μ(t,x)fx(t,x)=0,(11.49)

for all x, 0 < t < T, subject to the condition f(T, x) = ϕ(x). Then, assuming that Et, x[|ϕ(X(T))|] < ∞, f(t, x) has the representation

f(t,x)=Et,x[ϕ(X(T))]E[ϕ(X(T))|X(t)=x](11.50)

for all x, 0 ≤ tT.

We note that if ϕ(x) is continuous, then f(T−, x) ≡ limtT f(t, x) = f(T, x) = ϕ(x), i.e., we have continuity of the solution at t = T, for all x.

Proof. Assuming the square integrability condition (11.46), then according to the above discussion we have that the process defined in (11.47) satisfies the martingale property in (11.48). Now, let f satisfy the PDE in (11.49). This implies that the integral in (11.47) vanishes since the integrand function is identically zero, i.e., sf(s,x)+Gs,xf(s,x)=0, for all s > 0 and all values of x. Hence, the process Mf(t) : = f(t, X(t)), 0 ≤ tT, is a martingale. Combining this with the Markov property of the process, and finally substituting the terminal condition for the random variable f(T, X(T)) = ϕ(X(T)), we have

f(t,X(t))=E[f(T,X(T))|t]=Et,X(t)[f(T,X(T))]=Et,X(t)[ϕ(X(T))]

so that f(t, x) = Et, x [ϕ(X(T))] for all x, 0 ≤ tT.

This theorem hence shows that the solution at current time t, given by (11.50), is in the form of a conditional expectation of the random variable ϕ(X(T)), where ϕ is the given boundary value function and X(T) is the random variable corresponding to the endpoint value of the process at future (terminal) time T, where the process solves the SDE in (11.24) subject to it having current time-t value X(t) = x. From our previous discussion surrounding (11.41), we see that this theorem gives us a probabilistic representation of the solution to the parabolic PDE in (11.49) subject to the (terminal time) boundary value function f(T, x) = ϕ(x). Alternatively, the theorem can be used in the opposite sense; that is, it provides a PDE approach for evaluating a conditional expectation of the form in (11.50).

We now consider the simplest example of how this theorem is used in the case where the underlying Itô process is just standard Brownian motion. In particular, we solve the simple heat equation on the real line and thereby obtain a probabilistic representation of the solution as an expectation involving the endpoint value of Brownian motion.

Example 11.10

Solve the boundary value problem:

ft+122fx2=0,

for x ∈ ℝ, 0 ≤ t ≤ T, subject to f(T, x) = ϕ(x) where ϕ is an arbitrary function. Give the explicit solution for ϕ(x) = x2.

Solution. Observe that this PDE is of the same form as in (11.49) with coefficient functions σ(t, x) = 1, µ(t, x) = 0. The corresponding SDE in (11.24) is then

dX(t)=dW(t).

This trivial linear SDE has the solution X(t) = x0 + W(t). As seen below, the end result does not depend on x0 since X(T) − X(t) = W(T) − W(t). Using (11.50) and the fact that W(T) − W(t) and W(t) are independent, the solution takes the equivalent forms

f(t,x)=E[ϕ(X(T))|X(t)=x]=E[ϕ(W(T)W(t)+X(t))|X(t)=x]=E[ϕ(W(T)W(t)+x)]=E[ϕ(W(T))|W(t)=x]=ϕ(y)p(t,T;x,y)dy,fort<T,

where p(t,T;x,y)e(yx)22(Tt)2π(Tt) is the transition PDF of standard Brownian motion. The last line follows immediately from (10.13). Note that, for t = T, the boundary value condition is satisfied where f(T, x) = E[ϕ(W(T) − W(T) + x)] = E[ϕ(x)] = ϕ(x).

For ϕ(x) = x2, we simply have

f(t,x)=E[(W(T)W(t)+x)2]=E[(W(T)W(t))2]+2xE[W(T)W(t)]+x2=(Tt)+x2,

for all 0 ≤ tT. Note that the terminal condition is satisfied, f(T, x) = x2, and this function satisfies the above PDE since ft+122fx2=1+12(2)=0.

We note that the square integrability condition in Theorem 11.7 can be shown to hold in the above example. Moreover, f(t, x) is a C1, 2 function since p(t, T; x, y) is a C1, 2 function in the (t, x) variables for t < T. In the case ϕ(x) = x2, we see that this follows trivially. More generally, assuming an arbitrary ϕ function such that the above y-integral exists, we can verify that the above integral represents a solution to the PDE. Indeed, the corresponding linear differential operator in (11.43) is now Gt,xf122fx2, and reversing the order of differentiation and integration (w.r.t. the y variable) we have, for all t < T:

(t+Gt,x)f(t,x)=ϕ(y)(t+Gt,x)p(t,T;x,y)dy=0,

since (as can be verified explicitly and directly) the above PDF p = p(t, T; x, y) solves the PDE pt+Gt,xp=0. For continuous ϕ(x), the solution is also continuous w.r.t. time t ∈ [0, T] where f(T−, x) = f(T, x) = ϕ(x). This is the case as the transition PDF approaches the Dirac delta function centred at zero, denoted by δ(·), as tT, i.e.,

p(T−,T;x,y) ≡ limt↑T p(t,T;x,y) = limτ↓0 (2πτ)−1/2 e−(y−x)2/2τ = δ(x−y), where τ ≔ T − t.

Here we have used one representation of the Dirac delta function as the limit of an infinitesimally narrow Gaussian PDF. The Dirac delta function is even, δ(x−y) = δ(y−x), and has the defining (sifting) property:

f(T,x)=p(T,T;x,y)ϕ(y)dy=δ(xy)ϕ(y)dy=ϕ(x),

for any function ϕ that is continuous at x. The above delta function terminal condition is a general property of any transition PDF, as shown in Proposition 11.8 below. The above sifting property arises naturally by the Dirac measure defined in (9.29). Viewed as a distribution over ℝ, the only outcome that occurs with probability one is the single point x. The Dirac delta function can then be related to the Dirac (singular) measure δx(y) for a given point x ∈ ℝ, dδx(y) = δ(yx)dy. Formally, the Dirac delta function δ(x) is also related to the Heaviside unit step function H(x), where H(x) equals 1 for x > 0, equals 0 for x < 0 and equals 1/2 at x = 0. In particular, its derivative is the delta function, H'(x) = δ(x). As a function of y, we write the differential dH(yx) = H'(yx) dy = δ(yx) dy. Hence, when considered as a Riemann–Stieltjes integral with integrator H(yx), a function ϕ(y) that is continuous at the point y = x will exhibit the sifting property:

Iϕ(y)δ(yx)dyIϕ(y)dH(yx)=ϕ(x)

if x is in any interval I ⊂ ℝ and the integral is zero if x ∉ I. Note that when I = ℝ the integral equals ϕ(x). The reader will note that exactly the same property is satisfied if we use the unit indicator function I{y≥x} as integrator in the place of H(y−x). The two are equivalent for all y ≠ x and they are used as alternate definitions of the unit step function.

We can now use Theorem 11.7 to obtain the backward Kolmogorov PDE (in the so-called backward-time variables t, x) that is solved by any transition CDF, P(t, T; x, y) ≔ ℙ(X(T) ≤ y | X(t) = x), and hence, its corresponding transition PDF for a diffusion process with SDE in (11.24).

Proposition 11.8.

Assume the square-integrability condition in Theorem 11.7 holds. Then, a transition PDF, p = p(t, T; x, y), for the process {X(t)}t with the generator in (11.43) solves the backward Kolmogorov PDE:

(t+Gt,x)p=0,(11.51)

where limt↑T p(t,T;x,y) ≡ p(T−,T;x,y) = δ(x−y).

Proof. We begin by writing the transition probability function as a conditional expectation:

P(t,T;x,y)=(X(T)y|X(t)=x)=E[I{X(T)y}|X(t)=x].

By the above Feynman–Kac theorem, then P (for fixed T, y) solves the PDE

(t+Gt,x)P(t,T;x,y)=0,

with terminal condition P(T, T; x, y) = I{x≤y} ≡ ϕ(x). Taking partial derivatives w.r.t. y on both sides of the above PDE, and using the fact that the order of the differential operators (∂/∂t + Gt,x) and ∂/∂y can be reversed, gives

(t+Gt,x)yP(t,T;x,y)=0.

This is exactly (11.51) since the transition PDF p(t,T;x,y) ≔ ∂P(t,T;x,y)/∂y. The delta function terminal condition is seen to arise as follows, since the transition CDF approaches the unit step function as t↑T:

dP(T−,T;x,y) = dH(y−x) = δ(y−x)dy = p(T−,T;x,y)dy.

We remark that any function p (or P) that is a solution to the backward Kolmogorov PDE and is a conditional density (or distribution) function of some Markov process, such as a diffusion or Itô process, is a transition PDF (or CDF). In fact, a transition PDF p = p(t, T; x, y) is called a fundamental solution to the Kolmogorov PDE in (11.51) and its defining properties are that: (i) p is nonnegative, jointly continuous in the variables t, T; x, y, twice continuously differentiable in the spatial variables and continuously differentiable in the time variables; (ii) for any bounded Borel function ϕ, the function defined by u(t, x) ≔ ∫ ϕ(y)p(t, T; x, y) dy is bounded and also satisfies the same Kolmogorov PDE; (iii) for continuous ϕ, limt↑T u(t, x) = u(T−, x) = ϕ(x) for all x. Property (iii) is equivalent to the Dirac delta function limit, limt↑T p(t, T; x, y) = p(T−, T; x, y) = δ(x−y). Hence, generally, if given a transition PDF p, the conditional expectation in (11.50), i.e.,

f(t,x)=ϕ(y)p(t,​ T;x,y)dy,(11.52)

solves the backward Kolmogorov PDE in (11.49) with terminal condition f(T, x) = ϕ(x). We showed this specifically for the simple case of Brownian motion in the above example.

Example 11.11

Consider a GBM process {S(t)}t≥0 ∈ ℝ+ with SDE

dS(t)=μS(t)dt+σS(t)dW(t),

where µ, σ > 0 are constants.

  (a) Provide the corresponding backward Kolmogorov PDE and obtain the transition CDF and PDF.
  (b) Solve the PDE

    ft+12σ2x22fx2+μxfx=0,

    for x > 0, t ≤ T, subject to f(T, x) = ϕ(x) where ϕ is an arbitrary function. Give the explicit solution for ϕ(x) = xI{x>a}, with constant a > 0.

Solution.

  (a) The drift and diffusion coefficient functions are time-independent linear functions: µ(t, x) = µx and σ(t, x) = σx. According to (11.43), the generator Gt, x = Gx is the differential operator

    Gx12σ2x22x2+μxx.

    The transition CDF P = P(t, T; x, y) hence solves the PDE in (11.51):

    Pt+12σ2x22Px2+μxPx=0,

    for all t < T, x, y > 0 with P(T, T; x, y) = I{x≤y}. The transition CDF is given by the conditional expectation:

    P(t,T;​ x,y)=(S(T)y|S(t)=x)=E[I{S(T)y}|S(t)=x].

    We have already computed this in the previous chapter by substituting the strong solution for GBM in the form

    S(T)=S(t)e(μ12σ2)(Tt)+σ(W(T)W(t)),

    giving

    P(t,T;x,y)=E[I{ln⁡ (S(T)/y)0}|S(t)=x]=(W(T)W(t)Tt ln⁡ (x/y)+(μ12σ2)(Tt)σTt)=N(ln⁡ (y/x)(μ12σ2)(Tt)σTt).(11.53)

    Here we used the fact that W(T) − W(t) is independent of W(t), and hence independent of S(t), where W(T)−W(t) =d √(T−t) Z, Z ~ Norm(0, 1). Differentiating the above CDF with respect to y gives the known lognormal density (see (10.27) with the drift replacement μ → μ − σ2/2):

    p(t,T;x,y) = (1/(yσ√(2π(T−t)))) exp(−[ln(y/x) − (μ − σ2/2)(T−t)]2/(2σ2(T−t))),(11.54)

    for all x, y > 0, t < T, and zero otherwise. The reader can verify that the transition CDF in (11.53) has limit P(T−, T; x, y) = H(yx).

  (b) The solution f(t, x) can be obtained by the Feynman–Kac Theorem 11.7 or, alternatively, directly from (11.52). Let's solve for f(t, x) using both equivalent approaches. In the first approach, we use the above strong solution to the SDE. By (11.50), and the independence of W(T) − W(t) and S(t), we have

    f(t,x)=E[ϕ(S(T))|S(t)=x]=E[ϕ(S(t)e(μ12σ2)(Tt)+σ(W(T)W(t)))|S(t)=x]=E[ϕ(xe(μ12σ2)(Tt)+σ(W(T)W(t)))]=E[ϕ(xe(μ12σ2)(Tt)+σTtZ)]=ϕ(xe(μ12σ2)(Tt)+σTtz)n(z)dz.

    This integral, assuming it exists, represents the solution to the PDE for an arbitrary function ϕ. In particular, for ϕ(y) = yI{y>a} = yI{ln(y/a)>0} we have

    f(t,x) = xe(μ−σ2/2)(T−t) E[eσ√(T−t)Z I{ln(x/a)+(μ−σ2/2)(T−t)+σ√(T−t)Z > 0}] = xe(μ−σ2/2)(T−t) E[eσ√(T−t)Z I{Z>−A}]

    with constant A ≔ [ln(x/a) + (μ − σ2/2)(T−t)]/(σ√(T−t)), for all t < T. For t = T, we simply have f(T, x) = xI{x>a}. This expectation is evaluated (see identity (A.1)) using E[eBZ I{Z>−A}] = eB2/2 N(B + A), with constant B ≔ σ√(T−t), giving

    f(t,x)=xe(μ12σ2)(Tt)e12σ2(Tt)N(σTt+ln⁡ (x/a)+(μ12σ2)(Tt)σTt)=xeμ(Tt)N(ln⁡ (x/a)+(μ+12σ2)(Tt)σTt).

    The reader can check that this expression solves the above PDE by computing the partial derivatives ft,2fx2,fx. Moreover, in the limit tT (defining τ = Tt):

    f(T−,x) = limτ↓0 xeμτ N([ln(x/a) + (μ − σ2/2)τ]/(σ√τ)) = limτ↓0 xN(ln(x/a)/(σ√τ)) = xH(x−a).

    [Note that this equals f(T, x) = ϕ(x) = xI{x>a} for all x, except at the point of discontinuity x = a of ϕ(x), i.e., ϕ(a) = 0 and f(T−, a) = aH(0) = a/2.]

    In the second approach we use (11.52) and insert the above transition PDF to obtain

    f(t,x) = ∫0∞ ϕ(y) p(t,T;x,y) dy = (1/(σ√(2π(T−t)))) ∫0∞ ϕ(y) e−[(ln(y/x)−(μ−σ2/2)(T−t))/(σ√(T−t))]2/2 dy/y (let z = [ln(y/x) − (μ−σ2/2)(T−t)]/(σ√(T−t)), y = xe(μ−σ2/2)(T−t)+σ√(T−t)z, dy/y = σ√(T−t) dz) = ∫−∞∞ ϕ(xe(μ−σ2/2)(T−t)+σ√(T−t)z) e−z2/2/√(2π) dz.

    As required, this produces exactly the same solution as we have above by the first approach. Of course, we should not be surprised by this fact since, by definition, p(t, T; x, y) is the conditional density of S(T) at y, given S(t) = x, and hence f(t,x)=E[ϕ(S(T))|S(t)=x]=0ϕ(y)p(t,T;x,y)dy.
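For a numerical illustration of part (b) (a sketch only, with arbitrary parameter values, assuming NumPy and SciPy are available), one can compare the closed-form expression for f(t, x) with a direct Monte Carlo average of ϕ(S(T)):

```python
import numpy as np
from scipy.stats import norm

# Illustrative check of the closed-form solution in part (b): compare
# f(t,x) = x e^{mu*tau} N([ln(x/a) + (mu + sigma^2/2)tau]/(sigma sqrt(tau)))
# with a direct Monte Carlo average of phi(S(T)) = S(T) I{S(T) > a}.
mu, sigma, x, a, tau = 0.08, 0.25, 1.0, 1.1, 0.5     # arbitrary parameters
d = (np.log(x / a) + (mu + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
closed_form = x * np.exp(mu * tau) * norm.cdf(d)

rng = np.random.default_rng(3)
Z = rng.standard_normal(2_000_000)
ST = x * np.exp((mu - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)
print(closed_form, (ST * (ST > a)).mean())   # agree to Monte Carlo error
```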

In the next example we obtain the transition CDF/PDF for the GBM process with time-dependent drift and diffusion coefficients.

Example 11.12

Consider a GBM process {S(t)}t≥0 ∈ ℝ+ with SDE

dS(t)=μ(t)S(t)dt+σ(t)S(t)dW(t),(11.55)

where µ(t), σ(t) > 0 are continuous (ordinary) functions of time t ≥ 0. State the corresponding backward Kolmogorov PDE and obtain the transition CDF and PDF.

Solution. We have a linear SDE with coefficient functions µ(t, x) = µ(t)x and σ(t, x) = σ(t)x. The corresponding generator is the differential operator

Gt,x12σ2(t)x22x2+μ(t)xx.

The transition CDF P = P(t, T; x, y) solves the PDE in (11.51):

Pt+12σ2(t)x22Px2+μ(t)xPx=0,

for all t < T, x, y > 0 with P(T, T; x, y) = I{x≤y}. By the Feynman–Kac Theorem 11.7, the transition CDF is obtained by evaluating the conditional expectation

P(t,T;x,y)=(S(T)y|S(t)=x)=(X(T)ln⁡ y|X(t)=ln⁡ x),

where we define the process X(t) ≔ ln S(t), t ≥ 0. The SDE (11.55) has unique strong solution given by (11.31) with α(t) ≡ γ(t) ≡ 0, β(t) ≡ µ(t), δ(t) = σ(t), i.e.,

S(t)=S(0)e0tμ(s)ds120tσ2(s)ds+0tσ(s)dW(s),

hence

S(T)=S(t)etTμ(s)ds12tTσ2(s)ds+tTσ(s)dW(s)

and

X(T)=X(t)+tTμ(s)ds12tTσ2(s)ds+tTσ(s)dW(s).

It is convenient to define the time-averaged drift and volatility functions:

μ¯(t,T) ≔ (1/(T−t))∫tT μ(s)ds, σ¯2(t,T) ≔ (1/(T−t))∫tT σ2(s)ds.(11.56)

Since tTσ(s)dW(s)d¯¯W(σ¯2(t,T)(Tt))d¯¯ σ¯(t,T)TtZ,Z~ Norm(0, 1),

X(T)=dX(t)+[μ¯(t,T)12σ¯2(t,T)](Tt)+σ¯(t,T)TtZ,

where X(T) − X(t), and hence Z, is independent of X(t). Combining these facts into the above gives:

P(t,T;x,y)=(X(T)X(t)ln⁡ yX(t)|X(t)=ln⁡ x)=(X(T)X(t)ln⁡ (y/x))=([μ¯(t,T)12σ¯2(t,T)](Tt)+σ¯(t,T)TtZln⁡ (y/x))=N(ln⁡ (y/x)[μ¯(t,T)12σ¯2(t,T)](Tt)σ¯(t,T)Tt).(11.57)

We note that this is the form of the transition CDF for standard GBM in Example 11.11, wherein the drift and volatility coefficients in (11.53) are now replaced by the time-averaged ones: μ→μ¯(t,T) and σ→σ¯(t,T). Observe, however, that the CDF in (11.57) is not a function of only T−t, i.e., the GBM process with time-dependent coefficients is an example of a time-inhomogeneous process (i.e., not time-homogeneous as in Example 11.11). Differentiating (11.57) with respect to y gives the lognormal density (analogous to (11.54)):

p(t,T;x,y)=1yσ¯2π(Tt)exp⁡ ([ln⁡ (y/x)(μ¯12σ¯2)(Tt)]22σ¯2(Tt)),(11.58)

for all x, y > 0, t < T, and zero otherwise, where μ¯μ¯(t,T),σ¯σ¯(t,T).
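In practice the time averages in (11.56) can be computed by numerical quadrature for any given μ(t) and σ(t). The following sketch (with arbitrarily assumed coefficient functions, using NumPy and SciPy) evaluates the transition CDF (11.57) this way:

```python
import numpy as np
from scipy.stats import norm

# Illustrative evaluation of the transition CDF (11.57): compute the time
# averages (11.56) by quadrature and plug them into the standard-GBM form.
mu_fn  = lambda s: 0.05 + 0.02 * s          # arbitrary assumed mu(t)
sig_fn = lambda s: 0.20 + 0.10 * np.sin(s)  # arbitrary assumed sigma(t)

def transition_cdf(t, T, x, y, n=2001):
    s = np.linspace(t, T, n)
    mu_bar = np.trapz(mu_fn(s), s) / (T - t)
    sig2_bar = np.trapz(sig_fn(s) ** 2, s) / (T - t)   # sigma_bar squared
    d = (np.log(y / x) - (mu_bar - 0.5 * sig2_bar) * (T - t)) / (
        np.sqrt(sig2_bar * (T - t)))
    return norm.cdf(d)

print(transition_cdf(0.5, 2.0, 1.0, 1.2))
```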

Example 11.13

Consider the process {X(t)}t≥0 ∈ ℝ in Example 11.8, i.e.,

dX(t)=(αβX(t))dt+σdW(t).(11.59)

For α = 0, this process is specifically called the Ornstein–Uhlenbeck process or OU process for short. Derive the corresponding transition CDF and PDF.

Solution. From the analysis in Example 11.8, we have the strong solution for X(T) in terms of X(t) = x, which we can write in equivalent forms using time-changed BM:

X(T)=eβ(Tt)x+αβ(1eβ(Tt))+σtTeβ(Ts)dW(s)=deβ(Tt)x+αβ(1eβ(Tt))+σW(1e2β(Tt)2β)=deβ(Tt)x+αβ(1eβ(Tt))+σeβ(Tt)W(e2β(Tt)12β)=deβ(Tt)x+αβ(1eβ(Tt))+σ1e2β(Tt)2βZ.

The last line displays X(T) as a normal random variable, where Z ~ Norm(0, 1) and independent of X(t). Hence, the transition CDF is a normal CDF:

P(t,T;x,y)=(X(T)y|X(t)=x)=N(y[eβ(Tt)x+αβ(1eβ(Tt))]σ(1e2β(Tt))/2β),(11.60)

and the transition PDF is the Gaussian function

p(t,T;x,y) = (1/σ)√(2β/(1−e−2β(T−t))) n((y − [e−β(T−t)x + (α/β)(1−e−β(T−t))])/(σ√((1−e−2β(T−t))/2β))),(11.61)

for all x, y ∈ ℝ, t < T.
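One practical consequence of (11.60) is that the process can be simulated exactly over any time step. The sketch below (arbitrary parameters; NumPy assumed) samples X(T) given X(t) = x in one step, using the normal mean and standard deviation read off the strong solution:

```python
import numpy as np

# Illustrative exact sampling from the transition law (11.60)-(11.61):
# given X(t) = x, X(T) is normal, so it can be sampled in one step with
# no time-discretization error.
alpha, beta, sigma = 0.5, 1.2, 0.3   # arbitrary parameters

def sample_XT(x, tau, n, rng):
    mean = np.exp(-beta * tau) * x + (alpha / beta) * (1 - np.exp(-beta * tau))
    std = sigma * np.sqrt((1 - np.exp(-2 * beta * tau)) / (2 * beta))
    return mean + std * rng.standard_normal(n)

rng = np.random.default_rng(4)
XT = sample_XT(x=1.0, tau=2.0, n=100_000, rng=rng)
print(XT.mean(), XT.var())   # match the normal mean and variance above
```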

Note that a transition PDF p (or CDF P) solving a given Kolmogorov PDE as in (11.51), subject to p(T−, T; x, y) = δ(x−y), is in general not necessarily unique. This is the case even if we require p (or P) to be a PDF (or CDF). If a diffusion has one or both of its endpoints (left or right endpoint) as a regular boundary, then the behaviour of the process at the endpoint can be specified differently. An example of this is the specification of a regular reflecting boundary versus a regular killing (absorbing) boundary, as in the case of Brownian motion (BM) that is either reflected or killed at an upper or lower finite boundary point. The known transition PDFs for the two respective cases are of course different, yet both solve the same Kolmogorov PDE for Brownian motion and have limit p(T−, T; x, y) = δ(x−y). The key point is that the Kolmogorov PDE and terminal time condition make no mention of the boundary conditions imposed on the solution as a function of the spatial variable x. To obtain a unique fundamental solution that corresponds to a transition PDF (assuming of course that such a solution exists) one generally needs to also specify the spatial boundary conditions at both endpoints of the process. In some cases, such as for BM or drifted BM on ℝ, both endpoints ±∞ of the process are natural boundaries (not regular) and there is then a unique transition PDF on ℝ, i.e., the Gaussian PDF we have already derived. Similarly, for GBM the two endpoints of the state space (0, ∞) are natural boundaries and hence the process has a unique transition PDF on ℝ+, i.e., the known lognormal PDF.

The following result extends Theorem 11.7 and, as we shall see in later chapters, is used for pricing (single-asset) financial derivatives via a PDE based approach.

Theorem 11.9

(“Discounted” Feynman–Kac). Fix T > 0 and let {X(t)}t≥0 satisfy the SDE in (11.24). Let the same assumptions stated in Theorem 11.7 hold and assume r(t,x): [0, T] × ℝ → ℝ is a lower-bounded continuous function. Then, the function defined by the conditional expectation

f(t,x) ≔ Et,x[e−∫tT r(u,X(u))du ϕ(X(T))] ≡ E[e−∫tT r(u,X(u))du ϕ(X(T)) | X(t)=x](11.62)

solves the PDE ∂f/∂t + Gt,xf − r(t,x)f = 0, i.e.,

ft(t,x)+12σ2(t,x)2fx2(t,x)+μ(t,x)fx(t,x)r(t,x)f(t,x)=0,(11.63)

for all x, 0 < t < T, subject to the terminal condition f(T,x) = ϕ(x).

Proof. This result follows by first rewriting the exponential factor as

etTr(u,X(u))du=e0Tr(u,X(u))due0tr(u,X(u))du.

The process defined by gte0tr(u,X(u))duf(t,X(t)) is a martingale since gt = E[gT | Ft], where gT=e0Tr(u,X(u))duf(T,X(T))=e0Tr(u,X(u))duϕ(X(T)):

gt=e0tr(u,X(u))duf(t,X(t))=Et,X(t)[e0Tr(u,X(u))duϕ(X(T))]=E[e0Tr(u,X(u))duϕ(X(T))|t].

Note that gT is FT-measurable and assumed integrable. The last step consists of computing the stochastic differential of gt via the Itô product formula. To do so, define I(t)0tr(u,X(u)) du giving dI(t) = r(t, X(t)) dt, (dI(t))2 ≡ 0 and

d[e0tr(u,X(u))du]=deI(t)=eI(t)dI(t)=e0tr(u,X(u))dur(t,X(t))dt.

Hence, using this and (11.44) within the Itô product formula gives

dgt = e−I(t)df(t,X(t)) + f(t,X(t))de−I(t) + de−I(t)df(t,X(t)) = e−I(t)df(t,X(t)) + f(t,X(t))de−I(t) = e−I(t)[df(t,X(t)) − f(t,X(t))dI(t)] = e−I(t)[(∂tf(t,X(t)) + Gt,xf(t,X(t)) − r(t,X(t))f(t,X(t)))dt + σ(t,X(t))∂xf(t,X(t))dW(t)].

By the martingale condition, the drift coefficient (i.e., the expression multiplying dt) must vanish for all values X(t) = x and time t; namely, (t+Gt,xr(t,x))f(t,x) = 0. This is precisely the PDE in (11.63). Finally, the terminal condition follows trivially from (11.62) for t=T:f(T,x)=E[eTTr(u,X(u))duϕ(X(T))|X(T)=x]=E[ϕ(X(T))|X(T)=x]=ϕ(x).

An important special case is when r(t,x) = r is a constant. Then, the function defined by the conditional expectation, f(t,x):=er(Tt)Et,x[ϕ(X(T))], solves

ft(t,x)+12σ2(t,x)2fx2(t,x)+μ(t,x)fx(t,x)rf(t,x)=0,(11.64)

with terminal condition f(T,x) = ϕ(x).
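As a minimal illustration of the constant-r case (a hedged sketch, not a development of the text: the payoff-style function ϕ and all parameters are arbitrary, and X is taken to be GBM with drift coefficient μ(t, x) = rx so that (11.64) is of Black–Scholes type), the discounted expectation can be estimated by Monte Carlo:

```python
import numpy as np

# Illustrative Monte Carlo estimate of the constant-r representation
# f(t,x) = e^{-r(T-t)} E_{t,x}[phi(X(T))], with X a GBM with drift r.
r, sigma, x, K, tau = 0.03, 0.20, 100.0, 105.0, 1.0   # arbitrary choices
rng = np.random.default_rng(5)
Z = rng.standard_normal(1_000_000)
XT = x * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)
f = np.exp(-r * tau) * np.maximum(XT - K, 0.0).mean()
print(f)   # estimate of the solution of (11.64) at the point (t, x)
```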

11.7.1 Forward Kolmogorov PDE

Proposition 11.8 states that a transition PDF solves the Kolmogorov PDE (11.51) in the backward variables (t,x). We can also define the differential operator G˜G˜T,y acting on the forward variables (T,y):

G˜f(T,y)122y2(σ2(T,y)f(T,y))y(μ(T,y)f(T,y)).(11.65)

This is also referred to as the differential adjoint to the generator G. It can be shown that under fairly general conditions the transition PDF p = p(t,T; x,y) (considered as a function of T, y, for any fixed t, x) satisfies the so-called forward Kolmogorov or Fokker–Planck PDE

pT=G˜p,(11.66)

with limT↓t p(t,T;x,y) = δ(y − x). The name forward derives from the fact that y refers to the value of the process at future time T > t.

The formal proof of (11.66), and under what conditions it holds true, requires a rather technical discussion that is beyond our scope. However, it is instructive to see how (11.66) arises from the backward PDE. Let the interval ℐ denote the state space of process X. For example, ℐ = ℝ for standard BM, ℐ = ℝ+ for GBM, ℐ = (L, ∞) for GBM killed at a lower level L > 0, etc. In our heuristic justification of (11.66) we shall now make use of the Chapman–Kolmogorov relation:

p(t,T;x,y) = ∫ℐ p(t,t′;x,x′) p(t′,T;x′,y) dx′(11.67)

for any t < t′ < T and x, y ∈ ℐ. Equation (11.67) is an important general property that follows from the Markov property. To derive this relation, consider the joint PDF of the triplet (X(T), X(t′), X(t)); applying conditioning gives

fX(T),X(t′),X(t)(y,x′,x) = fX(t)(x) fX(t′)|X(t)(x′|x) fX(T)|X(t′)(y|x′)

where fX(T)|X(t′),X(t)(y|x′,x) = fX(T)|X(t′)(y|x′) by the Markov property. Dividing both sides by the PDF of X(t), fX(t)(x), and using the definition of the transition PDF in (10.16), gives the joint PDF of the pair X(T), X(t′) conditional on X(t) = x:

fX(T),X(t′)|X(t)(y,x′|x) ≔ fX(T),X(t′),X(t)(y,x′,x)/fX(t)(x) = p(t,t′;x,x′) p(t′,T;x′,y).

Integrating out the x' variable gives the PDF of X(T) conditional on X(t) = x:

fX(T)|X(t)(y|x) = ∫ℐ fX(T),X(t′)|X(t)(y,x′|x) dx′ = ∫ℐ p(t,t′;x,x′) p(t′,T;x′,y) dx′.

By definition, fX(T)|X(t)(y|x) = p(t,T;x,y) and therefore we obtain (11.67).

To arrive at (11.66) we begin by differentiating both sides of (11.67) w.r.t. t′ and note that ∂p(t,T;x,y)/∂t′ ≡ 0, giving

∫ℐ [p(t′,T;x′,y) ∂p(t,t′;x,x′)/∂t′ + p(t,t′;x,x′) ∂p(t′,T;x′,y)/∂t′] dx′ ≡ 0.(11.68)

We leave the first integral term as is, but re-express the second part of the integral by using the backward PDE, ∂p(t′,T;x′,y)/∂t′ = −Gt′,x′p(t′,T;x′,y), to obtain

∫ℐ p(t,t′;x,x′) ∂p(t′,T;x′,y)/∂t′ dx′ = −∫ℐ p(t,t′;x,x′) Gt′,x′p(t′,T;x′,y) dx′.

The next step consists of using the differential operator Gt′,x′, applying integration by parts on the above right-hand integral and assuming that contributions from the boundaries of ℐ vanish (see Exercise 11.35) to obtain

∫ℐ p(t,t′;x,x′) Gt′,x′p(t′,T;x′,y) dx′ = ∫ℐ p(t′,T;x′,y) G˜t′,x′p(t,t′;x,x′) dx′.(11.69)

This shows that G˜ indeed acts as the corresponding adjoint operator to G. Using this relation in the second term in the integrand of (11.68) gives

∫ℐ p(t′,T;x′,y) [∂p(t,t′;x,x′)/∂t′ − G˜t′,x′p(t,t′;x,x′)] dx′ ≡ 0.(11.70)

Since this integral is identically zero for arbitrary given values t′ > t and x ∈ ℐ, then (assuming a large enough family of positive transition PDFs p(t′,T;x′,y) as functions of x′) the integrand must be zero for all x′ ∈ ℐ. This implies that the term in brackets in the integrand must equal zero, i.e., for fixed backward variables t, x we have the forward Kolmogorov PDE, ∂p/∂t′ = G˜t′,x′p, in the forward variables t′, x′ for an arbitrary transition PDF p = p(t,t′;x,x′).

11.7.2 Transition CDF/PDF for Time-Homogeneous Diffusions

In many applications, including derivative pricing, the stochastic process is assumed to be time-homogeneous. We recall the definition of a time-homogeneous process from the previous chapter, i.e., the relation in (10.17). For a time-homogeneous diffusion process, this means that the drift and diffusion coefficient functions are only functions of the "spatial variable" and are not functions of time t: μ(x,t) = μ(x) and σ(x,t) = σ(x). The generator Gt,xGx for such a process is then of the form

Gx12σ2(x)2x2+μ(x)x.(11.71)

Since the transition PDF (or CDF) satisfies a time-homogeneous PDE, it is then a function of the time difference τ ≡ T − t, i.e., we write it as p(τ; x, y) and the transition CDF as P(τ; x, y). This time dependence on τ = T − t can also be realized from the conditional expectation definition of the transition CDF. Indeed, the defining relation in (10.17) implies

P(t,T;x,y)(X(T)y|X(t)=x)=(X(t+τ)y|X(t)=x)=(X(τ)y|X(0)=x)=P(0,τ;x,y)P(τ;x,y),

and p(τ;x,y) = ∂P(τ;x,y)/∂y. Writing p(t,T;x,y) = p(τ; x,y) and using the fact that ∂τ/∂T = 1 and ∂τ/∂t = −1 gives

∂p(t,T;x,y)/∂T = ∂p(τ;x,y)/∂τ and ∂p(t,T;x,y)/∂t = −∂p(τ;x,y)/∂τ.

The backward and forward Kolmogorov PDEs are then given by

pτ=12σ2(x)2px2+μ(x)px(backward)(11.72)

pτ=122y2(σ2(y)p)y(μ(y)p)(forward)(11.73)

for a transition PDF p = p(τ;x,y) and the same PDEs for the corresponding CDF P(τ;x,y). The previous terminal condition is now an initial condition where

limτ↓0 p(τ;x,y) ≡ p(0+;x,y) = δ(x−y) and P(0+;x,y) = I{x≤y}.(11.74)

Note that, by time homogeneity, the conditional expectation in (11.50) in the above Feynman–Kac Theorem gives

E[ϕ(X(T))|X(t)=x]=E[ϕ(X(t+τ))|X(t)=x]=E[ϕ(X(τ))|X(0)=x]f(τ,x).

That is, (11.52) now reads

f(τ,x)=ϕ(y)p(τ;x,y)dy,(11.75)

where ƒ solves the backward Kolmogorov PDE

fτ=12σ2(x)2fx2+μ(x)fx(11.76)

with initial condition ƒ(0,x) = ϕ(x). Note: ƒ(0+,x) ≡ ƒ(0,x) for continuous ϕ(x).

Assuming a constant discount function r(t,x) = r, we observe that the discounted expectation is also a function of variables τ, x, i.e., we have the function

ν(τ,x) = e−r(T−t)Et,x[ϕ(X(T))] = e−rτf(τ,x) = e−rτ∫ϕ(y)p(τ;x,y)dy

satisfying the PDE

∂ν/∂τ = 12σ2(x)∂2ν/∂x2 + μ(x)∂ν/∂x − rν(11.77)

with initial condition ν(0,x) = ϕ(x). This is the time-homogeneous version of (11.64).

We have already seen several specific examples of time-homogeneous processes such as standard BM, GBM in Example 11.11, and the OU process in Example 11.13. In Example 11.11, the GBM process is time homogeneous with coefficient functions μ(x) = μx and σ(x) = σx, and having respective transition CDF and PDF:

P(τ;x,y)=N(ln⁡ (y/x)(μ12σ2)τστ)(11.78)

and

p(τ;x,y) = (1/(yσ√(2πτ))) exp(−[ln(y/x) − (μ − σ2/2)τ]2/(2σ2τ)),(11.79)

x, y > 0, τ > 0. The reader can verify by direct differentiation that both functions satisfy (11.72) and (11.73) with the appropriate initial condition in (11.74). This is also the case for the time-homogeneous OU process, where setting τ = T − t in (11.60) and (11.61) gives the transition CDF and PDF that satisfy the above time-homogeneous Kolmogorov PDEs with μ(x) = α − βx, σ(x) = σ. In contrast, for nonconstant μ(t) and (or) nonconstant σ(t), the GBM process in Example 11.12 is time-inhomogeneous, i.e., the transition functions in (11.57) and (11.58) cannot be written as functions of only τ = T − t in the time variables, but rather depend on both t and T, separately, via the time-averaged quantities in (11.56).
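The verification "by direct differentiation" suggested above can also be carried out numerically. The following sketch (arbitrary parameters and evaluation point; NumPy assumed) approximates the partial derivatives of the lognormal density (11.79) by central finite differences and confirms that the backward PDE (11.72) residual is numerically zero:

```python
import numpy as np

# Illustrative finite-difference verification that the lognormal density
# (11.79) satisfies the backward PDE (11.72) for time-homogeneous GBM.
mu, sigma = 0.05, 0.20   # arbitrary parameters

def p(tau, x, y):
    d = np.log(y / x) - (mu - 0.5 * sigma**2) * tau
    return np.exp(-d**2 / (2 * sigma**2 * tau)) / (y * sigma * np.sqrt(2 * np.pi * tau))

tau, x, y, h = 0.5, 1.0, 1.2, 1e-4   # arbitrary evaluation point, step h
p_tau = (p(tau + h, x, y) - p(tau - h, x, y)) / (2 * h)
p_x   = (p(tau, x + h, y) - p(tau, x - h, y)) / (2 * h)
p_xx  = (p(tau, x + h, y) - 2 * p(tau, x, y) + p(tau, x - h, y)) / h**2
print(p_tau - (0.5 * sigma**2 * x**2 * p_xx + mu * x * p_x))  # ~ 0
```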

11.8 Radon–Nikodym Derivative Process and Girsanov's Theorem

Our main goal in this section is to use and build upon the basic tools and ideas developed in Section 9.5 of Chapter 9 in order to understand how to construct a certain type of probability measure change which introduces a drift in the BM. In particular, we are interested in a measure change, say ^, whereby we begin with W{W(t)}t≥0 as a standard BM under measure ℙ and then define a new process, which we denote by W^{W^(t)}t0, such that W^ is a standard BM under the new measure ^. We will see that the measure change from ^ is constructed by using a positive random variable that is an exponential ℙ-martingale and that there is a precise relationship between the two Brownian motions W and W^ which will differ only by a drift component. This is the essence of Girsanov's Theorem, whose statement and proof are given later. The change of measure has many useful applications and will also allow us to compute conditional expectations of processes or random variables that are functionals of Brownian motion under two different probability measures ℙ and ^. These two measures will be equivalent in the sense that (as we recall from our previous discussion on equivalent probability measures) all events having zero probability under one measure also have zero probability under the other measure.

Let us begin by fixing a filtered probability space (Ω, ℱ, ℙ, 𝔽), where 𝔽 = {ℱt}t≥0 is any filtration for standard Brownian motion, and recall Definition 10.1 of a (ℙ,𝔽)-BM. This is shorthand for a standard Brownian motion W ≔ {W(t)}t≥0 w.r.t. a given filtration 𝔽 and a measure ℙ. That is, W has (a.s.) continuous paths started at W(0) = 0 and, under the given measure ℙ, it has normally distributed increments W(t) − W(s) ~ Norm(0, t−s) that are independent of ℱs, for all 0 ≤ s < t. From these original defining properties we then showed that W has quadratic variation [W,W](t) = t and that it is a (ℙ,𝔽)-martingale (shorthand for a martingale w.r.t. filtration 𝔽 and measure ℙ). Now, let's assume that we have a process that we know is a continuous martingale started at zero and with the same quadratic variation formula as standard Brownian motion. The question is whether or not this process is a standard Brownian motion. It turns out that the answer is yes, as stated in the following theorem, originally due to Lévy. In what follows, this will give us a useful way to recognize when a martingale process is in fact a standard Brownian motion. The characterization makes no assumption of the normality and independence of increments! Rather, these properties are implied. Besides the martingale property, the continuity of all paths, and the fact that the paths must start at zero, the recognition that we have a BM follows from the assumed quadratic variation formula.

[Technical Remark: We note that the proof of Theorem 11.10 below makes use of a general version of the Itô formula in (11.18). Although we do not prove it, it turns out that we have the same Itô formula as in (11.18) if W is replaced by a continuous martingale process M{M(t)}t≥0, that starts at zero and has quadratic variation [M,M](t) = t, i.e., dM(t)dM(t) = d[M,M](t) = dt:

df(t,M(t))=(ft(t,M(t))+12fxx(t,M(t)))dt+fx(t,M(t))dM(t).(11.80)

Essentially one can think of this as the Itô formula in (11.20) where X ≡ M with zero drift μ ≡ 0 and unit diffusion function σ ≡ 1. The integral form of (11.80) is (see (11.19))

f(t,M(t))=f(0,M(0))+0t(fu(u,M(u))+12fxx(u,M(u)))du+0tfx(u,M(u))dM(u).(11.81)

Assuming the usual square integrability condition as we did for any Itô integral w.r.t. BM, the above stochastic integral w.r.t. the increment dM(u) is defined in a similar fashion and is a martingale having zero expected value. Note that if ƒ(t,x) is a C1,2 function satisfying the PDE ft(t,x)+12fxx(t,x)=0, then the process defined by Y(t)ƒ(t,M(t)) is a martingale.]

Theorem 11.10

(Lévy’s characterization of standard BM). Let the process {M(t)}t≥0 be a continuous (ℙ,𝔽)-martingale started at M(0) = 0 (a.s.) and with quadratic variation [M,M](t) = t for all t ≥ 0. Then, {M(t)}t≥0 is a standard (ℙ,𝔽)-BM.

Proof. Since we have already assumed that M has continuous paths all starting at zero, from the definition of a (ℙ,𝔽)-BM we have left to show that M(t) − M(s) ~ Norm(0, t − s) and that these increments are independent of ℱs for all 0 ≤ s < t. For this purpose, consider the function f(t,x) = e−12θ2t+θx with arbitrary real parameter θ. Since ƒ(t,x) satisfies the PDE ft(t,x)+12fxx(t,x)=0, from the above discussion we have that the process f(t,M(t)) = e−12θ2t+θM(t), t ≥ 0, is a martingale. In fact, we recognize this as an example of an exponential martingale. Taking the expectation of the process at time t, conditional on ℱs, s ≤ t, and using the martingale property gives the conditional moment-generating function (m.g.f.) of M(t) − M(s) as a function of θ:

E[e−12θ2t+θM(t)|ℱs] = e−12θ2s+θM(s) ⟹ E[eθ(M(t)−M(s))|ℱs] = e12θ2(t−s).

This is equivalent to the m.g.f. of M(t)M(s), which is the m.g.f. of a Norm(0,ts) random variable (by the tower property):

E[eθ(M(t)M(s))]=E[E[eθ(M(t)M(s))|s]]=e12θ2(ts).

Hence, as a function of θ, the conditional m.g.f. is nonrandom and coincides with the unconditional m.g.f., both being that of a Norm(0, t−s) random variable. That is, M(t) − M(s) ~ Norm(0, t−s) and, since the conditional m.g.f. does not depend on ℱs, the increment M(t) − M(s) is independent of ℱs, for all s ≤ t.

[Remark: In what follows, we will only distinguish between different probability measures while fixing a filtration 𝔽 for BM. Hence, we shall also write ℙ-BM to mean standard (ℙ,𝔽)-BM, i.e., standard BM w.r.t. filtration 𝔽 and under measure ℙ. Equivalently, we shall also say that W is a BM under the measure ℙ. We shall also sometimes loosely say BM (or Brownian motion) where we clearly really mean standard BM. Also, we simply say ℙ-martingale to mean a (ℙ,𝔽)-martingale and ℙ^-martingale to mean a (ℙ^,𝔽)-martingale when 𝔽 is fixed.]

In what follows we let the probability measure ℙ^ ≡ ℙ^(ϱ) be defined by (9.109) of Section 9.5, i.e., ℙ^(A) ≔ ∫Aϱ(ω)dℙ(ω), A ∈ ℱ, with Radon–Nikodym random variable ϱ ≔ dℙ^dℙ assumed positive (almost surely) with unit expectation under measure ℙ, E[ϱ] = 1. We recall how ϱ is used in (9.110) and (9.111) for computing the expectation of any integrable random variable under measures ℙ^ and ℙ, respectively. Shortly we shall explicitly specify this random variable and, in fact, its precise specification is a key ingredient in Girsanov’s Theorem. However, for the moment we can keep our assumptions on ϱ as is (which are as general as possible). In preparation for our main result, we will need to define and discuss some basic properties of a so-called Radon–Nikodym derivative process of ℙ^ w.r.t. ℙ. In a previous chapter we defined a similar process within the discrete-time setting of financial models. In continuous time we shall fix some terminal time T > 0 and define the Radon–Nikodym derivative process {ϱt}0≤t≤T (of measure ℙ^ ≡ ℙ^(ϱ) w.r.t. measure ℙ for a given filtration 𝔽) by

ϱtE[ϱ |t],0tT.(11.82)

We remark that it is customary to also use the following more explicit equivalent notations for the random variable ϱt:

ϱt ≡ (dℙ^dℙ)t or (dℙ^(ϱ)dℙ)t.

Hence (11.82) is also written as (d^d)tE[d^d|t]. These notations really spell out the definition in (11.82) and also visually remind us of the "direction of the measure change," e.g., ^. In what follows we shall try to keep our notation less cumbersome as long as there is no ambiguity.

Clearly ϱt is ℱt-measurable and integrable, E[|ϱt|] ≤ E[|ϱ|] = E[ϱ] = 1 < ∞, for all t ∈ [0,T]. By the tower property and the definition in (11.82), we immediately see that the process {ϱt}0≤t≤T is a ℙ-martingale (recall the Doob–Lévy martingale):

E[ϱt|s]=E[E[ϱ|t]|s]=E[ϱ| s]=ϱs,0stT.

By definition, the process also starts with unit value: ϱ0 = E[ϱ| ℱ0] ≡ E[ϱ] = 1. Hence, by the martingale property, the process has unit expectation, E[ϱt] = ϱ0 = 1, for all t[0,T].

The next proposition gives a useful formula for computing the ^-measure expectation of an ℱt-measurable random variable X, conditional on information up to a time s prior to time t, as a ℙ-measure conditional expectation of X · (ϱt/ϱs). The ratio ϱt/ϱs of the Radon–Nikodym derivative process at times s and t adjusts for the change of measure in the conditional expectation.

Proposition 11.11.

Let ^ be defined by ^(A)Aϱ(ω)d(ω),A, with process ϱt ≔ E[ϱ| ℱt], 0 ≤ tT. Assume the random variable X is integrable w.r.t. ^ and ℱt-measurable for a given time t[0,T]. Then, for all 0 ≤ st,

E^[X|ℱs] = (1/ϱs)E[ϱtX|ℱs].(11.83)

Proof. This result follows as a simple application of Theorem 9.7 where we set 𝒢 ≔ ℱs (note that s ≤ t ≤ T implies ℱs ⊆ ℱt ⊆ ℱ). Then, upon using the definition in (11.82) for time s, the formula in (9.113) gives

E^[X|ℱs] = E[ϱX|ℱs]/E[ϱ|ℱs] = (1/ϱs)E[ϱX|ℱs].

The last expectation on the right is now recast by reversing the tower property, by conditioning on ℱt, and using the fact that X is Ft-measurable (so it is pulled out of the inner expectation conditional on ℱt below):

E[ϱX|s]=E[E[ϱ​ X|t]|s]=E[XE[ϱ|t]|s]=E[Xϱt|s].

In the last step we used the definition E[ϱ | ℱt] = ϱt.

Note that a special case of (11.83) is when s = 0. Since ϱ0=1,E^[X|0]=E^[X] and E[ϱt X | ℱ0] = E[ϱt X], we have

E^[X] = E[ϱtX] for ℱt-measurable X.(11.84)

Consider a continuous-time stochastic process {X(t)}t≥0 adapted to the filtration ?. Since X(t) is ℱt-measurable for every t ≥ 0, we may put X = X(t) in (11.83) to obtain

E^[X(t)|ℱs] = (1/ϱs)E[ϱtX(t)|ℱs], 0 ≤ s ≤ t ≤ T.(11.85)

As a consequence of this property we have the following result.

Proposition 11.12.

A continuous-time adapted stochastic process {M(t)}0≤t≤T is a ℙ^-martingale if and only if {ϱtM(t)}0≤t≤T is a ℙ-martingale.

Proof. Assume {M(t)}0≤t≤T is a ^-martingale. Then, using (11.85) with X(t)M(t),

M(s) = E^[M(t)|ℱs] = (1/ϱs)E[ϱtM(t)|ℱs] ⟺ ϱsM(s) = E[ϱtM(t)|ℱs]

for 0 ≤ s ≤ t ≤ T, where the last relation is the ℙ-martingale property of {ϱtM(t)}0≤t≤T. The converse follows since all the above steps may be reversed. Moreover, {M(t)}0≤t≤T is adapted to ? and integrable w.r.t. ^ if and only if {ϱtM(t)}0≤t≤T is adapted to ? and integrable w.r.t. ℙ.

We are now finally ready to state and prove Girsanov’s Theorem for the case of standard Brownian motion.

Theorem 11.13

(Girsanov’s Theorem for BM). Let {W(t)}0≤t≤T be a standard ℙ-BM w.r.t. a filtration 𝔽 = {ℱt}0≤t≤T and assume the process {γ(t)}0≤t≤T is adapted to 𝔽, for a given T > 0. Define

ϱtexp⁡ (120tγ2(s)ds+0tγ(s)dW(s)),0tT,(11.86)

and the probability measure ^^(ϱ) by the Radon–Nikodym derivative d^d=(d^d)TϱT. Furthermore, assume the square-integrability condition holds:

E[∫0T ϱs2γ2(s) ds] < ∞.(11.87)

Then, the process {W^(t)}0tT defined by

W^(t)W(t)0tγ(s)ds(11.88)

is a standard ℙ^-BM w.r.t. filtration 𝔽.

Some clarifying remarks on Theorem 11.13 before its proof:

  1. The condition in (11.87) is required to ensure that {ϱt}0≤t≤T is a ℙ-martingale with E[ϱt] = 1, i.e., it corresponds to the Itô process ∫0tϱsγ(s)dW(s), 0 ≤ t ≤ T, being a martingale. An equivalent, and in practice more easily verified, condition that guarantees the process {ϱt}0≤t≤T is a ℙ-martingale is the so-called Novikov condition:

    E[exp(120Tγ2(s)ds)]<.(11.89)

  2. The differential increments of the two Brownian motions are simply related: dW(t)=dW^(t)+γ(t)dtanddW^(t)=dW(t)γ(t)dt.
  3. Pay attention to the consistent and correct use of the ± signs. In this regard, we note that the Radon–Nikodym derivative random variable in (11.86) can equivalently be written as

    ϱt=exp⁡ (120tθ2(s)ds0tθ(s)dW(s)).

    Note the − sign instead of the + sign in front of the Itô integral. Then, (11.88) is replaced by W^(t)W(t)+0tθ(s)ds,i.e.,dW^(t)=dW(t)+θ(t)dt. This is obtained simply by setting γ(t) = −θ(t) in the original definition where γ2(t) = θ2(t).

  4. In general, γ is an adapted process so that ϱt is a functional of BM from time 0 to t. In particular, the Radon–Nikodym derivative process has the form of an exponential ℙ-martingale in the process γ w.r.t. the ℙ-BM, i.e., by the definition in (11.28) we have ϱtϱt(γ)=εt(γW). Dividing the process value at any two times 0 ≤ s < t ≤ T gives

    ϱtϱs(d^d)t(d^d)s=t(γW)s(γW)=exp⁡ (12stγ2(u)du+stγ(u)dW(u)).

  5. Note that 𝔽 is any filtration for BM. It can, but need not be, the natural filtration 𝔽W generated by W.
  6. In the simplest case we can choose a constant process, γ(t) = γ = constant, where

    ϱt ≡ ϱt(γ) = e−12γ2t+γW(t)(11.90)

    and W^(t)W^(γ)(t)W(t)γt,0tT,is a ^-BM.

Proof. First let us verify that {ϱt}0≤t≤T is a Radon–Nikodym derivative process. By the assumption in (11.87) (or the Novikov condition) we have that {ϱt}0≤t≤T is a ℙ-martingale; in fact it is an exponential ℙ-martingale. This can be seen by applying Itô's formula to the stochastic exponential in (11.86), giving

dϱt = ϱtγ(t)dW(t) ⟺ ϱt = ϱ0 + ∫0tϱsγ(s)dW(s)

where the Itô integral is a martingale (under measure ℙ) by the condition in (11.87). Because of the ℙ-martingale property, E[ϱt] = ϱ0 = e0 = 1, 0 ≤ t ≤ T. In particular, E[ϱT] = 1 and ϱT is also nonnegative. Hence, ϱd^d=ϱT is a proper Radon-Nikodym derivative and by the ℙ-martingale property the process in (11.86) satisfies the definition in (11.82), i.e., it is indeed a Radon–Nikodym derivative process.

We now show that the process W^ defined by (11.88) is a standard ℙ^-BM by verifying all the defining properties in Theorem 11.10 with measure ℙ^ (filtration 𝔽 fixed):

  (i) The process starts at zero, W^(0) = W(0) = 0, and is continuous in time since W^(t) ≔ W(t) − ∫0tγ(s)ds where W(t) and the integral ∫0tγ(s)ds are both continuous in t ≥ 0.
  (ii) d[W^,W^](t) = dW^(t)dW^(t) = (dW(t) − γ(t)dt)(dW(t) − γ(t)dt) = dW(t)dW(t) = dt, i.e., the process has quadratic variation [W^,W^](t) = t.
  (iii) {W^(t)}0≤t≤T is a ℙ^-martingale. By Proposition 11.12, this follows if we can show that the process {ϱtW^(t)}0≤t≤T is a ℙ-martingale. To show the latter, we compute the stochastic differential by Itô's product rule (using dϱt = ϱtγ(t)dW(t) and dW^(t) = dW(t) − γ(t)dt and setting dW(t)dW(t) = dt, dW(t)dt = 0):

    d(ϱtW^(t))=ϱtdW^(t)+W^(t)dϱt+dϱtdW^(t)=ϱt[dW(t)γ(t)dt]+ϱtγ(t)W^(t)dW(t)+ϱtγ(t)dW(t)[dW(t)γ(t)dt]=ϱt[1+γ(t)W^(t)]dW(t).

    This is a stochastic differential with a zero drift term (i.e., the coefficient in dt is zero). In integral form, where W^(0)=0, we have

    ϱtW^(t)=0tϱs[1+γ(s)W^(s)]dW(s),0tT.

    By the assumed boundedness of 0tγ(s)ds, and the fact that the BM W(t) is bounded (a.s.), W^(t) is bounded (a.s.) for all 0 ≤ t ≤ T. Combining this fact with the square-integrability condition (11.87), it follows that the above Itô integral is defined as it satisfies the square-integrability condition, E[0Tϱs2[1+γ(s)W^(s)]2ds]<, and is hence a ℙ-martingale, i.e., {W^(t)}0tT is a ℙ-martingale.

11.8.1 Some Applications of Girsanov’s Theorem

Let’s begin by considering a simple example of how Girsanov’s Theorem can be applied to change probability measures so as to eliminate the drift in a drifted Brownian process.

Example 11.14

Let X(t) ≔ W(μ,σ)(t) be a drifted BM process (recall (10.23))

X(t)μt+σW(t),

where {W(t)}t≥0 is a standard ℙ-BM. Find a measure under which {X(t)}0≤t≤T, for any T > 0, is a scaled BM with zero drift.

Solution. We note that the drift μ and volatility parameter σ > 0 are constants. Hence, by using Girsanov’s Theorem we define a measure ^,d^d=ϱT, where ϱt is given by (11.90). Now, W^(t)W(t)γt is a standard ^-BM and writing X(t) in terms of W^(t) gives

X(t)=μt+σW(t)=μt+σ(W^(t)+γt)=(μ+σγ)t+σW^(t).

So the drift coefficient of X(t) is now μ + σγ, while the volatility parameter multiplying the standard ^-BM is still σ. Note that we can also see this in stochastic differential form:

dX(t)=μdt+σdW(t)=μdt+σ(dW^(t)+γdt)=(μ+σγ)dt+σdW^(t).

Hence, choosing γ = −μ/σ gives zero drift, μ + σγ = 0, and the process X(t)=σW^(t) is a zero-drift scaled BM under measure ^. The measure change ^,d^d=ϱT, is defined explicitly by the Radon–Nikodym derivative process

ϱt(d^d)t=exp⁡ (μ2t2σ2μσW(t)),0tT.(11.91)

For the above example, we can also find the CDF of X(t) in the ^-measure, denoted by F^X(t). It is instructive to see the two ways to obtain this CDF. One way is to simply use X(t)=σW^(t)d¯¯σtZ^,Z^~Norm(0,1) under measure ^:

F^X(t)(x)^(X(t)x)=^(Z^xσt)=N(xσt).

The other way is to compute a ℙ-measure expectation using ϱt and apply the identity in (11.84) since I{X(t)x}=I{μt+σW(t)x} is an ℱt-measurable random variable:

F^X(t)(x)=E^[I{X(t)x}]=E[ϱtI{X(t)x}]=E[e12γ2t+γW(t)I{μt+σW(t)x}]=e12γ2tE[eγW(t)I{W(t)(xμt)/σ}]=e12γ2te12γ2tN(xμtσtγt)=N(x(μ+σγ)tσt)=N(xσt)

where μ + σγ = 0. Note that here we used the expectation identity (A.2) in the Appendix where W(t) ~ Norm(0,t) under measure ℙ.

The CDF of X(t) in the ℙ-measure was already computed in Section 10.3.1, i.e.,

FX(t)(x)(X(t)x)(W(μ,σ)(t)x)=N(xμtσt).

We therefore see from the above two expressions for the CDF of the process at time t (in the two different measures) that the measure change ^ eliminates the drift μt when γ = −μ/σ. Observe that X(t) ~ Norm(0,σ2t) under the ^-measure:

E^[X(t)]=σE^[W^(t)]=0,E^[X2(t)]=σ2E^[W^2(t)]=σ2t.

In contrast, X(t) ~ Norm(μt, σ2t) under the ℙ-measure.
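The identity E^[X] = E[ϱtX] used above is easy to test by simulation. The sketch below (arbitrary parameter values; NumPy and SciPy assumed) estimates E[ϱtI{X(t)≤x}] under ℙ and compares it with the ℙ^-measure CDF N(x/(σ√t)):

```python
import numpy as np
from scipy.stats import norm

# Illustrative Monte Carlo check of the measure change in Example 11.14:
# with gamma = -mu/sigma, E[rho_t I{X(t) <= x}] under P should reproduce
# the P^-measure CDF N(x/(sigma*sqrt(t))).
mu, sigma, t, x = 0.4, 0.5, 1.0, 0.3    # arbitrary parameters
gamma = -mu / sigma
rng = np.random.default_rng(6)
W = np.sqrt(t) * rng.standard_normal(2_000_000)    # W(t) under P
rho_t = np.exp(-0.5 * gamma**2 * t + gamma * W)    # (11.90)
X = mu * t + sigma * W
print((rho_t * (X <= x)).mean(), norm.cdf(x / (sigma * np.sqrt(t))))
```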

In previous chapters we saw how measure changes are employed in discrete-time asset price models such as the binomial model. In particular, we discussed various risk-neutral measures. By using Girsanov’s Theorem, we can now consider our first example of how to construct a risk-neutral measure for a single stock GBM price process in continuous time.

Example 11.15

(Changing the drift in GBM) Assume a non-dividend-paying stock price process with SDE

dS(t)=S(t)[μdt+σdW(t)],

where {W(t)}t≥0 is a standard BM under the physical (real-world) measure ℙ, μ is a constant physical (i.e., historical) growth rate, and σ > 0 is a constant volatility. Find the risk-neutral probability measure ℙ˜ defined such that the discounted stock price process {S¯(t) ≔ e−rtS(t)}0≤t≤T, for any T > 0, is a ℙ˜-martingale, where r is a constant interest rate.

Solution. By the strong solution of the SDE

S(t) = S(0)e(μ−σ2/2)t+σW(t) ⟹ S¯(t) = e−rtS(t) = S(0)e(μ−r)te−σ2t/2+σW(t) ≡ S(0)e(μ−r)tℰt(σ⋅W).

We recognize {εt(σ⋅W) ≡ e−σ2t/2+σW(t)}t≥0 as an (exponential) ℙ-martingale with unit expectation, E[ℰt(σ · W)] = 1 (see Example 10.2 in Chapter 10). So we now proceed to eliminate the drift μ − r by expressing W in terms of a new BM, W˜, in the new measure ℙ˜. Since μ − r and σ are constants, we can accomplish this by employing a measure change as in the above example:

ϱt(d˜d)t=e12γ2t+γW(t)t(γW),(11.92)

where d˜d=ϱT and W˜(t)=W(t)γt is a standard ˜-BM. Substituting W(t)=W˜(t)+γt into the above exponential expression gives

S¯(t)=S(0)e(μr)teσ2t/2+σ(W˜(t)+γt)=S¯(0)e(μr+σγ)tt(σW˜)(11.93)

where S¯(0)=S(0). Note that {εt(σW˜)eσ2t/2+σW˜(t)}t0 is a ˜-martingale where:

E ˜[t(σW˜)|u]=u(σW˜),ut.

Clearly, by setting γ = (r − μ)/σ, we have μr + σγ = 0 and this gives the unique measure change for eliminating the drift in (11.93), giving the discounted stock price process as a ˜-martingale, i.e.,

S¯(t) = S¯(0) εt(σ·W˜), 0 ≤ t ≤ T,      (11.94)

where

E˜[S¯(t) | ℱu] = S¯(u), 0 ≤ u ≤ t ≤ T.      (11.95)

In summary, the risk-neutral measure is the unique measure obtained with the Radon–Nikodym derivative process and measure change defined by (11.92) with γ = (r − μ)/σ:

(dℙ̃/dℙ)t = εt(((r−μ)/σ)W), 0 ≤ t ≤ T;  dℙ̃/dℙ = εT(((r−μ)/σ)W).      (11.96)

Note that the measure ℙ̃ is uniquely specified by (11.96), where γ = (r − μ)/σ always exists since σ > 0. We can also see directly how to choose the above measure change by working with the SDE, where the Brownian increment dW(t) = dW˜(t) + γ dt is used within the original SDE:

dS(t)=S(t)[μdt+σ(dW˜(t)+γdt)]=S(t)[(μ+σγ)dt+σdW˜(t)].(11.97)

Taking the stochastic differential of S¯(t) ≔ e^{−rt}S(t) and using the above expression for dS(t):

dS¯(t) = d(e^{−rt}S(t)) = e^{−rt}[dS(t) − rS(t)dt] = e^{−rt}S(t)[(μ−r+σγ)dt + σdW˜(t)] = S¯(t)[(μ−r+σγ)dt + σdW˜(t)]      (11.98)

⟹ dS¯(t) = σS¯(t) dW˜(t)      (11.99)

where the last expression, with zero drift, is obtained by choosing γ = (r − μ)/σ, i.e., by employing the measure change defined in (11.96). Note that the SDE in (11.99) with initial condition S¯(0) is equivalent to (11.94), which is its unique solution. For an arbitrary choice of γ, the SDE with drift in (11.98) subject to initial condition S¯(0) is equivalent to (11.93), which is its unique solution. Finally, note that choosing γ = (r − μ)/σ in (11.97) gives the stock price drifting at the risk-free rate within the risk-neutral measure:

dS(t)=S(t)[rdt+σdW˜(t)](11.100)

with unique solution

S(t) = S(0) e^{rt} εt(σ·W˜) = S(0) e^{(r−σ²/2)t+σW˜(t)}      (11.101)

equivalent to (11.94). The ˜-martingale property in (11.95) is equivalently expressed as

E˜[S(t) | ℱu] = e^{r(t−u)}S(u), 0 ≤ u ≤ t ≤ T.      (11.102)
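As a quick numerical sanity check of the risk-neutral dynamics, the sketch below (illustrative parameters) simulates S(t) via (11.101) and verifies that the discounted price has constant expectation S(0), consistent with (11.95):

```python
import numpy as np

# Sketch (illustrative parameters): simulate S(t) under the risk-neutral
# dynamics (11.101) and check that E-tilde[e^{-rt} S(t)] = S(0), i.e., the
# discounted price is a P-tilde martingale with constant mean.
rng = np.random.default_rng(1)
S0, r, sigma, t = 100.0, 0.03, 0.25, 1.5

n = 10**6
W_tilde = np.sqrt(t) * rng.standard_normal(n)                    # P-tilde BM at time t
S_t = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W_tilde)    # (11.101)
print(np.exp(-r * t) * S_t.mean(), S0)                           # both approx 100
```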

In Example 11.14 we used Girsanov’s Theorem to obtain a new measure ℙ̂, defined by the Radon–Nikodym process in (11.91), such that the process X(t) ≔ W^{(μ,σ)}(t) is a scaled standard ℙ̂-BM. We now employ the same measure change to compute expectations and joint probabilities of events associated with the sampled maximum or minimum of BM with drift. In particular, let us simply set σ = 1 and consider the process defined by (10.68) in Section 10.4.3, i.e.,

X(t) = μt + W(t) = W^(t).

The expression in (11.91), for σ = 1, gives the Radon–Nikodym derivative for the change of measure ℙ → ℙ̂: ϱt = (dℙ̂/dℙ)t = e^{−μ²t/2−μW(t)}. Hence, the Radon–Nikodym derivative for the inverse change of measure ℙ̂ → ℙ is expressed in terms of W^(t) = W(t) + μt as

1/ϱt = (dℙ/dℙ̂)t = e^{μ²t/2+μW(t)} = e^{−μ²t/2+μW^(t)}.      (11.103)

Let A, B be any two Borel sets in ℝ and consider the t-measurable indicator random variables I{MX(t)A,X(t)B} and I{mX(t)A,X(t)B} where the respective sampled maximum, MX(t), and minimum, mX(t), of the drifted BM process X are defined in (10.70) and (10.71). That is,

MX(t)=sup⁡ 0utX(u)=sup⁡ 0utW^(u)MW^(t)

and

mX(t)=inf⁡ 0utX(u)=inf⁡ 0utW^(u)mW^(t).

The sampled maximum M(t)MW(t) and minimum m(t)mW(t) of the standard ℙ-BM, W, are defined in (10.33) and (10.34). Applying the change of measure while using (11.103) within (11.84) gives

ℙ(MX(t) ∈ A, X(t) ∈ B) ≔ E[I{MX(t)∈A, X(t)∈B}] = E^[(1/ϱt) I{MX(t)∈A, X(t)∈B}] = e^{−μ²t/2} E^[e^{μW^(t)} I{MW^(t)∈A, W^(t)∈B}] = e^{−μ²t/2} E[e^{μW(t)} I{M(t)∈A, W(t)∈B}].      (11.104)

In the last equation line we simply removed all "hats" since the random variables MW^(t) and W^(t) under measure ℙ̂ have the same joint distribution as M(t) and W(t) under measure ℙ. By the same steps as in (11.104) we have

ℙ(mX(t) ∈ A, X(t) ∈ B) = e^{−μ²t/2} E[e^{μW(t)} I{m(t)∈A, W(t)∈B}].      (11.105)

Equations (11.104) and (11.105) can be used to compute the probability of any joint event involving either pair MX(t), X(t) or mX(t), X(t). For example, taking intervals A = (−∞, m], B = (−∞, x] gives the respective joint CDFs

FMX(t),X(t)(m,x) ≔ ℙ(MX(t) ≤ m, X(t) ≤ x) = e^{−μ²t/2} E[e^{μW(t)} I{M(t)≤m, W(t)≤x}]      (11.106)

and

FmX(t),X(t)(m,x) ≔ ℙ(mX(t) ≤ m, X(t) ≤ x) = e^{−μ²t/2} E[e^{μW(t)} I{m(t)≤m, W(t)≤x}].      (11.107)

Expressing the expectation in (11.106) as an integral over the joint density of M(t), W(t):

FMX(t),X(t)(m,x) = e^{−μ²t/2} ∫₀^m ∫_{−∞}^x e^{μy} fM(t),W(t)(w,y) dy dw.      (11.108)

Differentiating, and making use of the known joint PDF of M(t), W(t) in (10.39), gives the joint PDF of MX(t),X(t)

fMX(t),X(t)(m,x) = e^{−μ²t/2+μx} fM(t),W(t)(m,x) = (2(2m−x)/(t√(2πt))) e^{−μ²t/2+μx−(2m−x)²/(2t)},      (11.109)

for x ≤ m,m > 0 and zero otherwise. Similarly, the joint PDF of mX(t), X(t) follows from (11.107) and the joint PDF in (10.43),

fmX(t),X(t)(m,x) = e^{−μ²t/2+μx} fm(t),W(t)(m,x) = (2(x−2m)/(t√(2πt))) e^{−μ²t/2+μx−(x−2m)²/(2t)},      (11.110)

for x ≥ m, m < 0, and zero otherwise. Other applications of (11.104) and (11.105) are given in Section 10.4.3.
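The joint density (11.109) can be checked against a direct path simulation. The sketch below (illustrative parameters and grid; the time-stepping introduces a small discretization bias in the simulated maximum) compares a Monte Carlo estimate of the joint CDF with a numerical integral of (11.109):

```python
import numpy as np

# Sketch: check the joint PDF (11.109) of (M_X(t), X(t)) for X(t) = mu*t + W(t)
# by comparing a Monte Carlo joint-CDF estimate with a Riemann-sum integral of
# the closed-form density (parameters, grid, path counts are illustrative).
rng = np.random.default_rng(2)
mu, t, m0, x0 = 0.3, 1.0, 1.0, 0.5
n_paths, n_steps = 100_000, 1000
dt = t / n_steps

# simulate drifted BM paths, tracking the running maximum and the endpoint
X = np.zeros(n_paths)
M = np.zeros(n_paths)                    # M_X(0) = X(0) = 0
for _ in range(n_steps):
    X += mu * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    M = np.maximum(M, X)
emp = np.mean((M <= m0) & (X <= x0))     # estimate of F_{M_X(t),X(t)}(m0, x0)

# Riemann-sum integral of (11.109) over {0 < m <= m0, x <= min(m, x0)}
m = np.linspace(1e-6, m0, 500)[:, None]
x = np.linspace(-8.0, x0, 2000)[None, :]
f = 2*(2*m - x)/(t*np.sqrt(2*np.pi*t)) * np.exp(-0.5*mu**2*t + mu*x - (2*m - x)**2/(2*t))
f = np.where(x <= m, f, 0.0)             # density vanishes for x > m
num = f.sum() * (m[1, 0] - m[0, 0]) * (x[0, 1] - x[0, 0])
print(emp, num)   # agree up to MC and time-discretization error
```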

11.9 Brownian Martingale Representation Theorem

Before moving on to the next section on multidimensional (vector) BM, we state a result that we will later see has some theoretical importance in replicating (hedging) and pricing derivative contracts within a continuous-time financial model driven by a single BM. We have already learned that, given an adapted process {X(t)}0≤t≤T with ∫₀ᵀ E[X²(t)] dt < ∞, the Itô process {I(t) ≔ ∫₀ᵗ X(s) dW(s)}0≤t≤T is a (ℙ,𝔽)-martingale, where {W(t)}t≥0 is a (ℙ,𝔽)-BM. A question one may ask is: are all (ℙ,𝔽)-martingales expressible as an Itô process? It turns out that this is the case if we consider martingales that are square integrable and we also restrict the filtration to be the natural filtration generated by the BM, i.e., 𝔽 = 𝔽^W = {ℱt^W}t≥0 ≔ {σ(W(s) : 0 ≤ s ≤ t)}t≥0. We summarize this in the following known theorem without proof.

Theorem 11.14

(Brownian Martingale Representation Theorem). Assume {M(t)}0≤t≤T is a (ℙ,𝔽^W)-martingale and that it is square integrable, i.e.,

E[M²(t)] < ∞, for every t ∈ [0,T].

Then, there exists an 𝔽^W-adapted process {θ(t)}0≤t≤T such that (a.s.)

M(t) = M(0) + ∫₀ᵗ θ(u) dW(u).      (11.111)

This theorem tells us that if a process is a square-integrable martingale, w.r.t. a given measure ℙ and the natural filtration 𝔽^W generated by the standard ℙ-BM W, then it can be expressed as the sum of its initial value and an Itô integral in the ℙ-BM. The integrand of the Itô integral is a process that is adapted to 𝔽^W. Note that the Itô integral itself is a square-integrable (ℙ,𝔽^W)-martingale and is also continuous in time. So a martingale having this representation is also continuous in time (i.e., the process has no jumps).

We are now ready to state a closely related result that is a consequence of the above theorem and will later be applicable to our discussion of derivative replication in Chapter 12. Let us consider what happens when we change measures ℙ → ℙ̂ as defined in Girsanov’s Theorem 11.13. As we already noted, 𝔽 could be any filtration for the ℙ-BM, W. Now set 𝔽 = 𝔽^W, where {γ(t)}0≤t≤T is assumed to be 𝔽^W-adapted; clearly the time integral of this process occurring in (11.88) is then 𝔽^W-adapted. In particular, σ(∫₀ᵗ γ(s) ds) ⊆ ℱt^W for every t ≥ 0. Then, by the definition in (11.88), the σ-algebra ℱt^{W^} ≔ σ(W^(u) : 0 ≤ u ≤ t) = ℱt^W. Hence, if {γ(t)}0≤t≤T is chosen as an 𝔽^W-adapted process, then the natural filtration 𝔽^{W^} = {ℱt^{W^}}0≤t≤T, generated by W^ in (11.88), is equal to the natural filtration 𝔽^W generated by W, i.e., 𝔽^W = 𝔽^{W^}. In summary, by combining these facts with Theorem 11.14 we have the result below. It states that, if the change of measure ℙ → ℙ̂ is defined via Girsanov’s Theorem with an 𝔽^W-adapted process, then we can always express a square-integrable (ℙ̂,𝔽^W)-martingale as its initial value plus an Itô integral in the ℙ̂-BM.

Proposition 11.15.

Let the measure ℙ̂ be defined as in Girsanov’s Theorem 11.13 with the assumption that the process {γ(t)}0≤t≤T is 𝔽^W-adapted. If {M(t)}0≤t≤T is a square-integrable (ℙ̂,𝔽^W)-martingale, then there exists an adapted process, say {θ^(t)}0≤t≤T, such that (a.s.)

M(t) = M(0) + ∫₀ᵗ θ^(u) dW^(u).      (11.112)

Proof. By the above argument we have 𝔽^W = 𝔽^{W^}. Hence, {M(t)}0≤t≤T is a square-integrable (ℙ̂,𝔽^{W^})-martingale where E^[M²(t)] = E[ϱt M²(t)] < ∞. It now follows by Theorem 11.14 (applied under measure ℙ̂ with the ℙ̂-BM W^) that there exists an 𝔽^{W^}-adapted (and hence 𝔽^W-adapted) process {θ^(t)}0≤t≤T such that (11.112) holds (a.s.).

11.10 Stochastic Calculus for Multidimensional BM

11.10.1 The Itô Integral and Itô's Formula for Multiple Processes on Multidimensional BM

We now extend the definition of one-dimensional standard BM {W(t)}t into d dimensions for any finite integer d ≥ 1. As seen below, the extension to multiple dimensions is fairly straightforward as we take each component as an independent one-dimensional standard BM. Notation needs to be introduced to precisely denote each component BM and boldface is used for a vector BM.

Definition 11.4.

A standard BM in ℝd (or standard d-dimensional BM) is a vector process

W(t)(W1(t),W2(t),...,Wd(t)),t0,

where each component process {Wi(t)}t≥0, 1 ≤ id, is an independent one-dimensional standard BM in ℝ.

Hence, the component processes are i.i.d., with Wi(t) ~ Norm(0, t) and Wi(t) − Wi(s) ~ Norm(0, t−s), 1 ≤ i ≤ d, 0 ≤ s < t. We call this a standard vector BM since, by construction, each component is an independent copy of a one-dimensional standard BM. In particular, Wi(t) and Wj(t) are independent if i ≠ j. A filtration 𝔽 = {ℱt}t≥0 is a filtration for standard d-dimensional BM if it is a filtration for each component BM {Wi(t)}t≥0. The natural filtration for {W(t)}t≥0, denoted by 𝔽^W, is the filtration generated by all components of the standard d-dimensional BM. Given any filtration 𝔽 for {W(t)}t≥0, we must have that {W(t)}t≥0 is 𝔽-adapted, i.e., W(t) is ℱt-measurable, and that each Brownian vector increment W(t + s) − W(t) is independent of ℱt for s, t ≥ 0.

Since each component is a standard BM, then we have the usual properties such as the quadratic variation formula for each 1 ≤ id:

[Wi,Wi](t)=td[Wi,Wi](t)dWi(t)dWi(t)=dt.(11.113)

Moreover, the covariation [f, Wi](t) is zero for any continuously differentiable function f(t). In particular, for each 1 ≤ i ≤ d,

[t,Wi](t)=0dWi(t)dt=0.(11.114)

The covariation of two independent Brownian motions is zero, i.e.,

[Wi,Wj](t)=0d[Wi,Wj](t)dWi(t)dWj(t)=0,forij.(11.115)

It is simple to see how this arises by considering a time partition {0 = t0, t1, . . . , tn = t} and forming the partial sum of products of individual Brownian increments:

Qni,j(t)k=1n(Wi(tk)Wi(tk1))(Wj(tk)Wj(tk1))

for ij. Using the fact that the increments are all mutually independent with mean zero, E[Wi(tk) − Wi(tk−1)] = 0, for every k, then E[Qni,j(t)]=0. Since all n terms in the sum are mutually independent, the variance of the sum is the sum of the individual variances. Using the independence of the product terms, where E[(Wi(tk) − Wi(tk−1))2] = E[(Wj(tk) − Wj(tk−1))2] = tktk−1, gives

Var(Qn^{i,j}(t)) = Σ_{k=1}^n E[(Wi(tk)−Wi(tk−1))²] E[(Wj(tk)−Wj(tk−1))²] = Σ_{k=1}^n (tk−tk−1)² ≤ Δn Σ_{k=1}^n (tk−tk−1) = Δn(tn − t0) = Δn t

where Δn ≔ max_{k=1,...,n}(tk − tk−1) is the maximum time increment over the partition. Clearly, Var(Qn^{i,j}(t)) → 0 as Δn → 0; this implies that, for all t ≥ 0, the random variable Qn^{i,j}(t) converges to its expected value E[Qn^{i,j}(t)] = 0 as Δn → 0, and hence the covariation [Wi, Wj](t) ≔ lim_{Δn→0} Qn^{i,j}(t) must be zero for i ≠ j.

For convenience we summarize the above “basic rules” for the stochastic increments as follows:

dWi(t)dWj(t)=δijdt,dWi(t)dt=0,(dt)2=0,(11.116)

where δij = 1 if i = j, and 0 if ij.
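The heuristic argument above is easy to visualize numerically. The following sketch (illustrative partition sizes) computes sample statistics of Qn^{i,j}(t) on uniform partitions and shows the variance shrinking like Δn t:

```python
import numpy as np

# Numerical sketch of (11.115): over a uniform partition with mesh Delta_n = t/n,
# the sums Q_n of products of increments of two independent BMs have mean zero
# and variance Delta_n * t, so they vanish as n grows (illustrative sizes).
rng = np.random.default_rng(3)
t, reps = 1.0, 1000
for n in (10, 100, 1000, 10_000):
    dt = t / n
    dW1 = np.sqrt(dt) * rng.standard_normal((reps, n))   # increments of W_i
    dW2 = np.sqrt(dt) * rng.standard_normal((reps, n))   # increments of W_j
    Q = (dW1 * dW2).sum(axis=1)                          # Q_n^{i,j}(t)
    print(n, Q.mean(), Q.var(), dt * t)                  # sample var ~ Delta_n * t
```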

As in the case of standard BM in one dimension, there is a similar useful characterization of a standard d-dimensional BM due to Lévy which we state in the following lemma. The result can be proven based on multidimensional extensions of the Itô formula.

Theorem 11.16

(Lévy's Characterization of a Standard Multidimensional BM). Consider the vector-valued process {M(t) ≔ (M1(t), ..., Md(t))}t⩾0 where each component process {Mi(t)}t≥0, 1 ≤ i ≤ d, is a continuous (ℙ,𝔽)-martingale starting at Mi(0) = 0 (a.s.) and having quadratic variation [Mi, Mi](t) = t, for all t ≥ 0. Also, assume [Mi, Mj](t) = 0 for i ≠ j. Then, {M(t)}t≥0 is a standard d-dimensional (ℙ,𝔽)-BM.

According to this result, a vector process is a standard vector BM (in a given measure and filtration) if we can verify that every component process is a martingale with continuous paths starting at zero, has the same quadratic variation as a standard BM, and all covariations among different components are zero. Basically, this means that each component is an i.i.d. standard one-dimensional BM.

Let us fix a filtration 𝔽 = {ℱt}t≥0 for BM in ℝd for a given integer d ≥ 1. The formulae and concepts we developed in previous sections on the Itô integral, Itô's formula for a function of an Itô process, and SDEs can be generalized to a multiple (vector) BM and multiple Itô processes that are driven by the vector BM in ℝd. Let us first discuss this extension for the case of BM in ℝ², i.e., d = 2 where W(t) = (W1(t), W2(t)). We can have any number of Itô processes that can be represented as an Itô integral w.r.t. W(t) plus a drift term which is a Riemann (or Lebesgue) integral. Consider two Itô processes X ≡ {X(t)}t≥0 and Y ≡ {Y(t)}t≥0 which form a vector process (X(t), Y(t))t≥0. Let μX(t) and μY(t) be ℱt-adapted drift coefficients of processes X and Y, respectively. The diffusion or volatility coefficient vectors are ℱt-adapted vectors in ℝ², denoted by

σX(t)=(σX,1(t),​ σX,2(t))andσY(t)=(σY,1(t),σY,2(t))

for processes X and Y, respectively. The two processes have the representations:

X(t)=X(0)+0tμX(u)du+0tσX(u)dW(u),(11.117)

Y(t)=Y(0)+0tμY(u)du+0tσY(u)dW(u).(11.118)

In each case, the first (Riemann or Lebesgue) integral is the drift term and the second integral is a sum of two Itô integrals; one is w.r.t. the first component of the volatility vector and the first BM and the second is w.r.t. the second component of the volatility vector and the second BM. That is, we define

0tσX(u)dW(u)0tσX,1(u)dW1(u)+0tσX,2(u)dW2(u),(11.119)

∫₀ᵗ σY(u)·dW(u) ≡ ∫₀ᵗ σY,1(u) dW1(u) + ∫₀ᵗ σY,2(u) dW2(u),      (11.120)

Given a time T > 0, throughout we shall assume the square integrability condition holds for the Itô integrals on all time intervals [0, t], 0 ≤ tT, i.e., given an adapted vector process {σ(t) = (σ1(t), σ2(t))}t≥0 then we assume

E[(0Tσ(t)dW(t))2]=0TE[σ(t)2]dt<.(11.121)

where ||σ(t)||2i=1dσi2(t) is the square magnitude of the volatility vector, e.g., for d = 2 then ||σ(t)||2=σ12(t)+σ22(t). This condition is equivalent to requiring 0TE[σi2(t)]dt<, for every component i, and it guarantees the martingale property,

E[0Tσ(s)dW(s)|t]=0tσ(s)dW(s),0tT.(11.122)

Hence, the d-dimensional Itô integrals have zero expectation. The equality in (11.121) is the Itô isometry formula for vector BM, which is a special case of the covariance formula,

Cov(0tσ(s)dW(s),0tγ(s)dW(s))=0tE[σ(s)γ(s)]ds,(11.123)

where σ(t) and γ(t) are t-adapted d-dimensional vectors. This is readily derived by writing out the two Itô integrals as sums of (one-dimensional) Itô integrals, as in (11.119), and then using the covariance relation for each pair of Itô integrals. Also, we assume that any drift coefficient µ(t) is integrable,

E[0T|μ(t)|dt]<.(11.124)

The Itô integrals in (11.119)−(11.120) are the one-dimensional Itô integrals w.r.t. a single standard BM which is taken as either W1 or W2. The Riemann (Lebesgue) integrals in (11.117) and (11.118) are continuous functions of time and therefore have zero quadratic variation. To obtain the quadratic variation of the X process, note that the quadratic variation of each Itô integral in (11.119),

IX,1(t)0tσX,1(u)dW1(u)andIX,2(t)0tσX,2(u)dW2(u),

is computed according to (11.11) (where W1 and W2 individually act as W):

[IX,1,IX,1](t)0tσX,12(u)duand[IX,2,IX,2](t)0tσX,22(u)du.(11.125)

Since [W1, W2](t) = 0, i.e., dW1(t) dW2(t) = 0, the covariation of the two integrals is zero: [IX, 1, IX, 2](t) = 0. Hence, the quadratic variation of the X process is the quadratic variation of the Itô integral in (11.119), which, in turn, is the sum of the two quadratic variations in (11.125):

[X,X](t)=[IX,1,IX,1](t)+[IX,2,IX,2](t)=0t(σX,12(u)+σX,22(u))du=0tσX(u)2du.(11.126)

Similarly, the Y process has quadratic variation

[Y,Y](t)=0t(σY,12(u)+σY,22(u))du=0tσY(u)2du.(11.127)

The stochastic differential forms of (11.126) and (11.127) are

d[X,X](t)=dX(t)dX(t)=(σX,12(t)+σX,22(t))dt=σX(t)2dt,(11.128)

d[Y,Y](t)=dY(t)dY(t)=(σY,12(t)+σY,22(t))dt=σY(t)2dt.(11.129)

It is easier to obtain (11.128) and (11.129) by working directly with the stochastic differential forms of (11.117) and (11.118),

dX(t)=μX(t)dt+σX(t)dW(t)μX(t)​ dt+σX,1(t)dW1(t)+σX,2(t)dW2(t),dY(t)=μY(t)dt+σY(t)dW(t)μY(t)dt+σY,1(t)dW1(t)+σY,2(t)dW2(t),

and then applying the rules in (11.116). For example, by squaring the differential dX(t) and setting the terms dt dW1(t) = dt dW2(t) = 0, dW1(t) dW2(t) = 0, (dt)2 = 0, and (dW1(t))2 = (dW2(t))2 = dt, we obtain

dX(t)dX(t)(dX(t))2=(μX(t)dt+σX(t)dW(t))2=σX(t)σX(t)dt=σX(t)2dt.

This recovers the result in (11.128). A similar derivation based on squaring dY(t) gives (11.129). The covariation is also simpler to compute based on this differential approach. By multiplying the two stochastic differentials and applying the simple rules in (11.116),

d[X,Y](t)=dX(t)dY(t)=(μX(t)dt+σX(t)dW(t))(μY(t)dt+σY(t)dW(t))=(σX(t)dW(t))(σY(t)dW(t))=σX(t)σY(t)dt.(11.130)

The last equation line is obtained as follows:

(σX(t)·dW(t))(σY(t)·dW(t)) = Σ_{i=1}^{d=2} Σ_{j=1}^{d=2} σX,i(t) σY,j(t) dWi(t) dWj(t) = (Σ_{i=1}^{d=2} σX,i(t) σY,i(t)) dt = σX(t)·σY(t) dt, where we used dWi(t) dWj(t) = δij dt.

The integral form of (11.130) gives the covariation of the two Itô processes,

[X,Y](t)=0tσX(u)σY(u)du=0t(σX,1(u)σY,1(u)+σX,2(u)σY,2(u))du.

The Itô formula in (11.20) and (11.21) for a function of one Itô process, and time t, extends further to the slightly more general case of a function of two Itô processes and time t. We simply state this important result as a lemma (without proof). The main idea, and a simple way to remember the formula in (11.131), is to Taylor expand f(t, x, y) up to terms of order dt, (dx)2, (dy)2 and then replace ordinary variables x → X(t), y → Y(t) and ordinary differentials by their respective stochastic differentials: dx → dX(t), dy → dY(t), (dx)2 → (dX(t))2 ≡ d[X, X](t), (dy)2 → (dY(t))2 ≡ d[Y, Y](t), and dx dy → dX(t) dY(t) ≡ d[X, Y](t).

Lemma 11.17

(Itô Formula for a Function of Two Processes). Assume f(t, x, y) is a C1, 2, 2 function on+ × ℝ2, i.e., having continuous derivatives ftft,fxfx,fyfy,fxx2fx2,fxy2fxyandfyy2fy2. Let the processes X and Y be Itô processes as given in (11.117) and (11.118). Then, the process defined by F(t) ≔ f(t, X(t), Y(t)), t ≥ 0, has stochastic differential dF(t) ≡ df(t, X(t), Y(t)) given by

df(t,X(t),Y(t))=ft(t,X(t),Y(t))dt+fx(t,X(t),Y(t))dX(t)+fy(t,X(t),Y(t))dY(t)+12fxx(t,X(t),Y(t))d[X,X](t)+12fyy(t,X(t),Y(t))d[Y,Y](t)+fxy(t,X(t),Y(t))d[X,Y](t).(11.131)

In integral form,

f(t,X(t),Y(t))=f(0,X(0),Y(0))(11.132)

+0t[fu(u,X(u),Y(u))+12σX(u)2fxx(u,X(u),Y(u))+12σY(u)2fyy(u,X(u),Y(u))+σX(u)σY(u)fxy(u,X(u),Y(u))]du+0tfx(u,X(u),Y(u))dX(u)+0tfy(u,X(u),Y(u))dY(u).(11.133)

It should be remarked (and we shall see later when we present the general form of the Itô formula for functions of multiple processes driven by multiple Brownian motions) that this lemma is generally valid for any number d ≥ 1 of underlying Brownian motions, although we have focused our present discussion on the base case d = 2. For d ≥ 2 the volatilities are d-dimensional vectors and the BM is a d-dimensional (standard) vector BM. For the case d = 1 the vectors simply become scalars, e.g., σX(t) → σX(t), σY(t) → σY(t), and W(t) → W(t).

Observe that the first integral in (11.133) is a Riemann (or Lebesgue) integral on the time interval [0, t], whereas the second and third integrals are stochastic integrals w.r.t. the Itô processes X in (11.117) and Y in (11.118). The representation of df(t, X(t), Y(t)) in (11.131) and its corresponding integral form in (11.133) is written in terms of the stochastic differentials of X and Y. The Itô formula is also equivalently rewritten by substituting the above stochastic differentials for dX(t) and dY (t). Then, (11.131) takes the form

df=(ft+μX(t)fx+μY(t)fy+12||σX(t)||2fxx+12||σY(t)||2fyy+σX(t)σY(t)fxy) dt   +(fxσX(t)+fyσY(t))dW(t)μf(t) dt+σf(t)dW(t)      (11.134)

where ff(t, X(t),Y (t)), fxfx(t, X(t), Y (t)), etc., is used to compact the expressions. In the second equation line we simply identified the drift μf (t) and volatility vector σf (t) for the process {f(t, X(t), Y (t))}t⩾0. We see that μf(t) and σf(t) are adapted processes defined explicitly as functions of f(t, X(t), Y (t)) and its partial derivatives, as well as functions of linear combinations of the drift and volatility vector coefficients of processes X and Y. In particular, the volatility vector σf(t) := fx σX(t) + fy σ Y (t) = (σf,1(t),σf,2(t)) has components

σf,1(t)=fxσX,1(t)+fyσY,1(t),  σf,2(t)=fxσX,2(t)+fyσY,2(t).      (11.135)

Hence, {F(t)}t≥0 ≡ {f(t, X(t), Y (t))}t≥0 is an Itô process satisfying the stochastic integral equation

F(t)=F(0)+0tμf(u) du+0tσf(u)dW(u)F(0)+0tμf(u) du+0tσf,1(u) dW1(u)+0tσf,2(u) dW2(u).      (11.136)

The following example shows that the Itô Product Rule, derived previously, now follows simply by applying the Itô formula in (11.131).

Example 11.16.

Let {X(t)}t⩾0 and {Y (t)}t⩾0 be Itô processes. Obtain the stochastic differential of their product.

Solution. Defining the function f(t, x, y) := xy gives the product F(t) := f(t, X(t), Y (t)) = X(t) Y (t) as an Itô process whose stochastic differential is given according to (11.131). In this case the function is independent of t and has derivatives:

ft=0,  fx=y,  fy=x,  fxy=1,  fxx=fyy=0.

Substituting these terms into (11.131) (with x = X(t), y = Y (t)) gives

d(X(t)Y(t))df(t,X(t),Y(t))=0dt+Y(t) dX(t)+X(t) dY(t)+120dX(t) dX(t)+120dY(t) dY(t)+1dX(t) dY(t)=Y(t) dX(t)+X(t) dY(t)+dX(t) dY(t).      (11.137)

Assuming X(t)Y (t) ≠ 0, we note that a useful way to represent this is to divide by X(t)Y (t) (i.e., factor out the product), giving the relative differential

dF(t)F(t)d(X(t)Y(t))X(t)Y(t)=dX(t)X(t)+dY(t)Y(t)+dX(t)X(t)dY(t)Y(t).      (11.138)

We can also write this in the form of (11.134):

d(X(t)Y(t))X(t)Y(t)=(μX(t)X(t)+μY(t)Y(t)+σX(t)X(t)σY(t)Y(t)) dt+(σX(t)X(t)+σY(t)Y(t))dW(t)μXY(t) dt+σXY(t)dW(t).      (11.139)

This shows how the drift μXY (t) and volatility vector σXY (t) for the product process F = XY are related to the drifts and volatility vectors of the processes X and Y.
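The product rule can be illustrated on a discrete time grid, where it holds exactly by telescoping. The sketch below (illustrative constant coefficients) accumulates the increments Y dX + X dY + dX dY and recovers X(t)Y(t) − X(0)Y(0):

```python
import numpy as np

# Discrete sanity check of the Ito product rule (11.137): with (illustrative)
# constant coefficients, accumulating Y dX + X dY + dX dY over a fine grid
# reproduces X(t)Y(t) - X(0)Y(0) exactly, by telescoping of the products.
rng = np.random.default_rng(4)
t, n = 1.0, 100_000
dt = t / n
muX, sigX = 0.10, np.array([0.20, 0.10])   # drift and volatility vector of X
muY, sigY = -0.05, np.array([0.00, 0.30])  # drift and volatility vector of Y
X, Y, rhs = 1.0, 2.0, 0.0                  # X(0) = 1, Y(0) = 2
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(2)   # increments of (W1, W2)
    dX = muX * dt + sigX @ dW
    dY = muY * dt + sigY @ dW
    rhs += Y * dX + X * dY + dX * dY            # increment from (11.137)
    X += dX
    Y += dY
print(rhs, X * Y - 2.0)   # identical up to floating-point roundoff
```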

Another important example is the Quotient Rule for the stochastic differential of a ratio of two Itô processes. This rule is useful when pricing derivatives where we need to compute the drift and volatility of a process defined by a ratio of two asset price processes.

Example 11.17.

Let {X(t)}t⩾0 and {Y(t)}t⩾0, with Y(t) ≠ 0, be Itô processes. Obtain the stochastic differential of their ratio F(t) ≔ X(t)/Y(t).

Solution. Let f(t, x, y) := x/y, i.e., f(t, X(t),Y (t)) = X(t)/Y (t) is an Itô process with its stochastic differential given by (11.131). The relevant partial derivatives are

ft=0,  fx=1y,  fy=xy2,  fxy=1y2,  fyy=2xy3,  fxx=0.

Substituting these terms into (11.131) (with x = X(t),y = Y (t)) gives

d (X(t)Y(t))dF(t)df(t,X(t),Y(t))=1Y(t)dX(t)X(t)Y2(t)dY(t)+X(t)Y3(t)(dY(t))21Y2(t)dX(t) dY(t)=(dX(t)Y(t)X(t)Y(t)dY(t)Y(t))(1dY(t)Y(t)).      (11.140)

This is written in a more convenient form (for later use) by dividing through by X(t)/Y (t):

dF(t)F(t)dX(t)Y(t)X(t)Y(t)=(dX(t)X(t)dY(t)Y(t))(1dY(t)Y(t)).      (11.141)

Substituting the expressions for dX(t) and dY (t), applying the basic rules, and combining terms in dt and dW(t) gives the form in (11.134) as

dX(t)Y(t)X(t)Y(t)=(μX(t)X(t)μY(t)Y(t)+σY(t)Y(t)(σY(t)Y(t)σX(t)X(t))) dt+(σX(t)X(t)σY(t)Y(t))dW(t)μXY(t) dt+σXY(t)dW(t).      (11.142)

This gives the drift μXY(t) and volatility vector σXY(t) for the quotient process F=XY in terms of the drifts and volatility vectors of the individual processes X and Y.

The Itô product and quotient rules in (11.139) and (11.142) take on a more compact form if the processes X and Y can be represented in terms of the so-called log-drifts and log-volatility vectors (sometimes also referred to as local drift and local volatility). It will turn out to be particularly convenient when we later model the asset (e.g., stock) price processes. That is, assume processes X and Y satisfy the SDEs

dX(t)X(t)=μX(t) dt+σX(t)dW(t),      (11.143)

dY(t)Y(t)=μY(t) dt+σY(t)dW(t),      (11.144)

where μX (t),μY (t), σX (t), σY (t) are ℱt-adapted log-drifts and log-volatility vectors. Note that these SDEs are quite general. The difference is that the previous coefficients are related to these “log-coefficients” by sending the previous coefficients μX (t) → μX (t)X(t), μY (t) → μY (t)Y (t), σX (t) → σX (t)X(t), σY (t) → σY (t)Y (t). The Itô formula applied to (11.143) and (11.144) still gives (11.138) and (11.140). However, now the terms occurring in (11.139) and (11.142) simplify, where we replace the previous ratios μX(t)X(t)μX(t),μY(t)Y(t)μY(t),σX(t)X(t)σX(t),σY(t)Y(t)σY(t), giving

d(X(t)Y(t))X(t)Y(t)=(μX(t)+μY(t)+σX(t)σY(t)) dt+(σX(t)+σY(t))dW(t)μXY(t) dt+σXY(t)dW(t).      (11.145)

Here, μXY (t) and σXY (t) denote the log-drift and log-volatility vector of process XY and

dX(t)Y(t)X(t)Y(t)=(μX(t)μY(t)+σY(t)(σY(t)σX(t))) dt+(σX(t)σY(t))dW(t)μXY(t) dt+σXY(t)dW(t)      (11.146)

where μXY(t) and σXY(t) denote the log-drift and log-volatility vector of process XY.

For d = 1, the SDEs in (11.143)–(11.146) are all of the form in (11.26) where all volatility coefficients are scalars. The processes can therefore be represented as in (11.27). Given initial values X(0) and Y(0):

X(t)=X(0)exp[0t(μX(s)12σX2(s)) ds+0tσX(s) dW(s)],Y(t)=Y(0)exp[0t(μY(s)12σY2(s)) ds+0tσY(s) dW(s)],X(t)Y(t)=X(0)Y(0)exp[0t(μXY(s)12σXY2(s)) ds+0tσXY(s) dW(s)],X(t)Y(t)=X(0)Y(0)exp[0t(μXY(s)12σXY2(s)) ds+0tσXY(s) dW(s)].

The reader can verify that the above third equation obtains by multiplying the expressions in the first and second equations, while the fourth equation obtains by dividing the expressions in the first and second equations. In the special case that the log-drift and log-volatility vectors are nonrandom (constants or ordinary functions of time t) the above processes are all GBM processes.

The above representations readily extend to the general vector case of d ⩾ 1. Consider the X process. Its natural logarithm has SDE:

d lnX(t)=dX(t)X(t)12(dX(t)X(t))2=μX(t) dt+σX(t)dW(t)12(σX(t)dW(t))2=(μX(t)12||σX(t)||2) dt+σX(t)dW(t).

In integral form,

ln X(t)X(0)=0t(μX(s)12||σX(s)||2) ds+0tσX(s)dW(s).

By exponentiating, X(t)X(0)=exp(lnX(t)X(0)),

X(t)=X(0)exp[0t(μX(s)12||σX(s)||2) ds+0tσX(s)dW(s)].      (11.147)

This expresses X(t) in the general case of d-dimensional BM and reduces to the above expression in case d = 1. Similar expressions hold for the other processes. In fact, given an adapted drift μ(t) and volatility vector σ(t) (satisfying the above integrability assumptions), the SDE

dU(t)U(t)=μ(t) dt+σ(t)dW(t)      (11.148)

with initial value U(0) is equivalent to the representation

U(t)=U(0)exp[0t(μ(s)12||σ(s)||2) ds+0tσ(s)dW(s)]=U(0) e0tμ(s) dsεt(σW)      (11.149)

where the vector BM version of the stochastic exponential in (11.28) is defined by

εt(σW) :=exp[120t||σ(s)||2ds+0tσ(s)dW(s)].      (11.150)

Hence, each process satisfying (11.143)–(11.146) has an equivalent representation as in (11.149). For the X process we see that (11.147) has precisely the form in (11.149). For processes Y, XY, and X/Y the same form obtains in the obvious manner where the corresponding drifts and volatility vectors, for the respective processes, are substituted into (11.149).

We now further extend our discussion to arbitrary dimensions d ⩾ 1. As already noted, all the formulae presented so far are valid for d ⩾ 1. Given a d-dimensional ℱt-adapted vector γ(t)=(γ1(t),...,γd(t)), the Itô integral w.r.t. d-dimensional BM is defined by the sum of one-dimensional Itô integrals,

0tγ(s)dW(s) :=i=1d0tγi(s) dWi(s).      (11.151)

Throughout we shall assume that all such Itô integrals are square-integrable martingales for all times 0 ⩽ tT, given some T > 0, where (11.121)–(11.123) hold.

Let {X(t)(X1(t),X2(t),...,Xn(t))}t0, n1, be an n-dimensional Itô vector process where each component is a real-valued Itô process driven by a d-dimensional BM:

dXi(t)=μi(t) dt+j=1dσij(t) dWj(t)μi(t) dt+σi(t)dW(t)      (11.152)

with corresponding integral form

Xi(t)=0tμi(s) ds+j=1d0tσij(s) dWj(s)0tμi(s) ds+0tσi(s)dW(s)      (11.153)

for i = 1,...,n. Each {μi(t)}t⩾0 is an integrable adapted process. The coefficients σij(t) are adapted and satisfy the square-integrability condition, 0TE[σij2(s)] ds<. The n × d matrix of coefficients σ(t) := [σij(t)]i=1,...,n; j=1,...,d is the matrix of volatilities where the ith row gives the volatility vector σi(t) of the ith process Xi:

σi(t)=(σi1(t),σi2(t),...,σid(t)),   i=1,...,n.

The jth component of σi(t) is the volatility coefficient σij (t).

Being Ft-adapted, the coefficients μi(t) and σij(t) can generally depend on the entire path of the vector process X up to time t, e.g., they can be functionals of the Brownian path {W(s) : 0 ⩽ st}. Generally this can be an intractable situation. However, an important case (and one that leads to some tractable models) is when these coefficients are known (defined) functions of the endpoint value of the process: μi(t) := μi(t, X(t)) and σi(t) := σi(t, X(t)). In this case, we say that the coefficients are state dependent and the vector process {X(t)}t⩾0 is a vector-valued diffusion process with each component process solving an SDE of the form

dXi(t)=μi(t,X(t)) dt+σi(t,X(t))dW(t).      (11.154)

We will return to this case later.
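For concreteness, a state-dependent system of the form (11.154) can be simulated with the Euler–Maruyama scheme. The sketch below uses illustrative placeholder drift and volatility functions (n = d = 2):

```python
import numpy as np

# Minimal Euler-Maruyama sketch for the state-dependent system (11.154); the
# drift and volatility functions below are illustrative placeholders.
rng = np.random.default_rng(5)

def mu(t, x):        # drift vector mu_i(t, x); here a simple mean-reverting choice
    return 0.5 * (1.0 - x)

def sigma(t, x):     # n x d volatility matrix sigma_ij(t, x), state dependent
    return np.array([[0.2 * x[0], 0.0],
                     [0.1 * x[1], 0.3 * x[1]]])

T, n_steps = 1.0, 1000
dt = T / n_steps
x = np.array([1.0, 1.0])                       # initial value X(0)
for k in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(2)  # vector BM increment
    x = x + mu(k * dt, x) * dt + sigma(k * dt, x) @ dW
print(x)                                       # one sample of X(T)
```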

We now turn to the more general multidimensional version of the Itô formula by extending Lemma 11.17 to the case of a function of time and any n ⩾ 1 processes. In preparation, we already computed the quadratic variation and the covariation of two Itô processes driven by a vector BM (see (11.126), (11.127), and (11.130)). In particular, any component process Xi has quadratic variation

[Xi,Xi](t)=0t||σi(u)||2 du=j=1d0tσij2(u) du      (11.155)

which is written equivalently in differential form as

d [Xi,Xi](t)dXi(t) dXi(t)(dXi(t))2=||σi(t)||2dt.      (11.156)

Any pair of processes Xi, Xj has covariation

[Xi,Xj](t)=0tσi(u)σj(u) du,      (11.157)

or in differential form,

d [Xi,Xj](t)dXi(t) dXj(t)=σi(t)σj(t) dt.      (11.158)

It is also useful to write the above as d[Xi, Xj](t)= Cij (t)dt by defining the coefficients

Cij(t) ≔ σi(t)·σj(t) = Σ_{k=1}^d σik(t) σjk(t).      (11.159)

These are the elements of an n × n matrix C(t) := [Cij(t)]i,j=1,...,n, where Cij(t) are related to the instantaneous covariances between the differential increments of the two processes Xi and Xj. In terms of the matrix σ(t), the instantaneous covariances are given by Cij(t) = (σ(t) σ(t)T)ij where σ(t)T is the d × n transpose of the matrix σ(t). Given the instantaneous covariances, we also define the instantaneous correlations

ρij(t) :=σi(t)σj(t)||σi(t)|| ||σj(t)||=Cij(t)||σi(t)|| ||σj(t)||.      (11.160)

[Remark: We can see, at least heuristically, that these coefficients are a measure of the instantaneous correlations between the increments of a pair of processes Xi and Xj when we divide the differential of the covariation with the square root of the product of the differentials of the variations, d[Xi,Xj](t)d[Xi,Xi](t)d[Xj,Xj](t)=Cij(t)||σi(t)||||σj(t)||=ρij(t).]
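These definitions are easy to compute in practice. The sketch below (with an illustrative constant 3 × 2 volatility matrix) forms the diffusion matrix C = σσᵀ and the instantaneous correlation matrix:

```python
import numpy as np

# Small numerical sketch of (11.159)-(11.160): for an illustrative constant
# 3 x 2 volatility matrix sigma, form C = sigma sigma^T and the instantaneous
# correlations rho_ij.
sigma = np.array([[0.20, 0.00],
                  [0.12, 0.16],
                  [0.05, 0.25]])   # row i is the volatility vector sigma_i
C = sigma @ sigma.T                # C_ij = sigma_i . sigma_j  (n x n)
mags = np.sqrt(np.diag(C))         # ||sigma_i||
rho = C / np.outer(mags, mags)     # rho_ij = C_ij / (||sigma_i|| ||sigma_j||)
print(C)
print(rho)                         # unit diagonal, symmetric, entries in [-1, 1]
```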

As in the case of Lemma 11.17, the main idea, which also gives us a simple way to remember the Itô formula for smooth functions f(t, x1,...,xn) of n variables and time t, is to apply a Taylor expansion of f up to terms of first order in the time increment dt, and up to second order in the increments dx1,..., dxn. The stochastic differential form of the Itô formula is then obtained upon replacing the ordinary variables xi → Xi(t), and replacing all ordinary differentials by the respective stochastic differentials, dxi → dXi(t), dxi dxj → d[Xi,Xj](t), i, j = 1,..., n. We then also write the formula more explicitly by applying the basic rules in (11.116). The formal proof (which we omit) follows in the same manner as the proof for the two variable case in Lemma 11.17. Here we simply state this important result as a lemma.

Lemma 11.18

(Itô Formula for a Function of Several Processes). Let the vector-valued process {X(t) ≡ (X1(t), X2 (t),...,Xn(t))}t⩾0 satisfy the SDE in (11.152) and assume f(t, x) ≡ f(t, x1 ,...,xn) is a real-valued function on +×n that is continuously differentiable with respect to time t and twice continuously differentiable with respect to the n variables x1,...,xn. Then, the process defined by F(t) := f(t, X(t)) ≡ f(t, X1 (t),...,Xn(t)), t ⩾ 0, is an Itô process with stochastic differential dF (t) ≡ df(t, X(t)) given by

dF(t)=ft(t,X(t)) dt+i=1nfxi(t,X(t)) dXi(t)  +12i=1nj=1n2fxixj(t,X(t)) d [Xi,Xj](t)      (11.161)

Note that the formula in (11.131) is recovered in the special case that n = 2 by setting X1 (t) ≡ X(t),X2 (t) ≡ Y(t). The stochastic differential in (11.161) has meaning when written as an Itô process in integral form. Using (11.158), (11.161) takes the form

df(t,X(t))=[ft(t,X(t))+12i=1nj=1nCij(t)2fxixj(t,X(t))] dt  +i=1nfxi(t,X(t)) dXi(t).      (11.162)

This expression involves the time differential and a linear combination of stochastic differentials in the component processes. By further making use of (11.152), we obtain Itô’s formula, extending (11.134), where the stochastic differential is written in terms of the vector BM increment:

df(t,X(t))=[ft(t,X(t))+12i=1nj=1nCij(t)2fxixj(t,X(t))+i=1nμi(t)fxi(t,X(t))] dt+i=1nfxi(t,X(t)) σi(t)dW(t)μf(t) dt+σf(t)dW(t).      (11.163)

In the second equation line we identified the drift and volatility vector of the process {f(t, X(t))}t⩾0.

11.10.2 Multidimensional SDEs, Feynman–Kac Formulae, and Transition CDFs and PDFs

The main concepts, theorems, and formulae that we established in Section 11.7 for the case of a single process driven by a one-dimensional BM also carry over into the multidimensional case with appropriate assumptions in place. Here we only give a very brief account of some of the relevant results. Our main starting point is the n-dimensional diffusion process solving the system of SDEs in (11.154), i.e.,

dXi(t)=μi(t,X(t)) dt+j=1dσij(t,X(t)) dWj(t),    i=1,...,n.      (11.164)

In integral form,

Xi(t)=Xi(0)+0tμi(s,X(s)) ds+j=1d0tσij(s,X(s)) dWj(s),   i=1,...,n.      (11.165)

The drift and volatility coefficients, μi(t, x) and σij(t, x), are given functions of time and variables x =(x1,...,xn).

Theorem 11.4, which provides sufficient conditions on the existence and uniqueness of a strong solution to the one-dimensional SDE (11.24), extends in a similar manner to the above system of SDEs. The absolute values for the Lipschitz condition and the linear growth condition on the coefficients are now replaced by appropriate vector and matrix norms. We denote the drift vector by μ(t, x) = (μ1 (t, x),...,μn(t, x)). The norm of a vector vn is ||v||:=i=1nυi2 and the norm of a matrix A with elements aij is defined similarly as ||A||:=i,jaij2. If there is a constant K > 0 such that the Lipschitz condition

||μ(t,x)μ(t,y)||+||σ(t,x)σ(t,y)||K||xy||

and the linear growth condition

||μ(t,x)||+||σ(t,x)||K(1+||x||)

are both satisfied, for x,yn, then this ensures that there is a unique vector process {X(t)}t⩾0 solving (11.165) and that the paths of the vector process are continuous in time. As in the one-dimensional case, these conditions are not necessary, but are sufficient conditions to guarantee the existence of a unique strong solution.

The solution to (11.165) is also a vector Markov process, i.e.,

ℙ(X(t) ≤ y | ℱs) = ℙ(X(t) ≤ y | X(s))      (11.166)

for all y ∈ ℝⁿ or, for any Borel function h : ℝⁿ → ℝ,

E[h(X(t))|s]=E[h(X(t))|X(s)],   0st.      (11.167)

The conditional expectation of h(X(T)), given the vector value X(t)=xn at a time tT , is denoted by

Et,x[h(X(T))] :=E[h(X(T))|X(t)=x].

As in the scalar case, the subscripts t, x are shorthand for conditioning on a given vector value X(t)= x at time t. The conditional expectation is a function of the ordinary variables x and t, i.e., Et,x[h(X(T))] = g(t, x) for fixed T. The Markov property is expressible as E[h(X(T)) |ℱt] = Et,X(t)[h(X(T))] = g(t, X(t)), for all 0 ⩽ tT. Hence, if we know the conditional probability distribution of the random vector X(T), given X(t) = x, then we can compute the function g(t, x).

As in the scalar case, the conditional PDF of X(T), given X(t) is the (joint) transition PDF, p(t, T; x, y) ≡ p(t, T; x1,...,xn,y1,...,yn), for the vector process X obtained by differentiating the corresponding (joint) transition CDF, P(t, T; x, y):

p(t,T;x,y)=ny1ynP(t,T;x,y),      (11.168)

P(t,T;x,y):=(X(T)y|X(t)=x)(X1(T)y1,...,Xn(T)yn|X1(t)=x1,...,Xn(t)=xn)=y1ynp(t,T;x,z) dz.      (11.169)

As in the one-dimensional case, the Markov and tower property lead to the multidimensional version of the Chapman–Kolmogorov relation:

p(s,t;x,y)=np(s,u;x,z)p(u,t;z,y) dz,    s<u<t.      (11.170)

Given any Borel set B(n), the probability that the time-t vector process has value in B, given that it has value xn at some earlier time s < t, is given by integrating the transition PDF over B:

P(X(t)B|X(s)=x)=Bp(s,t;x,y) dy.      (11.171)

The multidimensional analogue of (11.41) for computing a conditional expectation is an integral over n:

Et,x[h(X(T))]=nh(y)p(t,T;x,y) dy,  t<T.      (11.172)

As in the case of a scalar diffusion on ℝ with the generator in (11.43), the generator for the above vector diffusion process {X(t)}t⩾0 on n is defined by the differential operator Gt,x acting on a smooth function f = f(t, x),

Gt,xf :=12i=1nj=1nCij(t,x)2fxixj+i=1nμi(t,x)fxi      (11.173)

where [Cij(t,x)=k=1dσik(t,x)σjk(t,x)]i,j=1,...,n is the diffusion matrix. We shall assume that this matrix is positive definite where vTCv>0 for nonzero vn. The differential and integral forms of the Itô formula are now written compactly (extending (11.44) and (11.45) to the multidimensional case):

df(t,X(t))=(t+Gt,x) f(t,X(t)) dt+j=1d(i=1nfxi(t,X(t)) σij(t,X(t))) dWj(t)      (11.174)

and

f(t,X(t))=f(0,X(0))+0t(s+Gs,x) f(s,X(s)) ds   +j=1d0t(i=1nfxi(s,X(s)) σij(s,X(s))) dWj(s).      (11.175)

The analogues of (11.46)–(11.48) also follow if we fix a time T > 0 and assume the square-integrability condition,

∫₀ᵀ E[(Σ_{i=1}^n fxi(s,X(s)) σij(s,X(s)))²] ds < ∞,  j = 1,...,d,      (11.176)

which ensures that all Itô integrals (w.r.t. each BM, Wj) in (11.175) are martingales. By using a similar argument as in the one-dimensional case, the process {Mf (t)}0⩽t⩽T defined by

Mf(t) :=f(t,X(t))0t(s+Gs,x) f(s,X(s)) ds      (11.177)

is a martingale, i.e., (11.48) holds. As a particular application of (11.177), we obtain a martingale defined by Mf (t) := f(t, X(t)) if the function f solves the PDE: ft+Gt,xf=0.

The martingale property of the process defined in (11.177) allows us to extend Theorem 11.7, Theorem 11.8 and Theorem 11.9 to the multidimensional case. Here, we simply state useful versions of the multidimensional extensions. Their proofs involve some similar steps as in the one-dimensional case.

Theorem 11.19.

(Multidimensional Feynman–Kac). Let {X(t) := (X1 (t),...,Xn(t))}0⩽t⩽T solve the system of SDEs in (11.164) and let ϕ:n be a Borel function. Also, assume the square-integrability condition (11.176) holds. Suppose the function f(t, x) is a solution to the backward Kolmogorov PDE ft+Gt,xf=0, i.e.,

ft(t,x)+12i=1nj=1nCij(t,x)2fxixj(t,x)+i=1nμi(t,x)fxi(t,x)=0,      (11.178)

for all xn, t < T , subject to the terminal condition f(T, x) = ϕ(x). Then, assuming Et,x[|ϕ(X(T))|]<,f(t,x) has the representation

f(t,x)=Et,x[ϕ(X(T))]E[ϕ(X(T))|X(t)=x]      (11.179)

for all xn, 0 ⩽ tT.

The slightly more general result below includes an additional exponential discount factor via a discounting function r(t, x). Theorem 11.19 is then a particular case of this theorem by simply setting r(t, x) ≡ 0.

Theorem 11.20.

(“Discounted” Feynman–Kac). Let {X(t) := (X1 (t),...,Xn(t))}0⩽t⩽T solve the system of SDEs in (11.164) and assume the square integrability condition (11.176) holds. Let ϕ:n be a Borel function and r(t,x):[0,T]×n be a lower-bounded continuous function. Then, the function defined by the conditional expectation

f(t,x) :=Et,x[etTr(u,X(u)) duϕ(X(T))]E[etTr(u,X(u)) duϕ(X(T))|X(t)=x]      (11.180)

solves the PDEft+Gt,xfr(t,x)f=0, i.e.,

ft(t,x)+12i=1nj=1nCij(t,x)2fxixj(t,x)+i=1nμi(t,x)fxi(t,x)r(t,x)f(t,x)=0,      (11.181)

for all x, 0 < t < T, subject to the terminal condition f(T, x) = ϕ(x).

In the special case that the discount function is a constant, r(t, x) ≡ r, then (11.180) simplifies since the discount factor is simply er(Tt) where f(t,x)=er(Tt)Et,x[ϕ(X(T))].
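This constant-rate case is easily checked by simulation. The sketch below (illustrative parameters) evaluates the discounted expectation for a single drifted BM with an indicator payoff, by Monte Carlo and in closed form:

```python
import numpy as np
from math import erf, exp, sqrt

# Monte Carlo sketch of the constant-rate case of Theorem 11.20 for a single
# drifted BM, X(T) = x + mu*tau + sigma*sqrt(tau)*Z, with phi an indicator
# (all parameter values are illustrative).
rng = np.random.default_rng(10)
mu, sigma, r, tau, x, K = 0.1, 0.4, 0.05, 1.0, 0.0, 0.2

n = 10**6
XT = x + mu * tau + sigma * np.sqrt(tau) * rng.standard_normal(n)
mc = exp(-r * tau) * np.mean(XT > K)           # e^{-r tau} E_{t,x}[phi(X(T))]

a = (x + mu * tau - K) / (sigma * sqrt(tau))
closed = exp(-r * tau) * 0.5 * (1.0 + erf(a / sqrt(2.0)))   # e^{-r tau} N(a)
print(mc, closed)                              # agree up to MC error
```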

The multidimensional version of Proposition 11.8 also follows, where both the transition PDF and CDF solve the backward Kolmogorov PDE in (11.178) in the (backward) variables (t, x). In particular, fixing a time T > 0 and a vector y ∈ ℝⁿ, and setting ϕ(x) = I{x1≤y1,...,xn≤yn} in Theorem 11.19, implies that the transition CDF

P(t,T;x,y)E[I{X1(T)y1,...,Xn(T)yn}|X1(t)=x1,...,Xn(t)=xn]

solves the PDE in (11.178) with terminal condition given by the indicator function, P(T,T;x,y) = I{x1≤y1,...,xn≤yn}. The transition PDF, p = p(t, T; x, y), is obtained from the CDF by differentiating in the y variables, according to (11.168). Hence, p also solves (11.178) and the terminal condition is given by

lim_{t↑T} p(t,T;x,y) = δ(y − x)

where δ(y − x) = δ(y1x1) ... δ(ynxn) is the n-dimensional Dirac delta function as a product of univariate delta functions.

The transition PDF p(t, T; x, y) is the conditional PDF of the random vector X(T) at y, given X(t) = x. Hence, according to (11.179) the solution to the PDE problem takes the form of an integral of the product of the transition PDF and the function ϕ(y):

f(t,x)=E[ϕ(X(T))|X(t)=x]=nϕ(y)p(t,T;x,y) dy.      (11.182)

That is, the transition PDF is the fundamental solution to the PDE problem stated in Theorem 11.19.

For many applications the vector diffusion process is time homogeneous where the drift and diffusion matrix are time independent, μi(t, x) ≡ μi(x) and Cij(t,x)=Cij(x)=k=1dσik(x)σjk(x). The generator is then a differential operator only in the x variables,

Gt,x=Gx :=12i=1nj=1nCij(x)2xixj+i=1nμi(x)xi.

Defining τ := Tt, the solution in (11.179) is a function of (τ, x), i.e., f = f(τ, x), and the backward PDE in (11.178) takes the form

fτ=12i=1nj=1nCij(x)2fxixj+i=1nμi(x)fxi      (11.183)

subject to the initial condition f(0, x) = ϕ(x). Moreover, if the discount function in Theorem 11.20 is time independent, r(t, x) ≡ r(x), then the operator Gt,xr(t,x)Gxr(x) is time independent, i.e., the PDE in (11.181) is time-homogeneous:

fτ=12i=1nj=1nCij(x)2fxixj+i=1nμi(x)fxir(x) f,      (11.184)

with the solution represented in (11.180) as a function f = f(τ, x) having initial condition f(0, x) = ϕ(x).

For the time-homogeneous case we hence have the transition CDF and PDF as functions of τ, x, y where we equivalently write P(t, T; x, y) as P(τ; x, y) and p(t, T; x, y) as p(τ; x, y).

Both P(τ; x, y) and p(τ; x, y) solve the PDE in (11.183) where p(0+; x, y) = δ(xy) and P(0; x, y) given by the n-dimensional unit step function I{yx}=I{y1x1,...,ynxn}.

As a first example of how Theorem 11.19 can be applied in practice, consider a simple 2-dimensional process (n = 2) driven by a 2-dimensional BM (d = 2). That is, let {X(t)=[X1(t),X2(t)]T}t02 be two scaled and drifted Brownian motions satisfying the system of SDEs:

dX1(t) = μ1 dt + σ1 dW1(t) ≡ μ1 dt + σ1·dW(t),
dX2(t) = μ2 dt + σ2ρ dW1(t) + σ2√(1−ρ²) dW2(t) ≡ μ2 dt + σ2·dW(t),      (11.185)

where ρ ∊ (−1, 1) is a constant correlation coefficient, σ1 = [σ1, 0] and σ2 = [σ2ρ, σ2√(1−ρ²)] are volatility vectors with magnitudes ||σ1|| = σ1, ||σ2|| = σ2. Note that σ1·σ2 = ρσ1σ2. We can also represent this system of SDEs in vector-matrix notation, dX(t) = μ dt + σ dW(t):

[dX1(t), dX2(t)]ᵀ = [μ1, μ2]ᵀ dt + [σ1, 0; σ2ρ, σ2√(1−ρ²)] [dW1(t), dW2(t)]ᵀ.

The above 2 × 2 matrix (written row by row) is the σ-matrix whose rows correspond to the volatility vectors σ1 and σ2. The diffusion matrix C = σσᵀ is then

C = [σ1, 0; σ2ρ, σ2√(1−ρ²)] [σ1, σ2ρ; 0, σ2√(1−ρ²)] = [σ1², ρσ1σ2; ρσ1σ2, σ2²],

where the 2 × 2 matrix σ is the lower Cholesky factorization of C.

The SDEs in (11.185), subject to arbitrary initial conditions X1(t) = x1, X2(t) = x2, are solved by simply integrating from time t to T:

X1(T)=x1+μ1(Tt)+σ1(W(T)W(t)),X2(T)=x2+μ2(Tt)+σ2(W(T)W(t)).      (11.186)

We can express these random variables in terms of two standard normal random variables:

X1(T) = x1 + μ1τ + σ1√τ Z1,  X2(T) = x2 + μ2τ + σ2√τ Z2,      (11.187)

where

Z1 ≔ σ1·(W(T)−W(t))/(σ1√τ),  Z2 ≔ σ2·(W(T)−W(t))/(σ2√τ),      (11.188)

τ ≔ T − t. The correlation (and covariance) between these two standard normals equals ρ. This follows from the fact that Wi(T) − Wi(t), i = 1, 2, are i.i.d. Norm(0,τ):

Cov(Z1,Z2) = (1/(σ1σ2τ)) Cov(σ1·(W(T)−W(t)), σ2·(W(T)−W(t))) = (1/(σ1σ2τ)) Σ_{i=1}^2 Σ_{j=1}^2 σ1i σ2j Cov(Wi(T)−Wi(t), Wj(T)−Wj(t)) = (1/(σ1σ2τ)) (Σ_{i=1}^2 σ1i σ2i) τ = σ1·σ2/(σ1σ2) = ρ.

Hence, by (11.187), the matrix of correlations, ρij, of the component processes is the 2 × 2 correlation matrix given by ρ11 = ρ22 = 1, ρ12 = ρ21 = ρ, i.e.,

ρ12 ≔ Corr(X1(T),X2(T)) = Cov(X1(T),X2(T))/√(Var(X1(T)) Var(X2(T))) = Cov(Z1,Z2) = ρ.

The covariance matrix of X(T) is given by Cov(Xi(T), Xj(T)) = Cijτ = (σi·σj)τ = ρijσiσjτ; i, j = 1, 2. The time-scaled solution vector (1/√τ)X(T) is a bivariate normal,

[X1(T)/√τ, X2(T)/√τ]ᵀ ~ Norm2([(x1+μ1τ)/√τ, (x2+μ2τ)/√τ]ᵀ, C).

The time-homogeneous transition CDF for the vector process is obtained by computing a joint conditional probability while using (11.186), or (11.187), and the fact that W(T) − W(t) is independent of X(t), i.e., the pair Z1, Z2 is independent of the pair X1(t), X2(t):

P(τ;x1,x2,y1,y2):=(X1(T)y1,X2(T)y2|X1(t)=x1,X2(t)=x2)=(x1+μ1τ+σ1τZ1y1,x2+μ2τ+σ2τZ2y2)=(Z1y1x1μ1τσ1τ,Z2y2x2μ2τσ2τ)=N2(y1x1μ1τσ1τ,y2x2μ2τσ2τ;ρ).      (11.189)

This is a bivariate normal CDF and differentiating (using the chain rule) gives the transition PDF as a bivariate normal density,

p(τ;x1,x2,y1,y2) ≡ ∂²P(τ;x1,x2,y1,y2)/∂y1∂y2 = (1/(σ1σ2τ)) n2((y1−x1−μ1τ)/(σ1√τ), (y2−x2−μ2τ)/(σ2√τ); ρ) = (1/(2πτσ1σ2√(1−ρ²))) exp(−(z1² + z2² − 2ρz1z2)/(2(1−ρ²))),      (11.190)

where z1 = (y1−x1−μ1τ)/(σ1√τ), z2 = (y2−x2−μ2τ)/(σ2√τ). According to Theorem 11.19, both the transition CDF and PDF, P and p, solve the time-homogeneous PDE in (11.183) in the variables τ, x1, x2, for fixed arbitrary real values of y1, y2. Using the above explicit constant expressions for C11 = σ1², C22 = σ2², C12 = C21 = ρσ1σ2, and the constant drift coefficients μ1 and μ2, p and P solve the backward PDE, i.e.,

∂p/∂τ = ½σ1² ∂²p/∂x1² + ½σ2² ∂²p/∂x2² + ρσ1σ2 ∂²p/∂x1∂x2 + μ1 ∂p/∂x1 + μ2 ∂p/∂x2

and similarly for P. We leave it as an exercise for the reader to show that the transition CDF in (11.189) has limit P(0+;x1,x2,y1,y2) = I{y1≥x1, y2≥x2} for all x ≠ y. The Dirac delta function initial condition for the transition PDF then follows by formally differentiating the step function to obtain p(0+; x1, x2, y1, y2) = δ(y1 − x1)δ(y2 − x2). The reader can verify by direct differentiation that the transition PDF in (11.190) satisfies the above PDE and is hence the fundamental solution. As a density in the variables y1, y2, the transition PDF should also integrate to unity over ℝ². That is, the event {(X1(T),X2(T)) ∈ ℝ²}, conditional on X1(t) = x1, X2(t) = x2, must have unit probability. This is directly verified as follows:

ℙ(X1(T) < ∞, X2(T) < ∞ | X1(t) = x1, X2(t) = x2) = lim_{y1,y2→∞} P(τ;x1,x2,y1,y2) = lim_{y1,y2→∞} N2((y1−x1−μ1τ)/(σ1√τ), (y2−x2−μ2τ)/(σ2√τ); ρ) = N2(∞,∞;ρ) = 1.

From (11.169), this implies that the PDF integrates to unity for all x1,x2,

∫_ℝ ∫_ℝ p(τ;x1,x2,y1,y2) dy1 dy2 = 1.

Alternatively, this is easily shown by directly integrating the bivariate density in (11.190). Note that in the special case when ρ = 0, the two processes are independent (uncorrelated) drifted and scaled BM. The joint transition PDF and CDF are simply products of the one-dimensional PDFs and CDFs of the component processes. This is consistent with the fact that the above PDE is separable in the variables x1 and x2 and hence admits a solution as a product of individual functions of x1 and x2.
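The transition CDF (11.189) can be validated by direct sampling. The sketch below (illustrative parameters) compares a Monte Carlo estimate of the joint probability with the bivariate normal CDF, here evaluated with SciPy:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch: check the transition CDF (11.189) of the two correlated drifted BMs
# in (11.185) against direct sampling (illustrative parameters; SciPy's
# multivariate normal CDF is used for N2).
rng = np.random.default_rng(6)
mu1, mu2, s1, s2, rho, tau = 0.1, -0.2, 0.3, 0.5, 0.6, 2.0
x1 = x2 = 0.0
y1, y2 = 0.4, -0.1

n = 10**6
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
X1 = x1 + mu1 * tau + s1 * np.sqrt(tau) * Z[:, 0]
X2 = x2 + mu2 * tau + s2 * np.sqrt(tau) * Z[:, 1]
emp = np.mean((X1 <= y1) & (X2 <= y2))

a1 = (y1 - x1 - mu1 * tau) / (s1 * np.sqrt(tau))
a2 = (y2 - x2 - mu2 * tau) / (s2 * np.sqrt(tau))
N2 = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([a1, a2])
print(emp, N2)   # agree up to MC error
```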

Let us now consider a multidimensional GBM process. This is an important example that arises in Chapter 13 where we consider derivative pricing within a standard economic model containing multiple stocks whose price processes are correlated geometric Brownian motions. In particular, consider n ⩾ 1 strictly positive stock price processes S(t) := (S1(t),...,Sn(t)), t ⩾ 0, that are driven by a standard d ⩾ 1 dimensional BM, W(t) = (W1(t), ..., Wd(t)):

dSi(t) = Si(t)[μi dt + Σ_{j=1}^d σij dWj(t)] ≡ Si(t)[μi dt + σi·dW(t)],  i = 1,...,n,      (11.191)

where the log-drifts μi and log-volatilities σij are assumed to be constant parameters. It is important to stress the distinction that these are log-coefficients, although by standard convention we are still using similar symbols for the drift and volatility coefficient functions! To be precise, by identifying the SDE in (11.191) with that in (11.164) (where X(t) → S(t)) we see that the drift and volatility coefficient functions for the i-th stock price in (11.191) are state-dependent (time-independent) linear functions of the i-th variable

μi(t,x)μi(x)=μixi  and  σij(t,x)σij(x)=σijxi      (11.192)

where on the right of each equality are the log-drift and log-volatility parameters μi and σij. [We note that the symbols μi and σij are constant parameters when denoted without arguments and they are the drift and volatility coefficient functions when denoted with arguments.] So the above SDEs are time homogeneous with linear functions μi(t, S(t)) ≡ μi(S(t)) = μiSi(t) and σij(t, S(t)) ≡ σij(S(t)) = σijSi(t).

The log-volatility coefficient matrix σ=[σij]i=1,...,n;j=1,...,d is an n × d constant matrix with the i-th row being the 1 × d volatility vector σi = [σi1 ..., σid] for the i-th stock price process. The system of SDEs in (11.191) has matrix-vector form:

[dS1(t)/S1(t), ..., dSn(t)/Sn(t)]ᵀ = [μ1, ..., μn]ᵀ dt + [σ11 ⋯ σ1d; ⋯; σn1 ⋯ σnd] [dW1(t), ..., dWd(t)]ᵀ.      (11.193)

As shown just below, the log-diffusion matrix C = σσT is proportional to the n × n matrix of covariances among the log-returns of the stocks. We assume that C is nonsingular. As usual, we define the n × n matrix of correlations, ρ:=[ρij]i,j=1,...,n where Cij = σi ·σj = ρijσiσj , where the i-th volatility vector has magnitude denoted by σi > 0,

Cii=σi2||σi||2=σi12+...+σid2.

The system in (11.191) is readily solved by considering the log-prices defined by Xi(t) := ln Si(t). In fact, we have already solved this problem. See (11.148)–(11.150), where each SDE in (11.191) is of the form in (11.148) with solution of the form in (11.149). For the sake of clarity, we repeat the same steps here by using Itô’s formula where (upon substituting the expression in (11.191)):

dXi(t)=d ln Si(t)=dSi(t)Si(t)12(dSi(t)Si(t))2=(μi12||σi||2) dt+σidW(t)=(μi12σi2) dt+σidW(t)

with initial condition Xi(0) = ln Si(0), where Si(0), i = 1, . . . , n, are the initial stock prices. Integrating and exponentiating gives the stock prices Si(t)=eXi(t) for all t ⩾ 0:

Si(t)=Si(0) e(μi12σi2)t+σiW(t)=Si(0) eμitεt(σiW).      (11.194)

It is easy to verify by computing the stochastic differential of this expression, upon directly applying Itô’s formula, that each Si(t) solves (11.191). The solution in (11.194) is in fact the unique strong solution subject to the initial price vector [S1(0),...,Sn(0)]T.

The second expression in (11.194) involves an exponential ℙ-martingale,

εt(σi·W) = exp[−½σi²t + σi·W(t)] = exp[−½σi²t + Σ_{j=1}^d σij Wj(t)].

To see that this is a ℙ-martingale with expectation one, note that σi · W(t) is normal with mean E[σiW(t)]=j=1dσijE[Wj(t)]=0 and variance

Var (σiW(t))=Var (j=1dσijWj(t))=j=1dσij2 Var (Wj(t))=||σi||2t=σi2t.

Here we used the fact that the Wj(t), j = 1,...,d, are i.i.d. Norm(0, t). Hence, by the expression for the m.g.f. of a normal random variable, E[e^{ασi·W(t)}] = e^{α²σi²t/2} for any α ∈ ℝ, i.e., E[e^{σi·W(t)}] = e^{σi²t/2} and E[εt(σi·W)] = 1, for all t ≥ 0. So the mean of the price in (11.194) is

E[Si(t)]=Si(0) eμit E[εt(σiW)]=Si(0) eμit.      (11.195)

As in the one-dimensional GBM stock model, the log-normal drift parameter μi is therefore the (physical) growth rate of the i-th price process in the (physical) measure ℙ.

From the strong solution in (11.194) we have

Si(T)=Si(t) e(μi12σi2)(Tt)+σi(W(T)W(t)).      (11.196)

It is convenient to define the log-return random variables over a time interval τ := Tt,

Xi ≔ ln(Si(T)/Si(t)) = αi + σi√τ Zi,      (11.197)

αi ≔ (μi − σi²/2)τ, where

Zi ≔ σi·(W(T)−W(t))/(σi√τ) = (1/(σi√τ)) Σ_{j=1}^d σij (Wj(T)−Wj(t)),

i = 1, ..., n. The Zi’s are normal random variables since they are linear combinations of Brownian increments. The vector Z = [Z1,...,Zn]T has multivariate normal distribution:

[Z1,...,Zn]Τ~Normn(0,ρ).      (11.198)

To verify this, the covariances are computed using the same steps as in the calculation of the covariance of Z1 and Z2 in (11.188):

Cov(Zi,Zj) = (1/(σi√τ))(1/(σj√τ)) Cov(σi·(W(T)−W(t)), σj·(W(T)−W(t))) = (1/(σiσjτ)) σi·σj τ = σi·σj/(σiσj) = ρij.      (11.199)

Hence, Cov(Xi, Xj) = τσiσj Cov(Zi,Zj) = τσiσjρij = τCij. The log-returns are therefore jointly normally distributed:

[X1,...,Xn]Τ~Normn([α1,...,αn]Τ,τC).      (11.200)

In particular, the matrix ρ is in fact the matrix of correlations of the stock price log-returns:

Corr(Xi,Xj)=Cov(Xi,Xj)Var(Xi) Var(Xj)=τCijσiσj=ρij.      (11.201)
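The covariance structure (11.200) is straightforward to confirm by simulation. The sketch below (illustrative n = 3, d = 2 log-volatility matrix) samples the log-returns and compares their sample covariance with τC:

```python
import numpy as np

# Sketch verifying Cov(X_i, X_j) = tau * C_ij for the log-returns (11.197),
# with an illustrative n = 3, d = 2 log-volatility matrix.
rng = np.random.default_rng(7)
tau = 0.5
sigma = np.array([[0.20, 0.00],
                  [0.12, 0.16],
                  [0.05, 0.25]])                       # n x d matrix [sigma_ij]
mu = np.array([0.05, 0.03, 0.08])                      # log-drifts mu_i
alpha = (mu - 0.5 * (sigma**2).sum(axis=1)) * tau      # alpha_i = (mu_i - sigma_i^2/2) tau

n_samples = 10**6
dW = np.sqrt(tau) * rng.standard_normal((n_samples, 2))   # samples of W(T) - W(t)
X = alpha + dW @ sigma.T                               # log-return samples (11.197)
print(np.cov(X, rowvar=False))                         # sample covariance matrix
print(tau * sigma @ sigma.T)                           # tau * C, as in (11.200)
```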

The above GBM process is time homogeneous. Let x = (x1, ..., xn) and y = (y1, ..., yn) be strictly positive vectors in ℝ₊ⁿ. The (joint) transition CDF of the stock price process S(t) is given by the conditional probability below, which is calculated by using the independence of the log-returns {Xi ≔ ln(Si(T)/Si(t))}i=1,...,n from the time-t stock prices {Si(t)}i=1,...,n:

P(τ;x,y) ≔ ℙ(S1(T) ≤ y1, ..., Sn(T) ≤ yn | S1(t) = x1, ..., Sn(t) = xn) = ℙ(ln(S1(T)/S1(t)) ≤ ln(y1/x1), ..., ln(Sn(T)/Sn(t)) ≤ ln(yn/xn) | S1(t) = x1, ..., Sn(t) = xn) = ℙ(X1 ≤ ln(y1/x1), ..., Xn ≤ ln(yn/xn)) = ℙ(Z1 ≤ a1, ..., Zn ≤ an) = Nn(a1, ..., an; ρ)      (11.202)

where ai ≔ (ln(yi/xi) − αi)/(σi√τ) = (ln(yi/xi) − (μi−σi²/2)τ)/(σi√τ). The function Nn(a1, ..., an; ρ) is the n-variate standard normal CDF of Z with given correlation matrix ρ:

Nn(a1,...,an;ρ) = ∫_{−∞}^{an} ⋯ ∫_{−∞}^{a1} nn(z1,...,zn;ρ) dz1 ⋯ dzn.

The standard normal PDF of Z, nn(z1,...,zn;ρ) = ∂ⁿNn(z1,...,zn;ρ)/∂z1⋯∂zn, is given by the n-variate Gaussian density

nn(z1,...,zn;ρ) = (1/√((2π)ⁿ det ρ)) exp(−½ z ρ⁻¹ zᵀ),      (11.203)

z=[z1,...,zn]n. Differentiating according to (11.168), and applying the chain rule, gives the (joint) transition PDF for the time-homogeneous GBM stock price process:

p(τ;x,y) = ∂ⁿP(τ;x,y)/∂y1⋯∂yn = (∏_{i=1}^n ∂ai/∂yi) ∂ⁿNn(a1,...,an;ρ)/∂a1⋯∂an = (∏_{i=1}^n 1/(yiσi√τ)) nn(a1,...,an;ρ) = (1/(y1⋯yn σ1⋯σn √((2πτ)ⁿ det ρ))) exp(−½ a ρ⁻¹ aᵀ),      (11.204)

a = [a1, ..., an] and ai’s defined above. This is of the form of a multivariate log-normal density. Note that this density can also be written in terms of the covariance matrix C = DρD, D = diag(σ1, ..., σn), ρ−1 = DC−1D.

By (11.192), the diffusion matrix function has elements

Cij(x)=k=1dσik(x)σjk(x)=k=1dxiσikxjσjk=xixjk=1dσikσjk=xixjCij

with constants Cij = σi·σj = ρijσiσj. Hence, the time-homogeneous PDE in (11.183) takes the equivalent form:

∂f/∂τ = ½ Σ_{i=1}^n Σ_{j=1}^n ρij σiσj xixj ∂²f/∂xi∂xj + Σ_{i=1}^n μi xi ∂f/∂xi = ½ Σ_{i=1}^n σi² xi² ∂²f/∂xi² + Σ_{i<j} ρij σiσj xixj ∂²f/∂xi∂xj + Σ_{i=1}^n μi xi ∂f/∂xi.      (11.205)

The Feynman–Kac Theorem 11.19 assures us that the transition CDF in (11.202) and PDF in (11.204) both solve the PDE (11.205) in the variables τ > 0, x+n, for fixed y+n. The initial condition P(0;x,y)=I{xy}I{x1y1,...,xnyn} follows from the basic limit properties of the multivariate normal CDF. We leave it to the reader to verify. Then, by multiple differentiation of the step functions, the initial condition p(0+; x, y) = δ(xy) is obtained for the transition PDF.

A common case is when n = d = 2. The above formulation simplifies since we have only one correlation coefficient ρ for the log-returns of stocks 1 and 2, where ρ12 = ρ21 ≡ ρ, ρ11 = ρ22 = 1. The log-diffusion matrix of covariances has elements C11 = σ1², C22 = σ2², C12 = C21 = ρσ1σ2. The log-volatility vectors are σ1 = [σ11, σ12] = [σ1, 0] and σ2 = [σ21, σ22] = [ρσ2, σ2√(1−ρ²)]. In this case, the system of SDEs in (11.191) simplifies for two stock prices driven by two BMs:

dS1(t) = S1(t)[μ1 dt + σ1 dW1(t)]
dS2(t) = S2(t)[μ2 dt + σ2ρ dW1(t) + σ2√(1−ρ²) dW2(t)].      (11.206)

The unique solution is

S1(t) = S1(0) e^{(μ1−σ1²/2)t + σ1W1(t)},  S2(t) = S2(0) e^{(μ2−σ2²/2)t + σ2(ρW1(t) + √(1−ρ²)W2(t))}.
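The strong solution lends itself to exact simulation. The sketch below (illustrative parameters) generates samples of (S1(t), S2(t)) and checks the mean (11.195) and the log-return correlation (11.201):

```python
import numpy as np

# Sketch simulating the two-stock GBM system (11.206) via its strong solution
# above (illustrative parameters), with two basic sanity checks.
rng = np.random.default_rng(8)
S1_0, S2_0 = 100.0, 50.0
mu1, mu2, s1, s2, rho, t = 0.08, 0.05, 0.25, 0.35, -0.4, 1.0

n = 10**6
W1 = np.sqrt(t) * rng.standard_normal(n)
W2 = np.sqrt(t) * rng.standard_normal(n)
S1 = S1_0 * np.exp((mu1 - 0.5 * s1**2) * t + s1 * W1)
S2 = S2_0 * np.exp((mu2 - 0.5 * s2**2) * t + s2 * (rho * W1 + np.sqrt(1 - rho**2) * W2))
print(S1.mean(), S1_0 * np.exp(mu1 * t))               # E[S1(t)] = S1(0) e^{mu1 t}, (11.195)
print(np.corrcoef(np.log(S1), np.log(S2))[0, 1], rho)  # log-return correlation, (11.201)
```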

The transition CDF and PDF obtain as a special case of (11.202) and (11.204) for n = 2:

P(τ;x1,x2,y1,y2) = N2((ln(y1/x1) − (μ1−σ1²/2)τ)/(σ1√τ), (ln(y2/x2) − (μ2−σ2²/2)τ)/(σ2√τ); ρ)      (11.207)

and

p(τ;x1,x2,y1,y2) = (1/(y1y2σ1σ2τ)) n2((ln(y1/x1) − (μ1−σ1²/2)τ)/(σ1√τ), (ln(y2/x2) − (μ2−σ2²/2)τ)/(σ2√τ); ρ).      (11.208)

By the Feynman–Kac Theorem 11.19, these functions solve the time-homogeneous PDE in (11.205) for n = 2:

∂f/∂τ = ½σ1²x1² ∂²f/∂x1² + ½σ2²x2² ∂²f/∂x2² + ρσ1σ2x1x2 ∂²f/∂x1∂x2 + μ1x1 ∂f/∂x1 + μ2x2 ∂f/∂x2.      (11.209)

The reader can verify that the transition CDF has the limiting form P(0+;x1,x2,y1,y2)=I{y1x1,y2x2}, for all xy, and p(0+; x1, x2, y1, y2) = δ(y1x1)δ(y2x2).

11.10.3 Girsanov’s Theorem for Multidimensional BM

We recall Girsanov’s Theorem 11.13, where the measure change was constructed in terms of a Radon–Nikodym derivative process having the form of an exponential martingale involving a single standard BM in the original measure ℙ. Based on our knowledge of multidimensional BM and Itô integrals on multidimensional BM, we can now consider the multidimensional version of Girsanov’s Theorem. The main ingredients are as in Theorem 11.13, where the single BM is now a multidimensional BM. As usual, we fix some filtered probability space (Ω, ℱ, ℙ, 𝔽), where 𝔽 = {ℱt}t≥0 is any filtration for standard Brownian motion.

Theorem 11.21

(Girsanov’s Theorem for Multidimensional BM). Fix a time T > 0 and let W(t) = (W1(t), ..., Wd(t)), 0 ⩽ t ⩽ T, be a standard d-dimensional ℙ-BM with respect to a filtration 𝔽 = {ℱt}0≤t≤T. Assume the vector process γ(t) = (γ1(t),...,γd(t)), 0 ⩽ t ⩽ T, is adapted to 𝔽 and such that

$$E\!\left[\exp\!\left(\frac{1}{2}\int_0^T\|\boldsymbol{\gamma}(s)\|^2\,ds\right)\right]<\infty.\qquad(11.210)$$

Define

$$\varrho_t:=\exp\!\left(-\frac{1}{2}\int_0^t\|\boldsymbol{\gamma}(s)\|^2\,ds+\int_0^t\boldsymbol{\gamma}(s)\cdot d\mathbf{W}(s)\right),\quad 0\leqslant t\leqslant T,\qquad(11.211)$$

and the probability measure ℙ̂ ≡ ℙ̂^(ϱ) by the Radon–Nikodym derivative dℙ̂/dℙ = (dℙ̂/dℙ)_T ≡ ϱ_T. Then, the process Ŵ(t) = (Ŵ₁(t),...,Ŵ_d(t)), 0 ⩽ t ⩽ T, defined by

$$\widehat{\mathbf{W}}(t):=\mathbf{W}(t)-\int_0^t\boldsymbol{\gamma}(s)\,ds\qquad(11.212)$$

is a standard d-dimensional ℙ̂-BM w.r.t. the filtration 𝔽.

We do not provide the proof of this result here; it is left as an exercise in which one can apply steps similar to (i)–(iii) in the proof of Theorem 11.13. In the multidimensional case we have d-dimensional BM, and Lévy’s characterization in Theorem 11.16 can be applied.

The same remarks as were stated for Theorem 11.13 also apply to Theorem 11.21, where the adapted process is now the vector γ rather than a scalar γ. Of course, this multidimensional version generalizes Theorem 11.13, which obtains in the simplest case d = 1. The Radon–Nikodym derivative process that defines the change of measure ℙ → ℙ̂, such that the new d-dimensional BM, Ŵ, is a ℙ̂-BM, is given by the exponential ℙ-martingale in (11.211). It can be proven that the Novikov condition in (11.210) guarantees that the process {ϱ_t}_{0⩽t⩽T} is indeed a proper Radon–Nikodym derivative process, i.e., it is a ℙ-martingale with constant unit expectation, E[ϱ_t] = 1 for all t ∊ [0, T]. By Itô’s formula, the stochastic exponential in (11.211) is equivalent to the stochastic differential (using ϱ₀ = 1):

$$d\varrho_t=\varrho_t\,\boldsymbol{\gamma}(t)\cdot d\mathbf{W}(t)\iff\varrho_t=1+\int_0^t\varrho_s\,\boldsymbol{\gamma}(s)\cdot d\mathbf{W}(s).$$

The Novikov condition assures us that the Itô integral satisfies the integrability condition in (11.121) and is therefore a ℙ-martingale with zero expectation. By the definition in (11.150) we have ϱ_t ≡ ϱ_t^(γ) = ℰ_t(γ • W). Dividing the process values at any two times 0 ⩽ s < t ⩽ T gives

$$\frac{\varrho_t}{\varrho_s}\equiv\frac{(d\widehat{\mathbb{P}}/d\mathbb{P})_t}{(d\widehat{\mathbb{P}}/d\mathbb{P})_s}=\frac{\mathcal{E}_t(\boldsymbol{\gamma}\bullet\mathbf{W})}{\mathcal{E}_s(\boldsymbol{\gamma}\bullet\mathbf{W})}=\exp\!\left(-\frac{1}{2}\int_s^t\|\boldsymbol{\gamma}(u)\|^2\,du+\int_s^t\boldsymbol{\gamma}(u)\cdot d\mathbf{W}(u)\right).$$

In general, γ is an adapted vector process so that ϱt is a functional of d-dimensional BM from time 0 to t. By choosing a constant vector, γ(t) = γ, we have the simple and important special case where the Radon–Nikodym derivative process is also a GBM expressed equivalently as

$$\varrho_t\equiv\varrho_t^{(\boldsymbol{\gamma})}=e^{-\frac{1}{2}\|\boldsymbol{\gamma}\|^2t+\boldsymbol{\gamma}\cdot\mathbf{W}(t)}=e^{\frac{1}{2}\|\boldsymbol{\gamma}\|^2t+\boldsymbol{\gamma}\cdot\widehat{\mathbf{W}}(t)}\qquad(11.213)$$

where Ŵ(t) ≡ Ŵ^(γ)(t) := W(t) − γt, 0 ⩽ t ⩽ T, is a ℙ̂-BM. We recall that the random variable γ · W(t) ~ Norm(0, ‖γ‖²t). Hence, from its m.g.f. (under measure ℙ) we have $E[e^{\boldsymbol{\gamma}\cdot\mathbf{W}(t)}]=e^{\frac{1}{2}\|\boldsymbol{\gamma}\|^2t}$, i.e., E[ϱ_t] = 1 for all t ⩾ 0. This also follows trivially from the fact that the process is easily verified to be a ℙ-martingale and therefore must have constant expectation under measure ℙ, i.e., E[ϱ_t] = E[ϱ₀] = E[1] = 1.
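The unit-expectation property for constant γ is easy to confirm by simulation. A minimal sketch (the vector γ, horizon t, and path count are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative inputs: a constant gamma in d = 3 dimensions
gamma = np.array([0.5, -0.2, 0.1])
t, n_paths = 2.0, 1_000_000

# W(t) ~ Norm(0, t I_d), so gamma . W(t) ~ Norm(0, ||gamma||^2 t)
W_t = np.sqrt(t) * rng.standard_normal((n_paths, 3))
rho_t = np.exp(-0.5 * np.dot(gamma, gamma) * t + W_t @ gamma)

print(rho_t.mean())  # should be close to 1, consistent with E[rho_t] = 1
```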

The multidimensional version of Girsanov’s Theorem 11.21 has many far-reaching applications. We now give one application that is particularly important for financial derivative pricing theory. Namely, we shall use Girsanov’s Theorem to find a risk-neutral measure such that all stock prices S₁(t),...,Sₙ(t) in the multidimensional GBM model have a common drift rate equal to a constant r. Here, r is again the fixed (continuously compounded) interest rate. This problem is equivalent to applying Girsanov’s Theorem to construct an equivalent martingale measure ℙ̃, such that all discounted stock price processes defined by {S̄_i(t) := e^{−rt}S_i(t)}_{t⩾0}, i = 1,...,n, are ℙ̃-martingales. For the case of a single stock (n = 1) driven by one BM (d = 1), we have already solved this problem in Example 11.15.

For each stock i, the (log-)drift μ_i and all components of the (log-)volatility vector σ_i in (11.191) are constants. It follows that the measure change we need to employ uses (11.213), i.e., with constant d-dimensional vector γ = [γ₁,...,γ_d]:

$$\varrho_t\equiv\left(\frac{d\widetilde{\mathbb{P}}}{d\mathbb{P}}\right)_{\!t}=e^{-\frac{1}{2}\|\boldsymbol{\gamma}\|^2t+\boldsymbol{\gamma}\cdot\mathbf{W}(t)}\qquad(11.214)$$

with d-dimensional ℙ̃-BM given by W̃(t) ≡ W̃^(γ)(t) := W(t) − γt. In terms of stochastic differentials, dW̃(t) = dW(t) − γ dt. A quick method of arriving at the change of measure is to consider the SDE satisfied by the stock prices (with respect to the physical ℙ-BM) in (11.191) and to write dW(t) = dW̃(t) + γ dt:

$$dS_i(t)=S_i(t)\,[\mu_i\,dt+\boldsymbol{\sigma}_i\cdot(d\widetilde{\mathbf{W}}(t)+\boldsymbol{\gamma}\,dt)]=S_i(t)\,[(\mu_i+\boldsymbol{\sigma}_i\cdot\boldsymbol{\gamma})\,dt+\boldsymbol{\sigma}_i\cdot d\widetilde{\mathbf{W}}(t)],\quad i=1,\ldots,n.\qquad(11.215)$$

Hence, a risk-neutral measure exists if we can find a vector γ such that the log-drift coefficient equals r for every i = 1,..., n. That is, we have

$$dS_i(t)=S_i(t)\,[r\,dt+\boldsymbol{\sigma}_i\cdot d\widetilde{\mathbf{W}}(t)],\quad i=1,\ldots,n,\qquad(11.216)$$

which is equivalent to {S̄_i(t)}_{t⩾0}, i = 1,...,n, being ℙ̃-martingales, if and only if γ solves μ_i + σ_i · γ = r for each i = 1,...,n. This is a linear system of n equations in d unknowns γ₁,...,γ_d. Using the components of the (log-)volatility vectors, σ_i = [σ_{i1},...,σ_{id}], we then have the n × d linear system

$$\begin{bmatrix}\sigma_{11}&\cdots&\sigma_{1d}\\\vdots&&\vdots\\\sigma_{n1}&\cdots&\sigma_{nd}\end{bmatrix}\begin{bmatrix}\gamma_1\\\vdots\\\gamma_d\end{bmatrix}=\begin{bmatrix}r-\mu_1\\\vdots\\r-\mu_n\end{bmatrix}.\qquad(11.217)$$

In compact notation this reads σγᵀ = b, where σ is the n × d volatility matrix, γᵀ is d × 1, and b := r1 − μ is n × 1, with μ = [μ₁,...,μₙ]ᵀ and 1 = [1,...,1]ᵀ.

Hence, the question of the existence of a risk-neutral measure is answered quite simply by applying standard linear algebra. Generally, a solution vector γ exists if and only if b lies in the span of the d column vectors of σ. Here we should point out that we are seeking a solution for an arbitrary physical drift vector μ ∈ ℝⁿ. A solution vector γ exists for any b ∈ ℝⁿ, and hence for any μ ∈ ℝⁿ, if the d column vectors of σ span ℝⁿ, i.e., if rank(σ) = n. In the case when rank(σ) = n < d, we have an infinite (continuum) number of solution vectors γ, and each corresponds to a (different) risk-neutral measure ℙ̃ ≡ ℙ̃^(γ). This is therefore the case where the risk-neutral measure exists and is not unique. If rank(σ) = n = d, which is the case where the number of stocks equals the number of independent BMs and the n × n matrix σ has an inverse σ⁻¹, then the risk-neutral measure ℙ̃ exists and is uniquely given by γᵀ = σ⁻¹(r1 − μ), i.e., $\gamma_j=\sum_{i=1}^n(\sigma^{-1})_{ji}(r-\mu_i)$, j = 1,...,d. Finally, if d < n, then rank(σ) ⩽ d < n (the d column vectors of σ do not span all of ℝⁿ), and hence a solution vector γ exists only for drift vectors μ such that b lies in the span of the column vectors of σ. In this case, there does not exist a risk-neutral measure ℙ̃ for arbitrary μ.
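These rank conditions translate directly into a small linear-algebra computation. The sketch below (with an assumed illustrative volatility matrix and drift vector of our own, matching the two-stock setup in (11.206)) solves the system (11.217) and verifies that the resulting γ makes every drift equal to r:

```python
import numpy as np

# Illustrative (assumed) market: n = 2 stocks, d = 2 BMs, as in (11.206)
r = 0.03
mu = np.array([0.05, 0.08])
s1, s2, rho = 0.20, 0.30, 0.3
sigma = np.array([[s1, 0.0],
                  [rho * s2, np.sqrt(1 - rho**2) * s2]])  # n x d volatility matrix

b = r - mu                                                # b = r1 - mu
if np.linalg.matrix_rank(sigma) == len(mu) == sigma.shape[1]:
    # rank(sigma) = n = d: unique risk-neutral measure, gamma^T = sigma^{-1} b
    gamma = np.linalg.solve(sigma, b)
    print("unique gamma:", gamma)
    # verify mu_i + sigma_i . gamma = r for every stock
    assert np.allclose(mu + sigma @ gamma, r)
else:
    # General case: least-squares solution; an exact solution (and hence a
    # risk-neutral measure) exists iff b lies in the column span of sigma
    gamma, res, rank, _ = np.linalg.lstsq(sigma, b, rcond=None)
    print("gamma:", gamma, "rank:", rank, "residual:", res)
```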

11.10.4 Martingale Representation Theorem for Multidimensional BM

In closing this chapter, we simply state (without proof) the multidimensional version of Theorem 11.14. This is important when discussing the hedging of financial derivatives in an economy with multiple assets driven by a multidimensional BM. The theorem extends Theorem 11.14 in a rather obvious fashion, whereby the Itô integrals, and hence the Itô processes, are defined with respect to a d-dimensional BM, d ⩾ 1. It basically states that a square-integrable (ℙ, 𝔽^W)-martingale is expressible as its initial value plus an Itô integral in the d-dimensional BM with some 𝔽^W-adapted vector process as integrand. In the result below, we combine Theorem 11.14 and Proposition 11.15 into one theorem for the more general case of multidimensional BM.

Theorem 11.22

(Multidimensional Brownian Martingale Representation Theorem). Let W(t) = (W₁(t),...,W_d(t)) be a d-dimensional standard BM and let 𝔽^W denote its natural filtration. Assume {M(t)}_{0⩽t⩽T} is a square-integrable (ℙ, 𝔽^W)-martingale. Then, there exists an 𝔽^W-adapted d-dimensional process θ(t) = (θ₁(t),...,θ_d(t)), 0 ⩽ t ⩽ T, such that (a.s.)

$$M(t)=M(0)+\int_0^t\boldsymbol{\theta}(u)\cdot d\mathbf{W}(u)\equiv M(0)+\sum_{j=1}^d\int_0^t\theta_j(u)\,dW_j(u),\qquad(11.218)$$

for all t ∊ [0, T]. Moreover, let ℙ̂ be a measure constructed using Girsanov’s Theorem 11.21 under the assumption that the d-dimensional process {γ(t)}_{0⩽t⩽T} is 𝔽^W-adapted. If the process {M̂(t)}_{0⩽t⩽T} is a square-integrable (ℙ̂, 𝔽^W)-martingale, then there exists an adapted d-dimensional process {θ̂(t) = (θ̂₁(t),...,θ̂_d(t))}_{0⩽t⩽T} such that (a.s.)

$$\widehat{M}(t)=\widehat{M}(0)+\int_0^t\widehat{\boldsymbol{\theta}}(u)\cdot d\widehat{\mathbf{W}}(u).\qquad(11.219)$$

Again we stress that the martingale having this representation is (a.s.) continuous in time (i.e., the process has no jumps) since it is an Itô process.

Exercises

Exercise 11.1. In each case show whether or not the stochastic integral is well-defined. Note: you do not need to compute the values of the integrals.

(a) $\int_0^1(W^2(t)+W(t)+1)\,dW(t)$;
(b) $\int_0^1|W(t)|^{-1/2}\,dW(t)$;
(c) $\int_0^1 W(1-t)\,dW(t)$;
(d) $\int_0^1|t\,W(t)|^{-1/4}\,dW(t)$;
(e) $\int_0^1 W^a(t)\,dW(t)$.

For part (e) find all values of the parameter a for which the integral is well-defined. [Hint for singular integrands: Recall that if t = 0 is a singular point of a function f(t), where $f(t)=O(|t|^p)$ with p < 0 as t → 0, then $\int_0^1 f(t)\,dt<\infty$ iff p > −1.]

Exercise 11.2. In each case show whether or not the stochastic integral is well-defined. Note: you do not need to compute the values of the integrals.

(a) $\int_0^1 W(2t)\,dW(t)$;
(b) $\int_0^1 W(t/2)\,dW(t)$;
(c) $\int_0^1 W(1/t)\,dW(t)$;
(d) $\int_0^1|W(t)|^{1/2}\,dW(t)$.
Exercise 11.3. For any α ∊ (0, 1), define the stochastic integral
$$\int_0^T W(t)\,d_\alpha W(t):=\lim_{\delta(P_n)\to 0}\sum_{i=1}^n\big[\alpha W(t_{i-1})+(1-\alpha)W(t_i)\big]\big(W(t_i)-W(t_{i-1})\big),$$
where $P_n=\{0=t_0<t_1<\cdots<t_n=T\}$ is a finite partition of [0, T] and δ(P_n) is the mesh size. Write $\int_0^T W(t)\,d_\alpha W(t)$ as a linear combination of the Itô integral $\int_0^T W(t)\,dW(t)$ and the Stratonovich integral $\int_0^T W(t)\circ dW(t)$.

Exercise 11.4. Evaluate the following repeated (double) stochastic integral:
$$\int_0^t\left(\int_0^s dW(u)\right)dW(s).$$

Exercise 11.5. Show that $\int_0^t W^2(s)\,dW(s)=\frac{1}{3}W^3(t)-\int_0^t W(s)\,ds$.

[Hint: You may use an appropriate Itô formula.]

Exercise 11.6. Use the Itô isometry property to calculate the variances of the Itô integrals

(a) $\int_0^t|W(s)|^{1/2}\,dW(s)$,
(b) $\int_0^t|W(s)+s|^2\,dW(s)$,
(c) $\int_0^t(W(s)+s)^{3/2}\,dW(s)$.

Explain why the above integrals are well-defined.

Exercise 11.7. Calculate
$$\lim_{n\to\infty,\ \delta(P_n)\to 0}\ \sum_{i=1}^n\big(2W(t_{i-1})+W(t_i)\big)\big(W(t_i)-W(t_{i-1})\big),$$
where $P_n=\{0=t_0<t_1<t_2<\cdots<t_n=T\}$ is a partition of [0, T] with mesh size $\delta(P_n)=\max_{i=1,\ldots,n}|t_i-t_{i-1}|$.

Exercise 11.8. Using Itô’s formula, show that the process defined by
$$X(t):=W^4(t)-6\int_0^t W^2(u)\,du,$$
t ⩾ 0, is a martingale w.r.t. a filtration for Brownian motion.

Exercise 11.9. Use Itô’s formula to show that for any integer k ⩾ 2,
$$E[W^k(t)]=\frac{k(k-1)}{2}\int_0^t E[W^{k-2}(s)]\,ds,$$
and use this to derive a formula for all the moments of the standard normal distribution.

Exercise 11.10. Show that $M(t):=e^{t/2}\sin(W(t))$, t ⩾ 0, is a martingale w.r.t. a filtration for Brownian motion.

Exercise 11.11. Use Itô’s formula to show that for any nonrandom, continuously differentiable function f(t), the following formula of integration by parts holds:
$$\int_0^t f(s)\,dW(s)=f(t)W(t)-\int_0^t f'(s)W(s)\,ds.$$

Exercise 11.12. Use Itô’s formula to find the stochastic differentials for the following functions of Brownian motion:

(a) $e^{W(t)}$;
(b) $W^k(t)$, k ⩾ 0;
(c) $\cos(tW(t))$;
(d) $e^{W^2(t)}$;
(e) $\arctan(t+W(t))$.

Exercise 11.13. Using Itô’s formula, show that $Y(t):=W^3(t)-3tW(t)$, t ⩾ 0, is a martingale w.r.t. a filtration for Brownian motion.

Exercise 11.14. Define $Z(t)=\exp(\sigma W(t))$. Use Itô’s formula to write down a stochastic differential for Z(t). Then, by taking the mathematical expectation, find an ordinary (deterministic) first order linear differential equation for m(t) := E[Z(t)] and solve it to show that
$$E[\exp(\sigma W(t))]=\exp\!\left(\frac{\sigma^2}{2}t\right).$$

Exercise 11.15. Let N(x) be the standard normal CDF and consider the process
$$X(t):=N\!\left(\frac{W(t)}{\sqrt{T-t}}\right),\quad 0\leqslant t<T.$$
Express this process as an Itô process, i.e., determine the explicit expressions for the adapted drift μ(t) and diffusion σ(t) and provide the explicit form for X(t) as:
$$X(t)=X(0)+\int_0^t\mu(s)\,ds+\int_0^t\sigma(s)\,dW(s).$$
Show that the process is a martingale w.r.t. any filtration for BM. Find the limiting value $X(T)=\lim_{t\uparrow T}X(t)$. What is the state space of the X process?

Exercise 11.16. Suppose that the processes X := {X(t)}_{t⩾0} and Y := {Y(t)}_{t⩾0} have the log-normal dynamics:
$$dX(t)=X(t)\,(\mu_X\,dt+\sigma_X\,dW(t)),\qquad dY(t)=Y(t)\,(\mu_Y\,dt+\sigma_Y\,dW(t)).$$
Show that the process $Z(t):=Y(t)/X(t)$ is also log-normal, with dynamics
$$dZ(t)=Z(t)\,(\mu_Z\,dt+\sigma_Z\,dW(t)),$$
and determine the coefficients μ_Z and σ_Z in terms of those of X and Y. Solve the same problem now assuming that X and Y are governed by two correlated Brownian motions W_X and W_Y, respectively, where Cov(W_X(t), W_Y(t)) = ρt, i.e., dW_X(t) dW_Y(t) = ρ dt, for a given correlation coefficient −1 ⩽ ρ ⩽ 1.

Exercise 11.17. Let a time-homogeneous diffusion X(t) have a stochastic differential with drift coefficient function μ(x) = 3x − 1 and diffusion coefficient function $\sigma(x)=2\sqrt{x}$. Assuming X(t) ⩾ 0, find the stochastic differential for the process $Y(t):=\sqrt{X(t)}$. Find the generator for Y(t).

Exercise 11.18. Let $X(t)=tW^2(t)$ and $Y(t)=e^{W(t)}$. Find the stochastic differential of Z(t) := X(t)Y(t). Compute the mean and variance of Z(t).

Exercise 11.19. Let X(t) be a time-homogeneous diffusion process solving an SDE with drift and diffusion coefficient functions μ(x) = cx and σ(x) = σ, respectively, where c, σ are constants, and with initial condition X(0) = x. Consider the process defined by
$$Y(t):=X^2(t)-2c\int_0^t X^2(s)\,ds-\sigma^2t,\quad t\geqslant 0.$$

(a) Represent the Y process as an Itô process and show that it is a martingale w.r.t. any filtration for Brownian motion.
(b) Compute the mean and variance of Y(t) for all t ⩾ 0.
Exercise 11.20. Consider the linear SDE in (11.25) in the case where δ(t) ≡ 0 and where α(t), β(t), γ(t) are continuous nonrandom functions of time t ⩾ 0. Assume a constant initial condition X(0) = x₀. Show that the process {X(t)}_{t⩾0} is a Gaussian process and compute its mean and covariance functions.

Exercise 11.21. Use the Itô formula to write down stochastic differentials for the following processes:

(a) $Y(t)=\exp\big(\sigma W(t)-\frac{1}{2}\sigma^2t\big)$,
(b) $Z(t)=f(t)W(t)$, where f is a continuously differentiable function.

Exercise 11.22. A time-homogeneous diffusion process X has a stochastic differential with respective drift and diffusion coefficient functions μ(x) = 0 and σ(x) = x(1 − x). Assuming 0 < X(t) < 1, show that the process $Y(t):=\ln\!\big(\frac{X(t)}{1-X(t)}\big)$ has a constant diffusion coefficient.
Exercise 11.23. Let $X(t):=(1-t)\int_0^t\frac{dW(s)}{1-s}$, where 0 ⩽ t < 1. Provide the stochastic differential equation for X(t) in the form dX(t) = a(t, X(t)) dt + b(t, X(t)) dW(t). Check your answer by solving the SDE obtained, subject to the initial condition X(0) = 0.

Exercise 11.24. Solve the following linear SDEs:

(a) dX(t) = W(t)X(t) dt + W(t)X(t) dW(t), X(0) = 1;
(b) dX(t) = α(θ − X(t)) dt + σX(t) dW(t), X(0) = x ∊ ℝ;
(c) dX(t) = a(t)X(t) dt + σX(t) dW(t), X(0) = x ∊ ℝ;
(d) dX(t) = X(t) dt + X(t) dW(t), X(0) = 1.

Assume α, θ, σ are positive constants.

Exercise 11.25. Let g(y) be a given function of y, and suppose that y = f(x) is a solution of the ODE dy = g(y) dx, that is, f′(x) = g(f(x)). Show that X(t) = f(W(t)) is a solution of the SDE
$$dX(t)=\tfrac{1}{2}g'(X(t))\,g(X(t))\,dt+g(X(t))\,dW(t).$$

Exercise 11.26. Use Exercise 11.25 to solve the following nonlinear SDEs, subject to X(0) = x₀ ∊ ℝ.

(a) $dX(t)=\frac{\sigma^2}{4}\,dt+\sigma\sqrt{X(t)}\,dW(t)$;
(b) $dX(t)=X^3(t)\,dt+X^2(t)\,dW(t)$;
(c) $dX(t)=-\frac{1}{2}e^{-2X(t)}\,dt+e^{-X(t)}\,dW(t)$.

In each case, find the time interval for which the solution exists. [Hint: In each case, f(x) of Exercise 11.25 is determined by solving a first order separable ODE. For parts (b) and (c) the solution exists up to an “explosion time” at which the solution becomes singular.]

Exercise 11.27. Show that for any u ∊ ℝ, the function $f(t,x)=\exp(ux-u^2t/2)$ solves the backward PDE for Brownian motion. Take the first, second, and third derivatives of $\exp(ux-u^2t/2)$ w.r.t. u, and set u = 0, to show that the functions x, x² − t, x³ − 3tx also solve the backward equation for Brownian motion. Deduce that W²(t) − t and W³(t) − 3tW(t) are martingales.

Exercise 11.28. Consider the drifted BM, X(t) = x₀ + μt + σW(t), and define the process Y(t) := f(X(t)), t ⩾ 0. By applying an appropriate Itô formula, obtain a general expression for a twice continuously differentiable function f(x), x ∊ ℝ, such that {Y(t)}_{t⩾0} is a martingale w.r.t. any filtration for BM.

Exercise 11.29. Derive a system of diffusion-type SDEs for the coupled processes X(t) = cos(W(t)) and Y(t) = sin(W(t)).

Exercise 11.30. Consider the process defined by X(t) = sinh(C + t + W(t)), t ⩾ 0, where C = sinh⁻¹ x₀, with initial condition X(0) = x₀. This process is a diffusion on ℝ and it satisfies an SDE of the form
$$dX(t)=\mu(X(t))\,dt+\sigma(X(t))\,dW(t).$$

(i) Find the coefficient functions μ(x) and σ(x).
(ii) Provide the backward Kolmogorov PDE and the terminal condition for the transition PDF p(t, T; x, y).
(iii) Derive analytical expressions for the transition CDF and PDF of the process X.
Exercise 11.31. Give the probabilistic representation of the solution f(t, x) of the PDE
$$\frac{\partial f}{\partial t}+\frac{x^2}{2}\frac{\partial^2 f}{\partial x^2}=0,\quad 0\leqslant t\leqslant T,\qquad f(T,x)=x^2.$$
Solve this PDE using the solution of the respective SDE.

Exercise 11.32. Consider the boundary value problem for the heat equation:
$$\frac{\partial V}{\partial t}+\frac{1}{2}\frac{\partial^2 V}{\partial x^2}=0,\qquad V(1,x)=f(x),$$
where f, the boundary value for time t = 1, is given, and where we are looking for a solution V = V(t, x) defined for 0 ⩽ t ⩽ 1 and x ∊ ℝ. Show that the solution is
$$V(t,x)=\int_{-\infty}^{\infty}f(y)\,e^{-\frac{(x-y)^2}{2(1-t)}}\,\frac{dy}{\sqrt{2\pi(1-t)}}.$$
Can you think of a function f for which the solution formula would not make sense?

Exercise 11.33. Consider the boundary value problem for the heat equation with a drift term:
$$\frac{\partial V}{\partial t}+\frac{1}{2}\frac{\partial^2 V}{\partial x^2}+a\frac{\partial V}{\partial x}=0,\qquad V(1,x)=f(x),$$
where f, the boundary value for time t = 1, is given, and a is a real constant. Derive an explicit (integral) formula for the solution V = V(t, x).

Exercise 11.34. Let f(t, x) satisfy the PDE
$$\frac{\partial f}{\partial t}+\frac{1}{2}\sigma^2x^2\frac{\partial^2 f}{\partial x^2}+\mu x\frac{\partial f}{\partial x}=0,\quad 0\leqslant t\leqslant T,\ x\in\mathbb{R}_+,$$
for fixed T > 0, with real constants σ > 0, μ. Solve for f(t, x) subject to the terminal condition $f(T,x)=\mathbb{I}_{\{K_1<x<K_2\}}$, where K₂ > K₁ > 0 are constants.

Exercise 11.35. To compact notation, we suppress all other variables except x′ and denote f(x′) ≡ p(t, t′; x, x′) and g(x′) ≡ p(t′, T; x′, y). Using the definition of $\mathcal{G}_{t',x'}$, the left-hand integral in (11.69) becomes
$$\int f(x')\,\mathcal{G}_{t',x'}\,g(x')\,dx'=\frac{1}{2}\int f(x')\,\sigma^2(t',x')\frac{\partial^2 g}{\partial x'^2}\,dx'+\int f(x')\,\mu(t',x')\frac{\partial g}{\partial x'}\,dx'.$$
Now apply integration by parts to both integrals (in the first integral note that $\frac{\partial^2 g}{\partial x'^2}=\frac{\partial}{\partial x'}\big(\frac{\partial g}{\partial x'}\big)$). State appropriate assumptions that allow you to set the boundary terms to zero. Then, apply integration by parts again on the remaining integral containing σ²(t′, x′). Again, state appropriate assumptions that allow you to set the boundary terms to zero. In the end, obtain
$$\int f(x')\,\mathcal{G}_{t',x'}\,g(x')\,dx'=\int g(x')\,\widetilde{\mathcal{G}}_{t',x'}f(x')\,dx',$$
where $\widetilde{\mathcal{G}}_{t',x'}f=\frac{1}{2}\frac{\partial^2}{\partial x'^2}\big(\sigma^2(t',x')f\big)-\frac{\partial}{\partial x'}\big(\mu(t',x')f\big)$.

Exercise 11.36. Assume that a stock price process {S(t)}_{t⩾0} satisfies the SDE
$$dS(t)=rS(t)\,dt+\sigma S(t)\,d\widetilde{W}(t),$$
with constants r, σ > 0, and where {W̃(t)}_{t⩾0} is a standard ℙ̃-BM. By using Girsanov’s Theorem, find the explicit expression for the Radon–Nikodym derivative process
$$\varrho_t:=\left(\frac{d\widehat{\mathbb{P}}}{d\widetilde{\mathbb{P}}}\right)_{\!t}$$
such that the process defined by $\widehat{S}(t):=e^{rt}/S(t)$, t ⩾ 0, is a ℙ̂-martingale. Give the SDE satisfied by the stock price S(t) w.r.t. the ℙ̂-BM.

Exercise 11.37. Consider a stock price process {S(t)}_{t⩾0} that obeys the SDE
$$dS(t)=\mu S(t)\,dt+\sigma(S(t))^{1+\beta}\,dW(t),\qquad S(0)=S>0,$$
with constant parameters σ ≠ 0, β, μ.

(a) Assume the process $\int_0^t(S(u))^{1+\beta}\,dW(u)$ is a ℙ-martingale w.r.t. the filtration generated by the standard ℙ-Brownian motion {W(t)}_{t⩾0}. Determine an exact expression for the mean E[S(t)]. [Hint: You may write S(t) in terms of an exponential martingale by considering the process defined by X(t) := ln S(t).]
(b) Fix T > 0 and assume the existence of a Radon–Nikodym derivative process $\varrho_t:=\big(\frac{d\widetilde{\mathbb{P}}}{d\mathbb{P}}\big)_t$, 0 ⩽ t ⩽ T, such that {e^{−rt}S(t)}_{0⩽t⩽T} is a ℙ̃-martingale. Give the form of ϱ_t as an exponential ℙ-martingale.
Exercise 11.38. Consider a one-dimensional general diffusion process {X(t)}_{t⩾0} having a transition PDF p(s, t; x, y), s < t, w.r.t. a given probability measure ℙ, for all x, y in the state space of the process. Assume a change of measure ℙ → ℙ̂ is defined by a Radon–Nikodym derivative process
$$\varrho_t:=\left(\frac{d\widehat{\mathbb{P}}}{d\mathbb{P}}\right)_{\!t}=h(t,X(t))$$
for all t ⩾ 0, where h(t, x) is some given Borel function of t, x. Let p̂(s, t; x, y) be the transition PDF w.r.t. the measure ℙ̂. Show that the two transition PDFs are related by
$$\hat{p}(s,t;x,y)=\frac{h(t,y)}{h(s,x)}\,p(s,t;x,y).$$

[Hint: Consider the definition of the transition CDF (w.r.t. the measure ℙ̂):
$$\widehat{P}(s,t;x,y)=\widehat{\mathbb{P}}(X(t)\leqslant y\,|\,X(s)=x)=\widehat{E}\big[\mathbb{I}_{\{X(t)\leqslant y\}}\,\big|\,X(s)=x\big]$$
and make use of the Markov property and the change of measure for computing expectations.]

¹The square-integrability condition is also denoted by writing X ∈ L²([0, T] × Ω). Throughout we assume that the integrand process X is measurable; that is, for every Borel set B ∈ ℬ(ℝ), the sets {(t, ω) : X(t, ω) ∈ B} ∈ ℬ(ℝ₊) ⊗ ℱ. By Fubini’s Theorem, assuming E[X²(t)] < ∞ for all t ⩾ 0, this expectation is a Lebesgue-measurable function of time t and we may interchange the expectation integral with the time integral: $E\big[\int_0^T X^2(t)\,dt\big]=\int_0^T E[X^2(t)]\,dt$.
