Let (Xt, t ∈ T) be a real, square-integrable process defined on the probability space (Ω, 𝒜, P). We suppose T to be an interval of finite or infinite length on ℝ, and we set:

m(t) = EXt, t ∈ T;  C(s, t) = E[(Xs − m(s))(Xt − m(t))], (s, t) ∈ T × T,

where m is the mean of (Xt) and C is its covariance.
In the following, unless otherwise indicated, we suppose that m = 0.
(Xt) is said to be continuous in mean square at t0 ∈ T if t → t0 (t ∈ T) leads to E(Xt − Xt0)² → 0.
It is equivalent to say that t ↦ Xt is a continuous mapping from T to L²(Ω, 𝒜, P) and that (Xt) is continuous in mean square (on all of T).
THEOREM 12.1.– The following properties are equivalent:
1) (Xt) is continuous in mean square.
2) C is continuous on the diagonal of T × T.
3) C is continuous on T × T.
PROOF.–
– (1) ⇒ (3) as (s, s′) → (t, t′) leads to Xs → Xt and Xs′ → Xt′ in mean square, and by the bicontinuity of the scalar product E(XsXs′) → E(XtXt′).
– (3) ⇒ (2): evidently.
– (2) ⇒ (1) as, if s → t, we have:

E(Xs − Xt)² = C(s, s) − 2C(s, t) + C(t, t) → 0.
Let (Xt) be a square-integrable, centered, and measurable process (i.e. (t, ω) ↦ Xt(ω) is measurable) defined on T = [a, b], −∞ < a < b < +∞.
To define its mean-square Riemann integral on [a, b], we set:

Δn = Σ_{i=1}^{kn} X_{sn,i} (tn,i − tn,i−1),

where a = tn,0 < tn,1 < ··· < tn,kn = b; sn,i ∈ [tn,i−1, tn,i].

If Δn → I in quadratic mean when n → ∞ and supi (tn,i − tn,i−1) → 0, and if I does not depend on the chosen sequence of partitions, then (Xt) is said to be mean-square (Riemann) integrable on [a, b] and we write:

I = ∫_a^b Xt dt.
I is therefore, by definition, a square-integrable, real random variable, and EI = lim EΔn = 0.
THEOREM 12.2.– (Xt) is mean-square Riemann integrable on [a, b] if and only if C is Riemann integrable on [a, b] × [a, b].
PROOF.– Let (Δn) and (Δm) be the Riemann sums associated with two partition sequences (tn,i) and (tm,i). (Xt) is then integrable, with integral I, if and only if:

E(Δn − I)² → 0 and E(Δm − I)² → 0

for every pair (tn,i), (tm,i).

These conditions are equivalent to E(Δn − Δm)² → 0 and to E(ΔnΔm) → ℓ, ∀(Δn), ∀(Δm), which is written as:

Σi Σj C(sn,i, sm,j)(tn,i − tn,i−1)(tm,j − tm,j−1) → ℓ,

and the latter condition means that C is Riemann integrable on [a, b] × [a, b].
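To make the criterion of Theorem 12.2 concrete, here is a small numerical sketch (our own construction, not from the text): for the hypothetical process Xt = tZ, with Z a standard normal variable, the covariance is C(s, t) = st, and the Riemann double sums of C over [0, 1]² converge to E(∫_0^1 Xt dt)² = 1/4.

```python
import numpy as np

# Sketch for Theorem 12.2 (hypothetical process X_t = t*Z, Z ~ N(0,1)):
# its covariance is C(s,t) = s*t, and the Riemann double sums of C over
# [a,b]^2 converge, which is exactly the integrability criterion.
a, b, k = 0.0, 1.0, 2000
t = np.linspace(a, b, k + 1)            # partition a = t_0 < ... < t_k = b
s = 0.5 * (t[:-1] + t[1:])              # intermediate points s_i (midpoints)
dt = np.diff(t)
C = np.outer(s, s)                      # C(s_i, s_j) = s_i * s_j
double_sum = dt @ C @ dt                # sum_i sum_j C(s_i,s_j) dt_i dt_j
exact = ((b**2 - a**2) / 2) ** 2        # iint s*t ds dt = 1/4 on [0,1]^2
print(double_sum, exact)
```

Here the double sum equals E(Δn²) for the corresponding Riemann sum Δn of (Xt), so its convergence is the mean-square integrability of the process.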
THEOREM 12.3.– If C is continuous on the diagonal of [a, b] × [a, b], and if f and g are continuous functions on [a, b], then

E(∫_a^b f(t)Xt dt · ∫_a^b g(s)Xs ds) = ∫_a^b ∫_a^b f(t)g(s)C(s, t) ds dt.

In particular

E(∫_a^b Xt dt)² = ∫_a^b ∫_a^b C(s, t) ds dt.    [12.1]
PROOF.– From Theorem 12.1, C is continuous on [a, b] × [a, b]. Furthermore, the processes (f(t)Xt) and (g(t)Xt) have respective covariances f(s)f(t)C(s, t) and g(s)g(t)C(s, t). These functions being continuous on [a, b] × [a, b], Theorem 12.2 leads to the integrability of (f(t)Xt) and (g(t)Xt) on [a, b]. Now, in clear notation,

Δn(f) = Σi f(sn,i) X_{sn,i} (tn,i − tn,i−1) → ∫_a^b f(t)Xt dt

and

Δm(g) = Σj g(sm,j) X_{sm,j} (tm,j − tm,j−1) → ∫_a^b g(t)Xt dt,

and by the bicontinuity of the scalar product

E(Δn(f) Δm(g)) → E(∫_a^b f(t)Xt dt · ∫_a^b g(s)Xs ds).

Moreover

E(Δn(f) Δm(g)) = Σi Σj f(sn,i)g(sm,j) C(sn,i, sm,j)(tn,i − tn,i−1)(tm,j − tm,j−1) → ∫_a^b ∫_a^b f(t)g(s)C(s, t) ds dt.

Hence, the stated equation.
COMMENT 12.1.– It may be shown that [12.1] remains valid when it is only supposed that C is integrable on [a, b]².
When (Xt) is not centered, we say that it is mean-square integrable if m(t) is integrable on [a, b] and if (Xt − m(t)) is mean-square integrable. Then:

∫_a^b Xt dt = ∫_a^b (Xt − m(t)) dt + ∫_a^b m(t) dt

and

E(∫_a^b Xt dt) = ∫_a^b m(t) dt.
Using the integral
Consider the following input–output schema:
EXAMPLE 12.1.–
– An emission of particles → their recording by a counter.
– An arrival of customers to a service window → the service time.
– The exchange rate of the dollar → foreign trade.
Let h(t, s) be the response at time t to a signal of unit intensity emitted at time s. In many systems, the response is linear: the response at time t to signals of intensities x1, …, xk emitted at times s1, …, sk is Σi xi h(t, si).

If the intensity Xs of the signal is assumed to be random, and the system begins to work at time 0, then the response Yt at time t is the “sum” of the responses at time t to the signals at the times s ∈ (0, t). Hence

Yt = ∫_0^t h(t, s)Xs ds.

Extending the mean-square integral to intervals of infinite length, we may define responses of the type:

Yt = ∫_{−∞}^t h(t, s)Xs ds.
Note that in discrete time, the formula would be written as:

Yt = Σ_{s ≤ t} h(t, s)Xs,
which defines (Yt) as the transform of (Xt) by a realizable linear filter.
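In discrete time, a realizable filter is just a one-sided (causal) convolution. A minimal sketch, with a hypothetical impulse response h:

```python
import numpy as np

# Causal linear filter in discrete time: Y_t = sum_{j>=0} h_j X_{t-j}.
# The impulse response h below is hypothetical.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.25])          # h_0, h_1, h_2 (causal weights)
X = rng.standard_normal(100)            # input signal
Y = np.array([sum(h[j] * X[t - j] for j in range(len(h)) if t - j >= 0)
              for t in range(len(X))])  # direct definition of the filter
Y_conv = np.convolve(X, h)[:len(X)]     # same filter via convolution
print(np.max(np.abs(Y - Y_conv)))
```

The filter is "realizable" because Yt only uses the past and present values Xs, s ≤ t.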
(Xt, t ∈ T) is said to be mean-square differentiable at t0 ∈ T if there exists some square-integrable random variable X′t0 such that:

E((Xt0+h − Xt0)/h − X′t0)² → 0 as h → 0 (t0 + h ∈ T).
THEOREM 12.4.– The following two conditions are equivalent:
1) (Xt) is mean-square differentiable at t0.
2) C has a generalized derivative at t0, i.e. the limit

ℓ = lim_{h,k→0} [C(t0 + h, t0 + k) − C(t0 + h, t0) − C(t0, t0 + k) + C(t0, t0)] / (hk)

exists.
PROOF.–
– (1) ⇒ (2) by the bicontinuity of the scalar product, and furthermore ℓ = E(X′t0)².
– (2) ⇒ (1) as, if we set Yh = (Xt0+h − Xt0)/h, we have E(YhYk) → ℓ, therefore

E(Yh − Yk)² = E(Yh²) − 2E(YhYk) + E(Yk²) → ℓ − 2ℓ + ℓ = 0,

and the Cauchy criterion implies the convergence in quadratic mean of (Yh).
REMARK 12.1.–
1) The covariance of the process (X′t) is the generalized derivative of C.
2) If a two-variable function has a generalized derivative, it has mixed second derivatives. Conversely, a function that has continuous mixed second derivatives has a generalized derivative.
3) The trajectories of a mean-square differentiable process are not necessarily differentiable functions in the usual sense, or even continuous functions. For example, there exist processes on [0, 1], with basis space ([0, 1], B[0,1], λ), where λ is the Lebesgue measure, that are mean-square differentiable whereas their trajectories are not continuous.
THEOREM 12.5.– Let (Xt) be a centered, square-integrable and measurable process, which is continuous in mean square; T is a compact interval in ℝ. Under these conditions, there exist an orthonormal sequence (φn, n ≥ 0) in L²(T) and a sequence of orthogonal random variables (ξn, n ≥ 0) such that:

Xt = Σ_{n≥0} ξn φn(t), t ∈ T,

where the series converges in quadratic mean.
PROOF.– From Theorem 12.1, C is continuous. As it is symmetric and positive semidefinite, Mercer’s theorem lets us write:

C(s, t) = Σ_{n≥0} λn φn(s)φn(t), (s, t) ∈ T × T,

where (φn) is an orthonormal sequence in L²(T) and the λn are real and such that:

λn ≥ 0, n ≥ 0, and Σ_{n≥0} λn = ∫_T C(t, t) dt < +∞.

Moreover, the series converges uniformly to C and the φn are continuous.

Now, from Theorem 12.2, the process (Xt φn(t), t ∈ T) is mean-square integrable on T and we may set:

ξn = ∫_T Xt φn(t) dt, n ≥ 0.
Then, from Theorem 12.3,

E(ξn ξm) = ∫_T ∫_T φn(s)φm(t)C(s, t) ds dt = λm δnm, n, m ≥ 0.

Therefore, the ξn are pairwise orthogonal.

Furthermore

E(Xt − Σ_{n=0}^{N} ξn φn(t))² = C(t, t) − Σ_{n=0}^{N} λn φn²(t), t ∈ T.

Consequently, by the uniform convergence of Mercer’s series, this quantity tends to 0 as N → ∞, uniformly in t.
If the process (Xt) is Gaussian, the random variables ξn are Gaussian, as the limit in quadratic mean of a Gaussian sequence is Gaussian (see the definition of the integral), and they are independent. In addition, it may be shown that the convergence of the Karhunen–Loève series is almost sure.
DEFINITION 12.1.– (Wt, t ≥ 0) is said to be a Wiener process or a Brownian motion process if:
1) W0 = 0 and Wt ~ 𝒩(0, σ²t), t ≥ 0, where σ² is a strictly positive constant.
2) (Wt) has independent increments: ∀k ≥ 3; ∀0 ≤ t1 < t2 < … < tk, the random variables Wt2 − Wt1 , Wt3 − Wt2 ,…, Wtk − Wtk−1 are independent.
INTERPRETATION 12.1.– A particle immersed in a motionless homogeneous fluid is subjected to molecular impacts which constantly modify its trajectory. For t ≥ 0, we denote by Wt the abscissa at time t of the projection of the particle on an arbitrary axis, with origin W0; (Wt, t ≥ 0) is then a Wiener process.
Brown (1827) observed this phenomenon for the first time. Einstein (1905) showed that:

E(Wt²) = (2RT/(Nf)) t,

where R is the gas constant and T the absolute temperature, N Avogadro’s number, and f is the friction coefficient. Wiener (1923) gave the precise mathematical definition of (Wt).
THEOREM 12.6.–
1) A Wiener process (Wt) has stationary increments and covariance:

C(s, t) = σ² min(s, t), s, t ≥ 0.

2) Conversely, every centered Gaussian process with covariance σ² min(s, t) is a Wiener process.
PROOF.–
1) Let ℓh,t be the characteristic function of Wt+h − Wt (h > 0). Since

Wt+h = Wt + (Wt+h − Wt),

we have, by independence of the increments,

E exp(iuWt+h) = E exp(iuWt) · ℓh,t(u), u ∈ ℝ.

Hence

ℓh,t(u) = exp(−σ²(t + h)u²/2) / exp(−σ²tu²/2) = exp(−σ²hu²/2),

and consequently ℓh,t does not depend on t: the increments are stationary.

Furthermore, for s < t,

C(s, t) = E(WsWt) = E(Ws(Wt − Ws)) + E(Ws²) = 0 + σ²s.

We therefore have C(s, t) = σ² min(s, t).
2) It is sufficient to show that the increments are independent. Now, for t1 < t2 ≤ t3 < t4, we have:

E((Wt2 − Wt1)(Wt4 − Wt3)) = σ²(min(t2, t4) − min(t2, t3) − min(t1, t4) + min(t1, t3)) = σ²(t2 − t2 − t1 + t1) = 0.

The increments are therefore orthogonal and thus independent, since the process is Gaussian.
MEAN-SQUARE PROPERTIES.– Since σ² min(s, t) is continuous, (Wt) is continuous in mean square (Theorem 12.1), and mean-square integrable when it is measurable (Theorem 12.2). It is not mean-square differentiable, as

E((Wt+h − Wt)/h)² = σ²h/h² = σ²/h → +∞ as h → 0+.
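The covariance formula of Theorem 12.6 can be checked by a Monte Carlo sketch (the sample sizes and time points below are our arbitrary choices): simulating a standard Wiener process from its independent Gaussian increments, the empirical covariance at times s = 0.3 and t = 0.8 is close to min(s, t) = 0.3.

```python
import numpy as np

# Empirical check that C(s,t) = min(s,t) for a standard Wiener process
# built from independent N(0, dt) increments.
rng = np.random.default_rng(1)
n_paths, n_steps, T = 20000, 100, 1.0
dt = T / n_steps
increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(increments, axis=1)       # W at times dt, 2*dt, ..., T
s_idx, t_idx = 29, 79                   # grid times s = 0.30, t = 0.80
emp_cov = np.mean(W[:, s_idx] * W[:, t_idx])   # the process is centered
print(emp_cov)
```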
TRAJECTORY PROPERTIES.– The trajectories of (Wt) are continuous, but (almost surely) not differentiable. This property is delicate to establish. We will only show the following result:
THEOREM 12.7.– The trajectories of (Wt) are (almost surely) not functions of bounded variation.
Recall that the total variation of a numerical function f, defined on [α, β], is written as:

V(f) = sup Σ_{i=1}^{k} |f(xi) − f(xi−1)|,

the supremum being taken over all the subdivisions α = x0 < x1 < ··· < xk = β.
A monotone function is of bounded variation; a function whose first derivative is bounded on [α, β] is likewise; every function of bounded variation is equal to the difference of two monotone functions.
PROOF.– (Wt) may always be assumed to be standard (i.e. σ2 = 1) and [α, β] = [0, 1].
We set:

Vn = Σ_{k=1}^{2^n} |W(k/2^n) − W((k−1)/2^n)|, n ≥ 1.

Then let N be a random variable with distribution 𝒩(0, 1), and let us set c = E|N| > 0; each increment W(k/2^n) − W((k−1)/2^n) has the distribution of 2^{−n/2}N, thus

E(Vn) = 2^n · 2^{−n/2} E|N| = c 2^{n/2}.

Hence, from the independence of the increments,

Var(Vn) = 2^n · 2^{−n} Var|N| = Var|N| ≤ 1.

Applying Tchebychev’s inequality, we obtain:

P(Vn ≤ c 2^{n/2}/2) ≤ P(|Vn − E(Vn)| ≥ c 2^{n/2}/2) ≤ 4/(c² 2^n), n ≥ 1.

The Borel–Cantelli lemma therefore leads to:

P(Vn > c 2^{n/2}/2 for all sufficiently large n) = 1.

Hence

Vn → +∞ a.s.,

and, since the total variation on [0, 1] dominates every Vn, the variation of (Wt) on [0, 1] is +∞ a.s.
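The proof can be visualized numerically (a sketch with arbitrary grid sizes): on finer and finer dyadic grids of [0, 1], the sums of absolute increments of a simulated path grow without bound, of order 2^{n/2}, while the sums of squared increments stay near 1.

```python
import numpy as np

# Sums of |increments| of a simulated standard Wiener path on dyadic
# grids of [0,1]: they blow up like 2^(n/2), so the trajectory is not of
# bounded variation; the squared increments, in contrast, sum to about 1.
rng = np.random.default_rng(2)
n_max = 14
W = np.concatenate(([0.0],
        np.cumsum(rng.standard_normal(2**n_max) * np.sqrt(1.0 / 2**n_max))))
variations = []
for n in (6, 10, 14):
    Wn = W[::2**(n_max - n)]            # subsample on the grid of step 2^-n
    variations.append(np.sum(np.abs(np.diff(Wn))))
quad_var = np.sum(np.diff(W) ** 2)      # quadratic variation, close to 1
print(variations, quad_var)
```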
THEOREM 12.8.– The Karhunen–Loève decomposition of a standard Wiener process is written as:

Wt = Σ_{n≥0} ξn √2 sin((n + 1/2)πt), t ∈ [0, 1],

where the ξn are independent and have the respective distributions:

𝒩(0, λn), λn = 1/((n + 1/2)²π²), n ≥ 0.
PROOF.– It is sufficient to determine the eigenfunctions φn and eigenvalues λn of min(s, t), determined by:

∫_0^1 min(s, t) φn(s) ds = λn φn(t), t ∈ [0, 1],

that is

∫_0^t s φn(s) ds + t ∫_t^1 φn(s) ds = λn φn(t).    [12.2]
Differentiating, we obtain:

∫_t^1 φn(s) ds = λn φ′n(t),    [12.3]

then

−φn(t) = λn φ″n(t).
The general solution of this equation is of the form

φn(t) = A cos(t/√λn) + B sin(t/√λn).    [12.4]

From [12.2], φn(0) = 0, therefore A = 0, and from [12.3], φ′n(1) = 0, therefore

cos(1/√λn) = 0.

Hence

1/√λn = (n + 1/2)π, i.e. λn = 1/((n + 1/2)²π²), n ≥ 0,
and, since

∫_0^1 φn²(t) dt = B² ∫_0^1 sin²((n + 1/2)πt) dt = B²/2 = 1,

we have B = ±√2. We may choose B = √2, as φn and ξn are defined having either sign.
Consequently

φn(t) = √2 sin((n + 1/2)πt), t ∈ [0, 1], n ≥ 0,

and

ξn = ∫_0^1 Wt φn(t) dt

is centered Gaussian with variance λn. The result is deduced by applying Theorem 12.5.
COMMENT 12.2.– It may be shown that the series from Theorem 12.8 converges uniformly (a.s.), which leads to the (a.s.) continuity of the trajectories of (Wt).
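The eigenexpansion behind Theorem 12.8 can be checked numerically: truncating Mercer's series Σ λn φn(s)φn(t), with φn(t) = √2 sin((n + 1/2)πt) and λn = ((n + 1/2)π)⁻², reproduces min(s, t). A sketch at the (arbitrary) pair s = 0.3, t = 0.8:

```python
import numpy as np

# Truncated Mercer series for the Wiener covariance min(s,t):
# sum_n lambda_n * phi_n(s) * phi_n(t), phi_n(t) = sqrt(2) sin((n+1/2) pi t).
s, t, N = 0.3, 0.8, 5000
n = np.arange(N)
lam = 1.0 / (((n + 0.5) * np.pi) ** 2)  # eigenvalues lambda_n
phi_s = np.sqrt(2.0) * np.sin((n + 0.5) * np.pi * s)
phi_t = np.sqrt(2.0) * np.sin((n + 0.5) * np.pi * t)
approx = np.sum(lam * phi_s * phi_t)    # should be close to min(s,t) = 0.3
print(approx)
```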
Let (Wt) be a Wiener process observed on the time interval [0, τ]. The only unknown parameter is σ². In the (purely theoretical) case where (Wt, 0 ≤ t ≤ τ) is entirely observed, we consider the estimator:

Zn = (1/τ) Σ_{k=1}^{2^n} (W(kτ/2^n) − W((k−1)τ/2^n))², n ≥ 1.
To study the asymptotic behavior of (Zn), we will use the preliminary result:
LEMMA 12.1.– Let X1, …, Xn be independent random variables with the same distribution, such that EX1⁴ < +∞ and EXi = 0; then

E(Σ_{i=1}^{n} Xi)⁴ = nEX1⁴ + 3n(n − 1)(EX1²)².    [12.5]
PROOF.– Developing E(Σ_{i=1}^{n} Xi)⁴, by their independence the terms may be factorized; since EXi = 0, only the n terms EXi⁴ and the 3n(n − 1) terms EXi²EXj² (i ≠ j) remain, from which we deduce [12.5].
Let us set:

Yn,k = 2^n (W(kτ/2^n) − W((k−1)τ/2^n))² / (σ²τ), 1 ≤ k ≤ 2^n,

so that Zn = (σ²/2^n) Σ_{k=1}^{2^n} Yn,k. The Yn,k are independent and follow a χ²(1) distribution. Then, successively applying Tchebychev’s inequality and Lemma 12.1, we obtain, for every ε > 0:

P(|Zn − σ²| ≥ ε) ≤ E(Zn − σ²)⁴/ε⁴ = (σ⁸/(ε⁴ 2^{4n})) E(Σ_{k=1}^{2^n} (Yn,k − 1))⁴ ≤ (σ⁸/(ε⁴ 2^{4n})) (a 2^n + b 4^n),

where a and b are constant. Consequently,

P(|Zn − σ²| ≥ ε) ≤ c/4^n,

where c is constant and, from the Borel–Cantelli lemma,

Zn → σ² a.s.    [12.6]
Therefore, for any τ, lim Zn is almost surely equal to the unknown parameter σ²!
This result is not surprising as the large irregularity of the trajectories of (Wt) does not allow us to completely observe them.
In fact, [12.6] may be interpreted as a convergence result: an estimator Zn of σ² is constructed from the observations (W(kτ/2^n), 0 ≤ k ≤ 2^n) and Zn tends almost surely to σ² when n tends to infinity. It is to be noted that this “asymptotic” is not the usual asymptotic of discrete-time processes, which corresponds here to τ → +∞: considering the observations (Wkh, 0 ≤ k ≤ n), where nh = τ, we may construct the estimator:
σ̂n² = (1/(nh)) Σ_{k=1}^{n} (Wkh − W(k−1)h)²,

where h is fixed and n → +∞. σ̂n² is then unbiased and

σ̂n² → σ² a.s. (n → +∞).    [12.7]
[12.7] may be established by again using Lemma 12.1.
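The dyadic estimator Zn can be simulated as a sketch (the parameter values below are arbitrary): with σ² = 2 and 2¹⁶ observed increments on [0, τ], the normalized sum of squared increments is very close to σ².

```python
import numpy as np

# Dyadic estimator of sigma^2 from a simulated Wiener path on [0, tau]:
# Z_n = (1/tau) * sum of squared increments over the grid k*tau/2^n.
rng = np.random.default_rng(3)
sigma2, tau, n = 2.0, 1.0, 16
steps = 2 ** n
incr = rng.standard_normal(steps) * np.sqrt(sigma2 * tau / steps)
W = np.concatenate(([0.0], np.cumsum(incr)))   # W(k*tau/2^n), 0 <= k <= 2^n
Z_n = np.sum(np.diff(W) ** 2) / tau            # close to sigma2 = 2
print(Z_n)
```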
Let (Xt, t ∈ ℝ) be a weakly stationary process. Its autocovariance is defined by setting:

γt = Cov(Xs, Xs+t), s, t ∈ ℝ.

If (γt) is integrable on ℝ, then the spectral density of (Xt) is defined by:

f(λ) = (1/2π) ∫_ℝ e^{−iλt} γt dt, λ ∈ ℝ,

and by the inverse Fourier transform:

γt = ∫_ℝ e^{iλt} f(λ) dλ, t ∈ ℝ,

provided that f is integrable.
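As a sketch of this Fourier pair (with a hypothetical autocovariance, not one from the text): for γt = e^{−|t|}, the spectral density is f(λ) = 1/(π(1 + λ²)), which a direct numerical integration reproduces.

```python
import numpy as np

# Numerical check of f(lam) = (1/2pi) * integral of exp(-i lam t) gamma_t dt
# for the hypothetical autocovariance gamma_t = exp(-|t|); since gamma is
# even, the transform reduces to a cosine integral.
lam = 0.7
t = np.linspace(-60.0, 60.0, 400001)
step = t[1] - t[0]
gamma = np.exp(-np.abs(t))
f_val = np.sum(np.cos(lam * t) * gamma) * step / (2 * np.pi)
f_exact = 1.0 / (np.pi * (1.0 + lam ** 2))
print(f_val, f_exact)
```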
From Theorem 4.1, (Xt) is continuous in mean square if and only if γt is continuous at the origin, and γt is then continuous everywhere. Consequently, if (γt) is continuous at the origin, (Xt) is mean-square integrable on each bounded interval.
If (Xt) is mean-square differentiable, then (γt) is twice differentiable, and (X′t) is a weakly stationary process with autocovariance (−γ″t).
If (Xt), with mean m, is observed in the interval [0, τ], we consider the unbiased estimator:

m̂τ = (1/τ) ∫_0^τ Xt dt,
which is defined when (Xt) is mean-square integrable on each bounded interval.
Then, from Theorem 12.3,

Var(m̂τ) = (1/τ²) ∫_0^τ ∫_0^τ γ(t − s) ds dt = (1/τ²) ∫_{−τ}^{τ} (τ − |u|) γu du.

If, for example, γ is integrable on ℝ, then

Var(m̂τ) ≤ (1/τ) ∫_ℝ |γu| du → 0 as τ → +∞.
The rate of convergence may be specified, as in the discrete case.
The empirical autocovariance is defined by:

γ̂t = (1/(τ − t)) ∫_0^{τ−t} (Xs − m̂τ)(Xs+t − m̂τ) ds, 0 ≤ t < τ,

and the empirical spectral density by:

f̂τ(λ) = (1/2π) ∫_{−τ}^{τ} e^{−iλt} γ̂|t| dt, λ ∈ ℝ.
Thus, under analogous conditions to those in the discrete case, when τ → ∞, we have:

γ̂t → γt in quadratic mean, for each fixed t,

but the variance of f̂τ(λ) does not tend to 0: f̂τ is not a consistent estimator of f, and the introduction of weighting functions allows us to obtain convergent estimators of the spectral density.
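A sketch of such a weighted ("lag-window") estimator in the sampled setting, with arbitrary choices on our part (Bartlett weights, truncation point M): for a white-noise sample, whose spectral density is the constant 1/(2π), the weighted estimator is close to that value.

```python
import numpy as np

# Lag-window spectral estimator on a white-noise sample (true spectral
# density: 1/(2*pi)).  Bartlett weights w_k = 1 - k/M taper the empirical
# autocovariances; M and n are arbitrary choices for this sketch.
rng = np.random.default_rng(4)
n, M = 20000, 30
X = rng.standard_normal(n)
gamma_hat = np.array([np.mean(X[:n - k] * X[k:]) for k in range(M + 1)])
w = 1.0 - np.arange(M + 1) / M          # Bartlett weights, w_0 = 1
lam = 1.0                               # evaluation frequency
f_hat = (gamma_hat[0]
         + 2.0 * np.sum(w[1:] * gamma_hat[1:]
                        * np.cos(lam * np.arange(1, M + 1)))) / (2 * np.pi)
print(f_hat, 1 / (2 * np.pi))
```

The weights damp the high-lag autocovariances, whose large variance is what makes the raw empirical spectral density inconsistent.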
When (Xt) is observed at the instants 0, h, …, (n − 1)h, where h is a fixed interval of time, the mean may be estimated by setting:

m̂n = (1/n) Σ_{k=0}^{n−1} Xkh.

We may also estimate the autocovariance for t = jh, 0 ≤ j ≤ n − 1, by:

γ̂jh = (1/(n − j)) Σ_{k=0}^{n−1−j} (Xkh − m̂n)(X(k+j)h − m̂n),

but we cannot estimate γt for other values of t!
This drawback is called “aliasing”. It also appears in the estimation of the spectral density, because the spectral density of the discretized process does not permit the reconstruction of that of the initial process. To avoid aliasing, we may observe (Xt) at random times T1 < T2 < ··· < Tn < ··· associated with a Poisson process.
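Aliasing itself is easy to exhibit numerically: sampled at step h, the frequencies λ and λ + 2π/h produce identical observations, so the sampled process cannot distinguish them. A sketch:

```python
import numpy as np

# Two cosines whose frequencies differ by 2*pi/h coincide at all the
# sampling instants k*h: the discretized process cannot separate them.
h, lam = 0.5, 1.2
t = np.arange(50) * h                   # sampling instants 0, h, 2h, ...
x1 = np.cos(lam * t)
x2 = np.cos((lam + 2 * np.pi / h) * t)  # aliased frequency
print(np.max(np.abs(x1 - x2)))          # zero up to rounding
```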
EXERCISE 12.1. (Ornstein–Uhlenbeck process).– Let (Xt, t ∈ ℝ) be a centered, stationary, Markovian, Gaussian process with autocovariance γ(h), such that γ(0) > 0. The autocorrelation of (Xt) is defined by setting:

ρ(h) = γ(h)/γ(0), h ∈ ℝ.

Supposing that ρ is continuous, and that there exists δ > 0 such that 0 < ρ(δ) < 1:

1) Establish the relation

ρ(s + t) = ρ(s)ρ(t), s, t ≥ 0.

2) Show that there exists a > 0 such that:

γ(h) = γ(0) exp(−a|h|), h ∈ ℝ.
EXERCISE 12.2.– Let (Xt, t ∈ T) be a centered Gaussian process with covariance:

C(s, t) = u(s)v(t), s ≤ t,

where u and v are continuous and such that φ(t) = u(t)/v(t) is continuous and strictly increasing.

1) Show that the process:

Wt = X_{φ⁻¹(t)} / v(φ⁻¹(t)), t ∈ φ(T),

is a standard Wiener process.

2) Apply this result to the Ornstein–Uhlenbeck process (see Exercise 12.1).
EXERCISE 12.3.– Let (Wt, t ≥ 0) be a standard Wiener process, and set:

Xt = θ sin t + Wt, t ≥ 0,

where θ ∈ ℝ is an unknown parameter.
1) Determine EXt and E(XsXt) – E(Xs)E(Xt) = Cov(Xs, Xt), s, t ≥ 0. Is this process Gaussian? Is it stationary?
2) (Xt, 0 ≤ t ≤ T) (T > 0) is observed and an estimator of θ is defined by setting:

θ̂T = (∫_0^T cos t dXt) / (∫_0^T cos² t dt),

where dXt = θ cos t dt + dWt.

Determine E(θ̂T − θ)², deduce from this an asymptotic equivalent of it when T → ∞, and show that √T(θ̂T − θ) has a limit in distribution, which you will specify.
3) Setting
show that
determining the function φ.
4) Verify that the statistical predictor is unbiased, i.e. .
5) Study the asymptotic behavior of in quadratic mean and in distribution.
6) Construct an asymptotic prediction interval for rT(XT, θ), and then for XT+h.
7) What happens if h = 2kπ (k, an integer)?
8) How do you calculate in practice?
EXERCISE 12.4.– Let be a sequence of independent random Gaussian variables with the same distribution . Consider the continuous-time process:
1) Show that is a Gaussian process. Determine its mean and covariance.
2) Show that all the Xt follow the same distribution. Is the process (Xt) stationary?
3) Study the continuity and differentiability of (Xt) in the usual sense and in mean square.
4) Supposing (Xt) to be observed on the interval [0, T], propose an estimator of σ2 and study its asymptotic properties.
EXERCISE 12.5.– If (Xt) is continuous in mean square on [a, b], show that, as defined in L2,
EXERCISE 12.6.–
1) (W1, …, Wn) is said to be a standard Wiener process in if the components W1, …, Wn are standard (i.e. of variance 1), mutually independent Wiener processes. Show that, for every is again a Wiener process.
2) The converse is not necessarily true. Let (W1, W2) be a standard Wiener process in and let:
Show that every linear combination of γ1 and γ2 is a Wiener process. Why is (γ1, γ2) not a Wiener process in the plane (i.e. a Gaussian process with values in and with independent and stationary increments)?
EXERCISE 12.7.– Let (Wt, t ≥ 0) be a standard Wiener process, and let . Show that:
1) W is a martingale with respect to the filtration ℱt = σ(Ws, s ≤ t), i.e. for all s ≤ t, E(Wt | ℱs) = Ws;
2) Wt² − t is a martingale;
3) exp(λWt − λ²t/2) is a martingale for every fixed λ ∈ ℝ.
EXERCISE 12.8.–
1) Show that, if X follows a standard normal distribution, then, for all λ > 0,

P(X > λ) ≤ exp(−λ²/2).
2) Deduce that, if W is a standard Wiener process, then
Hint: You may choose
It is to be noted that this result is much weaker than the law of the iterated logarithm:

lim sup_{t→∞} Wt/√(2t log log t) = 1 a.s.
EXERCISE 12.9.– Supposing that W is a standard Wiener process and letting , determine the joint distribution of (X, Y, W1).
EXERCISE 12.10.–Setting, for t > 0, , where (W(t), t ≥ 0) is a Brownian motion process with variance σ2, show that, for s ≤ t,
and from this, deduce Cov (Xt, Xs).
EXERCISE 12.11.– Let (W(t), t ≥ 0) be a Brownian motion process. Setting t = exp(u) and Yu = W(exp(u))/exp(u/2), prove that (Yu, u ∈ ℝ) is strictly stationary.
EXERCISE 12.12.– Consider the class of real, measurable, continuous-time processes such that Xt has the density f (which does not depend on t) in the class C², which is bounded, as are its derivatives. We seek to estimate the derivative f′ of f.
1) Show that no unbiased estimator of f′, based on the observation of X0, exists on this class (making the necessary regularity hypotheses).
2) Consider the estimator:
where .
i) Show that it is asymptotically unbiased if limT→+∞ hT = 0.
ii) Study its convergence and its rate of convergence in quadratic mean when (Xt) is strictly stationary and such that gu = f(X0,Xu) − f ⊗ f exists for u ≠ 0, is continuous at every point (x, x), x ∈ ℝ, and is integrable on ]0, +∞[.