The delta method is a statistical technique for deriving an approximate probability distribution for a function of an asymptotically normal estimator from a Taylor series approximation.
Let x be a random variable asymptotically distributed as N(μ, σ²) with p.d.f. f(x), and let g(x) be a single-valued function. If the integral ∫ g(x) f(x) dx exists, it is called the mathematical expectation of g(x), denoted by E[g(x)]. The mean and variance of g(x) may not be readily obtainable: even when x has the standard normal density, the integral corresponding to E[g(x)] may not have a tractable expression. In these situations, an approximate mean and variance for g(x) can be obtained by taking a linear approximation of g(x) in the neighborhood of μ. With μ = E(x) and g(x) continuous and differentiable, the Taylor series expansion of the function g(x) about μ is given by

g(x) = g(μ) + g′(μ)(x − μ) + (g″(μ)/2!)(x − μ)² + ⋯
Writing y = g(x) and retaining only the linear term of the expansion about μ, g(x) can be expressed as

y ≈ g(μ) + g′(μ)(x − μ)  (B.3)
Taking the variance of both sides of Equation (B.3) yields:

Var(y) ≈ [g′(μ)]² Var(x) = [g′(μ)]² σ²
So, if y, or g(x), is any function of a random variable x, only the variance of x and the first derivative of the function need be considered in calculating the approximate variance of g(x).
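The univariate approximation can be sketched numerically. The choice g(x) = log x with μ = 2 and σ² = 0.25 below is a hypothetical illustration, not an example from the text:

```python
import math

# Hypothetical setup: x ~ N(mu, sigma2), with transform g(x) = log(x).
mu, sigma2 = 2.0, 0.25

def g(x):
    return math.log(x)

def g_prime(x):
    return 1.0 / x  # first derivative of log(x)

# Delta-method approximations:
mean_approx = g(mu)                     # E[g(x)] ≈ g(mu)
var_approx = g_prime(mu) ** 2 * sigma2  # Var[g(x)] ≈ [g'(mu)]^2 * sigma^2

print(mean_approx)  # log(2) ≈ 0.6931
print(var_approx)   # 0.25 / 2^2 = 0.0625
```

Only g′(μ) and σ² enter the variance approximation, exactly as stated above; higher derivatives of g play no role at first order.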
The delta method for the multivariate case is simply an extension of the above specifications. Suppose that x is a random vector with mean vector μ and variance–covariance matrix Σ, and g(x) is a vector-valued transform of x, where g is the link function. Let x̂ and g(x̂) be the estimates of x and g(x), respectively. Then, the first-order Taylor series expansion of g(x̂) yields the approximate mean and variance

E[g(x̂)] ≈ g(μ),  Var[g(x̂)] ≈ G(μ) Σ G(μ)′,

where G(μ) is the matrix of first partial derivatives (the Jacobian) of g evaluated at μ.
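The multivariate case can be sketched in the same way. Here the bivariate mean, covariance matrix, and the scalar transform g(x) = x₁/x₂ (a ratio) are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical bivariate setup: x = (x1, x2) with mean mu and covariance Sigma,
# and the scalar transform g(x) = x1 / x2.
mu = np.array([3.0, 2.0])
Sigma = np.array([[0.4, 0.1],
                  [0.1, 0.2]])

# Jacobian (gradient) of g at mu: dg/dx1 = 1/x2, dg/dx2 = -x1/x2^2
J = np.array([1.0 / mu[1], -mu[0] / mu[1] ** 2])

mean_approx = mu[0] / mu[1]  # E[g(x)] ≈ g(mu)
var_approx = J @ Sigma @ J   # Var[g(x)] ≈ J Sigma J'

print(mean_approx)  # 3 / 2 = 1.5
print(var_approx)
```

For a scalar-valued g the Jacobian reduces to a gradient vector, so the quadratic form J Σ J′ is a scalar; for a vector-valued g it would be a full covariance matrix.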
The delta method depends on the validity of the Taylor series approximation; therefore, some caution must be exercised when using it, and its adequacy should be verified with simulation.
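Such a simulation check can be sketched as follows. The setup (x ~ N(2, 0.2²), g(x) = log x) is again hypothetical: a Monte Carlo estimate of Var[g(x)] is compared with the delta-method value [g′(μ)]²σ²:

```python
import math
import random

# Hypothetical check: x ~ N(mu, sigma^2) with g(x) = log(x).
random.seed(12345)
mu, sigma = 2.0, 0.2

# Monte Carlo estimate of the variance of g(x).
n = 200_000
samples = [math.log(random.gauss(mu, sigma)) for _ in range(n)]
mc_mean = sum(samples) / n
mc_var = sum((s - mc_mean) ** 2 for s in samples) / (n - 1)

# Delta-method approximation: [g'(mu)]^2 * sigma^2.
delta_var = (1.0 / mu) ** 2 * sigma ** 2

print(mc_var, delta_var)  # the two should be close but not identical
```

When σ is small relative to μ, the linear approximation is accurate and the two variances agree closely; widening σ (or choosing a strongly curved g) makes the discrepancy grow, which is exactly the inadequacy the simulation is meant to reveal.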