12.4 Alternative Algorithms

In many applications, there are no closed-form solutions for the conditional posterior distributions. But many clever alternative algorithms have been devised in the statistical literature to overcome this difficulty. In this section, we discuss some of these algorithms.

12.4.1 Metropolis Algorithm

This algorithm is applicable when the conditional posterior distribution is known except for a normalization constant; see Metropolis and Ulam (1949) and Metropolis et al. (1953). Suppose that we want to draw a random sample from the distribution f(θ|X), which contains a complicated normalization constant so that a direct draw is either too time-consuming or infeasible. But there exists an approximate distribution for which random draws are easily available. The Metropolis algorithm generates a sequence of random draws from the approximate distribution whose distributions converge to f(θ|X). The algorithm proceeds as follows:

1. Draw a random starting value θ0 such that f(θ0|X) > 0.

2. For t = 1, 2, …,

a. Draw a candidate sample θ* from a known distribution at iteration t given the previous draw θt−1. Denote the known distribution by Jt(θ*|θt−1), which is called a jumping distribution in Gelman et al. (2003). It is also referred to as a proposal distribution. The jumping distribution must be symmetric; that is, Jt(θi|θj) = Jt(θj|θi) for all θi, θj, and t.

b. Calculate the ratio

r = f(θ*|X) / f(θt−1|X).

c. Set

θt = θ*      with probability min(r, 1),
θt = θt−1    otherwise.

Under some regularity conditions, the sequence {θt} converges in distribution to f(θ|X); see Gelman et al. (2003).

Implementation of the algorithm requires the ability to calculate the ratio r for all θ* and θt−1, to draw θ* from the jumping distribution, and to draw a random realization from a uniform distribution to determine the acceptance or rejection of θ*. The normalization constant of f(θ|X) is not needed because only a ratio is used.

The acceptance and rejection rule of the algorithm can be stated as follows: (i) if the jump from θt−1 to θ* increases the conditional posterior density, then accept θ* as θt; (ii) if the jump decreases the posterior density, then set θt = θ* with probability equal to the density ratio r, and set θt = θt−1 otherwise. Such a procedure favors moves toward regions of higher posterior density, yet the occasional downhill moves allow the draws to traverse the entire distribution.

Examples of symmetric jumping distributions include the normal and Student-t distributions for the mean parameter. For a given covariance matrix Σ, we have N(θ*|θt−1, Σ) = N(θt−1|θ*, Σ), where N(θ|θo, Σ) denotes a multivariate normal density function with mean vector θo and covariance matrix Σ.
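To make the steps concrete, a minimal sketch in Python is given below. The names metropolis and log_f are illustrative, not from any particular package; the jumping distribution is the symmetric normal random walk just described, and the computation is done on the log scale so that only the unnormalized density is required.

    import numpy as np

    def metropolis(log_f, theta0, n_iter, scale, rng=None):
        # log_f: log of f(theta|X) up to an additive constant; the
        # normalization constant cancels in the ratio r.
        rng = np.random.default_rng() if rng is None else rng
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        log_curr = log_f(theta)
        draws = np.empty((n_iter, theta.size))
        for t in range(n_iter):
            # a. Candidate from the symmetric jumping distribution
            #    J_t(theta*|theta_{t-1}) = N(theta_{t-1}, scale^2 I).
            cand = theta + scale * rng.standard_normal(theta.size)
            log_cand = log_f(cand)
            # b-c. log r = log f(theta*|X) - log f(theta_{t-1}|X);
            # accept theta* with probability min(r, 1).
            if np.log(rng.uniform()) < log_cand - log_curr:
                theta, log_curr = cand, log_cand
            draws[t] = theta
        return draws

    # Example: target is an unnormalized standard normal density.
    draws = metropolis(lambda th: -0.5 * np.sum(th ** 2),
                       theta0=0.0, n_iter=5000, scale=1.0)

Comparing log u, where u is uniform on (0, 1), with log r implements the rule θt = θ* with probability min(r, 1): whenever r ≥ 1 the candidate is always accepted.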

12.4.2 Metropolis–Hastings Algorithm

Hastings (1970) generalizes the Metropolis algorithm in two ways. First, the jumping distribution does not have to be symmetric. Second, the jumping rule is modified to

r = [f(θ*|X)/Jt(θ*|θt−1)] / [f(θt−1|X)/Jt(θt−1|θ*)].

This modified algorithm is referred to as the Metropolis–Hastings algorithm. Tierney (1994) discusses methods to improve the computational efficiency of the algorithm.
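Under the same conventions as the previous sketch, the modification amounts to one correction term in the ratio. The functions sample_J and log_J below are hypothetical user-supplied routines for drawing from and evaluating the (possibly asymmetric) jumping distribution.

    import numpy as np

    def metropolis_hastings(log_f, sample_J, log_J, theta0, n_iter, rng=None):
        # sample_J(theta, rng): draw theta* given the previous draw theta.
        # log_J(a, b): log density of jumping to a from b (need not be
        # symmetric in its arguments).
        rng = np.random.default_rng() if rng is None else rng
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        draws = np.empty((n_iter, theta.size))
        for t in range(n_iter):
            cand = sample_J(theta, rng)
            # log r = [log f(theta*|X) - log J_t(theta*|theta_{t-1})]
            #       - [log f(theta_{t-1}|X) - log J_t(theta_{t-1}|theta*)]
            log_r = (log_f(cand) - log_J(cand, theta)) \
                  - (log_f(theta) - log_J(theta, cand))
            if np.log(rng.uniform()) < log_r:
                theta = cand
            draws[t] = theta
        return draws

When the jumping distribution is symmetric, the two J terms cancel and the ratio reduces to that of the Metropolis algorithm.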

12.4.3 Griddy Gibbs

In financial applications, an entertained model may contain some nonlinear parameters (e.g., the moving-average parameters in an ARMA model or the GARCH parameters in a volatility model). Since conditional posterior distributions of nonlinear parameters do not have a closed-form expression, implementing a Gibbs sampler in this situation may become complicated even with the Metropolis–Hastings algorithm. Tanner (1996) describes a simple procedure to obtain random draws in a Gibbs sampling when the conditional posterior distribution is univariate. The method is called the Griddy Gibbs sampler and is widely applicable. However, the method can be inefficient in real applications.

Let θi be a scalar parameter with conditional posterior distribution f(θi|X, θ−i), where θ−i is the parameter vector after removing θi. For instance, if θ = (θ1, θ2, θ3)′, then θ−1 = (θ2, θ3)′. The Griddy Gibbs proceeds as follows:

1. Select a grid of points from a properly selected interval of θi, say, θi1 ≤ θi2 ≤ ⋯ ≤ θim. Evaluate the conditional posterior density function to obtain wj = f(θij|X, θ−i) for j = 1, … , m.

2. Use w1, … , wm to obtain an approximation to the inverse cumulative distribution function (CDF) of f(θi|X, θ−i).

3. Draw a uniform (0,1) random variate and transform the observation via the approximate inverse CDF to obtain a random draw for θi.

Some remarks on the Griddy Gibbs are in order. First, the normalization constant of the conditional posterior distribution f(θi|X, θ−i) is not needed because the inverse CDF can be obtained from w1, … , wm directly. Second, a simple approximation to the inverse CDF is a discrete distribution on θi1, … , θim with probability p(θij) = wj/(w1 + ⋯ + wm). Third, in a real application, selection of the interval [θi1, θim] for the parameter θi must be checked carefully. A simple checking procedure is to consider the histogram of the Gibbs draws of θi. If the histogram indicates substantial probability around θi1 or θim, then the interval must be expanded. However, if the histogram shows that the probability is concentrated well inside the interval [θi1, θim], then the interval is too wide and can be shortened. If the interval is too wide, then the Griddy Gibbs becomes inefficient because most of the wj would be practically zero. Finally, the Griddy Gibbs or the Metropolis–Hastings algorithm can be used within a Gibbs sampling to obtain random draws of some parameters.
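A minimal sketch of one Griddy Gibbs draw follows, using the discrete inverse-CDF approximation from the second remark. The names griddy_gibbs_draw and cond_f are illustrative; in a full Gibbs run this routine would be called once per iteration for each nonlinear parameter, with cond_f rebuilt from the current values of θ−i.

    import numpy as np

    def griddy_gibbs_draw(cond_f, grid, rng=None):
        # cond_f: conditional posterior density f(theta_i|X, theta_{-i}),
        #         known only up to a normalization constant.
        # grid:   grid points theta_{i1} <= ... <= theta_{im}.
        rng = np.random.default_rng() if rng is None else rng
        grid = np.asarray(grid, dtype=float)
        # Step 1: evaluate w_j = f(theta_{ij}|X, theta_{-i}) on the grid.
        w = np.array([cond_f(x) for x in grid])
        # Step 2: discrete approximation to the inverse CDF, with
        # p(theta_{ij}) = w_j/(w_1 + ... + w_m); no constant needed.
        cdf = np.cumsum(w) / np.sum(w)
        # Step 3: draw u ~ U(0, 1) and invert the approximate CDF.
        return grid[np.searchsorted(cdf, rng.uniform())]

    # Example: a draw from an unnormalized density on the interval [-5, 5].
    draw = griddy_gibbs_draw(lambda x: np.exp(-0.5 * (x - 1.0) ** 2),
                             np.linspace(-5.0, 5.0, 400))

In line with the third remark, if repeated draws pile up at the first or last grid point, the interval should be widened before the run is continued.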
