Collapsed importance sampling

In the case of full particles for importance sampling, we generated particles from a different distribution and then, to compensate for the difference, associated a weight with each particle. Similarly, in the case of collapsed particles, we will generate particles for the variables $X_p \subseteq \mathcal{X}$ and get the following dataset:

$\mathcal{D} = \{x_p[1], x_p[2], \ldots, x_p[M]\}$

Here, each sample $x_p[m]$ is generated from the distribution $Q$. Now, using this set of particles, we want to find the expectation of a function $f$ relative to the distribution $P(\mathcal{X} \mid e)$:

$E_{P(\mathcal{X} \mid e)}[f] = E_{P(X_p \mid e)}\big[E_{P(X_d \mid X_p, e)}[f(X_p, X_d)]\big]$
$E_{P(\mathcal{X} \mid e)}[f] = \sum_{x_p} P(x_p \mid e)\, E_{P(X_d \mid x_p, e)}\big[f(x_p, X_d)\big]$

That is, the outer expectation over the sampled variables $X_p$ is estimated from the particles, while the inner expectation over $X_d$ is computed exactly for each particle.

Fig 4.22: The late-for-school model

Let's take an example using the late-for-school model, as shown in Fig 4.22. Let's consider that we have the evidence that there was an accident ($A = a^1$) and that there were long queues ($Q = q^1$), and partition the variables as $X_p = \{A, R, J\}$ (accident, rain, traffic jam) and $X_d = \{G, L, Q\}$ (getting up late, late for school, long queues). So, we will generate particles over the variables $A$, $R$, and $J$. Also, each such particle $x_p[m]$ is associated with the distribution $P(X_d \mid x_p[m], e)$. Now, assuming some query, say $P(l^1 \mid e)$ (the probability of being late for school), our indicator function will be $f(\mathcal{X}) = \mathbf{1}\{L = l^1\}$. We will now evaluate the following for each particle:

$E_{P(X_d \mid x_p[m], e)}[f] = P(l^1 \mid x_p[m], e) = P(l^1 \mid a^1, r[m], j[m], q^1)$

Here, $x_p[m] = (a^1, r[m], j[m])$, since every particle is generated consistent with the evidence on $A$.

After this, we will compute the weighted average of these exactly computed probabilities, using the importance weights of the samples.

Now, the question is: how do we define the distribution Q and find the weights for the particles? We begin by partitioning the evidence variables into two parts, namely $E_p$ and $E_d$, where $E_p = E \cap X_p$ and $E_d = E \cap X_d$. As collapsed importance sampling is a hybrid process, we deal with the evidence accordingly, using $e_p$ as evidence in importance sampling and $e_d$ as evidence in exact inference.
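In code, this split is a simple partition of the evidence assignment. Here is a minimal sketch in Python, using the variable names of the running example (the dictionaries are illustrative, not a pgmpy API):

    # Split the evidence e between the sampled part X_p and the exact part X_d.
    evidence = {'accident': 1, 'long_queues': 1}                 # e
    X_p = {'accident', 'rain', 'traffic_jam'}                    # sampled variables
    e_p = {v: s for v, s in evidence.items() if v in X_p}        # used in sampling
    e_d = {v: s for v, s in evidence.items() if v not in X_p}    # used in exact inference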

Let's consider an arbitrary distribution Q over $X_p$ that assigns nonzero probability wherever $P(X_p \mid e)$ does:

$E_{P(X_p \mid e)}\big[E_{P(X_d \mid X_p, e)}[f]\big] = E_{Q(X_p)}\left[\frac{P(X_p \mid e)}{Q(X_p)}\, E_{P(X_d \mid X_p, e)}[f]\right]$

Using the evidence partition, we can reformulate $P(x_p \mid e)$ as follows:

$P(x_p \mid e) = \frac{P(x_p, e)}{P(e)} = \frac{P(x_p, e_d)}{P(e)}$

The second equality holds for assignments $x_p$ that are consistent with $e_p$ (for inconsistent assignments, the probability is simply 0), since such an $x_p$ already contains the evidence $e_p$.

Let's put this result back into the previous equation:

$E_{P(\mathcal{X} \mid e)}[f] = \frac{1}{P(e)}\, E_{Q(X_p)}\left[\frac{P(X_p, e_d)}{Q(X_p)}\, E_{P(X_d \mid X_p, e)}\big[f(X_p, X_d)\big]\right]$

From the preceding equation, we get the following importance weight for each particle:

$w[m] = \frac{P(x_p[m], e_d)}{Q(x_p[m])}$

Now, computing the mean of the importance weights, we get the following estimator for the normalizing constant $P(e)$:

$\hat{P}(e) = \frac{1}{M} \sum_{m=1}^{M} w[m]$

So, we get the final estimator as follows:

$\hat{E}_{P(\mathcal{X} \mid e)}[f] = \frac{\sum_{m=1}^{M} w[m]\, E_{P(X_d \mid x_p[m], e)}\big[f(x_p[m], X_d)\big]}{\sum_{m=1}^{M} w[m]}$
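As a minimal sketch, this final estimator is just a weighted average; the helper below (the function name is hypothetical) takes the per-particle weights $w[m]$ and the exactly computed inner expectations:

    import numpy as np

    def collapsed_is_estimate(weights, inner_expectations):
        # weights[m] = w[m]; inner_expectations[m] = E_{P(Xd | xp[m], e)}[f]
        w = np.asarray(weights, dtype=float)
        g = np.asarray(inner_expectations, dtype=float)
        return (w * g).sum() / w.sum()   # normalized importance-sampling estimate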

In the preceding discussion, we didn't place any restriction on the selection of the distribution Q. The two main points to consider when selecting the distribution Q are as follows (a common choice is sketched after the list):

  • It should be easy to generate samples from this distribution.
  • It should be similar to our target distribution $P(X_p \mid e)$.
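A common choice that satisfies both of these criteria, by analogy with likelihood weighting, is to forward-sample the network restricted to $X_p$ while keeping the nodes in $E_p$ clamped to their observed values. The sketch below assumes a hypothetical cpds mapping (not a pgmpy API) from each variable to a function that returns its conditional probability vector given an assignment of its parents:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_from_q(cpds, topo_order, e_p):
        # Forward-sample one particle over X_p, keeping the evidence E_p clamped.
        particle = dict(e_p)
        for var in topo_order:               # parents are sampled before children
            if var not in particle:
                probs = cpds[var](particle)  # P(var | sampled parents)
                particle[var] = int(rng.choice(len(probs), p=probs))
        return particle

With this choice, the CPD factors of the sampled variables cancel between $P(X_p, e_d)$ and $Q(X_p)$, so the weight of a particle reduces to the CPD terms of the clamped evidence nodes multiplied by $P(e_d \mid x_p[m])$.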

In the case of collapsed particles, we will generate particles from the distribution $Q(X_p)$. However, as we saw in the case of full particles, we had to sample a variable's parents before sampling the variable itself. In the case of collapsed particles, it is quite possible that the parents of a variable are not in $X_p$. The simplest solution to this problem is to construct the set $X_p$ in such a way that for every $X \in X_p$, $\mathrm{Pa}_X \subseteq X_p$ holds as well. To do this, we can use a simple approach: start with the nodes having no parents, include them in $X_p$, and then work downwards from there.
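Putting all of this together, the following is a minimal end-to-end sketch of collapsed importance sampling for the running example, using pgmpy (assuming a recent version, where the model class is called BayesianNetwork and query accepts show_progress; older versions call the class BayesianModel). All CPD values are made up for illustration; the query is $P(l^1 \mid a^1, q^1)$, and $X_p = \{A, R, J\}$ is parent-closed as required:

    import numpy as np
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # The late-for-school network; all CPD numbers are made up for illustration.
    model = BayesianNetwork([('accident', 'traffic_jam'), ('rain', 'traffic_jam'),
                             ('traffic_jam', 'long_queues'),
                             ('traffic_jam', 'late_for_school'),
                             ('getting_up_late', 'late_for_school')])
    cpd_a = TabularCPD('accident', 2, [[0.9], [0.1]])
    cpd_r = TabularCPD('rain', 2, [[0.8], [0.2]])
    cpd_g = TabularCPD('getting_up_late', 2, [[0.7], [0.3]])
    cpd_j = TabularCPD('traffic_jam', 2,
                       [[0.9, 0.5, 0.4, 0.1],   # P(J=0 | A, R)
                        [0.1, 0.5, 0.6, 0.9]],  # P(J=1 | A, R)
                       evidence=['accident', 'rain'], evidence_card=[2, 2])
    cpd_q = TabularCPD('long_queues', 2, [[0.9, 0.2], [0.1, 0.8]],
                       evidence=['traffic_jam'], evidence_card=[2])
    cpd_l = TabularCPD('late_for_school', 2,
                       [[0.9, 0.4, 0.5, 0.1],
                        [0.1, 0.6, 0.5, 0.9]],
                       evidence=['traffic_jam', 'getting_up_late'],
                       evidence_card=[2, 2])
    model.add_cpds(cpd_a, cpd_r, cpd_g, cpd_j, cpd_q, cpd_l)

    rng = np.random.default_rng(0)
    exact = VariableElimination(model)

    num = den = 0.0
    for _ in range(2000):
        # Forward-sample X_p = {A, R, J} from the mutilated network Q,
        # with 'accident' clamped to its observed value 1.
        r = int(rng.choice(2, p=cpd_r.values))
        j = int(rng.choice(2, p=cpd_j.values[:, 1, r]))  # P(J | A=1, R=r)
        # w[m] = P(x_p[m], e_d) / Q(x_p[m]): the sampled factors for R and J
        # cancel, leaving P(A=1) * P(Q=1 | J=j).
        w = cpd_a.values[1] * cpd_q.values[1, j]
        # Inner expectation, computed exactly: P(L=1 | x_p[m], e_d).
        post = exact.query(['late_for_school'],
                           evidence={'accident': 1, 'rain': r,
                                     'traffic_jam': j, 'long_queues': 1},
                           show_progress=False)
        num += w * post.values[1]
        den += w

    print('P(late_for_school | accident, long_queues) ~', num / den)

Each particle contributes an exactly computed probability instead of a 0/1 indicator, which is where the variance reduction of collapsed particles over full particles comes from. Since a particle here is fully determined by the pair $(r, j)$, the four possible exact queries could also be cached instead of recomputed.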
