The issue motivating the question is that the MCMC chain is just one random sample from the true underlying posterior distribution. If we seeded the random chain differently (or used a few more or fewer burn-in steps, or thinned slightly more or less), we would get a different random MCMC chain with slightly different HDI limits, because of random noise in the MCMC sample. When the MCMC chain gets very long, in the limit it approximates the true posterior extremely well. But for chains of realistic length (not the unobtainable "in the limit" chain), how stable is the MCMC estimate of the HDI?
This blog entry reports some Monte Carlo simulations to illustrate the answer. Suppose that the true posterior is a known distribution, so we can easily generate random values from it. What we'll do is generate an MCMC chain from the true posterior and estimate the (95%) HDI from the MCMC chain. We'll do that many times and keep track of the estimated HDIs, so we can see how much the estimated HDIs vary. For each MCMC chain we'll also determine the 2.5%ile and the 97.5%ile, to compare with the 95% HDI.
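(As an aside for readers who want to experiment themselves, here is a minimal sketch, in Python rather than the R code used elsewhere, of the standard way to estimate an HDI from a sample: find the narrowest interval that contains 95% of the sorted values. The function name `hdi_of_sample` and the gamma parameters in the example are hypothetical choices for illustration, not the exact settings used for the figures below.)

```python
import numpy as np
from scipy.stats import gamma

def hdi_of_sample(sample, cred_mass=0.95):
    """Estimate the HDI as the narrowest interval containing
    cred_mass of the sorted sample values."""
    sorted_vals = np.sort(sample)
    n = len(sorted_vals)
    n_in_interval = int(np.ceil(cred_mass * n))
    # Width of every candidate interval that contains n_in_interval points.
    widths = sorted_vals[n_in_interval - 1:] - sorted_vals[:n - n_in_interval + 1]
    lo_idx = int(np.argmin(widths))
    return sorted_vals[lo_idx], sorted_vals[lo_idx + n_in_interval - 1]

# Example: one "chain" of 1,000 draws from a gamma posterior
# (shape and scale here are hypothetical).
chain = gamma.rvs(a=2.5, scale=1.0, size=1000)
print("Estimated 95% HDI:", hdi_of_sample(chain))
print("2.5%ile and 97.5%ile:", np.percentile(chain, [2.5, 97.5]))
```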
Here are results when the true posterior is a gamma distribution and the MCMC chain has length 1,000:
The top panel above shows the true gamma distribution from which the MCMC chains were generated. Each MCMC chain had 1,000 values in it, and for each chain the 95% HDI was estimated. This was done 20,000 times. The middle row of histograms shows the estimated HDI limits across the 20,000 replications. The bottom row of histograms shows the estimated 2.5%ile and 97.5%ile across the 20,000 replications. The vertical dashed lines in the histograms show the true values of the HDI.
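(For concreteness, the replication scheme just described could be sketched as follows, reusing the `hdi_of_sample` helper above. The gamma shape and scale, the seed, and the use of independent draws in place of an autocorrelated MCMC chain are all simplifying assumptions for illustration.)

```python
import numpy as np
from scipy.stats import gamma

shape, scale = 2.5, 1.0          # hypothetical "true posterior"
chain_length, n_reps = 1000, 20000  # reduce n_reps for a quicker run

hdi_lows = np.empty(n_reps)
hdi_highs = np.empty(n_reps)
pct_lows = np.empty(n_reps)
pct_highs = np.empty(n_reps)

rng = np.random.default_rng(42)
for i in range(n_reps):
    # Stand-in for an MCMC chain: draws from the known posterior.
    chain = gamma.rvs(a=shape, scale=scale, size=chain_length, random_state=rng)
    hdi_lows[i], hdi_highs[i] = hdi_of_sample(chain)
    pct_lows[i], pct_highs[i] = np.percentile(chain, [2.5, 97.5])

# Spread of the estimated limits across replications, summarized by their own 95% HDIs.
print("HDI low, across replications: ", hdi_of_sample(hdi_lows))
print("HDI high, across replications:", hdi_of_sample(hdi_highs))
```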
- Notice first that the 2.5%ile and 97.5%ile, in the bottom row, are very poor estimates of the HDI. This is because the true distribution is skewed, and the equal-tailed credible interval is quite different from the HDI (see the sketch after this list).
- Notice next that there is indeed quite a lot of variation across the 20,000 replications. For example, the histogram of "HDI low" spans the true value of 0.0799, but different chains give different estimates. (The histogram annotation shows a 95% HDI, across the 20,000 replications, going from 0.02559 to 0.1976. This is not the HDI of an MCMC chain! It is the HDI, across replications, of the estimated lower limit of the HDI of the MCMC chain.)
- Notice finally that there is a bit of bias in the estimate of the limits of the HDI. Specifically, the estimated HDI is a little bit narrower than the true HDI. This makes intuitive sense: the estimated HDI finds the narrowest 95% interval in the finite MCMC sample. Because the density is higher just inside the HDI limits than just outside them, a small chain is more likely to yield its narrowest 95% interval just inside the true HDI. As the chain length gets larger, this small-sample bias is reduced, as can be seen in the next figures.
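Regarding the first point above, the "true" HDI and the equal-tailed interval of a skewed distribution can be compared directly from the distribution's quantile function: find the lower tail probability that minimizes the interval width. Here is a minimal sketch, again with hypothetical gamma parameters; it is one common numerical approach, not necessarily the one used to draw the dashed lines in the figures.

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize_scalar

dist = gamma(a=2.5, scale=1.0)   # hypothetical skewed "true posterior"
cred_mass = 0.95

def interval_width(p_low):
    # Width of the interval covering cred_mass, starting at lower tail probability p_low.
    return dist.ppf(p_low + cred_mass) - dist.ppf(p_low)

# Search p_low in (0, 1 - cred_mass); small epsilons keep ppf away from 0 and 1.
res = minimize_scalar(interval_width,
                      bounds=(1e-6, 1 - cred_mass - 1e-6),
                      method="bounded")
true_hdi = (dist.ppf(res.x), dist.ppf(res.x + cred_mass))
eti = (dist.ppf(0.025), dist.ppf(0.975))

print("True 95% HDI:         ", true_hdi)  # narrowest 95% interval
print("Equal-tailed interval:", eti)       # wider and shifted right for a right-skewed gamma
```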
Here are the results when the chain length is 5,000:
And here are the results when the chain length is 10,000:
When the chains have a length of 10,000, the bias in the HDI estimate is only slight. There is still, of course, some imprecision in the estimate of the HDI limits (and width). It is up to the user to ascertain whether this amount of imprecision is tolerable for the particular application, or whether a longer MCMC chain should be computed.
Conclusion: When the limits of the HDI are close to the decision boundary (such as the ROPE), or when the HDI width is only marginally narrow enough for the desired precision, be sure to use a very long MCMC chain so that the HDI limits are stable. The graphs above help illustrate how long "very long" is. In general, while the book uses chains on the order of 5,000 steps for pedagogical purposes, real research reports should probably use at least 10,000 steps, or many more as patience permits.