Monday, November 14, 2011

BRugs delayed in R version 2.14

[But you don't have to use BRugs any more. See this more recent blog post about JAGS.]

[See also this updated post regarding BRugs and OpenBUGS.]


The programming language R was recently updated to version 2.14. Unfortunately, BRugs is lagging behind and does not yet work with R 2.14. Just minutes ago, Uwe Ligges told me, "I hope to get a new version to CRAN 'soon', i.e. within few weeks." Meanwhile, keep your older version of R installed if you want to use BRugs!

If you don't have a previous version of R, I happened to have the R 2.13 installation executable sitting on my computer, and I've made it available here:
http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/Programs/R-2.13.0-win.exe
Save the file and then execute it to install R 2.13. Then invoke R and type install.packages("BRugs") to install BRugs.
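In case it helps, here is that install step as a couple of lines of R (run these inside R 2.13, not R 2.14; the library() call is just one simple way to confirm the package loads):

  install.packages("BRugs")   # fetch and install BRugs from CRAN
  library(BRugs)              # attach the package; should load without errors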

If you are using RStudio, and have R 2.14 installed in addition to R 2.13, you have to tell RStudio to use R 2.13. In RStudio, do this:
Tools
-> Options
-> R General
-> R Version "Change..." button
-> browse to something like C:\Program Files\R\R-2.13.0
Click Apply and then OK.
Then quit RStudio and restart it.

Thursday, November 10, 2011

Happy Birthday, Puppies!

Happy Birthday, Puppies! Today the book turns one year old. Woohoo!

(But they have yet to earn their first penny in royalties. Fortunately, the real cake is more people doing Bayesian data analysis. That's a reason to celebrate!)

Saturday, November 5, 2011

Thinning to reduce autocorrelation: Rarely useful!

Borys Paulewicz, commenting on a previous post, brought to my attention a very recent article about thinning of MCMC chains:

Link, W. A., & Eaton, M. J. (2011). On thinning of chains in MCMC. Methods in Ecology and Evolution. First published online 17 June 2011. doi: 10.1111/j.2041-210X.2011.00131.x

The basic conclusion of the article is that thinning of chains is not usually appropriate when the goal is precision of estimates from an MCMC sample. (Thinning can be useful for other reasons, such as memory or time constraints in post-chain processing, but those are very different motivations than precision of estimation of the posterior distribution.)

Here's the idea: Consider an MCMC chain that is strongly autocorrelated. Autocorrelation produces clumpy samples that are unrepresentative, in the short run, of the true underlying posterior distribution. Therefore, if possible, we would like to get rid of autocorrelation so that the MCMC sample provides a more precise estimate of the posterior distribution. One way to decrease autocorrelation is to thin the chain, keeping only every nth step. If we keep 50,000 thinned steps with small autocorrelation, then we very probably have a more precise estimate of the posterior than 50,000 unthinned steps with high autocorrelation. But to get 50,000 kept steps in a thinned chain, we needed to generate n*50,000 steps. In a chain that long, the clumpy autocorrelation has probably all been averaged out anyway! In fact, Link and Eaton show that the longer (unthinned) chain usually yields better estimates of the true posterior than the shorter thinned chain, even for percentiles in the tail of the distribution, at least for the particular cases they consider.
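To see the trade-off concretely, here is a small R sketch (my own illustration, not from the article) that uses an AR(1) series as a stand-in for an autocorrelated MCMC chain and compares the full chain's estimate of the mean with the thinned chain's estimate:

  # Illustrative sketch: an AR(1) series stands in for autocorrelated MCMC output.
  set.seed(47405)
  rho    <- 0.9                  # strength of autocorrelation
  nThin  <- 10                   # thinning interval
  nKept  <- 50000                # steps retained after thinning
  nTotal <- nThin * nKept        # length of the full, unthinned chain
  chain   <- as.numeric( arima.sim( model=list(ar=rho) , n=nTotal ) )
  thinned <- chain[ seq( 1 , nTotal , by=nThin ) ]
  # The true mean of the stationary distribution is zero. Compare estimates:
  mean( chain )    # full chain: all n*50,000 (correlated) steps
  mean( thinned )  # thinned chain: 50,000 nearly independent steps
  # Replicated many times, the full chain's estimate is, on average, closer
  # to the true value, despite its autocorrelation.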

The tricky part is knowing how long a chain must be to smooth out short-run autocorrelation. The more severe the autocorrelation, the longer the chain needs to be. I have not seen rules of thumb for translating an autocorrelation function into a recommended chain length. Link and Eaton suggest monitoring several independent chains and assaying whether the estimates produced by the different chains are suitably similar to each other.
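In R, the coda package offers diagnostics along these lines (a sketch, assuming your sampler's output has been wrapped in an mcmc.list; here three fake AR(1) chains play that role):

  # Sketch of chain-length diagnostics with the coda package.
  library(coda)
  makeChain <- function(n) { as.numeric( arima.sim( model=list(ar=0.9) , n=n ) ) }
  chainList <- mcmc.list( lapply( 1:3 , function(i) { mcmc( makeChain(50000) ) } ) )
  effectiveSize( chainList )   # effective N after accounting for autocorrelation
  gelman.diag( chainList )     # agreement of independent chains (near 1 is good)
  autocorr.plot( chainList )   # autocorrelation at increasing lags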

For extreme autocorrelation, it's best not to rely on long-run averaging out, but instead to use other techniques that actually get rid of the autocorrelation. This usually involves reparameterization, as appropriate for the particular model.
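As one common example (my own illustration, using least-squares quantities as a quick stand-in for the posterior): in simple linear regression, mean-centering the predictor removes the strong correlation between intercept and slope that makes samplers wander slowly:

  # Sketch: mean-centering a predictor decorrelates intercept and slope.
  set.seed(1)
  x <- 1:20
  y <- 2*x + rnorm(20)
  fitRaw <- lm( y ~ x )                 # raw predictor
  fitCtr <- lm( y ~ I( x - mean(x) ) )  # centered predictor
  cov2cor( vcov(fitRaw) )  # intercept and slope strongly (negatively) correlated
  cov2cor( vcov(fitCtr) )  # after centering, correlation is essentially zero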

Link and Eaton point out that the inefficiency of thinning has been known for years, but many practitioners have gone on using it anyway. My book followed those practitioners. It should be pointed out that thinning does not yield incorrect results (in the sense of being biased); it merely produces correct results less efficiently (on average) than using the full chain from which the thinned chain was extracted.

There are at least a couple of more mathematically advanced textbooks that you might turn to for additional advice. For example, Jackman, in his 2009 book Bayesian Analysis for the Social Sciences, says: "High levels of autocorrelation in a MCMC algorithm are not fatal in and of themselves, but they will indicate that a very long run of the sampler may be required. Thinning is not a strategy for avoiding these long runs, but it is a strategy for dealing with the otherwise overwhelming amount of MCMC output" (p. 263). Christensen et al., in their 2011 book Bayesian Ideas and Data Analysis, say: "Unless there is severe autocorrelation, e.g., high correlation with, say [lag]=30, we don't believe that thinning is worthwhile" (p. 146).

Unfortunately, there is no sure advice about how long a chain is needed. Longer is better. Perhaps if you're tempted to thin by n to reduce autocorrelation, just use a chain n times as long without thinning.

Tuesday, November 1, 2011

Review in PsycCritiques

In a recent review* in the online APA journal PsycCRITIQUES, reviewer Cody Ding says:
There are quite a few books on Bayesian statistics, but what makes this book stand out is the author’s focus of the book—writing for real people with real data. Clearly a master teacher, the author, John Kruschke, uses plain language to explain complex ideas and concepts. This 23-chapter book is comprehensive, covering all aspects of basic Bayesian statistics, including regression and analysis of variance, topics that are typically covered in the first course of statistics for upper level undergraduate or first-year graduate students. A comprehensive website is associated with the book and provides program codes, examples, data, and solutions to the exercises. If the book is used to teach a statistics course, this set of materials will be necessary and helpful for students as they go through the materials in the book step by step.
My thanks to Cody Ding for taking the time and effort to write a review (and for such a nice review too!).
* Ding, C. (2011). Incorporating our own knowledge in data analysis: Is this the right time? [Book review]. PsycCRITIQUES, 56(36). doi: 10.1037/a0024579