Comments on Doing Bayesian Data Analysis: "Run MCMC to achieve effective sample size (ESS) of 10,000"

Sean S (2018-03-28, 16:22):
What do we do if the ESS is low (< 1,000), even if we're running 200,000+ numSavedSteps?

John K. Kruschke (2018-03-28, 17:14):
Severe autocorrelation can come from different sources. It's an interplay of model and data; that is, a model can have little autocorrelation for some data but lots of autocorrelation for other data. In these cases you have to "get smart" about what's going on in your model by investigating the posterior distribution: is there a strong trade-off (i.e., correlation) between parameters in the posterior distribution? An example of this is the trade-off between intercept and slope in linear regression. If so, it's possible that a clever re-scaling of the data could solve the problem. Or maybe the autocorrelation is telling you something about the parameterization of your model. An example of this is "ANOVA"-style models, in which the non-sum-to-zero parameters are badly autocorrelated but the sum-to-zero parameters have little autocorrelation; there, you can try reparameterizing the model. Finally, if getting smart is too much of a pain, you can try using Stan. Its HMC method bends around difficult posterior distributions and greatly reduces autocorrelation, but at the cost of longer real time per step. In your case, that might be the way to go!
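
The intercept/slope trade-off mentioned in the answer can be made concrete without running a sampler. The sketch below is my own illustration, not code from the post: assuming a flat prior, the posterior covariance of (intercept, slope) is proportional to inv(XᵀX), so the correlation can be read directly off the design matrix, and centering the predictor drives it to zero.

```python
import numpy as np

# Illustration (an assumption, not from the post): under a flat prior the
# posterior covariance of the regression coefficients is proportional to
# inv(X^T X), so the intercept/slope trade-off depends only on the design.
def coef_correlation(x):
    X = np.column_stack([np.ones_like(x), x])
    cov = np.linalg.inv(X.T @ X)    # proportional to posterior covariance
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

rng = np.random.default_rng(0)
x = rng.uniform(50, 60, size=100)   # predictor values far from zero

corr_raw = coef_correlation(x)                  # near -1: strong trade-off
corr_centered = coef_correlation(x - x.mean())  # essentially zero
```

This is why re-scaling (here, simply centering) the data can remove the autocorrelation: a Gibbs or Metropolis sampler struggles along a near-degenerate posterior ridge, and centering removes the ridge.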
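
For the original question, it can also help to see how ESS is computed from the chain itself. Below is a minimal sketch (my own, not from the book or post) of a standard estimator, ESS = N / (1 + 2·Σρ_k), truncating the sum at the first negative sample autocorrelation; it shows why 200,000 saved steps of a sticky chain can yield only a few hundred effective samples.

```python
import numpy as np

# Sketch of an ESS estimator (an assumption, not code from the post):
# ESS = N / (1 + 2 * sum_k rho_k), truncated at the first negative
# sample autocorrelation.
def effective_sample_size(chain):
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    centered = chain - chain.mean()
    # Autocovariance at lags 0..n-1 via np.correlate.
    acov = np.correlate(centered, centered, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    tau = 1.0                       # integrated autocorrelation time
    for k in range(1, n):
        if rho[k] < 0:              # truncate at first negative lag
            break
        tau += 2.0 * rho[k]
    return n / tau

rng = np.random.default_rng(1)
n = 5000
# Strongly autocorrelated AR(1) chain, mimicking a sticky sampler:
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = 0.95 * ar[t - 1] + rng.normal()
ess_ar = effective_sample_size(ar)                   # far smaller than n
ess_iid = effective_sample_size(rng.normal(size=n))  # close to n
```

Tools like the R coda package compute this for you, but the arithmetic above is the reason a long run can still have a small ESS.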