Comments on Doing Bayesian Data Analysis: "Potpourri of recent inquiries about doing Bayesian data analysis"

John K. Kruschke (April 29, 2014):

Jan:

Thanks for the link to Press' videos! I hadn't seen them before.

Two thoughts about that particular video:

1. In the brief chapter in DBDA about contingency tables, I assume that neither margin has a fixed size. Observations simply fall at random into any cell of the table.

2. More importantly, readers should note that Press is NOT doing what I would consider Bayesian estimation. He is, instead, computing p values, albeit p values that incorporate uncertainty about the marginals.

Jan Galkowski (April 29, 2014):

On the point of contingency tables, Professor William Press has an installment devoted to Bayesian analysis of contingency tables at http://granite.ices.utexas.edu/coursewiki/index.php/Segment_36._Contingency_Tables_Have_Nuisance_Parameters, part of his illuminating and entertaining "Opinionated Lessons on Statistics" series. Note that Professor Press is also the co-author of Numerical Recipes. It is also interesting to listen to his earlier presentations on contingency tables, especially where he says something like "So many people get these wrong" while impugning the use of the Fisher Exact Test.
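Kruschke's first point above, that neither margin is fixed, can be sketched concretely: with free margins, the four cells of a 2x2 table are jointly multinomial, and a Dirichlet prior gives a Dirichlet posterior over the cell probabilities, from which the posterior of the odds ratio follows directly. The counts below are hypothetical, and this is an illustrative sketch, not code from the book or from Press's lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 table of counts; neither margin is fixed,
# so all four cells are modeled jointly as one multinomial.
counts = np.array([[12, 5],
                   [7, 16]])

# A Dirichlet(1,1,1,1) prior with a multinomial likelihood gives
# a Dirichlet posterior over the four cell probabilities.
post = rng.dirichlet(counts.flatten() + 1, size=20000)
p11, p12, p21, p22 = post.T

# Posterior distribution of the log odds ratio, which measures
# association between the row and column variables.
log_or = np.log(p11 * p22 / (p12 * p21))

print("posterior mean log OR:", log_or.mean())
print("P(log OR > 0):", (log_or > 0).mean())
```

Unlike a Fisher Exact Test, which conditions on the margins, this treats every cell probability as uncertain, and the full posterior of the odds ratio is available rather than a single p value.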
Indeed, there's a "Bayesian inference supersedes the Fisher Exact Test" flavor to all of it.

This does not specifically help you set things up in JAGS, but I expect you can listen to Professor Press and know where to go.

Matus (March 23, 2014):

> the concept of effect size definitely does apply to Bayesian analysis.

I guess it depends on how you define effect size. Though you are probably right that most people would call standardized estimates effect sizes.

Anyway, thanks for the reply,

Matus

John K. Kruschke (March 23, 2014):

Commenter> [Blog post said:] "Don't forget that even though multiple decimal points are there, they are not necessarily meaningful." Do you have some strategy for finding out the variability in parameters due to the sampler and sample size (not due to estimation error)? Then we could compute how many (effective) samples we need to obtain reliable estimates of the value at a particular decimal place. I don't have a solution for this; I always end up repeating the MCMC simulation just to see whether the parameters vary at a particular decimal place, and if they do, I increase the chain length.

Reply: The variability of an MCMC-estimated value depends on the density of the distribution at that point. Thus, the MCMC estimate of the mean is relatively robust, assuming that the mean lies in a relatively dense part of the distribution. The MCMC estimate of the 95% HDI limits, on the other hand, is relatively noisy, because the distribution is usually sparse at the limits.
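That difference in Monte Carlo stability between the mean and the 95% limits is easy to demonstrate. The sketch below uses independent draws from a standard normal as a stand-in for an MCMC chain with that effective sample size, and equal-tailed 2.5%/97.5% quantiles as a simple stand-in for the HDI limits; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Repeatedly "estimate" from chains of 1,000 effectively
# independent draws of a standard-normal posterior, recording
# the sample mean and the 2.5%/97.5% quantiles (an equal-tailed
# interval used here as a simple stand-in for the 95% HDI).
reps = 500
draws = rng.standard_normal((reps, 1000))
means = draws.mean(axis=1)
lower = np.quantile(draws, 0.025, axis=1)
upper = np.quantile(draws, 0.975, axis=1)

# The interval limits sit in the sparse tails, so their
# Monte Carlo variability exceeds that of the mean.
print("SD of mean estimate:", means.std())
print("SD of lower limit:  ", lower.std())
print("SD of upper limit:  ", upper.std())
```

Across repetitions, the interval limits come out noticeably noisier than the mean, which is why stable HDI limits demand a much larger effective sample size.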
My rule of thumb for usefully stable estimates of the 95% HDI limits is an effective sample size (ESS) of 10,000. See, e.g., this blog post: http://doingbayesiandataanalysis.blogspot.com/2011/07/how-long-should-mcmc-chain-be-to-get.html

Commenter> The concept of effect size does not really apply to Bayesian analysis. I would say that every quantity that is not a Bayes factor or a related probability of a model in model comparison is a kind of effect-size measure. It is more conducive to ask what quantity is most informative with respect to our research question. In a log-normal model, y ~ lognorm(b0 + b1*indicator), I would report exp(b1) and interpret it as a multiplicative factor. That is, if b1 = -0.1 then exp(b1) is about 0.9, and I know that my treatment decreases the outcome variable by about 10 percent.

Reply: Perhaps I don't understand what you mean, but the concept of effect size definitely does apply to Bayesian analysis. Again, see for example http://www.indiana.edu/~kruschke/BEST/

Commenter> By the way, I think you create confusion by calling regression models with categorical predictors "ANOVA" in your book. People will look for variance and variance-derived quantities, when in fact your main message is (or is it not?) that the regression coefficients are of primary interest and should be reported.

Reply: Sure, the models are "ANOVA" only by analogy, and no analysis of variance, in the least-squares sense, is being done. I discuss this a little at the top of p. 492. I agree, however, that I could have been clearer about it.
On the other hand, a lot of people think of "ANOVA" as the case of the generalized linear model that has nominal predictors with a metric predicted variable, not specifically as a least-squares decomposition of variance.

Matus (March 23, 2014):

Some important topics got mentioned.
> Don't forget that even though multiple decimal points are there, they are not necessarily meaningful.

Do you have some strategy for finding out the variability in parameters due to the sampler and sample size (not due to estimation error)? Then we could compute how many (effective) samples we need to obtain reliable estimates of the value at a particular decimal place. I don't have a solution for this; I always end up repeating the MCMC simulation just to see whether the parameters vary at a particular decimal place, and if they do, I increase the chain length.

> @ effect size

The concept of effect size does not really apply to Bayesian analysis. I would say that every quantity that is not a Bayes factor or a related probability of a model in model comparison is a kind of effect-size measure. It is more conducive to ask what quantity is most informative with respect to our research question. In a log-normal model, y ~ lognorm(b0 + b1*indicator), I would report exp(b1) and interpret it as a multiplicative factor. That is, if b1 = -0.1 then exp(b1) is about 0.9, and I know that my treatment decreases the outcome variable by about 10 percent.

By the way, I think you create confusion by calling regression models with categorical predictors "ANOVA" in your book. People will look for variance and variance-derived quantities, when in fact your main message is (or is it not?) that the regression coefficients are of primary interest and should be reported.

best,
matus

Patrick Coulombe (March 22, 2014):

I enjoy your blog so much. Like you, I am in psychology, and opportunities to use Bayesian statistics are very scarce (just by convention/tradition, of course).
This post was greatly interesting, because it showed some of the real-life applications (and problems) others are grappling with.

This is the most Bayesian statistics I'll be exposed to for a while, I guess... (maybe until your next post :))

Patrick