Comments on Doing Bayesian Data Analysis: "Bayes factors for tests of mean and effect size can be very different" (a post by John K. Kruschke)

Andrei (October 6, 2017):

Dear John,

First, thanks for the great work! I enjoy diving into Bayesian methods, which have become my main occupation over the last few years. Yet I am confused by two things with which I hope you can help me:

- Why is the BF in this post discussed as a ratio between posterior and prior within the ROPE (when performing a test on the mean), and not between the posterior odds and the prior odds (unless, of course, the posterior and the prior of the null hypothesis are assumed to be 1 within the ROPE)? I'm definitely missing something.
- The second thing is the most burning question: how do we choose the ROPE? In your article "Bayesian analysis" (2010) it is described as "arbitrarily small to solve the technical false alarm problem". Is there a rule of thumb for choosing the ROPE for time-series data? Perhaps the standard deviation of the noise?

Thanks in advance!

Peter Killeen (April 12, 2015, 8:13 PM):

Bayes bless the posterior predictive distribution!

Richard Morey (April 12, 2015, 9:39 AM):

I think it says something very deep about Bayesian statistics (and inference in general) that the posterior and the Bayes factor can be sensitive to different things. They're different quantities (the Bayes factor being evidence, and the posterior being a conclusion in light of the evidence), so one wouldn't, in general, expect them to behave in the same way. Since Bayes factors underlie every Bayesian calculation, even of posteriors for parameter estimation, there's no doing without them.

The sensitivity of the evidence to background assumptions has important philosophical implications, and I think it's important that we not portray it as "undesirable."
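Regarding Andrei's first question above: when the "null" is taken to be the ROPE interval itself, the Bayes factor is, strictly, the ratio of posterior odds to prior odds that the parameter lies inside the ROPE. Because a ROPE typically carries only a small sliver of probability mass, the odds are nearly equal to the probabilities, so the simple posterior-to-prior mass ratio is a close approximation. A minimal sketch (Python rather than the thread's R; all numbers are hypothetical stand-ins, and in a real analysis the posterior draws would come from MCMC):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: a vague prior mu ~ Normal(0, 10), and narrow
# "posterior" draws for mu; real posterior draws would come from MCMC.
prior_mu = rng.normal(0.0, 10.0, size=100_000)
post_mu = rng.normal(0.5, 0.15, size=100_000)
rope_lo, rope_hi = -0.1, 0.1

def mass_in_rope(samples):
    """Fraction of draws landing inside the ROPE."""
    return float(np.mean((samples > rope_lo) & (samples < rope_hi)))

p_prior = mass_in_rope(prior_mu)
p_post = mass_in_rope(post_mu)

# Exact form: posterior odds of the ROPE divided by prior odds of the ROPE.
bf_odds = (p_post / (1 - p_post)) / (p_prior / (1 - p_prior))

# Approximation: plain ratio of masses, accurate when both masses are small.
bf_mass = p_post / p_prior

print(bf_odds, bf_mass)  # nearly identical; both < 1 here
```

When `p_post` and `p_prior` are both small, the two versions agree to within a fraction of a percent, which is presumably why the post speaks simply of the posterior-to-prior ratio within the ROPE.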
John K. Kruschke (April 12, 2015, 8:55 AM):

Richard: Thanks for your comment.

A point I left tacit, but which was on my mind when I wrote the post, is that in frequentist tests of the mean or the effect size, the result is the same either way: the p value for a test of the mean is the same as the p value for a test of the effect size. I think that most people (because of their frequentist training) think of the effect size as redundant with the mean, merely re-scaled. So it can be surprising that the BFs for the mean and the effect size can be different.

But even within the Bayesian framework, the marginal posterior distributions on the mean and the effect size tend to be quite commensurate, leading to much the same conclusions (as long as the ROPEs are consistent). So, again, it can be surprising that the BFs for the mean and the effect size can be different.

Richard Morey (April 12, 2015, 5:07 AM):

What's the *argument* here? "First, the BF for the mean (mu) need not lead to the same conclusion as the BF for the effect size unless the prior is set up just right. Second, the posterior distribution on mu and effect size is barely affected at all by big changes in the vagueness of the prior, unlike the BF."

I'm missing some outline of a theory that says why this is "bad" or "wrong" (at least given the completely inappropriate priors you've used here), which seems to be the implication.
But that can't be your argument, because that would invalidate Bayes' theorem and would thus be self-contradictory.

John K. Kruschke (April 9, 2015, 3:44 PM):

# Here is the R script I used to generate the graphs in the blog post:

# Generate the data:
set.seed(47405)
y = rnorm(43)
y = (y-mean(y))/sd(y)
y = y + 0.5
myData = y

# Specify filename root and graphical format for saving output.
fileNameRoot = "BFmeanBFeffSz-"
graphFileType = "png"

# Load the relevant model into R's working memory:
source("Jags-Ymet-Xnom1grp-Mnormal.R")

# Generate the MCMC chain:
mcmcCoda = genMCMC( data=myData , numSavedSteps=20000 , saveName=fileNameRoot )

# Display posterior information:
plotMCMC( mcmcCoda , data=myData ,
          compValMu=0.0 ,
          ropeMu=c(-0.1,0.1) ,
          ropeEff=c(-0.1,0.1) ,
          pairsPlot=TRUE , showCurve=FALSE ,
          saveName=fileNameRoot , saveType=graphFileType )

# Plot of effect size only:
mcmcMat = as.matrix( mcmcCoda )
effSz = ( mcmcMat[,"mu"] - 0 ) / mcmcMat[,"sigma"]
openGraph(width=3.0,height=2.5)
par( mar=c(3.5,0.5,2.5,0.5) , mgp=c(2.25,0.7,0) )
plotPost( effSz , xlim=c(-2.0,2.0) , ROPE=c(-0.1,0.1) ,
          main="Effect Size" , xlab=expression((mu-100)/sigma) )

# More details, not displayed in blog post:

# Display diagnostics of chain, for specified parameters:
parameterNames = varnames(mcmcCoda) # get all parameter names
for ( parName in parameterNames ) {
  diagMCMC( codaObject=mcmcCoda , parName=parName ,
            saveName=fileNameRoot , saveType=graphFileType )
}

# Get summary statistics of chain:
summaryInfo = smryMCMC( mcmcCoda ,
                        compValMu=0.0 ,
                        ropeMu=c(-0.1,0.1) ,
                        ropeEff=c(-0.1,0.1) ,
                        saveName=fileNameRoot )
show(summaryInfo)
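A footnote to the script above: the effect size is computed per MCMC draw from the joint (mu, sigma) samples, as in the script's `effSz = ( mcmcMat[,"mu"] - 0 ) / mcmcMat[,"sigma"]` line. The sketch below (Python, with made-up stand-in draws rather than real JAGS output) illustrates the pattern Kruschke describes in his reply: when sigma is well constrained, the posterior on the effect size tracks the posterior on mu closely, so the two ROPE probabilities come out similar even though the BFs need not.

```python
import numpy as np

rng = np.random.default_rng(47405)

# Made-up stand-ins for the joint posterior draws that genMCMC would return;
# mu centered near 0.5 and sigma near 1, roughly matching the simulated data.
mu = rng.normal(0.5, 0.15, size=20_000)
sigma = np.abs(rng.normal(1.0, 0.11, size=20_000))

# Effect size per draw, mirroring the script's effSz computation.
eff_sz = (mu - 0.0) / sigma

def p_in_rope(samples, lo=-0.1, hi=0.1):
    """Posterior probability that the quantity lies inside the ROPE."""
    return float(np.mean((samples > lo) & (samples < hi)))

print(p_in_rope(mu), p_in_rope(eff_sz))  # both small and of similar size
```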