Comments on Doing Bayesian Data Analysis: "Interpreting Bayesian posterior distribution of a parameter: Is density meaningful?" (John K. Kruschke)

John K. Kruschke (2017-02-22 08:51):
Yeah, the claim that researchers intuitively want to know posterior probability density is an intuitive claim. Much like saying that researchers intuitively want to know the posterior probability of parameters, not p values.

The intuitive claim does not mean that researchers cannot or should not want to know other information, such as quantile (percentile) intervals and p values. The intuitive claim is merely that it's natural for researchers to want to know the posterior probability density on parameters with meaningful scales.

Joachim Vandekerckhove (2017-02-22 02:43):
So when you write, "Density answers what the researcher wants to know: (...) what is the range of the credible (i.e., high density) values?" do you similarly mean that as shorthand for, "Density answers what the researcher wants to know: (...) what is the HDI?" Because that feels circular to me: the claim that researchers are interested in densities seems to rest on itself.
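The HDI/quantile distinction running through this exchange can be made concrete from MCMC samples. A minimal sketch (function names are illustrative, not from the post): the 95% HDI is the narrowest interval containing 95% of the samples, while the equal-tailed quantile interval trims 2.5% from each tail.

```python
import random

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the samples (highest-density interval)."""
    s = sorted(samples)
    k = int(round(len(s) * mass))                        # samples the interval must contain
    i = min(range(len(s) - k + 1),
            key=lambda j: s[j + k - 1] - s[j])           # narrowest window of k samples wins
    return s[i], s[i + k - 1]

def quantile_interval(samples, mass=0.95):
    """Equal-tailed interval: trim (1 - mass)/2 of the samples from each tail."""
    s = sorted(samples)
    lo = s[int(len(s) * (1 - mass) / 2)]
    hi = s[int(len(s) * (1 + mass) / 2) - 1]
    return lo, hi

random.seed(1)
# A right-skewed "posterior": for symmetric posteriors the two intervals agree,
# but for skewed ones the HDI is shorter and shifted toward the high-density side.
samples = [random.expovariate(1.0) for _ in range(20000)]
h, q = hdi(samples), quantile_interval(samples)
print("HDI:      (%.3f, %.3f)" % h)   # hugs zero, where the density is highest
print("quantile: (%.3f, %.3f)" % q)   # excludes 2.5% of samples near zero despite high density
```

By construction the HDI is never wider than the equal-tailed interval, and the two coincide when the posterior is symmetric, which is the situation matus and Rasmus describe below.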
John K. Kruschke (2017-02-21 14:01):
To say "the 95% most credible values" is synonymous with saying "values in the 95% HDI". It's just a simple re-phrasing, sort of like saying "sampling distribution" instead of "the distribution of sample statistics".

Joachim Vandekerckhove (2017-02-21 12:25):
Thank you for this post. It provides a venue for a tortured question I've had for a while.

"Based on the data, what is the range of the 95% most credible values of \(\delta_i\)?"

This doesn't mean anything to me, and I don't know how to parse it. There is no "95% of values" of a continuous variable, and so they can't have a range.

matus (simkovic.github.com) (2017-02-21 09:15):
I think Rasmus touched on what I consider a correct answer. The model parameters should be on the real line, such that the estimate's distribution is approximately normal. This makes sense from the perspective of MCMC samplers, which perform best with normal-linear parameter distributions. From the point of view of probability theory, the idea could be backed up with Jaynes' maximum-entropy idea: a model with normal parameter estimates would be most informative with respect to a fixed prior (this is just my intuition; it would be nice to see a proof).
Thus, if I find a skewed, non-normal posterior, the first thing for me is to try to improve the model (even if the convergence indicators are green) such that the parameter estimate's distribution is approximately normal. Then it should not matter whether one reports the HDI or quantiles; the two should coincide, and you get the inferential benefits of both.

I think decision analysis is a great tool and can be useful for answering the research question that motivated the study. However, I think the researcher should always report posterior parameter estimates, because the results may be of interest to a researcher with a different research question, which may require a decision analysis with a different cost function. In Bayesian decision analysis, the re-analysis requires knowledge of the entire posterior distribution, which is difficult to report. However, if we assume that the posterior is approximately normal (see my first point), it's straightforward to recreate the posterior from the HDI or quantiles. IMO, the summaries are just proxies for the posterior model, which can then be used for prediction and inference in general.

Rasmus Bååth (2017-02-20 07:14):
Well, from a decision-theory perspective, just looking at the point estimate, the question is how bad it is to be off by a certain amount, right? If you consider it twice as bad to be off by 2.0 as to be off by 1.0, then you have a linear cost function, and the summary of the posterior that you should use is the posterior median.

From a decision-theoretic perspective, the mode is a little bit tricky, as it's the solution to the case when you have 0-1 loss: if you're correct then there is no loss, but if you're wrong (even just a little) then you get all the loss.
This is usually not the situation you are in, as it's usually worse to be off by a lot than to be off by a little.

So, no good answer :) But one thing I've noticed is that if you transform the scale of your parameter so that it spans (-∞, ∞), for example logit-transforming a rate parameter, then the HDI and quantile intervals are often pretty similar...
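Rasmus's linear-loss point is easy to check numerically: under absolute (linear) loss, the posterior median has the smallest expected loss of any point estimate. A quick sketch with a skewed stand-in posterior (names and distribution are illustrative, not from the thread):

```python
import random
import statistics

def mean_abs_loss(estimate, samples):
    """Expected absolute loss of reporting `estimate`, averaged over posterior samples."""
    return sum(abs(estimate - s) for s in samples) / len(samples)

random.seed(2)
# Skewed "posterior" samples, so that the mean, median, and mode all differ.
samples = [random.lognormvariate(0.0, 1.0) for _ in range(20000)]

med = statistics.median(samples)
mean = statistics.mean(samples)
# The median minimizes mean absolute loss; the mean minimizes squared loss instead.
print(mean_abs_loss(med, samples) < mean_abs_loss(mean, samples))  # True
```

Evaluating `mean_abs_loss` at any other candidate point, not just the mean, gives a value at least as large as at the median, since the empirical absolute-loss curve is convex with its minimum at the median.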