Abstract: Bayesian inference is conditional on the space of models assumed by the analyst. The posterior distribution indicates only which of the available parameter values are less bad than the others, without indicating whether the best available parameter values really fit the data well. A posterior predictive check is important for assessing whether the posterior predictions of the least bad parameters are discrepant from the actual data in systematic ways. Gelman and Shalizi (2012a) assert that the posterior predictive check, whether done qualitatively or quantitatively, is non-Bayesian. I suggest that the qualitative posterior predictive check might be Bayesian, and that the quantitative posterior predictive check should be Bayesian. In particular, I show that the “Bayesian p value,” by which an analyst attempts to reject a model without recourse to an alternative model, is ambiguous and inconclusive. Instead, the posterior predictive check, whether qualitative or quantitative, should be consummated with Bayesian estimation of an expanded model. The conclusion agrees with Gelman and Shalizi (2012a) regarding the importance of the posterior predictive check for breaking out of an initially assumed space of models. Philosophically, the conclusion allows the liberation to be completely Bayesian instead of relying on a non-Bayesian deus ex machina. Practically, the conclusion cautions against use of the Bayesian p value, in favor of direct model expansion and Bayesian evaluation.

Kruschke, J. K. (in press). Posterior predictive check can and should be Bayesian: Comment on Gelman and Shalizi (2012a). British Journal of Mathematical and Statistical Psychology.
Get the full manuscript here.
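To make the quantity under discussion concrete, here is a minimal sketch of a posterior predictive check culminating in a “Bayesian p value.” It is not taken from the manuscript: the normal model with known variance, the heavy-tailed data, and the discrepancy statistic T are all illustrative assumptions chosen so the check can flag a misfit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: heavy-tailed observations that a normal model fits poorly.
y = rng.standard_t(df=3, size=50)
n = len(y)

# Assumed model: Normal(mu, 1) with a flat prior on mu, so the posterior for
# mu is Normal(ybar, 1/sqrt(n)). Draw posterior samples of mu.
n_draws = 10_000
mu_post = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=n_draws)

# Posterior predictive replications: one simulated data set per posterior draw.
y_rep = rng.normal(mu_post[:, None], 1.0, size=(n_draws, n))

# Discrepancy statistic T: largest absolute deviation from the sample mean,
# which is sensitive to the heavy tails the normal model cannot capture.
def T(data):
    return np.max(np.abs(data - data.mean(axis=-1, keepdims=True)), axis=-1)

# "Bayesian p value": Pr( T(y_rep) >= T(y) | y ).
p_value = np.mean(T(y_rep) >= T(y[None, :]))
print(f"posterior predictive p value: {p_value:.3f}")
```

An extreme p value signals a systematic discrepancy between the model's posterior predictions and the data, but, as the abstract argues, it does not by itself say what to conclude; the manuscript's recommendation is to follow the check with Bayesian estimation of an expanded model rather than to treat the p value as a rejection device.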