In Bayesian parameter estimation, is it legitimate to view the data and then decide on a distribution for the dependent variable? I have heard that this is not “fully Bayesian”.

The shortest questions often probe some of the most difficult issues; this is one of those questions.
Let me try to fill in some details of what this questioner may have in mind. First, some examples:
- Suppose we have some response-time data. Is it okay to look at the response-time data, notice they are very skewed, and therefore model them with, say, a Weibull distribution? Or must we stick with a normal distribution because that was the mindless default distribution we might have used before looking at the data? Or, having noticed the skew in the data and decided to use a skewed model, must we now obtain a completely new data set for the analysis? (A code sketch just after these examples illustrates this case.)
- As another example, is it okay to look at a scatter plot of population-by-time data, notice they have a non-linear trend, and therefore model them with, say, an exponential growth trend? Or must we stick with a linear trend because that was the mindless default trend we might have used before looking at the data? Or, having noticed the non-linear trend in the data and decided to use a non-linear model, must we now obtain a completely new data set for the analysis?
- As a third example, suppose I’m looking at data about the positions of planets and asteroids against the background of stars. I’m trying to fit a Ptolemaic model with lots of epicycles. After considering the data for a long time, I realize that a completely different model, involving elliptical orbits with the sun at a focus, would describe the data nicely. Must I stick with the Ptolemaic model because it’s what I had in mind at first? Or, having noticed the Keplerian trend in the data, must I now obtain a completely new data set for the analysis?
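To make the first example concrete, here is a minimal sketch (in Python, with synthetic data standing in for real response times) of comparing a normal and a Weibull description of skewed data. The sample size, parameters, and the maximum-likelihood comparison are all illustrative assumptions, not a prescription:

```python
# Minimal sketch of the first example: does a normal or a Weibull
# distribution describe skewed response-time-like data better?
# The data are synthetic (an assumption for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt = stats.weibull_min.rvs(c=1.5, scale=0.5, size=200, random_state=rng) + 0.2

# Fit each candidate model by maximum likelihood.
mu, sigma = stats.norm.fit(rt)
c, loc, scale = stats.weibull_min.fit(rt)

# Compare total log-likelihoods; a penalized or fully Bayesian
# comparison would be the more complete treatment.
ll_norm = stats.norm.logpdf(rt, mu, sigma).sum()
ll_weib = stats.weibull_min.logpdf(rt, c, loc, scale).sum()
print(f"sample skewness   = {stats.skew(rt):.2f}")
print(f"log-lik (normal)  = {ll_norm:.1f}")
print(f"log-lik (Weibull) = {ll_weib:.1f}")
```

On data like these, the Weibull typically wins the comparison, and that is exactly the kind of after-the-fact observation the emailer is asking about.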
One worry might be that selecting a model after considering the data is HARKing (hypothesizing after the results are known; Kerr 1998, http://psr.sagepub.com/content/2/3/196.short). Kerr even discusses Bayesian treatments of HARKing (pp. 206-207), but this is not a uniquely Bayesian problem. In that famous article, Kerr discusses why HARKing may be inadvisable. In particular, HARKing can transform Type I errors (false alarms) into seemingly confirmed hypotheses that are then treated as fact. With respect to the three examples above, the skew in the RT data might be a random fluke, the non-linear trend in the population data might be a random fluke, and the better fit by the solar-centric ellipse might be a random fluke. Well, the only cure for Type I errors (false alarms) is replication (as Kerr mentions). Pre-registered replication. There are lots of disincentives to replication attempts, but these disincentives are gradually being mitigated by recent innovations like registered replication reports (https://www.psychologicalscience.org/publications/replication). In the three examples above, there are many known replications of skewed RT distributions, exponential growth curves, and elliptical orbits.
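To see how easily such a fluke can arise, here is a small simulation (entirely my own construction; the sample size and the 0.5 skewness threshold are arbitrary) showing that even truly normal data can look noticeably skewed in modest samples:

```python
# How often does a sample from a truly normal distribution look
# "clearly skewed"? Sample size and threshold are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 50, 10_000
samples = rng.standard_normal((reps, n))
skews = stats.skew(samples, axis=1)

print(f"P(|sample skew| > 0.5) ≈ {np.mean(np.abs(skews) > 0.5):.3f}")
```

A non-negligible fraction of truly normal samples cross the threshold, which is why one skewed-looking data set does not settle the matter and replication does.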
Another worry might be that the analyst had a tacit but strong theoretical commitment to a particular model before collecting the data, and then reneged on that commitment by sneaking a peek at the data. With respect to the three examples above, it may have been the case that the researcher had a strong theoretical commitment to normally distributed data but, having noticed the skew, failed to mention that theoretical commitment and used a skewed distribution instead. And analogously for the other examples. But I think this mischaracterizes the usual situation of generic data analysis. The usual situation is that the analyst has no strong commitment to a particular model and is trying to get a reasonable summary description of the data. To be “fully Bayesian” in this situation, the analyst should set up a vast model space that includes all sorts of plausible descriptive models, including many different noise distributions and many different trends, because this would more accurately capture the prior uncertainty of the analyst than a prior with only a single default model. But doing that would be very difficult in practice, as the space of possible models is infinite. Instead, we start with some small model space and refine the model as needed.
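For a toy version of what being Bayesian over a (small) model space can look like, here is a sketch that puts equal prior probability on two candidate noise distributions and computes approximate posterior model probabilities. The BIC approximation to the marginal likelihood is my illustrative choice, not the only or best option:

```python
# Posterior probabilities over a tiny model space {normal, Weibull},
# using the BIC approximation to each marginal likelihood and equal
# prior model probabilities (both illustrative assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = stats.weibull_min.rvs(c=1.5, scale=0.5, size=200, random_state=rng)
n = y.size

def bic(loglik, k):
    return k * np.log(n) - 2.0 * loglik

mu, sd = stats.norm.fit(y)
c, loc, scale = stats.weibull_min.fit(y)
bics = np.array([
    bic(stats.norm.logpdf(y, mu, sd).sum(), 2),
    bic(stats.weibull_min.logpdf(y, c, loc, scale).sum(), 3),
])

# exp(-BIC/2) approximates the marginal likelihood up to a constant.
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
print(f"P(normal | data)  ≈ {w[0]:.3f}")
print(f"P(Weibull | data) ≈ {w[1]:.3f}")
```

Enlarging the model space just means adding rows to that comparison; the point above is that in practice the space gets trimmed to a manageable handful of candidates.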
Bayesian analysis is always conditional on the assumed model space. Often the assumed model space is merely a convenient default. The default is convenient because it is familiar to both the analyst and the audience of the analysis, but the default need not be a theoretical commitment. There are also different goals for data analysis: describing the one set of data in hand, and generalizing to the population from which the data were sampled. Various methods for penalizing overfitting of noise are aimed at finding a statistical compromise between describing the data in hand and generalizing to other data. Ultimately, I think the tension is only resolved by (pre-registered) replication.
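As one concrete instance of such a compromise, here is a sketch that uses a held-out split to ask whether a linear or an exponential trend (as in the second example above) generalizes beyond the data used for fitting. The synthetic growth data and the 70/30 split are assumptions for illustration; cross-validation, WAIC, and similar penalized criteria serve the same purpose:

```python
# Compare a linear and an exponential trend by held-out squared error,
# so that fit to the data in hand is checked against "new" data.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 60)
y = 2.0 * np.exp(0.25 * t) * rng.lognormal(0.0, 0.1, t.size)

train = rng.random(t.size) < 0.7  # random 70/30 split
test = ~train

b_lin = np.polyfit(t[train], y[train], 1)          # linear in y
b_exp = np.polyfit(t[train], np.log(y[train]), 1)  # linear in log y

err_lin = np.mean((np.polyval(b_lin, t[test]) - y[test]) ** 2)
err_exp = np.mean((np.exp(np.polyval(b_exp, t[test])) - y[test]) ** 2)
print(f"held-out MSE, linear trend:      {err_lin:.2f}")
print(f"held-out MSE, exponential trend: {err_exp:.2f}")
```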
This is a big issue over which much ink has been spilled, and the above remarks are only a few off-the-cuff thoughts. What do you think is a good answer to the emailer's question?