Sunday, March 24, 2013

How to modify a JAGS+rjags (or BUGS) program to include new parameters

A central goal of providing lots of example programs that do Bayesian parameter estimation in JAGS (BUGS) and rjags is for you to be able to expand and modify the programs for your own needs. The book includes several cases in which programs are incrementally modified, but this blog post provides an explicit example of the systematic changes that are needed. We start with a program for linear regression and expand it to include a parameter for quadratic trend. All the specific changes needed for the inclusion of the new parameter are explicitly pointed out, in every section of the program. This provides a guide for changing other programs to your own needs.

To motivate the present example, here (below) are plots of the data, fit by linear regression (the original program), and by regression that includes a quadratic component (the expanded program).
Notice above that there appears to be a systematic discrepancy between the credible linear regression lines and the data: the data seem to have a curvature that is not captured by the linear trend. If we include a quadratic component in the model, the credible trend lines look like this:
(Sorry that the title of the graph still says "lines" instead of "curves".)

My purpose for this blog post is to show you how to expand the program that created the first result above into the program that created the second result. I will do this simply by providing the two programs and marking the changed lines in the expanded program with the comment # CHANGED. The two programs can be found at the usual program repository: the original is here and the modified is here.

Although the programs are moderately long, primarily because of all the graphics commands that come at the end, the number of changes is small. Below, I highlight the changes, section by section.

The crucial conceptual change comes in the model statement, which must be expanded to include the quadratic component of the trend. The lines marked # CHANGED were added to the original model specification:
model {
    for( i in 1 : Ndata ) {
        y[i] ~ dnorm( mu[i] , tau )
        mu[i] <- beta0 + beta1 * x[i] + beta2 * pow(x[i],2) # CHANGED
    }
    beta0 ~ dnorm( 0 , 1.0E-12 )
    beta1 ~ dnorm( 0 , 1.0E-12 )
    beta2 ~ dnorm( 0 , 1.0E-12 ) # CHANGED
    tau ~ dgamma( 0.001 , 0.001 )
}
Notice that the trend, specified by mu[i], was expanded to include a new parameter; namely, the beta2 coefficient on the squared data value. Importantly, with the introduction of a new parameter, that parameter must be provided with a prior, as shown in the second # CHANGED line above.

All that remains is to make sure that the rest of the program takes into account the new parameter and model form.

The data section does not need to be changed, because the data are not changed.
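For reference, the data section simply bundles the observed values into a list whose names match those used in the model. A minimal sketch, assuming the vectors x and y are already defined in the R session:

Ndata = length(y)        # number of data points, used in the model's for loop
dataList = list(
  x = x ,
  y = y ,
  Ndata = Ndata
)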

The chain-initialization section, if there is one, must be changed to accommodate the new parameter:
initsList = list(
  beta0 = b0Init ,
  beta1 = bInit ,
  beta2 = 0 , # CHANGED
  tau = tauInit
)
I chose to be lazy and initialize the new parameter at zero, rather than doing it more intelligently.
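If you wanted to be less lazy, one option is to take the starting value for beta2 from an ordinary least-squares quadratic fit. This is a sketch of the idea, not the code in the repository; it assumes x and y are the data vectors:

lmInfo = lm( y ~ x + I(x^2) )     # least-squares fit of a quadratic trend
b2Init = lmInfo$coefficients[3]   # coefficient on x^2, as a start for beta2
initsList = list(
  beta0 = b0Init ,
  beta1 = bInit ,
  beta2 = b2Init ,  # start at the least-squares estimate instead of zero
  tau = tauInit
)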

In the run-the-chains section, we need to tell JAGS to keep track of the new parameter:
parameters = c("beta0" , "beta1" , "beta2" , "tau")  # CHANGED
Notice the only change in that whole section was specifying the new parameter to be tracked.
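For orientation, here is a minimal sketch of the rest of the run-the-chains section in rjags. The objects modelString, dataList, and initsList come from the earlier sections of the program, and the chain and iteration counts shown here are illustrative assumptions, not necessarily the repository's exact values:

library(rjags)
writeLines( modelString , con="TEMPmodel.txt" )    # save the model for JAGS
jagsModel = jags.model( "TEMPmodel.txt" , data=dataList , inits=initsList ,
                        n.chains=3 , n.adapt=500 ) # create and adapt model
update( jagsModel , n.iter=1000 )                  # burn-in
codaSamples = coda.samples( jagsModel , variable.names=parameters ,
                            n.iter=3334 )          # track the parameters
mcmcChain = as.matrix( codaSamples )               # used in the plots below
chainLength = nrow( mcmcChain )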

Hopefully you'll find that all the changes mentioned above are pretty straightforward! In general, when you're modifying a JAGS+rjags program, these are the main steps to keep in mind:
  • Carefully specify the model with its (new) parameters.
  • Be sure all the (new) parameters have sensible priors.
  • If you are defining your own initial values for the chains, be sure you've defined initial values for all the (new) parameters.
  • Tell JAGS to track the (new) parameters.
After that, the most effortful part is graphically displaying the results. Here is the change I made for the graphs presented at the beginning of this blog entry:
# Display data with believable regression curves and posterior predictions.
# Plot data values:
xRang = max(x)-min(x)
yRang = max(y)-min(y)
limMult = 0.25
xLim= c( min(x)-limMult*xRang , max(x)+limMult*xRang )
yLim= c( min(y)-limMult*yRang , max(y)+limMult*yRang )
plot( x , y , cex=1.5 , lwd=2 , col="black" , xlim=xLim , ylim=yLim ,
      xlab="X" , ylab="Y" , cex.lab=1.5 ,
      main="Data with credible regression lines" , cex.main=1.33  )
# Superimpose a smattering of believable regression lines:
xComb = seq(xLim[1],xLim[2],length=201)
for ( i in round(seq(from=1,to=chainLength,length=50)) ) {
  lines( xComb , 
         mcmcChain[i,"beta0"] + mcmcChain[i,"beta1"]*xComb 
         + mcmcChain[i,"beta2"]*xComb^2 , # CHANGED 
         col="skyblue" )
}
The set-up of the graph was unchanged; all I did was modify the blue curves that get plotted, so that they correspond to the model that was specified for JAGS. This is perhaps the most dangerous part of modifying JAGS programs: It is easy to modify the model specification for JAGS, but inadvertently not make the identical corresponding change to what gets plotted later! It would be nice if the model were specified only once, in a form that both JAGS and R simultaneously understand, but presently that is not how it's done in the JAGS/BUGS world.
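One small safeguard on the R side is to define the trend just once as an R function and call that function wherever curves get plotted; the JAGS model string still has to be edited by hand to match, but at least the R-side plotting cannot drift out of sync with itself. A sketch:

# Define the trend once, in the same form as mu[i] in the model:
trend = function( x , b0 , b1 , b2 ) { b0 + b1*x + b2*x^2 }
# Use it for all plotted curves:
for ( i in round(seq(from=1,to=chainLength,length=50)) ) {
  lines( xComb ,
         trend( xComb , mcmcChain[i,"beta0"] , mcmcChain[i,"beta1"] ,
                mcmcChain[i,"beta2"] ) ,
         col="skyblue" )
}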

Oh -- and what about interpreting the results in this case? Is there a credible non-zero quadratic trend, and how big is it? Answer: Just look at the posterior distribution on beta2. And, you might ask, how do we examine the results of the linear regression and decide to expand the model? That process is called a posterior predictive check, and my advice about it is provided in this article (n.b., your click on that link constitutes your request to me for a personal copy of the article, and my provision of a personal copy only).
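For example, a quick numerical summary of beta2's posterior could be computed like this. This is just a sketch using simple quantiles; the book's plotPost function, showing a 95% HDI, would be the nicer display:

# Posterior median and central 95% interval for the quadratic coefficient:
quantile( mcmcChain[,"beta2"] , probs=c(0.025,0.500,0.975) )
# Proportion of the posterior above zero:
mean( mcmcChain[,"beta2"] > 0 )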

Monday, March 4, 2013

Shrinkage in bimodal hierarchical models: Toward the modes, not the middle

In hierarchical models, the estimated values of intermediate-level parameters exhibit "shrinkage" because the higher-level distribution affects the intermediate-level parameter estimates. In typical applications, the form of the higher-level distribution is also estimated. And, in most typical applications, the higher-level distribution is unimodal, producing shrinkage of the intermediate-level parameter estimates toward the middle of the higher-level distribution. Because this type of shrinkage is so prevalent, it is easy to think that shrinkage is always inward, toward the middle. But it does not have to be. This post shows a simple case of bimodal data producing a bimodal higher-level distribution, which causes shrinkage to be outward, toward the two modes. In other words, this post is a reminder that shrinkage is not necessarily toward the middle; shrinkage is toward the modes.

Consider a simple case of estimating the biases of several coins, and simultaneously estimating the distribution of biases across the coins. This is like estimating individual subject parameters (the coins) and group-level summary parameters (for the distribution across coins). For the jth coin, we observe a particular number of heads, H_j, out of its total number of flips, N_j. Denote the estimated bias of the coin as θ_j. Then the likelihood function is the usual product of Bernoulli likelihoods:
p(H_j | θ_j, N_j) = θ_j^(H_j) (1-θ_j)^(N_j-H_j)
The distribution of the θ_j is here described by a beta distribution with shape parameters a and b:
p({θ_j} | a, b) = Π_j dbeta(θ_j | a, b)
The overall likelihood of the parameters, for the particular data, is computed as the product of the equations above. Our goal is to find the parameter values, for {θ_j} and a and b, that maximize the likelihood.

In the two examples shown below there are 6 coins, each flipped 30 times. The proportion of flips that are heads is shown by the placement of the black dots in the figures below. For both examples, there are three coins that have fewer than 50% heads and three coins that have more than 50% heads. All that differs between the two examples is how extreme the separation is between the two clusters of coins.
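Before looking at the figures, here is a minimal sketch of the maximum-likelihood search described above. The head counts in z are hypothetical stand-ins (the post's exact data values are only shown in the figures), and the optim settings are illustrative:

# Hypothetical data: 6 coins, 30 flips each, in two clusters of proportions.
z = c( 10 , 11 , 12 , 18 , 19 , 20 )   # heads per coin (hypothetical)
N = rep( 30 , 6 )                      # flips per coin
# Negative log of: prod_j theta_j^H_j (1-theta_j)^(N_j-H_j) dbeta(theta_j|a,b)
negLogLik = function( par ) {
  theta = par[1:6] ; a = par[7] ; b = par[8]
  if ( any(theta<=0) || any(theta>=1) || a<=0 || b<=0 ) return( Inf )
  -( sum( z*log(theta) + (N-z)*log(1-theta) )
     + sum( dbeta( theta , a , b , log=TRUE ) ) )
}
# Start at the observed proportions, with a uniform (a=1, b=1) distribution:
mle = optim( c( z/N , 1 , 1 ) , negLogLik , control=list(maxit=10000) )
mle$par[1:6]   # estimated biases; compare to the raw proportions z/N
mle$par[7:8]   # estimated shape parameters a and b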

In the first example, the two clusters of data are not separated by much. In the figure below, the black dots show the data. This first figure shows the likelihood if we choose values for θ_j that exactly match the observed proportion of heads in each coin, as shown by the blue circles, and we set a=1 and b=1 for the over-arching beta distribution:

We can find parameter values that produce a higher likelihood, however. In fact, the maximum likelihood estimates of the parameters are shown here:

Importantly, notice in the figure above that the blue circles, which represent the best estimates of the biases in the coins, are shrunken inward, toward the middle of the data.

But now consider what happens when the data are more extremely bimodal, as shown below. First, again, we consider choosing parameter values that match the proportions in the data and give a uniform higher-level distribution:

The maximum-likelihood estimates of the parameters are shown here:

Importantly, notice in the figure above that the blue circles, which represent the best estimates of the biases in the coins, are "shrunken" outward, away from the middle of the data. The moral: Shrinkage is toward the modes of the higher-level distribution, not necessarily toward the middle.