Monday, June 26, 2017

Bayesian estimation of correlations and differences of correlations with a multivariate normal

Within days of each other I received two emails asking about Bayesian estimation of correlations and differences of correlations. Hence this post, which presents an implementation in JAGS and R.

Traditionally, a joint distribution of several continuous/metric variables is modeled as a multivariate normal. We estimate the mean vector and covariance matrix of the multivariate normal. When the variables are standardized, the covariance matrix is the correlation matrix, so the pairwise correlations can be inspected directly. Moreover, if it's meaningful to compare correlations, then we can also examine the posterior distribution of the difference of pairwise correlations.
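
As a quick illustration of that covariance-to-correlation conversion (a sketch with a made-up 2x2 covariance matrix, not part of the script below):

  covMat = matrix( c( 4.0 , 1.2 , 1.2 , 9.0 ) , nrow=2 )  # made-up covariance matrix
  cov2cor( covMat )               # built-in covariance-to-correlation conversion
  sdVec = sqrt( diag( covMat ) )  # standard deviations from the diagonal
  covMat / ( sdVec %o% sdVec )    # same correlation matrix, computed by hand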

As an example, I'll use the data regarding Scholastic Aptitude Test (SAT) scores from Guber (1999), explained in Chapter 18 of DBDA2E. We'll consider three data variables: SAT total score (SATT), spending per student (Spend), and percentage of students taking the exam (PrcntTake). Here's a plot of the data from different perspectives:
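
(If you want a quick look at the data yourself, one simple option, assuming Guber1999data.csv is in R's current working directory, is a pairs plot:)

  myData = read.csv("Guber1999data.csv")
  pairs( myData[ , c("SATT","Spend","PrcntTake") ] )  # scatterplots of all pairs of variables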

Instead of doing linear regression of SAT on Spend and PrcntTake, we'll describe the joint distribution as a multivariate normal. This is straightforward in JAGS, because JAGS has a multivariate normal distribution built in. The model specification includes the likelihood function:

  zy[i,1:Nvar] ~ dmnorm( zMu[1:Nvar] , zInvCovMat[1:Nvar,1:Nvar] )

where the "z" prefixes of the variable names indicate that the data have been standardized (in R, before being sent to JAGS). In the code above, zy[i,:] is the ith data point (a vector), zMu[:] is the vector of estimated means, and zInvCovMat[:,:] is the inverse covariance matrix. Just as JAGS parameterizes the normal distribution by the inverse variance (i.e., the precision), JAGS parameterizes the multivariate normal by the inverse covariance matrix.
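
In R terms (just a sketch, not part of the script below), the precision matrix is simply the matrix inverse of the covariance matrix:

  precMat = solve( covMat )  # covMat is any covariance matrix; its inverse is the precision matrix JAGS expects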

The prior on each mean is a broad normal. The prior on the inverse covariance matrix is a generic Wishart distribution, a multidimensional generalization of the gamma distribution; it is a distribution over matrices and is built into JAGS (as dwish).
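
In the model specification (this is an excerpt of the full script appended below), the priors are:

  for ( varIdx in 1:Nvar ) { zMu[varIdx] ~ dnorm( 0 , 1/2^2 ) }
  zInvCovMat ~ dwish( zRmat[1:Nvar,1:Nvar] , zRscal )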

The R code appended below provides the complete script, including the model specification and the graphics commands. The script needs the utilities script from DBDA2E for opening graphics windows.

Here is some selected output from the program. In particular, the program produces plots of the posterior distributions of the pairwise correlations, presented with the data and level contours from the multivariate normal:

Finally, here is a plot of the posterior of the difference of two correlations:

You should consider differences of correlations only if it's really meaningful to do so. Just as with taking differences of standardized regression coefficients, taking the difference of two correlations assumes that the correlations are "on the same scale," which makes sense only to the extent that one standard deviation of SAT is "the same" as one standard deviation of Spending and "the same" as one standard deviation of Percentage Taking the Exam.

The full R script follows. Add a comment to this post if this script is useful to you. Enjoy!

# Jags-MultivariateNormal.R
# John Kruschke, November 2015 - June 2017.
# For further info, see:
# Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
# A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.

# Optional generic preliminaries:
graphics.off() # This closes all of R's graphics windows.
rm(list=ls())  # Careful! This clears all of R's memory!

#--------------------------------------------------------------------------
# Load the data:
#--------------------------------------------------------------------------
myData = read.csv("Guber1999data.csv") # must have file in curr. work. dir.
# y must have named columns, with no missing values!
y = myData[,c("SATT","Spend","PrcntTake")]

#----------------------------------------------------------------------------
# The rest can remain unchanged, except for the specification of difference of
# correlations at the very end.
#----------------------------------------------------------------------------

# Load some functions used below:
source("DBDA2E-utilities.R") # Must be in R's current working directory.
# Install the ellipse package if not already:
want = c("ellipse")
have = want %in% rownames(installed.packages())
if ( any(!have) ) { install.packages( want[!have] ) }

# Standardize the data:
sdOrig = apply(y,2,sd)
meanOrig = apply(y,2,mean)
zy = apply(y,2,function(yVec){(yVec-mean(yVec))/sd(yVec)})
# Assemble data for sending to JAGS:
dataList = list(
  zy = zy ,
  Ntotal =  nrow(zy) ,
  Nvar = ncol(zy) ,
  # Include original data info for transforming to original scale:
  sdOrig = sdOrig ,
  meanOrig = meanOrig ,
  # For wishart (dwish) prior on inverse covariance matrix:
  zRscal = ncol(zy) ,  # for dwish prior
  zRmat = diag(x=1,nrow=ncol(zy))  # Rmat = diag(apply(y,2,var))
)

# Define the model:
modelString = "
model {
  for ( i in 1:Ntotal ) {
    zy[i,1:Nvar] ~ dmnorm( zMu[1:Nvar] , zInvCovMat[1:Nvar,1:Nvar] )
  }
  for ( varIdx in 1:Nvar ) { zMu[varIdx] ~ dnorm( 0 , 1/2^2 ) }
  zInvCovMat ~ dwish( zRmat[1:Nvar,1:Nvar] , zRscal )
  # Convert invCovMat to sd and correlation:
  zCovMat <- inverse( zInvCovMat )
  for ( varIdx in 1:Nvar ) { zSigma[varIdx] <- sqrt(zCovMat[varIdx,varIdx]) }
  for ( varIdx1 in 1:Nvar ) { for ( varIdx2 in 1:Nvar ) {
    zRho[varIdx1,varIdx2] <- ( zCovMat[varIdx1,varIdx2]
                               / (zSigma[varIdx1]*zSigma[varIdx2]) )
  } }
  # Convert to original scale:
  for ( varIdx in 1:Nvar ) {
    sigma[varIdx] <- zSigma[varIdx] * sdOrig[varIdx]
    mu[varIdx] <- zMu[varIdx] * sdOrig[varIdx] + meanOrig[varIdx]
  }
  for ( varIdx1 in 1:Nvar ) { for ( varIdx2 in 1:Nvar ) {
    rho[varIdx1,varIdx2] <- zRho[varIdx1,varIdx2]
  } }
}
" # close quote for modelString
writeLines( modelString , con="Jags-MultivariateNormal-model.txt" )

# Run the chains:
nChain = 3
nAdapt = 500
nBurnIn = 500
nThin = 10
nStepToSave = 20000
require(rjags)
jagsModel = jags.model( file="Jags-MultivariateNormal-model.txt" ,
                        data=dataList , n.chains=nChain , n.adapt=nAdapt )
update( jagsModel , n.iter=nBurnIn )
codaSamples = coda.samples( jagsModel ,
                            variable.names=c("mu","sigma","rho") ,
                            n.iter=nStepToSave/nChain*nThin , thin=nThin )

# Convergence diagnostics:
parameterNames = varnames(codaSamples) # get all parameter names
for ( parName in parameterNames ) {
  diagMCMC( codaObject=codaSamples , parName=parName )
}

# Examine the posterior distribution:
mcmcMat = as.matrix(codaSamples)
chainLength = nrow(mcmcMat)
Nvar = ncol(y)
# Create subsequence of steps through chain for plotting:
stepVec = floor(seq(1,chainLength,length=20))

# Make plots of posterior distribution:

# Preparation -- define useful functions:
library(ellipse)
expandRange = function( x , exMult=0.2 ) {
  lowVal = min(x)
  highVal = max(x)
  wid = max(x)-min(x)
  return( c( lowVal - exMult*wid , highVal + exMult*wid ) )
}

for ( varIdx in 1:Nvar ) {
  openGraph(width=7,height=3.5)
  par( mar=c(3.5,3,2,1) , mgp=c(2.0,0.7,0) )
  layout(matrix(1:2,nrow=1))
  # Marginal posterior on means:
  plotPost( mcmcMat[ , paste0("mu[",varIdx,"]") ] ,
            xlab=paste0("mu[",varIdx,"]") ,
            main=paste( "Mean of" , colnames(y)[varIdx] ) )
  # Marginal posterior on standard deviations:
  plotPost( mcmcMat[ , paste0("sigma[",varIdx,"]") ] ,
            xlab=paste0("sigma[",varIdx,"]") ,
            main=paste( "SD of" , colnames(y)[varIdx] ) )
}

for ( varIdx1 in 1:(Nvar-1) ) {
  for ( varIdx2 in (varIdx1+1):Nvar ) {
    openGraph(width=7,height=3.5)
    par( mar=c(3.5,3,2,1) , mgp=c(2.0,0.7,0) )
    layout(matrix(1:2,nrow=1))
    # Marginal posterior on correlation coefficient
    plotPost( mcmcMat[ , paste0("rho[",varIdx1,",",varIdx2,"]") ] ,
              xlab=paste0("rho[",varIdx1,",",varIdx2,"]") ,
              main=paste( "Corr. of" , colnames(y)[varIdx1] ,
                          "and" , colnames(y)[varIdx2] ) )
    # Data with posterior ellipse
    ellipseLevel = 0.90
    plot( y[,c(varIdx1,varIdx2)] , # pch=19 ,
          xlim=expandRange(y[,varIdx1]) , ylim=expandRange(y[,varIdx2]) ,
          xlab=colnames(y)[varIdx1] , ylab=colnames(y)[varIdx2] ,
          main=bquote("Data with posterior "*.(ellipseLevel)*" level contour") )
    # Posterior ellipses:
    for ( stepIdx in stepVec ) {
      points( ellipse( mcmcMat[ stepIdx ,
                                paste0("rho[",varIdx1,",",varIdx2,"]") ] ,
                       scale=mcmcMat[ stepIdx ,
                                      c( paste0("sigma[",varIdx1,"]") ,
                                         paste0("sigma[",varIdx2,"]") ) ] ,
                       centre=mcmcMat[ stepIdx ,
                                       c( paste0("mu[",varIdx1,"]") ,
                                          paste0("mu[",varIdx2,"]") ) ] ,
                       level=ellipseLevel ) ,
              type="l" , col="skyblue" , lwd=1 )
    }
    # replot data:
    points( y[,c(varIdx1,varIdx2)] )
  }
}

# Show data descriptives on console:
cor( y )
apply(y,2,mean)
apply(y,2,sd)

#-----------------------------------------------------------------------------
# Difference of correlations. Change the specification of indices as
# appropriate to your variable names and interests.
#-----------------------------------------------------------------------------
# Specify indices of desired correlations:
rhoAidx1 = which(colnames(y)=="SATT")
rhoAidx2 = which(colnames(y)=="Spend")
rhoBidx1 = which(colnames(y)=="SATT")
rhoBidx2 = which(colnames(y)=="PrcntTake")
# Make plot of posterior difference:
openGraph(height=3.5,width=4)
par( mar=c(3.5,1,2,1) , mgp=c(2.0,0.7,0) )
plotPost( mcmcMat[ , paste0("rho[",rhoAidx1,",",rhoAidx2,"]") ]
          - mcmcMat[ , paste0("rho[",rhoBidx1,",",rhoBidx2,"]") ],
          xlab=paste0(
          "rho[",rhoAidx1,",",rhoAidx2,"] - rho[",rhoBidx1,",",rhoBidx2,"]" ) ,
          main=bquote( rho[ list( .(colnames(y)[rhoAidx1]) ,
                                  .(colnames(y)[rhoAidx2]) ) ] 
                       - rho[ list( .(colnames(y)[rhoBidx1]) ,
                                    .(colnames(y)[rhoBidx2]) ) ] ) ,
          cex.main=1.5 , ROPE=c(-0.1,0.1) )

#---------------------------------------------------------------------------



Sunday, June 11, 2017

Posterior distribution of predictions in multiple linear regression

In a previous post I showed how to compute posterior predictions from multiple linear regression inside JAGS. In this post I show how to do it after JAGS, in R.

There are trade-offs to doing it in JAGS versus afterwards in R. If you do it in JAGS, then you know that the model you're using to generate predictions is exactly the model you're using to describe the data, because you specify the model only once. If you do it after JAGS in R, then you have to code the model all over again in R, and you face the peril of making mistakes because JAGS and R parameterize distributions differently (e.g., dnorm and dt are parameterized differently in JAGS than in R). On the other hand, if you generate the predictions in JAGS, then you have to specify all the to-be-probed x values in advance, along with the data to be fit. If instead you generate predictions in R (after JAGS), then you can re-use the already-generated MCMC chain to probe as many new x values as you like, without having to run MCMC all over again. In this post, we face the peril of specifying the model in R.
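
For example (a sketch with made-up parameter values), the JAGS expressions dnorm( mu , 1/sigma^2 ) and dt( mu , 1/sigma^2 , nu ) take precision as an argument, whereas in R you would write:

  mu = 500 ; sigma = 30 ; nu = 5             # made-up values, for illustration only
  yNorm = rnorm( 1 , mean=mu , sd=sigma )    # R's rnorm uses sd, not precision
  yT    = mu + sigma * rt( 1 , df=nu )       # R's rt is a standard t; shift and scale it by hand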

Please see the previous post for background information about this specific application to predicting SAT scores from spending per student and percentage of students taking the exam. To run the R code below, please open the high-level script Jags-Ymet-XmetMulti-Mrobust-Example.R. Run the script "out of the box" so it uses the SAT data (Guber1999data.csv) and the two predictors mentioned in the previous sentence. Then, at the end of the script, append the following R code:

# Remind self of the predictors:
show( xName )
# Specify probe values of the predictors. Columns must match predictors actually
# used, in order! Specify as many probe rows as you like:
xProbe = matrix( c( 4.0 , 10 ,   # Spend , PrcntTake
                    9.0 , 10 ,   # Spend , PrcntTake
                    4.0 , 90 ,   # Spend , PrcntTake
                    9.0 , 10 ) , # Spend , PrcntTake
                 ncol=length(xName) , byrow=TRUE )
# Convert mcmc coda object to matrix:
mcmcMat = as.matrix( mcmcCoda )
# Show column names of mcmcMat. Notice they include beta0, beta[1], beta[2],
# ..., which can be extracted using grep("^beta",colnames(mcmcMat)):
colnames( mcmcMat )
grep("^beta",colnames(mcmcMat),value=TRUE)
# Now, go through rows of xProbe matrix and display predicted mu and y:
for ( xProbeIdx in 1:nrow(xProbe) ) {
  # Predicted mu is b0+b1*x1+b2*x2+... at every step in chain:
  muPred = ( mcmcMat[,grep("^beta",colnames(mcmcMat))]
             %*% matrix(c(1,xProbe[xProbeIdx,]),ncol=1) )
  # Predicted y is predicted mu plus t-distributed noise at every step in chain:
  yPred = muPred + mcmcMat[,"sigma"] * rt(nrow(mcmcMat),df=mcmcMat[,"nu"])
  # Make plot of posterior distribution of prediction. Notice that plotPost()
  # also returns numerical values to console.
  openGraph(width=3.5,height=5)
  layout(matrix(1:3,nrow=3),heights=c(1.5,2,2))
  par( mar=c(4,1,1,1) , mgp=c(2.0,0.7,0) , xpd=NA )
  plot(0,0,xaxt='n',yaxt='n',bty='n',pch='',ylab='',xlab='')
  text(0,0, adj=c(0.5,0.7),
       paste0(xName,"=",xProbe[xProbeIdx,],collapse="\n"), cex=1.25 )
  xLim = quantile( yPred , probs=c(0.005,0.995) )
  muPredInfo = plotPost( muPred ,xlab="Predicted mu" , xlim=xLim ,
                 main="" )
  show( muPredInfo )
  yPredInfo = plotPost( yPred , xlab="Predicted y" , xlim=xLim ,
                main="" )
  show( yPredInfo )
}


The code produces output plots such as the following (along with numerical values displayed on the command line):


N.B.: The plots only display the first three significant digits; see the numerical output at the command line for more digits.

Enjoy!


Monday, May 29, 2017

New version of BEST package (Bayesian estimation of two groups)

Mike Meredith has updated the BEST package again! 

CHANGES in v.0.5.0 (2017-05-28)

  * 'BESTmcmc' uses 'rjags' directly, instead of 'jagsUI' wrappers. This resolves 'set.seed' issues, but values returned will not be the same as with previous versions.

  * Function 'hdi' removed; imports HDInterval::hdi instead.


Mike created and updates the BEST package entirely on his own. He has implemented some wonderful functionality and protections in the BEST package that are not in the original R code I created to accompany the BEST article. Thanks Mike!


Thursday, May 25, 2017

Looking for great teachers of Bayesian statistics

I'm looking for great teachers of Bayesian statistics!
  • Have you taught a course in Bayesian statistics? 
  • Have you used DBDA2E?
  • Are you an enthusiastic teacher with very high evaluations?
  • Do you have an abiding interest in a range of applied statistical methods?
If so, please send me an email (johnkruschke@gmail.com). Even if you think I should already know about you, please send me an email now so I can be sure to put you on the list. If you know of someone else who has those qualities -- maybe a colleague, or maybe a teacher in a class you've taken -- please tell me about that person.

Thanks!

P.S. This applies to instructors anywhere on the planet; you just need to be fluent in English for teaching.

Thursday, May 4, 2017

Bent Coins for a Twisted Mind

Ben Motz voluntarily sat through an entire semester of my Bayesian stats course. In karmic retribution, after the conclusion of the course he surprised me with a set of biased coins that he designed and created with the amazing craftsmanship of Jesse Goode. Obverse and reverse:


Heads: In Bayes Rule We Trust!


Tails: Wagging at a (posterior) beta distribution
parameterized by mode and concentration.
(And notice the half-folded ears.)

Ben was intrigued by claims [see footnote 2, p. 73, of DBDA2E] that normal coins when flipped cannot be biased (unlike normal coins when spun), but bent coins when flipped can be biased. Ever the empiricist, he decided to conduct an experiment using progressively bent coins (while manifestly expressing his teacher evaluation at the same time).

A set of progressively bent coins!

Each coin was flipped 100 times, letting it land on a soft mat. The results are shown below:
Data from flipping each coin 100 times. Prior was beta(1,1).

Clearly the most acutely bent coins do not come up heads half the time. One paradoxical thing I like about a bent coin is that the less you can see of its face, the more its face comes up!
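
For any one coin, the posterior under the beta(1,1) prior is easy to compute directly (a sketch with a made-up count of heads, not Ben's actual data):

  N = 100 ; z = 72                      # made-up example: 72 heads in 100 flips
  a = 1 + z ; b = 1 + (N - z)           # posterior is beta( z+1 , N-z+1 )
  postMode = ( a - 1 ) / ( a + b - 2 )  # mode of the posterior beta
  postConc = a + b                      # concentration, as on the tails side of the coin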

To preserve the apparatus of this classic experiment for posterity, and especially to give me something for show-and-tell at the old Bayesians' home, Jesse built a beautiful display box:

Protected by plexiglass from the thronging crowds of onlookers.

How did they manufacture these coins? It was quite a process. Starting with discs of metal, Jesse powder coated and baked them to get a smooth and secure coating. Then he used a computerized laser to burn off areas of the coating to reveal the shiny metal as background to the design. Finally, they used psychic telekinesis to bend the coins. (Ben assured me, however, that he withheld psychokinesis when flipping the coins.)

I've gotta admit this made my day, and it makes me smile and laugh even as I type this. I hope it gives you a smile too! Huge thanks to Ben Motz and Jesse Goode.

Wednesday, April 12, 2017

New article: Bayesian for newcomers



Just published: Bayesian data analysis for newcomers.

Abstract: This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented, that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

Published version at http://link.springer.com/article/10.3758/s13423-017-1272-1.
Accepted manuscript available at https://osf.io/preprints/psyarxiv/nqfr5.
View only: http://rdcu.be/rcot