# Doing Bayesian Data Analysis

## Monday, June 27, 2016

With the U.K. leaving the E.U., it's time for "Bayesian" to exit its titular role and be replaced by "Laplacian".

;-)

Various historians (e.g., Dale, 1999; McGrayne, 2011; as cited in DBDA2E) have argued that although Bayes and Price published the basic formula first, it was Laplace who did the heavy lifting of developing the methods. Laplace was French, while Bayes and Price were English.

On the other hand, Bayes attended the University of Edinburgh and lived around London, so maybe he would have voted to stay!

This humorous implication of Brexit was suggested by a participant at a workshop I presented in St. Gallen, Switzerland, last week, after I briefly described the history above and showed the book photographed at Bayes' tomb and at Laplace's memorial. I have forgotten who made the joke. If the originator is reading this, please leave a comment below or send me an email!

## Thursday, June 23, 2016

### Fixing the intercept at zero in Bayesian linear regression

In DBDA2E and in workshops, I present an example of simple linear regression: predicting an adult's weight from his/her height. A participant at a recent workshop suggested that maybe the y-intercept should be fixed at zero, because a person of zero height should have zero weight. I replied that the linear trend is really only intended as a description over the range of reasonable adult heights, not to be extrapolated all the way to a height of zero. Nevertheless, in principle it would be easy to make the restriction in the JAGS model specification. But then it was pointed out to me that the JAGS model specification in DBDA2E standardizes the variables -- to make the MCMC more efficient -- and setting the y intercept of the standardized y to zero is (of course) not the same as setting the y intercept of the raw scale to zero. This blog post shows how to set the y intercept on the raw scale to zero.


For reference, here (below) is the original model specification that does *not* fix the y intercept at zero. (This is in file Jags-Ymet-Xmet-Mrobust.R.) Notice the two lines marked with `# <--` comments, regarding the prior on the standardized intercept zbeta0 and the transformation back to the raw-scale intercept beta0:

```
# Standardize the data:
data {
  Ntotal <- length(y)
  xm <- mean(x)
  ym <- mean(y)
  xsd <- sd(x)
  ysd <- sd(y)
  for ( i in 1:length(y) ) {
    zx[i] <- ( x[i] - xm ) / xsd
    zy[i] <- ( y[i] - ym ) / ysd
  }
}
# Specify the model for standardized data:
model {
  for ( i in 1:Ntotal ) {
    zy[i] ~ dt( zbeta0 + zbeta1 * zx[i] , 1/zsigma^2 , nu )
  }
  # Priors vague on standardized scale:
  zbeta0 ~ dnorm( 0 , 1/(10)^2 )   # <-- prior on standardized intercept
  zbeta1 ~ dnorm( 0 , 1/(10)^2 )
  zsigma ~ dunif( 1.0E-3 , 1.0E+3 )
  nu ~ dexp(1/30.0)
  # Transform to original scale:
  beta1 <- zbeta1 * ysd / xsd
  beta0 <- zbeta0 * ysd + ym - zbeta1 * xm * ysd / xsd   # <-- raw-scale intercept
  sigma <- zsigma * ysd
}
```
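The standardize-then-transform logic can be checked outside JAGS. Here is a minimal sketch in Python (not R; purely illustrative, using an ordinary least-squares fit rather than the robust Bayesian model) showing that a slope fit on standardized data, transformed by beta1 = zbeta1*ysd/xsd and the beta0 formula above, matches a fit done directly on the raw data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(175, 10, size=100)            # heights (arbitrary units)
y = 0.8 * x - 70 + rng.normal(0, 5, 100)     # weights with noise

# Standardize both variables, as the JAGS data block does:
xm, ym, xsd, ysd = x.mean(), y.mean(), x.std(ddof=1), y.std(ddof=1)
zx, zy = (x - xm) / xsd, (y - ym) / ysd

# Least-squares slope and intercept on the standardized scale:
zbeta1, zbeta0 = np.polyfit(zx, zy, 1)

# Transform back to the raw scale, using the same formulas as the model:
beta1 = zbeta1 * ysd / xsd
beta0 = zbeta0 * ysd + ym - zbeta1 * xm * ysd / xsd

# Compare with a direct fit on the raw data:
beta1_raw, beta0_raw = np.polyfit(x, y, 1)
print(np.allclose([beta1, beta0], [beta1_raw, beta0_raw]))  # True
```

The agreement holds because standardization is an affine change of variables, so the fitted line is the same line expressed in different coordinates.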

Now, how to fix beta0 at zero: instead of putting a broad prior distribution on zbeta0, set it to whatever value makes the raw-scale beta0 equal zero. To determine that value, take the transform-to-raw-scale formula for beta0 (shown at the end of the model specification above), set beta0 to 0, and solve for zbeta0. Replace the prior on zbeta0 as follows:

```
# zbeta0 ~ dnorm( 0 , 1/(10)^2 )  # omit the broad prior
# Fix y intercept at zero:
zbeta0 <- zbeta1*xm/xsd - ym/ysd
```
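The algebra can be verified numerically: plugging zbeta0 = zbeta1*xm/xsd - ym/ysd into the raw-scale transform beta0 = zbeta0*ysd + ym - zbeta1*xm*ysd/xsd should give exactly zero, whatever the values of the other quantities. A quick Python sketch (illustration only; the specific numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary positive values for the slope and the data summaries:
zbeta1, xm, ym, xsd, ysd = rng.uniform(0.5, 5.0, size=5)

# The fixed value of the standardized intercept:
zbeta0 = zbeta1 * xm / xsd - ym / ysd

# Transform back to the raw scale, as in the model specification:
beta0 = zbeta0 * ysd + ym - zbeta1 * xm * ysd / xsd
print(abs(beta0) < 1e-12)  # True: the raw-scale intercept is zero
```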

When run on the height/weight data (using script Jags-Ymet-Xmet-Mrobust-Example.R), a smattering of regression lines from the posterior distribution looks like this:

You can see that the y-intercept has indeed been fixed at zero for all the regression lines.

*One crucial trick to getting the graphs to display:* in the plotMCMC command of Jags-Ymet-Xmet-Mrobust-Example.R, set showCurve to TRUE:

```
plotMCMC( mcmcCoda , data=myData , xName=xName , yName=yName ,
          compValBeta1=0.0 , ropeBeta1=c(-0.5,0.5) ,
          pairsPlot=TRUE , showCurve=TRUE ,
          saveName=fileNameRoot , saveType=graphFileType )
```

If showCurve=FALSE, it tries to make a histogram, which fails because the MCMC values of beta0 are all (essentially) identical at zero.

## Monday, May 30, 2016

### Some Bayesian approaches to replication analysis and planning (talk video)

*Some Bayesian approaches to replication analysis and planning*. A talk presented at the Association for Psychological Science, Friday May 27, 2016. This video (below) is a pre-recording while sitting at my desk; the actual talk included some spontaneous additions about relations of the material to previous speakers' talks, and a few attempts at you-had-to-be-there humor.

Here's a snapshot of the speakers:

Left to right: Joe Rodgers, John Kruschke, Larry Hedges, Pat Shrout (symposium organizer), and Scott Maxwell.

## Sunday, May 15, 2016

### Bayesian inference in the (abnormal) mind

The (abnormal) mind can be modeled as a Bayesian inference engine, as summarized in the news article *Bayesian reasoning implicated in some mental disorders*. Excerpt:

> "The brain is a guessing machine [i.e., Bayesian inference engine - JKK], trying at each moment of time to guess what is out there," says computational neuroscientist Peggy Sériès. Guesses just slightly off — like mistaking a smile for a smirk — rarely cause harm. But guessing gone seriously awry may play a part in mental illnesses such as schizophrenia, autism and even anxiety disorders, Sériès and other neuroscientists suspect. They say that a mathematical expression known as Bayes' theorem — which quantifies how prior expectations can be combined with current evidence — may provide novel insights into pernicious mental problems that have so far defied explanation.

For a tutorial about Bayesian models of perception and cognition, see *Bayesian learning theory applied to human cognition*.
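Bayes' theorem, as the excerpt describes it, is just prior times likelihood, renormalized. A toy Python illustration (the probabilities are made up for the example) of combining a prior expectation with ambiguous current evidence:

```python
# Toy Bayes-rule update: is that facial expression a smile or a smirk?
prior = {"smile": 0.9, "smirk": 0.1}        # prior expectation
likelihood = {"smile": 0.4, "smirk": 0.6}   # p(observed cue | state)

# Posterior is proportional to prior * likelihood:
unnorm = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnorm.values())
posterior = {s: unnorm[s] / total for s in unnorm}
print(posterior)  # the strong prior dominates: p(smile) ~ 0.857
```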

Note that Bayesian modeling of data is a richly valuable approach regardless of whether any particular Bayesian model of mind is accurate. See this brief blog post for the distinction between (Bayesian) descriptive models of data, psychometric models, and models of mind.

## Thursday, April 28, 2016

### Matlab version of BEST

Nils Winter has created a Matlab interface to BEST (Bayesian estimation for two groups of metric-scale data). You can read about it at his GitHub README.

## Tuesday, April 19, 2016

### Bayesian estimation of log-normal parameters - Update

Using the log-normal density can be confusing because it's parameterized in terms of the mean and precision of the log-scale data, not the original-scale data. Thus, if your data, y, are nicely described by a log-normal distribution, the estimated mean and precision are for *log*(y), not y. To get the corresponding parameters for the original-scale data, you have to transform the estimated parameters. And the prior on the mean and precision have to be appropriate for log(y), as opposed to y. This post shows an example of a JAGS program (with rjags) for estimating the parameters of log-normal distributed data.

This post updates the R+JAGS code from a previous post. Please see that previous post for more background. The previous post had code appropriate for the 1st edition of the book; this post updates a few details for use with the 2nd edition. In particular, the new code shows new MCMC diagnostic plots and uses a more appropriate prior on meanOfLogY. Some of its output graphs appear below; the R code follows.

Here is the full R script:

```
#------------------------------------------------------------------------------
# Jags-Ymet-Xnom1grp-MlogNormal-Script.R
# April 19, 2016. John K. Kruschke.
# Requires auxiliary R scripts that accompany the book,
# Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
# A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.

# graphics.off()
# rm(list=ls(all=TRUE))

source("DBDA2E-utilities.R")
fileNameRoot = "Jags-Ymet-Xnom1grp-MlogNormal-Script"
graphFileType = "png"

#------------------------------------------------------------------------------
# THE DATA.

# Generate some random data from known parameter values:
set.seed(47405)
trueLogM = 5.0   # true mean of log(y)
trueLogSD = 0.5  # true sd of log(y)
y = rnorm( n=125 ) # normal distribution of log-scale values
LogY = (y-mean(y))/sd(y)*trueLogSD + trueLogM
y = exp(LogY) # y is exponentiated values of log(y)
# OR JUST PUT IN YOUR ACTUAL DATA HERE:
# y = ...

# Package the data for shipping to JAGS:
dataList = list(
  y = y ,
  N = length(y) ,
  meanOfLogY = mean(log(y)) ,
  sdOfLogY = sd(log(y))
)

#------------------------------------------------------------------------------
# THE MODEL.

modelstring = "
model {
  for( i in 1 : N ) {
    y[i] ~ dlnorm( muOfLogY , 1/sigmaOfLogY^2 )
  }
  sigmaOfLogY ~ dunif( 0.001*sdOfLogY , 1000*sdOfLogY )
  muOfLogY ~ dunif( 0.001*meanOfLogY , 1000*meanOfLogY )
  muOfY <- exp(muOfLogY+sigmaOfLogY^2/2)
  modeOfY <- exp(muOfLogY-sigmaOfLogY^2)
  sigmaOfY <- sqrt(exp(2*muOfLogY+sigmaOfLogY^2)*(exp(sigmaOfLogY^2)-1))
}
" # close quote for modelstring
writeLines(modelstring,con="model.txt")

#------------------------------------------------------------------------------
# INITIALIZE THE CHAINS.

# Let JAGS do it.

#------------------------------------------------------------------------------
# RUN THE CHAINS.

require(rjags)
parameters = c("muOfLogY" , "sigmaOfLogY" , "muOfY" , "modeOfY" , "sigmaOfY" )
adaptSteps = 1000     # Number of steps to "tune" the samplers.
burnInSteps = 1000    # Number of steps to "burn-in" the samplers.
nChains = 3           # Number of chains to run.
numSavedSteps = 20000 # Total number of steps in chains to save.
thinSteps = 1         # Number of steps to "thin" (1=keep every step).
nPerChain = ceiling( ( numSavedSteps * thinSteps ) / nChains ) # Steps per chain.
# Create, initialize, and adapt the model:
jagsModel = jags.model( "model.txt" , data=dataList ,
                        n.chains=nChains , n.adapt=adaptSteps )
# Burn-in:
cat( "Burning in the MCMC chain...\n" )
update( jagsModel , n.iter=burnInSteps )
# The saved MCMC chain:
cat( "Sampling final MCMC chain...\n" )
mcmcCoda = coda.samples( jagsModel , variable.names=parameters ,
                        n.iter=nPerChain , thin=thinSteps )

#------------------------------------------------------------------------------
# EXAMINE THE RESULTS.

# Display diagnostics of chain, for specified parameters:
parameterNames = varnames(mcmcCoda) # get all parameter names
for ( parName in parameterNames ) {
  diagMCMC( codaObject=mcmcCoda , parName=parName ,
            saveName=fileNameRoot , saveType=graphFileType )
}

# Convert coda-object mcmcCoda to matrix object for easier handling:
mcmcChain = as.matrix( mcmcCoda )
chainLength = NROW(mcmcChain)
openGraph(width=10,height=6)
layout(matrix(1:6,nrow=2,byrow=TRUE))
# Posterior predictive:
hist( dataList$y , xlab="y" , main="Data w. Post. Pred." , breaks=30 ,
      col="pink" , border="white" , prob=TRUE , cex.lab=1.5 )
pltIdx = floor(seq(1,chainLength,length=20))
xComb = seq( min(dataList$y) , max(dataList$y) , length=501 )
for ( chnIdx in pltIdx ) {
  lines( xComb ,
         dlnorm( xComb, mcmcChain[chnIdx,"muOfLogY"], mcmcChain[chnIdx,"sigmaOfLogY"] ),
         col="skyblue" )
}
# Parameters of log(y):
postInfo = plotPost( mcmcChain[,"muOfLogY"] , xlab="mu of log(y)" )
postInfo = plotPost( mcmcChain[,"sigmaOfLogY"] , xlab="sigma of log(y)" )
# Parameters of y:
postInfo = plotPost( mcmcChain[,"modeOfY"] , xlab="mode of y" )
postInfo = plotPost( mcmcChain[,"muOfY"] , xlab="mu of y" )
postInfo = plotPost( mcmcChain[,"sigmaOfY"] , xlab="sigma of y" , cenTend="mode" )
saveGraph(file=fileNameRoot,type=graphFileType)
#------------------------------------------------------------------------------
```
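The transform-to-original-scale lines in the model (muOfY, modeOfY, sigmaOfY) are the standard log-normal moment formulas. As a sanity check, here is a quick Python sketch (illustration only, not part of the script above) comparing the formulas against a large simulated log-normal sample:

```python
import numpy as np

mu, sigma = 5.0, 0.5  # parameters of log(y), matching trueLogM and trueLogSD
rng = np.random.default_rng(47405)
y = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

# The same formulas used in the JAGS model:
mean_y = np.exp(mu + sigma**2 / 2)                                  # muOfY
mode_y = np.exp(mu - sigma**2)                                      # modeOfY
sd_y = np.sqrt(np.exp(2*mu + sigma**2) * (np.exp(sigma**2) - 1))    # sigmaOfY

print(np.isclose(y.mean(), mean_y, rtol=0.01))  # True
print(np.isclose(y.std(),  sd_y,   rtol=0.02))  # True
```

Note that the original-scale mean exceeds the original-scale mode, reflecting the skew of the log-normal distribution.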

## Friday, January 29, 2016

### The remains of R. A. Fisher are visited by Bayesians again

Thanks to Phillip Alday, the second edition of Doing Bayesian Data Analysis has visited the remains of R. A. Fisher at St. Peter's Cathedral in Adelaide. Here is a photo that Phillip snapped:

In a previous post, the first edition visited Fisher's remains. The book has also visited Bayes' tomb, Laplace's memorial, and Jacob Bernoulli's grave. The goal here is to have some fun and pique some historical interest, not to show any (serious) disrespect. If you pose the book at sites of other relevant people, dead or not-quite-yet-dead, let me know!
