## Sunday, October 25, 2020

### DBDA2E in brms and tidyverse

Solomon Kurz has been re-doing all the examples of DBDA2E with the brms package for ease of specifying models (in Stan) and with the tidyverse suite of packages for data manipulation and graphics. His extensive re-write of DBDA2E can be found here. It's definitely worth a look!

He has extensive re-writes of other books, too.

I've been meaning to make a post about this for ages, and have finally gotten around to it. Big thanks to Solomon Kurz!

## Monday, September 28, 2020

### Fixing a new problem in some DBDA2E scripts caused by a change in R 4.0.0

The scripts that accompany DBDA2E have worked fine, "out of the box," for years. But recently some scripts have had problems. Why? R has changed. With R 4.0.0, various functions such as read.csv() no longer automatically convert strings to factors. Some DBDA2E scripts assumed the results of those functions contained factors, but if you're now using R 4.0.0 (or more recent) those scripts will balk.

So, what to do? Here's a temporary fix. When you open your R session, type in this global option:

options(stringsAsFactors = TRUE)

Unfortunately this option will eventually be deprecated. I'll have to modify every affected script and post updated versions. This will happen someday. I hope.
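A longer-lasting per-call fix, if you're willing to edit a script yourself, is to request the factor conversion explicitly in each call to read.csv(). Here is a minimal self-contained sketch (the data file and column names below are made up for illustration):

```r
# Make a tiny example CSV file to read back in:
csvFile <- tempfile(fileext = ".csv")
write.csv(data.frame(Subj = c("S1", "S2", "S1"), Score = c(10, 12, 11)),
          csvFile, row.names = FALSE)

# In R >= 4.0.0, read.csv() leaves strings as character by default:
dDefault <- read.csv(csvFile)

# Asking for factors explicitly restores the behavior old scripts assume:
dFactor <- read.csv(csvFile, stringsAsFactors = TRUE)

class(dFactor$Subj)  # a factor, as the DBDA2E scripts expect
```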

https://developer.r-project.org/Blog/public/2020/02/16/stringsasfactors/index.html

## Friday, August 14, 2020

### Need help finding corrigenda for DBDA2E

UPDATE: Now solved. Big thanks to Kent Johnson for re-constructing the table of corrigenda!

The host of the DBDA2E website (Google Sites) mandated a formatting change. It turns out the automatic reformatting mangled the table of Corrigenda. You can see it here:

I'd really like a properly formatted version!

Did you print or save or copy the previously formatted Corrigenda from the DBDA2E website sometime between October 2018 and August 2020? If so, please send it to me, and I'll attach it to the website.

(I got a version from the wayback machine at web.archive.org dated 2016, but there were subsequent modifications made until Sept 2018.)

Or, do you know of a way to re-format the mangled version so it appears properly?

Thanks!

## Thursday, May 28, 2020

### Teach (and learn) Bayesian and frequentist side by side

Teach (and learn) Bayesian and frequentist side by side: a talk and an app.

A talk explaining why that's a good idea:

(Talk delivered Saturday May 18, 2019.)

The interactive Shiny App with Bayesian and frequentist side by side: click HERE.
If you consider the app, especially for teaching, please let me know how it goes.

## Thursday, July 25, 2019

### Shrinkage in hierarchical models: random effects in lmer() with and without correlation

The goal of this post is to illustrate shrinkage of parameter estimates in hierarchical (aka multi-level) models, specifically when using lmer() with and without estimated correlation of parameters. The examples will show how estimates can differ when including correlation of parameters because of shrinkage toward the estimated correlation.

# Background

## Data structure for these examples

“It all begins with the data… ” I will create multiple panels of $\langle x , y \rangle$ data values, with $x$ and $y$ being continuous metric variables.
• For instance, each panel could be data from a student in a classroom, with each datum being performance on a standardized math exam, with $x$ being time and $y$ being performance. In this scenario, each student takes a novel variant of the test repeatedly across time. The times do not need to be the same for every student, and the number of tests does not need to be the same for every student. We are interested in characterizing the performance trend of each panel (i.e., each student) and the overall trend across panels (i.e., for the class as a whole).
• As another example, each panel could be data from a distinct class within a school, with each datum being a particular student's exam performance (on the $y$ axis) and family income (on the $x$ axis). Again we are interested in characterizing the trend of each panel (i.e., the relation of exam performance to family income within each classroom) and the overall trend across panels (i.e., the typical relationship of the variables across classrooms).
To illustrate robust shrinkage of panel estimates, each panel will have relatively few data points, and there will be relatively many panels. Graphs of the data appear in the analysis results, later.

Here (below) is the structure of the data. Notice there is an X variable, a Y variable, and a Panel variable. The panel variable is actually a nominal (categorical) value, even though it appears as a numerical index.

```r
str( myData )

## 'data.frame':    208 obs. of  3 variables:
##  $ X    : num  0.4158 0.3795 0.0746 0.0588 0.4503 ...
##  $ Y    : num  -0.864 -0.579 0.227 -1.604 -0.895 ...
##  $ Panel: Factor w/ 35 levels "1","2","3","4",..: 1 1 1 1 1 1 2 2 2 3 ...
```
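The post doesn't reproduce the simulation code, but data with this structure can be generated along the following lines. All constants here are illustrative assumptions, not the values used for the post's figures:

```r
set.seed(47405)  # arbitrary seed, for reproducibility
nPanels <- 35

# Few data points per panel, so shrinkage will be visible;
# the last panel gets a single point, like Panel 35 in the post:
nPerPanel <- sample(2:9, nPanels, replace = TRUE)
nPerPanel[nPanels] <- 1

# Panel-level intercepts and slopes, generated with positive correlation:
trueInt   <- rnorm(nPanels, mean = 0, sd = 1)
trueSlope <- 0.7 * trueInt + rnorm(nPanels, mean = 0, sd = 0.7)

# Assemble one data frame with X, Y, and a factor-valued Panel column:
myData <- do.call(rbind, lapply(1:nPanels, function(p) {
  x <- runif(nPerPanel[p])
  data.frame(X = x,
             Y = trueInt[p] + trueSlope[p] * x + rnorm(nPerPanel[p], sd = 0.5),
             Panel = factor(p, levels = 1:nPanels))
}))

str(myData)  # same shape as above: X, Y numeric; Panel a factor with 35 levels
```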


## Analysis models

For simplicity, each panel will be fit with a linear trend. The hierarchical (a.k.a. multi-level) models will also estimate the typical linear trend across panels.

Parameters for panels are subject to shrinkage in hierarchical models because the panel's linear trend is trying to conform simultaneously to (a) the data in its panel and (b) the typical trend across all panels. When there are lots of panels informing the typical trend, and only a small amount of data within a panel, then the panel estimates are strongly influenced by the typical trend across panels. This makes good sense: If you don't know much about a particular panel, your best estimate should take into account what's typical across many other similar panels.

For more background about shrinkage in hierarchical models, there are lots of online sources you can search, as well as some of my previous writings on the topic.

I will first fit a line independently to each panel, without hierarchical structure. This analysis will show the estimated intercept and slope in each panel when there is no shrinkage.

I will then fit a hierarchical model that estimates a typical intercept and typical slope across panels, but does not estimate the correlation of the intercepts and slopes across panels. This model produces some shrinkage across panel estimates, but does not shrink the estimates toward a shared correlation across panels.

Finally, I will fit a hierarchical model that also estimates the correlation of intercepts and slopes across panels. This model shrinks the panel estimates so they also conform more strongly with the estimated correlation across panels.

For the non-hierarchical analysis, I will use lm() from the base stats package of R. For the hierarchical analyses, I will use lmer() from the lme4 package in R.

# Independent line for every panel

For this analysis, each individual panel is fit with its own line, separately from all other panels, using lm() on each panel. There is no hierarchical structure and no overall line estimated.
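The per-panel fits can be sketched as follows, using made-up data in the same format (the original simulation code isn't reproduced here):

```r
# Toy data in the same format as the post: X, Y, and a factor Panel.
set.seed(1)
myData <- data.frame(X = runif(12), Panel = factor(rep(1:4, each = 3)))
myData$Y <- as.numeric(myData$Panel) * 0.5 + 1.2 * myData$X + rnorm(12, sd = 0.3)

# Fit a separate line to each panel with lm(): no pooling, hence no shrinkage.
indepFits <- lapply(split(myData, myData$Panel),
                    function(d) lm(Y ~ 1 + X, data = d))

# One (intercept, slope) pair per panel:
indepCoefs <- t(sapply(indepFits, coef))
indepCoefs
```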

To make this analysis most analogous to the subsequent analyses with lmer(), the analysis should constrain all panels to have the same noise variance. That is not done here, but the MLE coefficients are unaffected in this case anyway.

In principle, the analysis in this section would be like using lmer() with the formula Y ~ 0 + (1+X||Panel), which specifies fitting lines within panels with no estimation of correlation across panels and no global parameters. But lmer() throws an error if that specification is attempted.

Here (below) are scatter plots of the data with the lm() fitted regression lines:

Notice above:
• Two-point panels such as Panel 4 and Panel 11 have lines going exactly through the two points. This will not be the case in hierarchical models.
• The one-point Panel 35 has no regression line because it's undefined. This will not be the case in hierarchical models.
• Panels 4 and 19 are color-highlighted for easy comparison with subsequent analyses.

Notice above:
• There is correlation of intercepts and slopes across panels (r=0.65), reflecting only how the data were generated, not any estimation of correlation in the model.
• There is a lot of variation in intercepts and slopes across panels relative to hierarchical (multi-level) models below. There will be less variation in hierarchical models, hence the term shrinkage.
• Panels 4 and 19 are color-highlighted for easy comparison across analyses.

# Random intercepts and slopes, but no estimated correlation

I'll use lmer() with the formula Y ~ 1 + X + ( 1 + X || Panel ), which is equivalent to Y ~ 1 + X + ( (1|Panel) + (0+X|Panel) ). lmer() assumes we want to estimate correlations of parameters across panels unless we tell it not to, either by using a double vertical bar or by explicitly coding the separate effects.
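In code, the two equivalent specifications look like this. This is a sketch with toy data (the post's actual data aren't reproduced here), and it assumes the lme4 package is installed:

```r
library(lme4)  # provides lmer(); assumed installed

# Toy data in the same format as the post: X, Y, and a factor Panel.
set.seed(1)
myData <- data.frame(X = runif(40), Panel = factor(rep(1:8, each = 5)))
panelInt <- rnorm(8)
myData$Y <- panelInt[as.numeric(myData$Panel)] + 1.2 * myData$X + rnorm(40, sd = 0.3)

# Double bar: random intercepts and slopes per panel, correlation NOT estimated.
fitNoCor <- lmer(Y ~ 1 + X + (1 + X || Panel), data = myData)

# Equivalent explicit form: two separate, uncorrelated random-effect terms.
fitNoCor2 <- lmer(Y ~ 1 + X + ((1 | Panel) + (0 + X | Panel)), data = myData)

# Shrunken per-panel intercepts and slopes:
coef(fitNoCor)$Panel
```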

Notice above:
• Two-point panels such as Panel 4 and Panel 11 have lines not going exactly through the two points. This is because the line is trying to conform simultaneously to the data in the panel and what's typical across panels, as estimated by this particular hierarchical model.
• The one-point Panel 35 has a regression line despite having only a single point. This is because the line is generated by what's typical across panels, influenced a bit by the single data point in the panel.
• Panels 4 and 19 are color-highlighted for easy comparison across analyses.

Notice above:
• There is correlation of intercepts and slopes across panels (r=0.814), but this reflects only how the data were generated and the separate shrinkage of intercepts and slopes, without any shrinkage from estimation of correlation.
• There is less variation in intercepts and slopes across panels relative to the previous, non-hierarchical analysis, hence the term shrinkage. Specifically, the slopes across panels in the non-hierarchical model ranged from -3.48 to 2.59, but in this hierarchical model they range only from -2.74 to 2.03.
• Panels 4 and 19 are color-highlighted for easy comparison across analyses.

# Random intercepts and slopes, with estimated correlation

Here I use lmer() with the formula Y ~ 1 + X + ( 1 + X | Panel ). Notice the single vertical bar before Panel, so lmer() estimates the correlation of parameters across panels by default.
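A sketch of this specification with toy data (assuming lme4 is installed; the constants below are illustrative, not the post's):

```r
library(lme4)  # provides lmer() and VarCorr(); assumed installed

# Toy data with correlated panel-level intercepts and slopes:
set.seed(2)
myData <- data.frame(X = runif(60), Panel = factor(rep(1:10, each = 6)))
b0 <- rnorm(10)
b1 <- 0.8 * b0 + rnorm(10, sd = 0.4)  # slopes correlated with intercepts
p  <- as.numeric(myData$Panel)
myData$Y <- b0[p] + b1[p] * myData$X + rnorm(60, sd = 0.3)

# Single bar: the intercept-slope correlation across panels IS estimated.
fitCor <- lmer(Y ~ 1 + X + (1 + X | Panel), data = myData)

# The variance components include the estimated intercept-slope correlation:
VarCorr(fitCor)
```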

Notice above:
There is even more shrinkage than in the previous model, because now the lines in each panel are also “trying” to conform to the typical correlation of intercept and slope across panels. Notice in particular the color-coded lines in panels 4 and 19.

Notice above:
There is a strong correlation between the estimated slopes and intercepts (r=0.998). Here the correlation is estimated, and there is shrinkage of estimates toward that correlation; the correlation is stronger than in the previous model because the estimates are shrunken toward it.

# What about the higher-level, "fixed" effects?

The higher-level intercept and slope are at the means of the panel intercepts and panel slopes, and those overall means are essentially the same across these analyses.

# Conclusion

Hopefully these examples have helped to illustrate shrinkage in hierarchical models, and specifically the extra shrinkage introduced by estimating correlation across parameters.

## Sunday, June 30, 2019

### Bayesian estimation of severity in police use of force

In research reported in the journal Law and Human Behavior, Brad Celestin and I used Bayesian methods to measure perceived severities of police actions. For each of about two dozen actions, we had lay people rate the action's moral acceptability, appropriateness, punishability, and physical forcefulness. We regressed the ratings on the actions, simultaneously estimating latent scale values of the action severities.

Below is a stylized graph to show the idea. The vertical axis shows the ratings, and the horizontal axis shows the underlying (latent) severity of the actions. In this graph, six actions are placed at arbitrary positions on the horizontal axis.

Below I've superimposed the regression equation. It's just linear regression, but the values of the predictors are estimated, not given.
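In symbols, the idea can be sketched like this (my notation here, not necessarily the paper's): each rating is modeled by ordinary linear regression, except that the predictor values are themselves parameters to be estimated.

```latex
% Sketch (notation mine): y_i is the rating of action a[i];
% x_k is the latent severity of action k, estimated from the data.
y_i \sim \operatorname{N}\!\big( \beta_0 + \beta_1 \, x_{a[i]} ,\ \sigma \big),
\qquad x_1, \ldots, x_K \ \text{estimated jointly with}\ \beta_0, \beta_1, \sigma .
```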

Below is a stylized representation of the latent scale values that best fit the ratings:

Bayesian methods were especially useful for this because we obtained a complete posterior distribution on all the scale values. Bayesian methods were also very useful because the ratings were effectively censored by many respondents who pushed the response slider all the way to the top or bottom, so all we could discern from the response was that it was at least that high or low; censored dependent-variable data are handled very nicely in Bayesian analyses.

Here's the abstract from the article:
In modern societies, citizens cede the legitimate use of violence to law enforcement agents who act on their behalf. However, little is known about the extent to which lay evaluations of forceful actions align with or diverge from official use-of-force policies and heuristics that officers use to choose appropriate levels of responsive force. Moreover, it is impossible to accurately compare official policies and lay intuitions without first measuring the perceived severity of a set of representative actions. To map these psychometric scale values precisely, we presented participants with minimal vignettes describing officer and civilian actions that span the entire range of force options (from polite dialogue to lethal force), and asked them to rate physical magnitude and moral appropriateness. We used Bayesian methods to model the ratings as functions of simultaneously estimated scale values of the actions. Results indicated that the perceived severity of actions across all physical but non-lethal categories clustered tightly together, while actions at the extreme levels were relatively spread out. Moreover, less normative officer actions were perceived as especially morally severe. Broadly, our findings reveal divergence between lay perceptions of force severity and official law enforcement policies, and they imply that the groundwork for disagreement about the legitimacy of police and civilian actions may be partially rooted in the differential way that action severity is perceived by law enforcement relative to civilian observers.

A preprint of the article is here, and the published article is here. Full citation:

Celestin, B. D., & Kruschke, J. K. (2019). Lay evaluations of police and civilian use of force: Action severity scales. Law and Human Behavior, 43(3), 290-305.

## Sunday, May 19, 2019

### The Statistician's Error?

I just attended (and gave a talk at) the United States Conference on Teaching Statistics (USCOTS). Big thanks to Allan Rossman, who brilliantly MC-ed the conference.

• One keynote, by Ron Wasserstein and Allen Schirm, was about "moving beyond p < .05". In their recent editorial in The American Statistician (with Nicole Lazar), a primary recommendation was Don't Say "Statistically Significant". Decisions with p values are about controlling error rates, but dichotomous decisions let people slip into "bright line" thinking wherein p < .05 means real and important and p > .05 means absent and unimportant.

• Another keynote, in a talk by Kari Lock Morgan, was about three possible explanations of an apparent effect of a manipulation, namely (i) genuine cause, (ii) random difference at baseline before manipulation, and (iii) random difference after manipulation.

I returned home from the conference this morning. To relax, after the intensive pre-conference preparation and during-conference insomnia, I opened a book of poetry and came across a poem by Aaron Fogel that (inadvertently) reflects upon both talks. It's a poem about how editors of printed matter make decisions regarding errors, about three sources of error, and about distinguishing among those sources. And about the role of editors (and perhaps of statisticians?).

The Printer's Error
Fellow compositors
and pressworkers!
I, Chief Printer
Frank Steinman,
having worked fifty-
and served five years
as president
of the Holliston
Printer's Council,
being of sound mind
though near death,
leave this testimonial
concerning the nature
of printers' errors.
First: I hold that all books
and all printed
matter have
errors, obvious or no,
and that these are their
most significant moments,
not to be tampered with
by the vanity and folly
of textual editors.
Second: I hold that there are
three types of errors, in ascending
order of importance:
One: chance errors
of the printer's trembling hand
not to be corrected incautiously
by foolish professors
and other such rabble
because trembling is part
of divine creation itself.
Two: silent, cool sabotage
by the printer,
the manual laborer
whose protests
have at times taken this
historical form,
covert interferences
not to be corrected
censoriously by the hand
of the second and far
more ignorant saboteur,
the textual editor.
Three: errors
from the touch of God,
divine and often
obscure corrections
of whole books by
nearly unnoticed changes
of single letters
sometimes meaningful but
by preemptive commentary
the better.
Third: I hold that all three
sorts of error,
errors by chance,
errors by workers' protest,
and errors by
God's touch,
are in practice the
same and indistinguishable.
Therefore I,
Frank Steinman,
typographer
for thirty-seven years,
and cooperative Master
of the Holliston Guild
eight years,
being of sound mind and body
though near death
urge the abolition
of all editorial work
whatsoever
and manumission
from all textual editing
to leave what was
as it was, and
as it became,
except insofar as editing
is itself an error, and
therefore also divine.