Wednesday, June 6, 2012

Mixture of Normal Distributions

In this post I show a simple illustration of a mixture of normal distributions. For the examples, we assume we have metric values generated by a mixture of two different normal distributions, which I'll call clusters. We don't know which cluster each datum came from. Our goal is to estimate the probability that each score came from each of the two clusters, along with the means and SD of the normal distributions that describe the clusters.
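Before turning to JAGS, the key quantity — the posterior probability that a given datum belongs to a cluster — can be previewed by Bayes' rule when the parameters are treated as known. (In the full model below, the assignments and parameters are estimated jointly.) This is a stand-alone sketch in Python, not part of the JAGS code; the function names and fixed parameter values are my own:

```python
import math

def dnorm(x, mu, sd):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def prob_cluster2(y, mu1, mu2, sd, p2=0.5):
    """P(datum y came from cluster 2), by Bayes' rule, for a
    two-cluster normal mixture with known means, shared SD,
    and prior cluster probability p2."""
    like1 = dnorm(y, mu1, sd) * (1.0 - p2)
    like2 = dnorm(y, mu2, sd) * p2
    return like2 / (like1 + like2)

# With the generating values used below (means 100 and 145, SD 15),
# a datum halfway between the means is maximally ambiguous:
print(prob_cluster2(122.5, 100, 145, 15))   # 0.5
print(prob_cluster2(145.0, 100, 145, 15))   # close to 1
```

A datum at a cluster's own mean is assigned to that cluster with high probability, while data between the means are genuinely uncertain — exactly the pattern the posterior plots below show.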

The model specification (for JAGS): The model assumes that the clusters have the same standard deviation but different means.

model {
    # Likelihood:
    for( i in 1 : N ) {
      y[i] ~ dnorm( mu[i] , tau )
      mu[i] <- muOfClust[ clust[i] ]
      clust[i] ~ dcat( pClust[1:Nclust] )
    }
    # Prior:
    tau ~ dgamma( 0.01 , 0.01 )
    for ( clustIdx in 1: Nclust ) {
      muOfClust[clustIdx] ~ dnorm( 0 , 1.0E-10 )
    }
    pClust[1:Nclust] ~ ddirch( onesRepNclust )
}

The data specification:

# Generate random data from known parameter values:
trueM1 = 100
N1 = 200
trueM2 = 145 # 145 for first example below; 130 for second example
N2 = 200
trueSD = 15
effsz = abs( trueM2 - trueM1 ) / trueSD
y1 = rnorm( N1 )
y1 = (y1-mean(y1))/sd(y1) * trueSD + trueM1
y2 = rnorm( N2 )
y2 = (y2-mean(y2))/sd(y2) * trueSD + trueM2
y = c( y1 , y2 )
N = length(y)

# Must have at least one data point with fixed assignment 
# to each cluster, otherwise some clusters will end up empty:
Nclust = 2
clust = rep(NA,N) 

clust[which.min(y)]=1 # smallest value assigned to cluster 1
clust[which.max(y)]=2 # highest value assigned to cluster 2 
dataList = list(
    y = y ,
    N = N ,
    Nclust = Nclust ,
    clust = clust ,
    onesRepNclust = rep(1,Nclust)
)

Results when mean of cluster 2 is 3 standard deviations away from mean of cluster 1: The posterior recovers the generating values fairly well.

Upper panel: Data with underlying normal generators.
Lower panel: For each datum, the posterior probability that it is assigned to cluster 2.

Marginal posterior on cluster means and SD.

Pairs plot of cluster means and SD.

Results when mean of cluster 2 is 2 standard deviations away from mean of cluster 1: There is lots of uncertainty. See captions for discussion.

Lower panel: Notice that the lowest and highest data values have fixed cluster assignments, but all the other data values have posterior probabilities of cluster assignment noticeably far from 0 or 1.

Notice the bimodal distribution of sigma (SD).

Notice in the right column that when sigma is small, around 15, the cluster means are near their true generating values, but when sigma is large, the cluster means get close together. Essentially, the posterior is bimodal: either there are two clusters, with smaller sigma and distinct means, or there is effectively one cluster, with larger sigma and both cluster means set near the mean of the pooled data.
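This trade-off can be glimpsed even without MCMC: when the generating means are only two SDs apart, a single wide normal describes the pooled data almost as well as the true two-component mixture, which is why both interpretations carry posterior mass. A rough stand-alone check in Python (the simulated data, seed, and variable names are my own, not the post's R code):

```python
import math
import random

random.seed(1)
# Data like the second example: two clusters of 200, means 2 SDs apart
y = ([random.gauss(100, 15) for _ in range(200)] +
     [random.gauss(130, 15) for _ in range(200)])

def logdnorm(x, mu, sd):
    """Log of the normal probability density."""
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

# Two-cluster description: small sigma, distinct means
ll_two = sum(math.log(0.5 * math.exp(logdnorm(x, 100, 15)) +
                      0.5 * math.exp(logdnorm(x, 130, 15))) for x in y)

# One-cluster description: one mean, a larger sigma matching the overall spread
grand_mean = sum(y) / len(y)
grand_sd = math.sqrt(sum((x - grand_mean) ** 2 for x in y) / len(y))
ll_one = sum(logdnorm(x, grand_mean, grand_sd) for x in y)

# The two-cluster log likelihood is higher, but only modestly so per datum,
# so the one-wide-cluster interpretation remains plausible.
print(ll_two, ll_one)
```

With three SDs of separation (the first example) the gap between the two descriptions is much larger, which is why that posterior is unimodal and recovers the generating values cleanly.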


  1. Great post, thanks very much.

    I've computed mixture models in the standard EM framework, but the Bayesian approach is much nicer because it produces a full posterior distribution.

    Any thoughts on easily extending this to an unknown number of clusters? I'm hoping that doesn't open the whole transdimensional can of worms...

    Thanks very much for any thoughts!

  2. The case of an unknown number of components in a normal mixture is covered in this article:

    "On Bayesian Analysis of Mixtures with an Unknown Number of Components (with discussion)" by Sylvia Richardson and Peter Green.

    Very clear post, thank you.

    However, I'm not able to run the example: JAGS is returning an error (Error in node clust[ ] , Invalid parent values).
    I would be very interested to monitor the stochastic nodes pClust and clust.

    Any thoughts?

    thanks again

  4. Without more info about your program and error message, it's difficult to say. But it looks like the error message came during the jags.model() function, not later? I suspect it has to do with initializing. Did you make sure to use the code highlighted in red in the blog post? Did you make sure NOT to do any explicit initializing, i.e., to just let JAGS do its own initializing?

  5. Dear John,

    Thank you for the helpful comments. The error was related to the initial values that I was supplying.

    I will take advantage of this post to ask you another question related to the function dcat that might be of general interest for other readers.

    Many thanks


    Suppose that you want to sample from a distribution that is a mixture of, let's say, 5 Gaussians and suppose that the Gaussians differ only in terms of their means.

    Prior knowledge suggests that the means are:

    muOfClust[ 1 ] <- 6
    muOfClust[ 2 ] <- 7
    muOfClust[ 3 ] <- 8
    muOfClust[ 4 ] <- 9
    muOfClust[ 5 ] <- 10

    Also, we assume that each component is equally likely, so each probability is 1/5:

    pClust[ 1 ] <- 0.2
    pClust[ 2 ] <- 0.2
    pClust[ 3 ] <- 0.2
    pClust[ 4 ] <- 0.2
    pClust[ 5 ] <- 0.2

    Now, if I write

    clust ~ dcat( pClust[ 1 : 5 ] )
    D <- muOfClust[ clust ]
    t ~ dnorm( D , some_precision )

    this doesn't work, so should I use something like:

    pClust[ 1 : 5 ] ~ ddirch( 1 , 1 , 1 , 1 , 1 )
    clust ~ dcat( pClust[ 1 : 5] )

    D <- muOfClust[ clust ]
    t ~ dnorm( D , some_precision )

  6. What if I want to have something like:
    y[i] ~ dbimodal(mu[i],mu2[i],tau[cond[i]])

    So instead of estimating the probability that a score came from each of two clusters, I just want y to be drawn from a bimodal distribution with two different means and one tau.
    Is there something like dnorm that I haven't found, or can I define a distribution myself?

    With kind regards