In a previous post from six years ago, I fit a sinusoidal trend, with an auto-regressive component, to daily temperature data. (Spoiler alert: It's still getting warmer.) Recently I've received inquiries about the script for that analysis. I disinterred the ancient script, updated it, and grabbed more recent temperature data. The script and data file are linked below.
The result of the new analysis:
As you can see from the plot (above), the slope of the linear spine of the sinusoidal variation is 0.068 degrees Fahrenheit per year. The 95% HDI on the estimate spans zero, just as it did with the smaller data set in the previous post from six years ago. But I'm pretty sure that if this city were put into a big hierarchical model with lots of other cities across the globe, the high-level estimate of slope on the linear spine would be clearly greater than zero.
But evidence for global warming is not the point of this post. The point is to link the full script and data file. Here they are: R script; data. Hope this is helpful.
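For readers who want the gist of the model without opening the script, here is a minimal sketch in R of the deterministic "linear spine plus sinusoid" mean function being fit. The names `beta0` and `amp` match parameters quoted in the comments below; `beta1` and `phase` are hypothetical names used only for this sketch, and the full script additionally models the noise around this mean, including the auto-regressive component.

```r
# Sketch of the "linear spine + annual sinusoid" mean function.
# beta0/amp match parameter names mentioned in the comments; beta1 and
# phase are placeholder names -- see the linked R script for the real model.
trend_mean <- function(day, beta0, beta1, amp, phase) {
  year <- day / 365.2425                       # convert days to years
  spine <- beta0 + beta1 * year                # e.g., beta1 ~ 0.068 deg F per year
  spine + amp * cos(2 * pi * (year - phase))   # annual cycle
}

# Quick illustration with made-up values:
days <- 0:(10 * 365)
plot(days, trend_mean(days, beta0 = 55, beta1 = 0.068, amp = 20, phase = 0.5),
     type = "l", xlab = "Day", ylab = "Mean temperature (deg F)")
```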
Dear John, I am going through your book and it's very interesting. I have a couple of questions, though, about your example on learning of filtration and condensation. You cite that example in the NHST chapter as an example of multiple comparisons. As I understood from the description of the filtration/condensation example, all the data are independent. You are asking the same subjects to learn different groups, but the data are separate for each group: G1, G2, G3, G4. How is that a multiple comparison? I thought a multiple comparison, as described for NHST, was either multiple tests on the same data or the same test on multiple data sets. Isn't your example just 4 separate tests on 4 different data sets?
Your comments will be greatly appreciated.
First, to alert other readers so there is no confusion: this question has absolutely nothing to do with the blog post. It's not even a question about Bayesian methods; it's a question about NHST. Now, on to the question. Your question is actually quite general, and is central to controlling for multiple tests in NHST. The issue is what should be the set, or family, of tests over which the error rate is controlled. Should it be all the tests done on one "batch" of data? Or, as is conventional in ANOVA, should there be separate families for every main effect and every type of interaction? Or should it be all the tests one scientist does in a lifetime? Please see more discussion in Section 11.4 of DBDA2E, which is devoted to this topic.
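(A quick numerical illustration of why the family definition matters, added here for readers; it is not from DBDA2E. With a Bonferroni correction in base R, the very same test result passes or fails depending on how many tests are counted in the family:)

```r
# The same nominal p-value, corrected under two different definitions
# of the "family" of tests (illustrative numbers only):
p <- 0.02
p.adjust(rep(p, 2),  method = "bonferroni")[1]  # family of 2 tests:  0.04 (significant at .05)
p.adjust(rep(p, 20), method = "bonferroni")[1]  # family of 20 tests: 0.40 (not significant)
```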
Thanks, I understand much better now. I appreciate your book a lot; I come from an NHST background and avoided Bayesian methods for many years because I thought they were complicated, till I saw your book. Sorry for posting here, but I don't know any other way to connect with you.
I wanted to ask another question. I am working on something which requires me to generate hypotheses, or tests, from the data itself. So, if I have the data D, I split it into D1 and D2. Using D1 I generate patterns with the Apriori search algorithm, which produces a lot of interesting patterns (or hypotheses). I then test them using a Bayesian model on D2 (which is a separate dataset). One thing I find is that there is a lot of correlation in the patterns.
Let's say the dataset has variables X1 ... XN and Y. Y is dichotomous, i.e., either 0 or 1. I generate several patterns using the Apriori algorithm (P1 ... PN); each P is some combination of Xs. The patterns are correlated, e.g., a single row in D2 can have both P1 and P2 present. How do I structure the data for this? Do I have to separate the data for each pattern so that they are mutually exclusive? That is, if a row in D2 has both P1 and P2, can I include it as a data point for P1 and for P2 separately? Apologies if I am not making a lot of sense.
You are confronting a fundamental issue in statistical methods, namely 'model selection'. There are tomes written on the topic of model selection, also known as model comparison. Chapter 10 of DBDA2E discusses one Bayesian approach. A case of model selection, namely variable selection in multiple regression, is discussed in Section 18.4 of DBDA2E. But for frequentist approaches and other Bayesian approaches, you'll have to explore the literature. In particular, it sounds like you are interested in something called 'cross validation'.
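To make the data-structuring part of the question concrete, here is a minimal sketch in R, not from DBDA2E, of one common way to handle it: each mined pattern becomes its own 0/1 indicator column, and a row that matches both P1 and P2 simply gets a 1 in both columns, so the patterns need not be made mutually exclusive; the correlation among them is then the model's job (see the variable-selection discussion in Section 18.4). The `arules` package supplies the Apriori search, and an ordinary logistic regression stands in for the Bayesian test; all data and variable names are hypothetical.

```r
# Sketch: mine patterns on D1, encode them as (possibly overlapping)
# indicator columns, and test them on D2. All names are hypothetical.
library(arules)

set.seed(42)
# Fake data: binary predictors X1..X5 and a dichotomous outcome Y.
n <- 1000
dat <- as.data.frame(matrix(rbinom(n * 5, 1, 0.3), ncol = 5,
                            dimnames = list(NULL, paste0("X", 1:5))))
dat$Y <- rbinom(n, 1, plogis(-1 + dat$X1 + dat$X2))

# Split into D1 (pattern discovery) and D2 (testing).
idx <- sample(n, n / 2)
D1 <- dat[idx, ]
D2 <- dat[-idx, ]

# Apriori search for frequent itemsets among the Xs, on D1 only.
trans <- as(D1[paste0("X", 1:5)] == 1, "transactions")
patterns <- apriori(trans,
                    parameter = list(supp = 0.05, target = "frequent itemsets"))

# Each pattern becomes a 0/1 column in D2. A row matching both P1 and
# P2 gets a 1 in both columns; they are NOT made mutually exclusive.
pat_items <- as(items(patterns), "list")
P <- sapply(pat_items,
            function(vars) as.integer(rowSums(D2[vars] == 1) == length(vars)))
colnames(P) <- paste0("P", seq_along(pat_items))

# Test the patterns on D2; an ordinary logistic regression stands in
# here for the Bayesian model.
fit <- glm(D2$Y ~ P, family = binomial)
summary(fit)
```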
I translated your model into Stan, for some reason!
R script here.
Picture of script output.
Very cool! Thank you for sharing.
Cool. So which takes longer to run, JAGS or Stan? (all else being equal, of course, like number of parallel processors, and ESS of resulting chains)
I fit your model in both Stan and JAGS multiple times on my laptop and clocked the times. (I was using your post as an excuse to mess around with JAGS for the first time.)
Stan:
5-10 minutes
4000 post-warmup draws
ESS ranged from ~2800 for `nu` to ~3500 for `amp`
JAGS:
16-20 minutes
12000 saved steps
ESS ranged from ~2100 for `nu` to ~7600 for `beta0`
In both cases I ran 4 cores in parallel. For Stan I used the settings displayed in the posted script. I'm unfamiliar with JAGS so I just used your settings without altering them.
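For anyone who wants to run the same head-to-head comparison, a rough sketch of the timing harness follows. The file names, `dataList`, and iteration settings are placeholders (the real settings live in the scripts linked above); the iteration counts are chosen to match the draw counts reported above, and the monitored parameters are the ones whose ESS is quoted.

```r
# Rough timing harness for the Stan-vs-JAGS comparison. File names,
# dataList, and settings are placeholders for the real scripts.
library(rstan)
library(runjags)
library(coda)

stan_time <- system.time(
  stan_fit <- stan(file = "model.stan", data = dataList,
                   chains = 4, cores = 4,
                   iter = 2000)              # 4 x 1000 = 4000 post-warmup draws
)

jags_time <- system.time(
  jags_fit <- run.jags(model = "model.txt", data = dataList,
                       monitor = c("beta0", "amp", "nu"),
                       n.chains = 4, method = "parallel",
                       sample = 3000)        # 4 x 3000 = 12000 saved steps
)

# Effective sample sizes, for comparing chain efficiency:
summary(stan_fit)$summary[, "n_eff"]
effectiveSize(as.mcmc.list(jags_fit))
```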
Global warming is the increase in the average temperature of the Earth's near-surface air and oceans since the mid-twentieth century, and its projected continuation.