Monday, January 04, 2010

Are earthquakes more likely on a Sunday?
In which the earth moves

I experienced my second earthquake last month (woke me up, as did the first one in 2008), so I was interested to read this paper (via Prof. Rabbett) from Pieter Vermeesch about statistical significance.

Question to the stats-mavens (you know who you are): don't you really have 7 hypotheses (e.g. Monday is the most common, Tuesday is the most common, ...), from which you've selected one (Sunday) after looking at your data? Doesn't this need to be accounted for?

From Vermeesch, P., 2009. Eos Trans. Am. Geophys. Union, 90 (47), p.443
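The selection-after-the-fact worry in the question above can be illustrated with a small simulation. This is a sketch with invented numbers (the catalogue size and trial count are assumptions, not the paper's data): testing one pre-specified day at the 5% level keeps the false-positive rate near 5%, but testing whichever day happens to be most common, chosen after looking, inflates it.

```python
import random
import math

random.seed(42)

N_QUAKES = 700     # hypothetical catalogue size, not Vermeesch's data
N_TRIALS = 2000
P = 1.0 / 7.0
Z_CRIT = 1.645     # one-sided 5% cutoff for a standard normal

def day_counts(n):
    """Counts per weekday for n earthquakes falling uniformly at random."""
    counts = [0] * 7
    for _ in range(n):
        counts[random.randrange(7)] += 1
    return counts

def z_score(count, n):
    """Normal approximation to the binomial excess for a single day."""
    return (count - n * P) / math.sqrt(n * P * (1 - P))

fixed_hits = 0    # test Sunday (index 0), chosen before seeing the data
posthoc_hits = 0  # test whichever day is most common, chosen afterwards
for _ in range(N_TRIALS):
    counts = day_counts(N_QUAKES)
    if z_score(counts[0], N_QUAKES) > Z_CRIT:
        fixed_hits += 1
    if z_score(max(counts), N_QUAKES) > Z_CRIT:
        posthoc_hits += 1

fixed_rate = fixed_hits / N_TRIALS
posthoc_rate = posthoc_hits / N_TRIALS
print(f"pre-specified day: {fixed_rate:.3f}, post-hoc max day: {posthoc_rate:.3f}")
```

The post-hoc rate comes out several times higher than 5%, which is why the seven implicit hypotheses need either a multiple-comparison correction (e.g. Bonferroni) or a single omnibus test such as the chi-squared discussed in the comments.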


7 comments:

Political Scientist said...

Or, thinking about it, perhaps the 6 degrees of freedom deal with this? The more I think about it, the less I understand it...

pj said...

I think the chi-squared is a suitable test statistic here (we're simply testing whether there is a uniform distribution, rather than whether earthquakes are more common on Sunday). But the objection to p-values here, that they are dependent on sample size, is spurious: we want our test statistic to be dependent on sample size. All the author is really saying is that just because something is statistically significant doesn't mean it is physically significant - e.g. it may be due to systematic variation in the underlying data. But this is hardly a novel idea, or even a novel finding.

It is quite common in experiments to find that even things like randomly allocated case numbers can have a statistical predictive value (e.g. because the samples are always processed in the same order).
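pj's two points, that the chi-squared goodness-of-fit test is the right omnibus test and that its value grows with sample size, can be sketched as follows. The counts are invented for illustration (they are not the catalogue analysed in the paper), and the 5% critical value for 6 degrees of freedom is the standard tabulated 12.592.

```python
# Pearson chi-squared goodness-of-fit against a uniform week, by hand.
# The counts below are made up for illustration; they are NOT the
# earthquake catalogue analysed by Vermeesch.
counts = [95, 98, 102, 100, 97, 105, 133]  # Mon..Sun, say

def chi_squared_uniform(obs):
    """Pearson chi-squared statistic against a uniform distribution."""
    n = sum(obs)
    expected = n / len(obs)
    return sum((o - expected) ** 2 / expected for o in obs)

CRIT_5PCT_6DF = 12.592  # 5% critical value, chi-squared with 6 d.f.

stat = chi_squared_uniform(counts)
stat_x10 = chi_squared_uniform([10 * c for c in counts])

print(f"n={sum(counts)}:  chi2 = {stat:.2f},  significant? {stat > CRIT_5PCT_6DF}")
print(f"n={10 * sum(counts)}: chi2 = {stat_x10:.2f}, significant? {stat_x10 > CRIT_5PCT_6DF}")
```

At fixed day-of-week proportions the statistic scales linearly with n, so the same Sunday excess that is non-significant at n = 730 becomes highly significant at n = 7300. That is exactly the sense in which a tiny p-value at a huge sample size can correspond to a physically trivial effect.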

pj said...

Reading the blog it also seems that they're making a point about effect sizes - or what we'd call 'clinical significance' in my field.

LemmusLemmus said...

I'm glad that pj came along to answer the question about how appropriate the chi-square test is. That's what I suspected too, but the last time I thought about chi-square tests was around 2002 and I would have had to read up to be sure. Which nicely illustrates that your characterization of me as a statistics maven is ludicrous. I'm really not.

I also agree with everything else pj said.

Political Scientist said...

Thank you, mavens!

"It is quite common in experiments to find that even things like randomly allocated case numbers can have a statistical predictive value (e.g. because the samples are always processed in the same order)."

Is it right to say, then, that statistical significance is a necessary, but not sufficient, condition? For geological/clinical significance there has to be a (causative) mechanism?

I feel a bit embarrassed asking about this, but I'd rather be embarrassed and understand than keep quiet and not. I've been following the thread at Eli R's place, but discussion seems to have focused on whether earthquakes are independent. It seems to me this would not lead to clustering on a particular day of the week, though.

pj said...

Yeah, I sort of think so - although, of course, you may still fail to reach statistical significance for a real phenomenon, so strictly speaking it isn't even necessary in anything other than a pragmatic sense.

Some people like to get all Bayesian about this - particularly when talking about things like psychic phenomena, or homeopathy, but all it is really saying is that there are issues of plausibility over and above simple frequentist statistical testing.

Independence is important for using the chi-squared test in particular - I doubt earthquakes are independent events but it would seem unlikely to cause clustering on a particular day of the week unless you are being overly affected by small numbers or, I suppose, a small number of fundamental events causing a larger number of recorded events.
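The "small number of fundamental events" caveat above can be sketched with a toy simulation (the cluster size and catalogue size are assumptions, not real seismology): if each mainshock drags a batch of aftershocks onto the same weekday, the chi-squared test sees many more events than there are independent ones, and its false-positive rate against a true uniform week explodes.

```python
import random

random.seed(1)

TRIALS = 1000
CRIT_5PCT_6DF = 12.592  # 5% critical value, chi-squared with 6 d.f.
N_EVENTS = 200          # recorded events per simulated catalogue
CLUSTER = 10            # recorded events per independent mainshock

def chi2_uniform(obs):
    """Pearson chi-squared statistic against a uniform distribution."""
    n = sum(obs)
    e = n / len(obs)
    return sum((o - e) ** 2 / e for o in obs)

def simulate(n_clusters, cluster_size):
    """Weekday counts when events arrive in same-day clusters."""
    counts = [0] * 7
    for _ in range(n_clusters):
        counts[random.randrange(7)] += cluster_size
    return counts

# Independent events: 200 clusters of size 1.
indep_rej = sum(
    chi2_uniform(simulate(N_EVENTS, 1)) > CRIT_5PCT_6DF for _ in range(TRIALS)
) / TRIALS
# Clustered events: 20 mainshocks, each recorded 10 times on one day.
clustered_rej = sum(
    chi2_uniform(simulate(N_EVENTS // CLUSTER, CLUSTER)) > CRIT_5PCT_6DF
    for _ in range(TRIALS)
) / TRIALS

print(f"independent events: rejection rate {indep_rej:.2f}")
print(f"clustered events:   rejection rate {clustered_rej:.2f}")
```

With genuinely independent events the rejection rate sits near the nominal 5%; with clustering it shoots up, because the effective sample size is the number of mainshocks, not the number of recorded events.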

The thing about the causative-mechanism answer is that in many fields, and I'm thinking particularly about medicine here, although we're interested in mechanism, all that really matters is the effect (i.e. efficacy). But the intrinsic plausibility of the proposed mechanism (e.g. morphine vs. homeopathy) is part of the wider consideration when interpreting a significance test as part of a study. For instance, if a study seems to be shoddily designed (e.g. not blinded in a drug trial), then you aren't so interested in the tiny p-value.

Political Scientist said...

PJ, thanks for taking the time on this - I think I understand.