Saturday, December 13, 2008

Nasty Post IV



I've been thinking about research and the popularization of studies a lot in the last few days, for reasons which are evident a few posts down on this here blog. In particular, I've been thinking about what makes a study smell all rotten to me, and I've come up with a partial list:

1. The absolutely most awful case is the one where the researcher refuses to show you any of the work which presumably led to some results, often rather sensational ones. You might not think that something like this could ever happen, but it does, my friend, it does. It's as if someone got a set of data, did some number-crunching on it and then widely posted the findings but refused to let anyone see how the numbers were actually crunched! Well, it's not 'as if'. This actually happened not long ago with a sensational argument widely written about in the right-wing media. When I e-mailed the researcher for the paper, he told me to do my own calculations from the data set (a very large one). Stunning, is it not? And very much against the idea of transparency in academic work.

2. Not much better is the practice of omitting large chunks of analysis or data from the final paper. Both of these practices resemble the problem of voting without a paper trail: you can't back-track someone's work. Indeed, you can't check it at all.

3. It's become more and more common for the press release about an article to appear BEFORE the article itself is available. This means that if the press release is popularized, anyone who wants to criticize its conclusions is handicapped by not having access to the actual study. This was done with a fairly recent article purporting to show that women are less intelligent than men. I listened to the BBC debating the article when the article itself was not yet available at all. Talk about slanting the playing field! By the time the article became available, nobody was interested in the topic or in how bad the article was.

4. Both studies themselves and some people who popularize studies of a certain flavor present reviews of the existing literature. A literature review is the standard beginning of most studies, and if you know the field at all, just scanning through the included references may be enough to tell you that the study will be biased. The same is true of some popularizers and their work (coughDavidBrookscough).

To see what I mean, think of M&Ms. They come in all sorts of colors. Now suppose you are entertaining someone from Mars who has never seen those little pieces of chocolate. You pick out all blue M&Ms and hand them to your visitor, at the same time telling it that all M&Ms are blue. The visitor is quite likely to believe you, given the pile of blue M&Ms in its hand.

But of course blue is not the only color of M&Ms. This example is meant to tell you (in a sophisticated way) how you can pick from the existing research only those studies which support your argument, and how that selection may end up looking like the field's actual consensus, as the sketch after this point illustrates. Knowing that it is not the consensus requires knowing more about the field.

Now, it's one thing to find something like the Blue M&M Rule (named by me!) in the hands of David Brooks. It's a totally different thing to find it used in a peer-reviewed article. The latter should never happen. That it does means that someone in that field is not doing the work of proper criticism (and that applies even more strongly to letting really bad statistical analyses get through).
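If you like to see the trick in more concrete terms, here is a minimal sketch in Python. The study names and effect sizes below are entirely made up for illustration; the point is only that a review built out of the 'blue' studies ends up looking nothing like the field as a whole.

```python
# A minimal sketch of the Blue M&M Rule. The studies and effect sizes are
# invented for illustration, not taken from any real literature.
import statistics

# Imagine the full literature: each entry is (study, estimated effect size).
all_studies = [
    ("Study A", 0.40), ("Study B", -0.05), ("Study C", 0.02),
    ("Study D", 0.35), ("Study E", -0.10), ("Study F", 0.01),
    ("Study G", 0.03), ("Study H", -0.02),
]

# A cherry-picked review cites only the studies that support the argument.
cherry_picked = [effect for name, effect in all_studies if effect > 0.2]
full_field = [effect for name, effect in all_studies]

print("Mean effect in the cherry-picked review:", round(statistics.mean(cherry_picked), 2))
print("Mean effect across the whole field:     ", round(statistics.mean(full_field), 2))
# The cherry-picked pile suggests a large, consistent effect, while the
# field as a whole averages out to almost nothing.
```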

5. A very common mistake in the studies I have criticized on this blog is the fallacy of assuming that, if a particular theory leads to a certain prediction, then finding that prediction realized in some data set means that the particular theory is true.

You have probably come across this in some other context. Suppose we call the theory A and the prediction B, and the way B is derived from A gives us:

If A, then B.

But this does not necessarily mean that

If B, then A

is also true. (Suppose that A = 'Echidne has just eaten a cheese sammich' and B = 'there's food in her tummy'. 'If A, then B' can be true while 'If B, then A' is not, because I may have eaten something else instead.)

This fallacy is extremely common among the narrowly defined evolutionary psychology studies (often called E.P. studies to distinguish them from general evolutionary psychology studies or e.p. studies), the ones which go out to hunt for support (B) for a particular theory (A) and come home with nothing else. This ignores all the other theories (C, D, E, etc.) that might have produced the same prediction B, as the sketch below spells out.
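For those who like their logic spelled out, here is a minimal sketch in Python of why observing B cannot, by itself, establish A. The code just enumerates the four possible truth-value combinations; the sandwich reading in the comments restates the example above.

```python
# A minimal sketch of why "If A, then B" does not give us "If B, then A".
from itertools import product

def implies(p, q):
    # "If p, then q" is false only when p is true and q is false.
    return (not p) or q

for a, b in product([True, False], repeat=2):
    if implies(a, b) and not implies(b, a):
        print(f"A={a}, B={b}: 'If A, then B' holds, but 'If B, then A' does not.")

# Prints the case A=False, B=True: no cheese sammich was eaten, yet there is
# food in the tummy (something else was eaten), so observing B cannot prove A.
```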

6. The popularization bias. I have written about that many times before, but it's certainly true that a study telling us how similar men and women are in some respect will not be picked up by all those popularizers. Nope. But even a terrible study pretending to have found some significant gender differences will be popularized, at least if it accords with various hidden biases. It may become part of our 'received knowledge' and remain that way, even if many other studies later show it to be wrong. The debunking of study findings (such as the idea that men all over the world prefer women with a waist-to-hip ratio of 0.7) is not exciting enough to be popularized. This means that popularizations matter and that we should criticize bad ones, because they exert long-term influence.