Friday, September 22, 2006

On Matters Medical 1



Two health-related articles gave me that internal ping of a "teachable moment" (hate the term): one on the supposed connection between breast implants and suicide rates, the other on the new proposal to routinely test most Americans for HIV infection. The pings in the two cases are different, but both have to do with my feeling that most people are far too timid about interrogating and analyzing medical studies. Remember this quote from an earlier post of mine:

In my opinion, the most important insight in this area right now is Deena Skolnick's demonstration of the power of neuroscience to cloud people's minds. She took explanations of psychological phenomena that had been crafted to be "awful", and which (in their plain form) were recognized as bad both by novices and by experts, and added some (totally irrelevant) sentences about brain anatomy and physiology. With the added neuroscientific distraction, the bad explanations were perceived as satisfactory ones. [The paper with the details of this research has not yet been published -- I promise to discuss it in more detail once the details are generally available. The mass media certainly offer plenty of anecdotal evidence these days for Skolnick's idea.]

It's not just neuroscience jargon that we respect. It's all science-sounding jargon, and my guess is that it's because we don't know what the hell the stuff means, so it must be something totally undebatable and true. In some ways this attitude is no better than the attitude of those who deny evolution any scientific standing. Both make science into a question of faith, and that is a very bad outcome for both science and faith.

Hence my decision to start writing short posts on some of the questions that we as educated consumers of science should understand better. This post will address correlation and causality. The next one will talk about how we decide which people should be screened for diseases. I hope that you will enjoy the ride.

The study on breast implants begins like this:

Boosting breast size with plastic surgery has been linked to a significantly higher suicide rate among women in a new 15-year study.

To the credit of the writer of the article, the rest of the story explains why this sentence is unlikely to mean that breast implants cause suicide. The more likely reason for the link between the two is this:

The researchers discovered the suicide rate is 73 percent higher in participants with breast implants relative to the control group. The connection between breast implants and suicide was not tested and no direct link was found between the two.

However, Bisson said previous studies have characterized women who receive breast implants by a low self-esteem, lack of self-confidence and more frequent mental illnesses such as depression.

Not too bad, on the whole. The snag is that so many people read only that first sentence and then start telling people that breast implants cause suicide.

In general terms, two phenomena are correlated when they have a nonrandom relationship to each other. For example, they might both grow larger over time, or both grow smaller, or one might always grow larger when the other grows smaller. That is a fuzzy way of defining positive correlation (the two things move in the same direction) and negative correlation (they move in opposite directions).

It is exceedingly common in political punditry to take a correlation between two variables, ignore everything else that might be going on, and declare one of them the cause and the other the effect. Feminism is often used in these morality tales by the right-wingers, and always labeled as the cause. Usually the other variable is something that is bad for society, say, increasing crime, and it is then labeled as the effect. End of discussion.
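Before going further, it may help to pin the definition down in numbers. Here is a minimal sketch in Python (numpy and all the data are my own invention, purely for illustration): the Pearson coefficient runs from -1 to +1, and the sign tells you the direction of the relationship.

```python
# Toy illustration of positive and negative correlation; the numbers
# are invented and numpy is my choice of tool, not anyone's real data.
import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
exam_score = np.array([52, 55, 61, 64, 70, 74, 79, 83])  # rises as study time rises
tv_hours = np.array([9, 8, 8, 6, 5, 4, 3, 1])            # falls as study time rises

# np.corrcoef returns a correlation matrix; the off-diagonal entry
# is the Pearson correlation between the two series (-1 to +1).
print(np.corrcoef(hours_studied, exam_score)[0, 1])  # near +1: positive correlation
print(np.corrcoef(hours_studied, tv_hours)[0, 1])    # near -1: negative correlation
```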

Now punditry is not science, of course, but even in actual scientific studies many readers look at a correlation over time and see causality. This is very risky, for several good reasons:

First, many things correlate over time for purely random reasons or for reasons which are too obscure to understand. Statisticians often use the example of dress lengths and their correlation with all sorts of other phenomena: who wins the Super Bowl, which party gets elected to the U.S. Congress and so on.
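This is easy to see in a simulation. The sketch below (my own toy example, not data from any real study) generates pairs of completely independent random walks and checks how strongly they correlate anyway:

```python
# Two completely independent random walks, repeated over many seeds;
# any correlation between them is pure coincidence by construction.
import numpy as np

corrs = []
for seed in range(200):
    rng = np.random.default_rng(seed)
    walk_a = np.cumsum(rng.normal(size=200))  # say, dress lengths over time
    walk_b = np.cumsum(rng.normal(size=200))  # say, some unrelated series

    corrs.append(np.corrcoef(walk_a, walk_b)[0, 1])

# Even though the walks share no cause at all, the typical correlation
# is far from zero -- trending series correlate almost by default.
print(np.mean(np.abs(corrs)))
```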

Second, even if two things are causally related, we cannot simply label one the cause and the other the effect. Take a study I remember reading a few years ago which argued that parents who spent more time talking to their teenaged children had better-adjusted and happier children. The write-up of the study concluded that parents should talk more to their children, who will then be happier. But I can see at least an equally good case for reversing the cause-and-effect story: it's pretty obvious that teenagers who are not doing well and are unhappy and grumpy may not want to talk to their parents at all. Or the two could be related in a more complicated fashion, so that each variable is both a cause and an effect of the other! The point of this story is that the original results do NOT prove causality at all, even though the write-ups of the findings pretend that they do.
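Part of the trouble is that a correlation coefficient is perfectly symmetric, so the numbers alone cannot arbitrate. In this made-up toy model (mine, not the study's), happiness drives talk time, yet the figure that comes out is identical whichever way you read it:

```python
# A made-up model in which happiness causes talk time, not the other
# way around; the coefficient 0.8 and all the data are invented.
import numpy as np

rng = np.random.default_rng(1)
happiness = rng.normal(size=1000)                    # the true cause
talk_time = 0.8 * happiness + rng.normal(size=1000)  # the effect

# Correlation is symmetric, so the same number appears whichever
# variable we privately believe to be the cause.
print(np.corrcoef(happiness, talk_time)[0, 1])
print(np.corrcoef(talk_time, happiness)[0, 1])  # identical
```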

Another example of this problem is common in medical studies. One study found that patients with serious chronic diseases died earlier if they had fewer outside social contacts (going to church or bowling or movies with someone else). The conclusion was that social support allowed patients to live longer. Perhaps. But it could equally well be that someone who feels very bad (as people often do right before dying) will not want to go out bowling and such.

You may have been following the recent debate about weight and life expectancy. Well, the studies on that might also suffer from this reverse-causality problem, as people tend to lose a lot of weight right before death. Better studies can reduce the confusion over when a correlation really is causality, but the problem doesn't go away completely. One way to get more clarity is to employ a very simple observation: causes usually take place before effects. Thus, if a study can start with people only manifesting the likely cause (say, a certain body weight) and only later gather data on the likely effect (say, changes in health), we are on somewhat firmer ground in talking about causality.
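Here is a toy version of that logic (all the numbers are invented): in the model below, dying has nothing to do with a person's long-run weight, but the terminally ill lose weight just before the final measurement. Weight measured near death then looks dangerous in a way that weight measured years earlier does not:

```python
# Invented model: long-run weight has NO effect on dying here, but the
# dying lose weight shortly before the final measurement is taken.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
baseline_weight = rng.normal(70, 10, size=n)  # kg, measured years before follow-up
dies_soon = rng.random(size=n) < 0.1          # death is independent of weight here
weight_loss = np.where(dies_soon, rng.normal(8, 2, size=n), 0.0)
final_weight = baseline_weight - weight_loss  # measured just before follow-up ends

died = dies_soon.astype(float)
print(np.corrcoef(final_weight, died)[0, 1])     # negative: low weight "predicts" death
print(np.corrcoef(baseline_weight, died)[0, 1])  # near zero: the artifact is gone
```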

Third, only in laboratory circumstances can we come close to ruling out the possibility that whatever we are studying is really caused by some other variable that we are not measuring at all.
It's a little bit like saying that Jim has been killed and Jane has been taken to court for the crime, but perhaps in reality it was Jeremiah who did the killing. Or perhaps Jill and Jack hired Jane. Or perhaps Jim tried to kill Jesse, who is Jane's son, and Jane defended Jesse. And so on.

Often the underlying pattern of causes is of the kind the breast implant study suggested: the same third variable (here, low self-esteem and emotional problems) may cause both of the variables we are studying (here, getting body enhancements and committing suicide). But even then we might speculate much further by asking what caused the low self-esteem in the first place.
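A small simulation shows how a lurking third variable manufactures a correlation out of nothing. All the rates below are numbers I made up for illustration; in this model, implants have no effect whatsoever on suicide, yet the raw comparison shows a clearly elevated rate in the implant group, and the gap disappears once we compare within self-esteem groups:

```python
# Invented rates throughout; in this model implants have NO effect on
# suicide -- low self-esteem raises the probability of both.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
low_esteem = rng.random(n) < 0.2
implants = rng.random(n) < np.where(low_esteem, 0.10, 0.02)
suicide = rng.random(n) < np.where(low_esteem, 0.010, 0.002)  # independent of implants

# Raw comparison: the implant group shows a markedly higher suicide rate...
print(suicide[implants].mean() / suicide[~implants].mean())

# ...but comparing like with like, within each self-esteem group,
# the apparent effect of implants vanishes:
for group in (low_esteem, ~low_esteem):
    print(suicide[group & implants].mean(), suicide[group & ~implants].mean())
```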

An older example of this "missing third variable" case is a study which the religious right still seems to be disseminating on its websites. The study found that couples who lived together before getting married had higher rates of divorce than those who didn't cohabit first, and the conclusion was that living in sin causes divorce. It is much more probable that couples who are opposed to cohabitation tend also to be opposed to divorce, and so stay married even when their marriages are not working out.

Fourth, note that we shouldn't bow before the altar of laboratory experiments, either. There is a fairly well-established literature pointing out that putting a human or an animal into a laboratory doesn't just remove other external causes from consideration; it also puts the living being into what is pretty much an austere and unnatural prison, and this new possible cause must be taken into account in judging the results. For example, the sexual or maternal behavior of animals in a metal cage doesn't necessarily tell us how they would act in their natural habitat, and having students play stock market games on computers doesn't tell us how they'd invest in actual, messy real life.

All this may come across as very sceptical. So I hasten to add that I am a great fan of science, and that is precisely why I want to hold it to its own standards of logic, transparency, and experiments that can be repeated. But you could use my four points (and any others I forgot) as a checklist when you read the next study that seems to have found another causal link.