In a blog entry today, economist Arnold Kling cites four examples of what he sees as wrong but commonly held ideas: the notion that Kerry won the 2004 election but it was stolen; the belief that men have more sex partners than women; the belief that an epidemiological study is the scientific equivalent of a clinical study; and the belief that happiness doesn't increase with income beyond a certain point.
Kling ties all four of these ideas together with the common thread that they come from survey data, and concludes that "survey availability bias" leads people to believe a survey simply because it's there even when an alternative explanation is more credible.
Is that really a fair critique? Let's look at the four items in more detail:
1) While there was some muttering about exit poll results from Democratic activists after the 2004 elections, it's not clear that the "Kerry won" idea has much traction at all any more. Leaving that aside, though, the meme arose because leaked preliminary exit poll results seemed to show Kerry winning Ohio. The final data, corrected for known sampling errors, didn't show Kerry winning Ohio. All this proves is that preliminary data is preliminary for a reason, and that confirmation bias (the tendency to believe only evidence that supports your preexisting opinion) is alive and well.
2) Surveys of sexual behavior have consistently shown that men claim more sexual partners over their lifetimes than women do. If the sample is unbiased and only heterosexual sex is counted, this is supposedly a mathematically impossible result (* but not really--see below): every heterosexual partnership adds one partner to the men's tally and one to the women's, so with equal numbers of each sex the averages must match. Therefore, the argument goes, the surveys must be wrong.
What's going on here is the well-known fact that people tend to shade the truth on surveys to conform to social norms (some of us call it "lying"). As long as we're on the topic, I'll point out that survey-takers also lie about their age, income, weight, and whether they get the oil changed in their cars every 3,000 miles.
But for this to be an example of "Survey Availability Bias," Kling would have to show that, because of the survey, people actually believe that men have more lifetime partners than women. Very few people argue that these surveys are literally correct. On the other hand, the surveys do shed interesting light on the social norms of our culture, and that's surely worth discussing.
(* The result that men have more lifetime partners than women is not actually mathematically impossible, because the survey doesn't ask how many partners someone will have had over a complete lifetime, but rather how many he or she has had so far. If young men are more promiscuous than young women, and older women are much more promiscuous than older men, then the low counts from young women will drag down the average for women as a whole, even though a "deathbed sample" would give identical results for men and women. This would imply lots of hookups between younger men and older women, though I haven't done the field research to know whether that's actually happening. Realistically, even though "cougars" could hypothetically explain the survey data, the real reason is almost certainly that people lie.)
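The age-truncation loophole is easy to demonstrate with a toy model. The numbers below are my own invention, not real survey data: men and women get identical lifetime totals (10 partners each), but men accumulate theirs early and women late, and then we "survey" a population spread evenly across ages 20 to 80.

```python
# Toy model (illustrative numbers only, not from any real survey):
# lifetime totals are identical for both sexes, but a cross-sectional
# survey still shows men "ahead" because men accumulate partners early.

def partners_so_far(age, early_rate, late_rate):
    """Cumulative partner count at a given age: one rate for ages
    20-30, another for ages 30-80."""
    early_years = min(age, 30) - 20   # years spent in the 20-30 phase
    late_years = max(age - 30, 0)     # years spent in the 30-80 phase
    return early_rate * early_years + late_rate * late_years

# Men: 8 partners by age 30, then 2 more by 80.  Women: the mirror image.
man = lambda age: partners_so_far(age, early_rate=0.8, late_rate=0.04)
woman = lambda age: partners_so_far(age, early_rate=0.2, late_rate=0.16)

ages = range(20, 81)  # survey respondents, one per age
men_avg = sum(man(a) for a in ages) / len(ages)
women_avg = sum(woman(a) for a in ages) / len(ages)

print(f"lifetime totals: men {man(80):.0f}, women {woman(80):.0f}")
print(f"survey averages: men {men_avg:.1f}, women {women_avg:.1f}")
# Lifetime totals are equal (10 each), yet the survey averages come out
# roughly 8.1 for men versus 5.2 for women.
```

The gap exists purely because young women's low counts drag down the female average; a survey of 80-year-olds only would show no gap at all.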
3) It's certainly true that a lot of epidemiological studies get hyped in the media, and a significant fraction of them later turn out not to hold up under close scrutiny. Even though many epidemiological studies aren't "surveys" in the "go ask a bunch of people some questions" sense, this is a legitimate criticism of the way health reporting is usually done in this country. Reporters are often not skeptical enough of the results, and overlook the fact that epidemiological studies can usually show only correlation, not causation.
On the other hand, you can find lots of examples of the media overselling a story which has nothing to do with surveys or epidemiology: early claims that the iPhone would sell a million units in its first weekend (Apple only manufactured something like 350,000); periodic panics over sharks, jellyfish, or syringes on our public beaches; child abduction by strangers; and many others. All this proves is that the media hates to look too closely at a good story.
But do people really consider epidemiological studies to be the scientific equivalent of clinical studies? I suspect if you asked the question of the general population, about 85% would answer "huh?" Among the small fraction of people familiar with the terms, I suspect most would say, "no," even if they couldn't articulate the reason. "Clinical" carries a connotation of precision and control which "epidemiological" doesn't.
Among true experts, of course, the answer would be "it depends on the clinical study and the epidemiological study." The point that Kling is missing here is that there is good and bad research of both types. Clinical studies have the advantage of being able to show causation where epidemiological studies can usually only show correlation. On the other hand, clinical studies are orders of magnitude more expensive per participant, so it's possible to have a much larger and more diverse sample in an epidemiological study. This lets the researchers tease out much more subtle effects or look at a wider range of phenomena. It is too simplistic to say that one type of study is better than the other.
4) Happiness research is a new and somewhat faddish area of the social sciences, and one of its more intriguing findings has been that more income does not necessarily increase happiness, all else being equal. Kling apparently doesn't believe this (probably because he's an economist, and traditional economics sees money as the primary motivator for decision making), and uses it as an example of something which must be wrong despite the survey data.
Kling's argument, which I reproduce in full, is: "some economists take seriously the notion that people are not happier at higher income levels, even as they point out that people have a choice of whether to earn higher income or take more leisure." In other words, people with higher incomes chose to have more income rather than sacrifice some income for more leisure time, therefore that must be the decision which makes them happier.
This is simply wrong on many levels.
First, merely having the choice between income and leisure (as opposed to not having that choice) may actually make you less happy. It is true that wealth and income bring more choices, but it's also well established that too many choices can lead to lower satisfaction and decision paralysis. So we can't assume that being able to choose income over leisure will make people happy (or vice-versa).
Second, Kling implicitly assumes that high income people are choosing the path which leads to greater happiness. I'm not sure why he thinks this is a reasonable assumption, since I've seen very little evidence that people in the real world make choices which will make them happy. On the other hand, I've seen lots of instances of people making choices which make them unhappy, either because of mistakes, short-sightedness, inertia, misplaced loyalty, or a desire for more money.
Third, Kling also assumes that people with high income actually can choose to incrementally sacrifice some income for some leisure. Most of the high income people I know are in jobs where the choice is strictly binary: either lots of income and no leisure, or no income and lots of leisure. Either put in 80 hours a week and collect the big bucks, or get fired.
So with the happiness surveys we have a case of survey data on the one hand (which has its limits), and a purely hypothetical and rather dubious claim on the other hand. From my perspective, I'd have to say that the point goes to the survey data.
Where does this leave Kling's "Survey Availability Bias"? Of his four examples, one is confirmation bias among a vanishingly small number of true believers, another is a classic bias where nobody believes the data is literally true, a third is bad reporting combined with an apparent bias (on Kling's part) towards clinical research, and the fourth is an example of the data contradicting an axiom of traditional economics where Kling declares the data the loser.
Of course, I'm not an unbiased source, since my company sells surveys. But Kling's claim that "Surveys add noise rather than signal to our society" is nonsense. Surveys are a tool for understanding human behavior, and like any tool they can be used properly or improperly. Sometimes surveys are the best way to collect data, sometimes they're not, and often surveys--however flawed--are the only way to collect data on a particular subject. But like any other scientific instrument, a survey needs to be analyzed carefully, calibrated properly, and used in an appropriate way.