Science, technology, and life.
Jan. 11, 2008, 3:41 AM

Bad Calls

Lessons of the New Hampshire polling fiasco.

Illustration by Robert Neubecker.

Based on what you've heard this week about the performance of pollsters in New Hampshire, are you somewhat likely, quite likely, or very likely to ignore political surveys for the foreseeable future?

William Saletan

Will Saletan writes about politics, science, technology, and other stuff for Slate. He’s the author of Bearing Right.

I recommend "none of the above." You can learn a lot from polls, especially when they're wrong. The New Hampshire pollsters are full of excuses, and their excuses are full of lessons. Let's look at a few of them.


1. It's standard polling error. The polls' "margin of sampling error … made an eight-point error in either direction possible," argues Janet Elder, editor of news surveys for the New York Times. If there were only one poll, that explanation might fly. But in this case, nine polls converged on an Obama advantage of 5 to 13 points. The probability of sampling error producing that kind of convergence is infinitesimal.
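A back-of-envelope check makes the point (an illustrative sketch, not any pollster's actual math): if each poll's sampling error were independent and equally likely to fall on either side of the true margin, the odds that all nine polls would miss in the same direction are vanishingly small.

```python
# Illustrative sketch: treat each poll's sampling error as an independent
# coin flip (equally likely to overstate or understate Obama's margin).
# The chance that all nine polls overstate the same candidate is (1/2)**9.
n_polls = 9
p_same_direction = 0.5 ** n_polls
print(f"Chance all {n_polls} polls err the same way: {p_same_direction:.4f}")
# roughly 0.002, i.e. about 1 in 500
```

Real polls share methods and assumptions, so their errors aren't truly independent; that shared bias, not random sampling error, is the likelier culprit.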

2. Privately, we had it right. "My polling showed Clinton doing well on the late Sunday night and all day Monday—she was in a 2-point race in that portion of the polling," pleads pollster John Zogby. "But since our methods call for a three-day rolling average, we had to legitimately factor the huge Obama numbers on Friday and Saturday—thus his 12 point average lead."

Zogby had Hillary pulling even? Let's check the headline on the press release he issued Tuesday morning: "Obama, McCain Enjoy Solid Leads As Election Day Dawns." Here's his first sentence: "The big momentum behind Democrat Barack Obama … continued up to the last hours before voters head to the polls." And here's the first quote from Zogby: "Obama's margin over Clinton has opened up."

Let's be real. Zogby, like most of us, expected an Obama blowout. His Sunday-Monday subsample didn't match that expectation or anyone else's polling. He decided it was too small to report. Now that it matches the election returns, he's touting it.


Lesson: Tell us your private numbers before the election, including breakdowns of your rolling sample by day. Give us your warnings about sample size, and let us do the judging.
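The arithmetic of a rolling average shows why those daily breakdowns matter. Here is a sketch with hypothetical daily subsamples (not Zogby's real figures) illustrating how a three-day average can bury a late shift:

```python
# Hypothetical daily subsamples of Obama's lead (made-up numbers): a big
# Friday/Saturday lead, then a sudden tightening Sunday/Monday. The
# published three-day rolling average still shows a comfortable lead.
daily_obama_lead = {"Fri": 14, "Sat": 13, "Sun": 3, "Mon": 2}

leads = list(daily_obama_lead.values())
rolling_3day = sum(leads[-3:]) / 3  # average of Sat + Sun + Mon
print(f"Last-day subsample: Obama +{leads[-1]}")
print(f"Published 3-day rolling average: Obama +{rolling_3day:.0f}")
```

The last-day subsample says the race is nearly even; the rolling number, which is what gets reported, still says a blowout. Without the day-by-day breakdown, readers can't see the difference.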

3. We told you, sort of. The president of the American Association for Public Opinion Research points out that on Monday, "CBS News Polls cited that '28% of Democratic voters say their minds could still change.'" But that warning, which was featured high in the polling unit's six-page data report, was buried in the network's press release. "CBS Poll: Obama Leaps Ahead In N.H.," the release shouted. The first sentence said Obama had "opened up a seven-point lead" on Clinton.

Lesson: Don't oversimplify the data.

4. We misjudged the turnout. Rasmussen Reports, another firm that blew the primary, speculates that "polling models used by Rasmussen Reports and others did not account for the very high turnout." For instance, "Rasmussen Reports normally screens out people with less voting history and less interest in the race. This might have caused us to screen out some women who might not ordinarily vote in a Primary but who came out to vote due to the historic nature of Clinton's candidacy." The firm allotted 54 percent of its final weighted sample to women. In reality, women cast 57 percent of the votes.


You weren't aware that pollsters screen out respondents, or discount their stated preferences, based on sex, race, religion, and other "demographics"? You thought polls were raw data? Silly you. Read the pollsters' post-New Hampshire explanations, and you'll learn about all the formulas they use to "refine" their data before you see it. They apply "likely voter screens," "demographics," "turnout models," and "allocation of undecideds." In this case, their big mistake was underweighting responses from older women and overweighting responses from independents and young voters.

Lesson: Polls aren't raw data. They're data modified by assumptions. Pollsters should publish their assumptions so we know what we're eating.
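To see how much those assumptions matter, here is a sketch of demographic weighting using the turnout shares from the Rasmussen example and made-up subgroup preferences (the 46/34 and 29/42 splits below are hypothetical, chosen only for illustration):

```python
# Sketch of demographic weighting. Hypothetical subgroup preferences:
# women favor Clinton 46-34, men favor Obama 42-29. The only change
# between the two runs is the assumed share of women in the electorate:
# the 54% Rasmussen modeled vs. the 57% who actually voted.
clinton = {"women": 0.46, "men": 0.29}
obama = {"women": 0.34, "men": 0.42}

def topline(women_share):
    """Clinton's margin in points under a given female turnout share."""
    men_share = 1 - women_share
    c = clinton["women"] * women_share + clinton["men"] * men_share
    o = obama["women"] * women_share + obama["men"] * men_share
    return (c - o) * 100

print(f"Clinton margin, 54% women model:  {topline(0.54):+.1f}")
print(f"Clinton margin, 57% women actual: {topline(0.57):+.1f}")
```

Even a three-point shift in one demographic assumption moves the published topline, with the individual responses untouched. That's the knob pollsters turn before you ever see a number.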

5. Preferences were unstable. "New Hampshire voters' opinions were very much in flux," Elder theorizes. They were "buffeted by the intense media coverage until the moment they finally stepped into the voting booth and registered what pollsters call 'considered opinion,' the kind of opinion born of reflection rather than one elicited in an instant by a poll taker."

Elicited in an instant by a poll taker. Think about that. It's a confession that horse-race polls are inherently artificial. Voting is an act. You go to the polling place, enter the booth, make your selections, and mark them. Survey responses are words, not acts. They're elicited. Your phone rings; somebody asks you questions; you answer them. You're doing what they want, not what you want. You have seconds to answer the poll question. You have hours, days, or weeks to decide how, and whether, you'll actually vote.


Lesson: Polls can't predict votes any more than elicited words can predict voluntary acts.

6. The ballot order didn't match the polls. The Washington Post notes that in previous New Hampshire elections, "the state rotated candidate names from precinct to precinct, but this year the names were consistently in alphabetical order, with Clinton near the top and Obama lower down." The net effect could have been a 3-point boost for Clinton—more than her victory margin. Polls missed this effect because they rotate the names. Your neighbor is offered "Clinton or Obama"; you're offered "Obama or Clinton."

But wait a minute. If rotation causes a gap between polls and returns, which one more accurately reflects the voters' will? Pollsters rotate candidate names to avoid bias. By that logic, New Hampshire should change its ballot protocol to emulate their methodology.

Lesson: When polls and ballots differ, the fault may sometimes lie with the ballots.


7. Respondents lied. One version of this complaint blames race: To avoid conveying or revealing bias, people who privately voted against Obama must have told pollsters they were going to vote for him. Another version blames horse-race conformity: Post-Iowa Obama hype "led to a feeding frenzy of media coverage that was very favorable to Obama and very negative towards Clinton, which depressed her support in the polls but oddly did not lower her actual vote."

Again, the implication is that voting and answering a poll are different things. And again, it's not clear which is better. Why did polls correctly predict Iowa but not New Hampshire? One theory is that in Iowa, the votes were public. You couldn't pretend to support Obama while secretly voting against him. From a scientific standpoint, the upshot is that polls in secret-ballot contests should be made more private. If you're polled online instead of by a human being, maybe you'll admit that you don't trust Obama. But from a moral standpoint, maybe the part of you that wants to sound like an Obama supporter is your better half. Maybe we need more caucuses and fewer primaries. Maybe elections should be more like polls, requiring you to declare your choice to a fellow human being.

Lesson: Polls sometimes reflect not what people will do, but what they should do.

8. Late deciders surprised us. CBS News' final press release said its survey showed Obama leading Clinton "on the eve of the New Hampshire primary." The Times describes this as "last-minute polling by CBS, which ended Sunday." Last-minute polling? Sunday? That's two days before the primary ended—and three days after it began, if you start counting from the Iowa earthquake. You can't shut down your phone bank 60 percent of the way through an election and expect to predict the outcome. Gallup made the same mistake, wrapping up its poll for USA Today at 4 p.m. Sunday. Three days later, shocked by the returns, Gallup editor Frank Newport marveled, "This is unusual. In most pre-election environments, voter statements of their vote intentions in the days before an election are good indicators of how they actually vote." True. But in this case, the days before the election were the election.

In the short run, the pollsters should have adjusted to the schedule. But in the long run, it's the schedule that must change. The schedule is unusual because it defies human nature. Mentally, we're not equipped to pick a president in five days. The absurd pressure generated by this timetable accounts for many of the confounding factors outlined above: the unstable preferences, the late deciders, the surprising turnout. The polls were confused because we were confused.

Lesson: An election calendar that's too fast for pollsters is too fast for voters.

9. The late deciders were defying a juggernaut. Looking back, directors of the errant Marist Poll argue, "If the pollsters and media pundits erred, it was not in their weekend numbers but in not polling Monday and missing the impact of the unrelenting media coverage that characterized the Clintons as finished." Other pollsters offer similar theories suggesting an "underdog" effect and a rejection of "pre-election coronations." But guess who drove the Obama coronation and the unrelenting talk of Clinton's doom? The same pollsters and their media clients.

In short, the pollsters screwed themselves. Their numbers were refuted on Tuesday precisely because they were published on Sunday and Monday. And this isn't a problem they can solve by tinkering with methodology or elections. There's only one sure way to avoid another backlash: by keeping their numbers quiet till the votes are in. Music to my ears.