I am predicting a lot of “missed shots” in the public polling game this November. Every pollster wants to put the ball in the hole and bask in the glory that comes with this moment of truth every four years, when the real votes are tabulated. Reputations for accuracy are made and destroyed, but I am betting on far more of the latter. It’s not that the pollsters aren’t trying, or that they’ve forgotten how to practice their craft. It’s just that lots of loose ends are unraveling in the real world that pollsters confront.
For starters, cellphones are a bigger problem than everyone is letting on. Many pollsters are adding cellphone supplements to their traditional land-line samples. Typical practice is to direct 20 percent or so of calls to known cellular numbers. That way, the pollsters can bluff and say they are dealing with the problem. But no one has a fully validated methodology for merging the cellphone and land-line subsamples. In some states, the task is made easier by the fact that election clerks and secretaries of state keep track of the two types of numbers and can help with weighting. But in other states, it's just a guess.
The cellphone problem is not only muddled from a sampling perspective; it is also expensive to manage. If you ask respondents enough questions about their cellphone and land-line use habits, sometimes at significant expense for the added time on the phone, you might sort out a decent plan for merging the two types of samples. Cellphones also must, by law, be dialed by hand, a much more expensive proposition that the robo-callers and bargain-basement operators won't undertake. Purchased cellphone sample numbers also cost more than land-line samples. It's just one expense after another to do cellphone supplements "right," whatever that is, so a lot of corner-cutters won't even really try.
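For readers wondering what "merging" the two frames even involves, here is a minimal, purely illustrative sketch of one common idea: weight the combined sample so that self-reported phone-usage groups (cell-only, land-line-only, dual users) match outside benchmarks. The benchmark shares and function names below are invented for the example; they are not any particular pollster's method.

```python
# Illustrative only: post-stratify a merged cell + land-line sample by
# self-reported phone usage against assumed population benchmarks.
# The benchmark shares below are placeholders, not real estimates.

POPULATION_SHARES = {          # assumed share of adults in each phone-usage group
    "cell_only": 0.30,
    "landline_only": 0.10,
    "dual_user": 0.60,
}

def phone_usage_weights(respondents):
    """Return one weight per respondent so each usage group matches its benchmark.

    `respondents` is a list of dicts with a "phone_usage" key whose value is
    one of the POPULATION_SHARES categories.
    """
    n = len(respondents)
    counts = {group: 0 for group in POPULATION_SHARES}
    for r in respondents:
        counts[r["phone_usage"]] += 1

    weights = []
    for r in respondents:
        group = r["phone_usage"]
        sample_share = counts[group] / n
        # Up-weight under-represented groups, down-weight over-represented ones
        weights.append(POPULATION_SHARES[group] / sample_share)
    return weights

# Example: a sample that over-represents land-line-only households
sample = (
    [{"phone_usage": "landline_only"}] * 30
    + [{"phone_usage": "dual_user"}] * 60
    + [{"phone_usage": "cell_only"}] * 10
)
w = phone_usage_weights(sample)
print(round(sum(w), 2))  # weights sum back to the sample size (100.0)
```

Even this toy version depends on asking respondents those extra phone-usage questions, which is exactly the added interview time and cost described above.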
The unique challenges of 2012 don't stop with cellphones. We've always known that the hardest thing to do in polling is predict whether a respondent will actually vote. Frankly, asking voters directly is the least effective means of making that judgment. Past research has demonstrated that the best ways to judge likely turnout are based on facts gleaned from a voter's recorded history. How long has someone been registered to vote at his or her current address? (The longer, the better, for turnout.) And did he or she vote in the last election of the same type, in this case the 2008 election? A potential problem with using these criteria in 2012 is that 2008 brought out hordes of new voters who seemed mostly motivated by anti-Bush sentiment and the pro-change imagery of Barack Obama.
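To make the mechanics concrete, here is a hypothetical sketch of a vote-history screen along those lines: score registrants by how long they have been on the rolls at their current address and by whether they voted in the last election of the same type. The point values and cutoff are invented for illustration, not a real likely-voter model.

```python
# Hypothetical likely-voter screen built from recorded vote history rather
# than from asking respondents whether they intend to vote. Point values
# and the cutoff are invented for illustration only.
from datetime import date

def turnout_score(registered_since, voted_last_same_type, today=date(2012, 11, 6)):
    """Score a registrant's likelihood of voting from file data alone."""
    years_at_address = (today - registered_since).days / 365.25
    score = 0
    # Longer tenure on the rolls at the current address -> more likely to vote
    if years_at_address >= 4:
        score += 2
    elif years_at_address >= 1:
        score += 1
    # Voted in the last election of the same type (here, the 2008 presidential)
    if voted_last_same_type:
        score += 2
    return score

def is_likely_voter(registrant):
    return turnout_score(registrant["registered_since"],
                         registrant["voted_2008"]) >= 3

# A 2008 first-time voter who has since moved and re-registered scores low
# under this kind of screen, which is exactly the 2012 problem described above.
new_2008_voter = {"registered_since": date(2012, 3, 1), "voted_2008": False}
print(is_likely_voter(new_2008_voter))  # False
```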
Is there anything to give us assurance that the new voters of 2008 will return in 2012, like the models say they should?
I am starting to see uneven evidence for this. In some states, like California, they might. In other states, say, Iowa, they might not. So a lot of pollsters could get fooled on turnout.
And because many of these new voters are also cellphone users, problems one and two merge.
I also think this could be a close election down to the last few days. Swing voters and independents are vacillating in their choices. Undecided voters hate to openly oppose the incumbent, and I would guess that some won't want to be perceived as racists or backsliders from their 2008 vote. So they'll hang back, or even lie about their vote, until they're in the polling place.
It's going to be an ugly election night and a depressing Wednesday morning for public pollsters who trip over these, the pitfalls of 2012.
David Hill is a pollster who has worked for Republican candidates and causes since 1984.