Pollsters defend craft amid string of high-profile misses
Can the public trust the polls?
The question is gaining intensity following a string of misses — most recently in Kentucky’s gubernatorial race — that has fueled doubts about survey accuracy and forced the industry into some soul-searching amid an election season when polls are both ubiquitous and increasingly influential to the process of choosing candidates.
“There are so many polls — with many polls being produced by partisan organizations — it’s hard for the public to know what or what not to believe,” said Julian E. Zelizer, a history professor at Princeton University and a CNN contributor. “This ain’t the era of Gallup and Harris anymore.”
Pollsters widely acknowledge the challenges and limitations taxing their craft. The universality of cellphones, the prevalence of the Internet and a growing reluctance among voters to respond to questions are “huge issues” confronting the field, said Ashley Koning, assistant director at Rutgers University’s Eagleton Center for Public Interest Polling.
But they’re also quick to defend their work as a vital instrument for gauging public sentiment. The focus on a handful of polls that didn’t match the results of closely contested races, they say, doesn’t detract from the underlying importance of testing voter opinion on the top issues of the day.
Such surveys, they argue, are much more meaningful and predictive than polls showing who’s leading the primary contests a year out from the election.
“These blunders, and just because polling may seem overhyped right now, does not make polling any less crucial to our democracy,” Koning said. “To assess accuracy, we need to learn how to become better poll consumers, especially if polls are going to play an increasingly larger role in the electoral process and our general news cycle.”
“Not every poll,” Koning added, “is a poll worth reading.”
Scott Keeter, director of survey research at the Pew Research Center, agreed. Placing too much trust in early surveys, when few voters are paying close attention and the candidate pools are at their largest, “is asking more of a poll than what it can really do.”
“A high degree of skepticism is certainly warranted,” he said.
Indeed, Gallup has opted out of horse-race surveys for this primary season and has yet to say whether it will poll next year’s general election. The move was a big one for an organization that built its reputation on just that sort of polling. But a string of misses surrounding the 2012 presidential race, when its final poll showed Mitt Romney narrowly defeating President Obama, led the group to shift its focus to more issues-based surveys.
The challenges facing the industry were on display more recently in Kentucky, where polls this month showed Jack Conway, the state’s Democratic attorney general, with a small lead heading into Election Day. Instead, Republican Matt Bevin ran away with the contest, 53 percent to 44 percent. Surveys of recent elections in Israel and Britain have also been wide of the mark.
Pollsters are near universal in their diagnosis of the Kentucky episode, saying smaller, off-year elections are notoriously difficult to predict because most people don’t vote.
“Trying to estimate the outcome of a low-turnout election in a state is always tricky,” said Frank Newport, editor-in-chief at Gallup. “I don’t think that impugns the underlying ideology of polls as much as it exposes the challenges of trying to predict turnout. It’d be a lot different if 100 percent of voters voted.”
Patrick Murray, director of the Monmouth University Polling Institute, named another culprit: bad modeling. He said the Kentucky polls relied on random-digit dialing, or RDD, which draws random samples of phone numbers, typically landlines. That method, he argued, places too much trust in a respondent’s claim that he or she intends to vote.
“I won’t get them on the phone to give them a chance to lie to me about getting out to vote,” Murray said.
Monmouth did not poll in Kentucky, but the group’s model involves scouring past voting records to identify those who actually go to the polls and when they do.
“You can see an incredible consistency in the types of voters who show up,” Murray said.
Experts also warned of another trend that muddies the polling waters: internal surveys.
The perils of such polls — which have long fought charges of inherent bias — were evident last year ahead of former Rep. Eric Cantor’s (R-Va.) shocking primary upset at the hands of Dave Brat. An internal poll sponsored by Cantor’s campaign found the then-majority leader up by more than 30 points two weeks before the election. He lost by a wide margin, 56 percent to 44 percent, and Brat now holds the seat.
“Most of those [internal polls] you don’t hear about, and when you do you always have to ask the question, ‘Why am I hearing about this?’ ” said Keeter.
The growing importance of polls has been accentuated by this year’s Republican presidential primary race, when positioning in the debates has hinged on the candidates’ standing in national surveys, a process many pollsters and political experts have warned against.
“Polling is not really up to the task of determining the candidates for debate, and certainly not who should be standing in the middle,” said Keeter.
He suggested polling should be just one of a number of criteria, including fundraising and credibility with party elites, for deciding debate placement.
Karlyn Bowman, a public opinion specialist at the American Enterprise Institute, also downplayed the importance of early primary polls, saying they have “very little predictive value at this stage of the campaign.” Still, she said, the blame is widespread: she lamented the rise of pollsters who prioritize close races to gain coverage, journalists too eager to cover those results and news consumers who flock to those types of stories.
“We are all complicit,” she said.