Polling misfired in 2020 — and that’s a lesson for journalists and pundits

After months of examining data from more than 2,800 surveys, a task force of polling experts has failed to reach an unequivocal explanation of why polls went awry in last year’s U.S. presidential election. The inconclusive result was unsatisfying, but the task force report released last week has value in providing reminders, explicit and otherwise, about the prominence, complexities and vulnerabilities of election polls in presidential campaigns.

The report, commissioned by the American Association for Public Opinion Research, an industry organization known by the acronym AAPOR, noted that collectively the performance of national pre-election polls in the 2020 presidential race was the worst in 40 years.

Discrepancies between poll results and vote outcomes were even greater, the report noted, in many down-ticket races in 2020. 

Specifically, polls overall underestimated popular vote support for then-President Trump, as well as for Republican gubernatorial and U.S. Senate candidates. 

Polls were accurate in pointing to Joe Biden’s winning the popular vote for president, but the outcome was closer than polls had indicated. Biden’s popular vote advantage over Trump was 4.5 percentage points. But polls in the closing two weeks of the campaign underestimated Trump’s support by 3.9 points nationally and by 4.3 points in state-level polls, the task force reported.
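To make the arithmetic concrete: error of this kind is typically scored as the gap between a poll’s margin and the certified vote margin. Here is a minimal sketch in Python, using the certified 2020 popular vote (Biden 51.3 percent, Trump 46.8 percent) and a hypothetical final national poll whose figures are invented so that the error lands on the task force’s 3.9-point number:

```python
def signed_margin_error(poll_dem, poll_rep, actual_dem, actual_rep):
    """Signed error on the margin: positive values mean the poll
    overstated the Democratic candidate's lead."""
    return (poll_dem - poll_rep) - (actual_dem - actual_rep)

# Certified 2020 popular vote: Biden 51.3, Trump 46.8 (a 4.5-point margin).
# The poll figures below are hypothetical, chosen for illustration.
error = signed_margin_error(poll_dem=52.0, poll_rep=43.6,
                            actual_dem=51.3, actual_rep=46.8)
print(round(error, 1))  # 3.9
```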

Some individual national polls were well off-target, pegging Biden’s lead at 10 points or more at campaign’s end.

It was vexing that the task force said it could not determine precisely what factors gave rise to the sharp discrepancies — a finding that will do little to mitigate the wariness and skepticism that run deep in the United States about election polling. 

Despite its lack of definitive conclusions, the report underscored how challenging election polling can be, and how surveys can be distorted by any number of variables, including uneven response rates among Republicans and Democrats. 

The report also offered an implicit reminder that polling errors can vary markedly and do not spring from a common template. In that sense, they are akin to Tolstoy’s observation that unhappy families are unhappy in their own way. Errant polls likewise tend to err in their own way.

The 2020 polling embarrassment was no rerun of 2016, when wayward polls in key states upended confident predictions that Hillary Clinton would defeat Donald Trump handily. On the eve of the 2016 election, for example, HuffPost’s polls-based forecast gave Clinton a 98.2 percent chance of winning the election and declared that Trump had “essentially no path to an Electoral College victory.” Trump won, 304 electoral votes to 227.

Adjustments that pollsters made following the 2016 surprise — notably, to make sure their data included views of non-college-educated white voters who heavily supported Trump — failed to produce accuracy in 2020. “We made all the corrections that we were supposed to make for the 2016 issues, or at least we thought we did, and it didn’t necessarily help us,” one media pollster said at AAPOR’s conference in May.

The 2020 polling error may have been attributable to some Republican voters’ unwillingness to participate in pre-election surveys. Or it may have been that Republicans who did answer pollsters’ questions were somehow different, in their opinions and voting tendencies, from Republicans who did not participate in polls.
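A toy calculation, with invented numbers, shows how little it takes for that first hypothesis to tilt a poll. If an electorate is split evenly but one side answers pollsters slightly less often, the raw sample drifts well away from the true result:

```python
# Invented illustration: a 50/50 electorate in which Trump voters
# respond to pollsters slightly less often than Biden voters do.
support = {"Biden": 0.50, "Trump": 0.50}        # true preferences
response_rate = {"Biden": 0.05, "Trump": 0.04}  # hypothetical rates

respondents = {c: support[c] * response_rate[c] for c in support}
total = sum(respondents.values())
poll = {c: round(100 * respondents[c] / total, 1) for c in respondents}

print(poll)  # {'Biden': 55.6, 'Trump': 44.4}: an 11-point lead from a tie
```

Weighting can repair the sample’s partisan mix, but it cannot address the second hypothesis: if the Republicans who respond differ from those who do not on the very question being asked, no amount of reweighting by party recovers the missing opinions.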

The AAPOR task force considered those and other hypotheses but was unwilling to embrace any of them, given the meagerness of supporting data. It’s obviously difficult to verify such theories if little is known about non-respondents to pre-election polls. As the report stated, “Knowing what really happened is impeded by not knowing how those who participated in the polls compare to those who did not.” 

The task force report also represented a cautionary reminder to journalists and pundits about the risks of leaning too heavily on election polls and the certainty their numbers seem to project. Polls have shaped the narratives about U.S. presidential races since the 1930s. Survey data have been central to how journalists and pundits interpret the ebb and flow of campaigns. It’s not surprising, then, that polling misfires can produce journalistic error.

The report assuredly will not turn journalists and pundits away from polling data in assessing political campaigns as they unfold. But its pointed observation that the 2020 elections were marked by “polling error of an unusual magnitude” — that the discrepancy between polls and outcome was the greatest since Ronald Reagan defeated Jimmy Carter — ought to encourage wariness among journalists and pundits. 

After all, the task force noted, “Poll results often are discussed in a way that provides a misleading characterization of polling performance and accuracy.” 

While the report did not consider the point in great detail, the 2020 polls collectively erred in the same direction — in overstating support for Democratic candidates — regardless of the survey methodology employed or how close to the election the poll was conducted. In fact, 2020 was the fourth presidential election of the last five in which polls, at least modestly, overstated support for Democratic candidates, a phenomenon that likely merits greater scrutiny among national pollsters.
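The statistical logic behind that concern can be sketched simply: errors that were mere noise would straddle zero and largely cancel when averaged across polls, while same-direction errors survive the averaging. A short illustration, with invented error figures for five survey methods:

```python
import statistics

# Hypothetical signed margin errors (poll margin minus vote margin) for
# five polling methods; positive values tilt toward the Democrat.
errors = {"live phone": 4.1, "online panel": 3.5, "IVR": 4.4,
          "text": 3.8, "mixed mode": 3.7}

# Random noise would average near zero; a shared bias does not.
print(round(statistics.mean(errors.values()), 1))  # 3.9
print(all(e > 0 for e in errors.values()))         # True
```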

W. Joseph Campbell is a professor at American University’s School of Communication and the author of seven books including, most recently, “Lost in a Gallup: Polling Failure in U.S. Presidential Elections” (University of California Press, 2020). Follow him on Twitter @wjosephcampbell.
