Polls Are Still Useful

2008 was a watershed moment in how elections were reported. 

With a near-perfect prediction of the electoral map, blogger Nate Silver’s presidential election forecast was the punditry equivalent of a 600-pound gorilla stampeding through a supermarket with the words “there is another way” painted in red on its back.

The allure, essentially, was that American political reporting was ready to be driven by data grounded in science. In 2012, Silver’s forecast again performed remarkably well, and we went on believing that politics had left the Dark Ages and entered some sort of 21st-century, polished white laboratory of statistics.

I mention this because that era of polling is easy to forget in our current climate. Now, when “the polls are useless” and “I don’t trust Nate Silver” are gratuitously fashionable sentiments, few remember that uttering those words among the liberal intelligentsia half a decade earlier would’ve been grounds for exile. Within two election cycles, we’ve slammed the door on polling. Even as high schoolers, we’ve figured out what to say when someone mentions a poll: “Yeah, but weren’t they wrong the last time? I don’t think that means much.” We might not know what Ballot Question One means, and we might still believe in shy Trump voters (a myth that has been consistently debunked), but dammit, we know we’re supposed to bash the polls!

As most of us would put it plainly: they don’t work.

The first issue with this belief is that it isn’t necessarily true. The 2018 midterm polling was remarkably accurate, a return to the 2012 heyday of precise forecasts. And even in the 2020 general election, where polls did seem to miss the mark, state surveys in Arizona and Georgia tracked the final vote almost perfectly. As Silver wrote the day before November 3rd, “Biden could suffer a 2016-size polling error and still win,” which is exactly what happened. Polls don’t offer perfect certainty, and they do sometimes miss, but they can give us useful information if we know where to look.
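
To put that uncertainty in numbers: the textbook sampling margin of error for a poll is roughly one over the square root of its sample size. Here’s a minimal Python sketch of that arithmetic (the 1,000-respondent sample is just an illustrative assumption, not a figure from any specific survey):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% sampling margin of error for a reported share p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll (illustrative sample size, not a specific survey):
print(f"+/- {margin_of_error(1000) * 100:.1f} points")  # about +/- 3.1 points
```

And that covers sampling error alone; nonresponse and turnout modeling add more, which is why a miss of a few points sits well within a poll’s normal behavior rather than proving it has failed.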

The more significant issue, though, is that to think of polls as simply right or wrong is to fundamentally misunderstand their role in election forecasting. That’s not necessarily our fault, because they tend to be presented that way in the media. But a poll is not the real-life equivalent of a Harry Potter prophecy: an oracle making definitive claims about an outcome. Even in its polished form, a poll is still simply data: not a prediction, but an observation. The polished white laboratory is misunderstood. There’s no machine, no centrifuge or microscope, that delivers a single, perfect result. Polling requires a grain of salt and a heaping spoonful of analysis and context, and even then it isn’t always entirely accurate. That might not be as appealing as a precise prediction, but it can still be useful. Given the choice between complicated, imprecise data and no data at all, I know which one I’d take.

Silver’s breakout predictions may also have been polling’s undoing. After eight years of accurate, targeted forecasts, I think we became spoiled, lulled into the unreasonable expectation that polling must be devilishly precise and consistently in agreement with election results. And when we finally saw what it looks like when polling has a bad day and misses the mark, it was easier to cast it aside than to accept the uncertainty and nuance it has always carried.

Perhaps this is for the best. Studies have found that most Americans have trouble with probabilities, and they struggle to grasp the broad margin of error in polling. In 2016, FiveThirtyEight’s final forecast gave Trump roughly a 30 percent chance against Clinton’s 70, which certainly didn’t rule out a victory for Trump; his path was just somewhat unlikely. That distinction, however, was underreported, with coverage coalescing around the broader narrative of Clinton as the favorite and in the lead. That wasn’t the whole story, and because the reporting lacked nuance and context, the public’s understanding of the election suffered.
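
A quick simulation shows just how ordinary a 30 percent outcome is. This is a minimal sketch of my own (the 70/30 split mirrors the rough 2016 forecast; the trial count is arbitrary):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
TRIALS = 100_000

# Replay an election where the underdog is given a 30% chance.
upsets = sum(random.random() < 0.30 for _ in range(TRIALS))

# The "unlikely" outcome lands about 3 times in 10, roughly as often
# as a .300 hitter gets a base hit. Improbable is not impossible.
print(f"Underdog won {upsets / TRIALS:.1%} of simulated elections")
```

Nobody calls baseball broken when a .300 hitter connects; the same logic applies to a 30 percent underdog.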

Reporting raw polling results directly to readers might be akin, then, to publishing the unanalyzed results of a vaccine trial: ultimately too confusing to be informative, with too high a risk of oversimplification or misunderstanding. If the public is turning away from polling, it’s hard to stand on a soapbox and insist that people would be better off misinterpreting useful data than ignoring it and avoiding faulty conclusions. If you’re still reading this, though, and want to understand our politics better than the other people in the room, start by learning how polls should be interpreted. Don’t throw the baby out with the bathwater: give polling a chance.

by IAN CARSON

Lex Perspectives