Should polls be trusted? Yes … and no.

The problem with polls: Most people — and journalists — don’t understand what they mean.

The headlines dominated cable news and social media for an entire day: 

“New poll shows Biden falling badly, three-way tie for Democratic lead.”

One poll, of just 298 Democrats scattered across the country, threatened to change the entire narrative of the 2020 primary with its surprising conclusions.

The only problem: one day later, three more polls came out that all contradicted the narrative that Joe Biden had lost his healthy primary lead.

Oh, one more problem with the narrative of that day: polls aren’t designed to deliver conclusions.

Hardly a morning goes by without a new survey fueling news talk’s 24/7 thirst for fresh fodder ahead of 2020. But horse race polls are wildly misinterpreted by news anchors, panel analysts, and the millions of social media junkies who get their daily fixes of political discord from Facebook or Twitter.

It’s not that polls are inaccurate or failing us; it’s that we’re failing to understand what the data means.

Frankly, most people are simply lousy at math; that includes journalists who, unintentionally, tend to exaggerate the importance of individual polls. We treat survey results like the pulse of the nation on any given day, rather than what they really are: indicators of possible trends over time.

As it turns out, most polls provide very accurate information — and that includes during the 2016 presidential election (see below) — but it’s important to recognize why our takeaways from that information can be so inaccurate:

Methodology

Does the poll survey all individuals, registered voters, or only likely voters? Obviously, the most significant findings would come from polls that survey likely voters, but there’s no perfect science to determine who actually shows up on election day. Decisions on the methodology can easily swing a poll several points in either direction or even render accurate numbers irrelevant to a presidential contest.
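To illustrate how much that screening decision can matter, here is a minimal sketch with entirely hypothetical numbers (not data from any real survey), showing the same set of responses producing different toplines depending on where a pollster draws the likely-voter line:

```python
# Hypothetical respondents, for illustration only: (candidate preference,
# self-reported likelihood of voting on a 0-10 scale). Not from any real poll.
respondents = [
    ("Candidate A", 10), ("Candidate A", 9), ("Candidate A", 8),
    ("Candidate A", 6),  ("Candidate A", 5), ("Candidate A", 4),
    ("Candidate B", 10), ("Candidate B", 9), ("Candidate B", 8),
    ("Candidate B", 7),  ("Candidate B", 3), ("Candidate B", 2),
]

def topline(threshold):
    """Candidate A's share among respondents at or above the turnout-likelihood threshold."""
    screened = [c for c, likelihood in respondents if likelihood >= threshold]
    return 100 * screened.count("Candidate A") / len(screened)

print(f"All respondents:            A = {topline(0):.0f}%")   # 50%
print(f"Likely voters (7+ screen):  A = {topline(7):.0f}%")   # about 43%
```

Same interviews, different screen, a topline that moves several points. That is a judgment call every pollster has to make before a single number is published.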

Sample size

In Monmouth’s headline-grabbing outlier poll last week, it took just five respondents to get candidates to the 2 percent qualification threshold set for the September and October Democratic debates. Nobody should be drawing sweeping conclusions from such small samples; the margin of error in these polls is not talked about nearly enough. Reliable sample sizes start at 1,000 respondents for a presidential poll.
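For a rough sense of why sample size matters so much, the textbook margin-of-error formula for a simple random sample (standard statistics, not any particular pollster’s proprietary method) tells the story:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a proportion p
    measured in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"n = 298:  +/- {margin_of_error(298):.1f} points")    # about +/- 5.7
print(f"n = 1000: +/- {margin_of_error(1000):.1f} points")   # about +/- 3.1
print(f"n = 298, candidate at 2%: +/- {margin_of_error(298, p=0.02):.1f} points")  # about +/- 1.6
```

In other words, a 2 percent reading in a 298-person sample carries an uncertainty nearly as large as the number itself, and that is before weighting and other design effects, which typically widen the interval further.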

Geography

If you didn’t notice in Nov. 2016 (I know Hillary Clinton did), not all votes are created equal; we don’t pick presidents by the national popular vote, so why do we put so much stock in national polls? In hindsight, 2016’s national polling gave Democrats a false sense of security leading up to election day; however, the problem wasn’t the polling, but how the results were interpreted.

“The national average of polls of the Clinton/Donald Trump race was only off by 1 percent,” said Matt Florell, the brains behind Florida-based polling company St. Pete Polls. “Clinton (was projected) +3 percent and she won the national popular vote by +2 percent … but we don’t elect presidents by popular vote.”

St. Pete Polls has earned praise from peers in recent years for its methodology and accuracy in the Sunshine State. But Florell says a shortage of local polling in other key swing states meant there were no reliable polling averages, and thus no early indicators, when those states started to turn toward Trump.

According to a Washington Post story last week, just four swing states might determine the entire 2020 election, yet there has been little polling done so far at the state level. The same can be said for early primary states, where tiny constituencies in places like Iowa and New Hampshire will have outsized influence on which Democrat emerges from the bunch. So basically, ignore the national polls.

Outliers and other limiting factors

Good pollsters will caution against reading too much into a single report; Monmouth’s polling director, Patrick Murray, issued a statement last week acknowledging his poll was an outlier.

Other pollsters say that’s not enough to ensure the public understands the data.

“(Monmouth was) a perfect example of a poll that never should have been released,” Florell said. “For Florida statewide polls, you really shouldn’t pay attention to any poll that has less than 300 respondents. The more accurate ones usually have over 1,000 respondents. This is even more (important) for national polls, which should always be over 1,000 respondents.”

Florell says oversaturation has also made a pollster’s job more complicated in recent years, particularly once the national spotlight shines on a swing state.

“Polling fatigue is a real issue for polling companies. Our response rates on polling year-over-year have been going down consistently for the last decade, so we have to place more and more calls to get the same sample sizes.”
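The math behind that is simple. A quick sketch, using hypothetical response rates since Florell doesn’t cite specific figures, shows how falling response rates drive up the number of dials needed:

```python
def calls_needed(target_sample, response_rate):
    """Rough count of call attempts required to complete a target sample size
    at a given response rate (completed interviews per attempt)."""
    return round(target_sample / response_rate)

# Hypothetical rates for illustration: halving the response rate doubles the dialing.
print(calls_needed(1000, 0.10))  # 10,000 attempts at a 10% response rate
print(calls_needed(1000, 0.05))  # 20,000 attempts at a 5% response rate
```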

As for the challenge of reaching likely voters once they ditch their landlines for cellphones, Florell says that shift is another recent development that has hurt polling accuracy.

“We have tried some limited manual dialing of cellphones and email polling of voters as ways of dealing with the replacement of landlines, but these methods are more expensive and do not always effectively replace the demographics that have been lost to declining landline use,” Florell said.

“Some other pollsters have tried paying respondents for their participation, as well as gathering a wider set of voters that they will poll repeatedly over the election season. Both of these other options have negative effects on their polls being true random sample polls, but there aren’t too many other viable options.”

Despite the challenges, many professional pollsters are still doing a remarkable job in predicting voter behavior. Unfortunately, voters aren’t as good at interpreting what those polls mean.

Noah Pransky

Noah Pransky is a multiple award-winning investigative reporter, most recently with the CBS affiliate in Tampa. He has uncovered major stories, including backroom deals involving the Tampa Bay Rays stadium, along with other political investigations. Pransky also ran a blog called Shadow of the Stadium, giving readers a deep dive into the details of potential financial deals and other happenings involving the Tampa Bay-area sports business.


3 comments

  • Mellissa Evans

    September 4, 2019 at 5:27 pm

    BIG FAT NO!!

    • Steve Forseth

      September 5, 2019 at 12:35 pm

      BIG FAT YES! … and NO! Read the whole article, Melissa! If polls are done correctly, using proper random sampling sizes (using over 1,000 respondents for national polls), the margin of error is reduced and confidence in reliability is increased significantly. If you don’t know how to interpret a poll’s details on sample size, population, and margin of error, it’s probably best that you NOT be drawing any conclusions from polls until you’ve finished a Statistics class.

  • MAGA2020

    September 5, 2019 at 1:38 pm

    IF THEY ARE FROM DEMOCRAT SOURCES LIKE MEADIA, AND KNOWN DEMOCRAT FUNDED POLLING COMPANIES…. NNNNNNNNNNNNNOOOOOOOOOOOOOOOOO! FIVE THIRTY EIGHT IS THE BIGGEST POS!


