When it comes to political polls, here’s how the news cycle usually goes: Poll gets released, we write about it, readers read it and (because who doesn’t love a poll?) traffic goes up.
This week’s Saint Leo University poll didn’t go that way.
The university released the poll, we wrote about it, and instead of traffic going up, my phone blew up. (Off I go back to the Apple Store to buy a new one; did I mention I have a new Apple Watch? It’s awesome.)
My phone blew up because people were telling me the Saint Leo University poll had so many technical issues it wasn’t worthy of coverage. Many said it was a worthless (insert colorful language here).
So I asked one of Florida’s better and more sophisticated pollsters, Steve Vancore, to help us make heads or tails of the latest Saint Leo product.
Here is what Steve had to say:
Peter, this poll has so many technical and methodology problems it’s really hard to list them all, but I will give it a try:
• You can’t have a “margin of error” unless your sample is both random and representative. This poll was neither. Respondents selected themselves (a randomness problem), and the demographics are extremely far off from what a likely electorate would look like (a representativeness problem). For example, 57% of respondents in the poll were 18-44 years old, when in reality that number should be closer to 30%. Further, this sample was far – far – more educated than the Florida electorate, with more than two-thirds holding a post-high school degree, when in reality that number among working adults is below 40%. Finally, gender and party balance (too many Democrats) were also off, although not as dramatically as the factors listed above. Based on those factors alone, this is not even close to a representative sample.
• It’s a poll of “general population adults.” Oh my! Not voters? Not likely voters? To be valid, the sample should have been composed only of likely voters.
• The primary race test ballots are also problematic. The respondents are self-identified partisans (not confirmed on the voter file), were not screened for likelihood of voting in primaries but merely self-identified, and the results come from subgroups (n = 166 and 146, respectively) that are so small they are virtually meaningless.
• It included titles to describe the candidates in the primary test ballots (“Congressman,” “former Senator,” etc.), and since that information will not actually appear on the ballot, it makes the results far less reliable.
In short, this poll has a variety of methodology problems, and if you are applying the SPB saltshaker test, you are going to need a backup beeper to haul all that salt to your front door.
Hope this helps.
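For what it’s worth, Vancore’s point about those tiny primary subgroups is easy to check with the standard margin-of-error formula. The quick sketch below is my own illustration, not anything from the poll’s crosstabs, and it assumes the charitable case of a truly random sample (which Vancore says this wasn’t). Even under that generous assumption, the error bars on subgroups of 166 and 146 are enormous:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error, in percentage points, for a simple random sample of size n."""
    return 100 * z * sqrt(p * (1 - p) / n)

# The two primary test-ballot subgroups Vancore flags
for label, n in [("Primary subgroup of 166", 166), ("Primary subgroup of 146", 146)]:
    print(f"{label}: +/- {margin_of_error(n):.1f} points")
# Primary subgroup of 166: +/- 7.6 points
# Primary subgroup of 146: +/- 8.1 points
```

Roughly eight points of error on a primary ballot test, and that’s before you account for the self-selection problems, is the polling equivalent of a shrug.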
I trust Vancore and value his judgment in these matters. His analysis squares with what I was thinking, and IMHO, we can conclude that Saint Leo’s polls kind of suck.
With that, we are hereby changing our policy at FloridaPolitics.com, and here is my commitment to our readers: Unless and until Saint Leo University polling gets its act together, we are no longer covering its polls.
So anyway, I really like the capabilities of my Apple Watch; it’s a lot more natural to simply turn your wrist and see who’s calling and …