The Well Q&A with UNC Hussman's Joe Cabosky: Can we trust election polling?


UNC Hussman Assistant Professor Joseph Cabosky, who also holds a doctorate in mass communication and media studies from Hussman, is in the news this week for his insights on polling ahead of Election Day next Tuesday. Cabosky, whose research focuses on diversifying and disrupting strategic communication, public relations and advertising, spoke with The Well about whether we can trust the polls this time around. As Gaby Iori writes, "Presidential election polling took a reputational hit in 2016. But was that justified?"

Read Cabosky's answers to that and other burning polling questions on The Well or below.
 

Can we trust election polling?

Gaby Iori, The Well, Thursday, October 29th, 2020

Presidential election polling took a reputational hit in 2016. But was that justified? And can we trust polling in the current election? For answers, The Well spoke with UNC Hussman School Assistant Professor Joseph Cabosky, who holds a doctorate in mass communication and media studies from UNC Hussman and a law degree from Michigan State University. Cabosky’s research focuses on public relations involving politics, among other topics, including polling surveys and data analysis.

What is polling?
Cabosky: Polling is part of social science that allows you to make inferences from a sample of an overall population. There are a lot of different ways to poll. Even the U.S. Census is an attempted poll, or an attempted survey. If you were to survey a thousand people in North Carolina, you could be pretty accurate in terms of extrapolating out to a general population, assuming you surveyed properly.
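To make the idea of extrapolating from a sample concrete, here is a minimal sketch in Python with assumed numbers (the 52% “true” support and the single statewide sample are hypothetical, not figures from the interview):

```python
import random

# Hypothetical illustration: assume the "true" support in the full population is 52%
# and draw a simple random sample of 1,000 people, as in the example above.
random.seed(1)
true_support = 0.52
n = 1_000

sample = [random.random() < true_support for _ in range(n)]
estimate = sum(sample) / n

# Rough 95% margin of error for a proportion: 1.96 * sqrt(p * (1 - p) / n)
moe = 1.96 * (estimate * (1 - estimate) / n) ** 0.5
print(f"estimate: {estimate:.1%}, margin of error: +/- {moe:.1%}")
```

With 1,000 respondents the margin of error works out to roughly plus or minus 3 points, which is why a properly drawn statewide sample of that size can be “pretty accurate.”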

Within the political space, polling is the notion that we survey a group of people from a certain population — whether that’s a national or state population, or even more localized than that — and make generalizable claims based on who we talk to. It can tell us everything from who people are voting for to how they feel about different kinds of issues and anything in between.

How is polling done today compared to years prior?
Cabosky: We still have traditional landline polling, which is very important for older voters who make up a large part of our voting population. Cellphone polling also became much more common in 2012 and 2016.

Cellphone polling can be a recorded conversation, or it can actually happen through text message. Traditional cellphone polling, though, just uses the same kind of random digit dialing from the landline days, so it’s the same method on a different kind of phone. There are certain populations that won’t answer a phone call, especially now that Google and Apple offer blocking software to filter out unknown numbers, but they might answer a text.

There is also internet-based survey polling, which has become more prevalent over the last three or four election cycles because it’s cheaper and faster. These days, you have these huge market research firms that might have 20- or 30,000 people in their survey, so you have a lot more of what we call statistical power. But the limits are that many Americans still don’t have proper internet or would never use the internet to do something like that.
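A rough way to see what that extra statistical power buys, assuming simple random sampling and a roughly 50/50 split (conditions a real internet panel only approximates), is the standard margin-of-error formula:

$$\mathrm{MOE} \approx 1.96\sqrt{\frac{p(1-p)}{n}} \quad\Rightarrow\quad n=1{,}000:\ \pm 3.1\ \text{points}, \qquad n=30{,}000:\ \pm 0.6\ \text{points}.$$

The error shrinks only with the square root of the sample size, and the formula says nothing about the people an online panel never reaches in the first place.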

What went wrong with polling during the 2016 presidential election?
Cabosky: I think it’s a bit of an exaggerated myth that 2016 was just terrible polling. There was one region of the country where it was bad, but overall, some of the most accurate polls were the national polls. These mostly showed Clinton up by three, three-and-a-half points. She ended up winning the popular vote by 2.1 points, which was well within what we call the margin of error.
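As a rough check on those numbers, take a typical plus-or-minus 3-point margin of error for a roughly 1,000-person national poll:

$$(3.0\ \text{to}\ 3.5) - 2.1 \approx 1\ \text{to}\ 1.4\ \text{points of polling error}, \quad \text{well inside } \pm 3\ \text{points}.$$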

The one region of the country that just completely changed the election was everything from Pennsylvania to Minnesota. Minnesota stayed blue, but Wisconsin, Michigan and Pennsylvania flipped, and Ohio and Iowa, which people thought would slightly lean Trump by a point or two, ended up leaning Trump by eight- to nine-point margins. So, that was one region where pollsters were off by five or six points, and that completely changed an entire election.

We also can’t discount qualitative insights. I had a family member, a single woman in her 30s who had never voted, but I knew from her social media posts that she was a Trump supporter. The way polling works would suggest she was unlikely to vote because she was in her 30s and hadn’t voted before, but I remember seeing on Facebook that she had posted a photo with her “I Voted” sticker. I thought about how, if this family member was like many of the nontraditional voters in parts of the country that weren’t covered much, that would definitely sway things on election night.

What do you think the biggest problem with polling is?
Cabosky: I think we use demographics way too broadly, where we try to consolidate very diverse groups of people by race or ethnicity. There are places even here in North Carolina that we don’t give nearly enough attention to. When smaller groups of people make big shifts, it can be harder to pick up. For example, the biggest swing voters in 2016 in North Carolina were voters from the Lumbee Tribe. But a poll may only sample about three of them, based on sample sizes. Yet, if there’s a big shift, that could lead to a net change of, say, 10,000 votes statewide.
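The arithmetic behind that example, using round, illustrative numbers rather than figures from the interview (a group that is roughly 0.5% of the electorate, a 600-person poll, about 50,000 voters in the group, and a 20-point swing in their margin), looks like this:

$$0.005 \times 600 = 3\ \text{respondents}, \qquad 0.20 \times 50{,}000 = 10{,}000\ \text{net votes}.$$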

A white voter in a small town in North Carolina might be very different from a white voter in Burlington. These little variations can add up if they all shift one particular way.

What sort of measures have been taken to ensure accurate polling in the 2020 presidential election?
Cabosky: During every cycle we have to make some adjustments, especially because the way we communicate with people has been changing so quickly from landlines to text. I always wonder about things like response rates from people who are getting cellphone calls, because the technology that Apple and Google use to help trim out those spam calls could disproportionately impact things, at least on the margins of who is not seeing those messages. So, we are always making those one- or two-point shifts to account for that.

I think polling continues to be pretty good, even in something like the Democratic primaries when it’s very competitive. There’s nothing glaring about the 2020 election that I’m super worried about. We’re always overly analytical, and there are always things that I’m curious about. We’ll see one or two places where the polling is wrong, but with sampling error we’re basically saying that it’s right 19 times out of 20. Out of 20 states, statistically, there’s always one state that’s probably going to be a little bit off.
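“Right 19 times out of 20” is just the 95% confidence level behind the margin of error. A minimal simulation, assuming a 50/50 race and 1,000-person polls (numbers chosen for illustration), shows roughly that coverage rate:

```python
import random

# Hypothetical setup: a 50/50 race, many simulated polls of 1,000 respondents each.
random.seed(7)
true_support = 0.50
n, trials, covered = 1_000, 10_000, 0

for _ in range(trials):
    est = sum(random.random() < true_support for _ in range(n)) / n
    moe = 1.96 * (est * (1 - est) / n) ** 0.5          # 95% margin of error
    covered += (est - moe) <= true_support <= (est + moe)

print(f"coverage: {covered / trials:.1%}")  # about 95%, i.e. roughly 19 polls in 20
```

So even when every poll is done properly, roughly one result in 20 is expected to land outside its stated margin of error.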