"Wrong Numbers" graphic with hands on cords hanging


By Ronan O’Beirne

Welcome to the poll on polls. To begin, please press 1

“What is a poll?” David Akin asks in the makeup room at the Sun News Network studio in downtown Toronto. He doesn’t need to think about his answer. “It is a snapshot backward in time.”

This photo of public opinion is a hallmark of Akin’s show, Battleground, which specializes in election coverage. For the final week of the 2013 Nova Scotia provincial campaign, Sun has commissioned a tracking poll from Abacus Data at a total cost of $12,000, devoting daily segments to analyzing the company’s numbers. The network has brought CEO David Coletto to the studio tonight—election night, October 8—to explain the results as they come in.

It’s a big night for Coletto. At 31, he’s a rising star in polling: his final figures in the 2011 federal and Ontario elections matched the actual votes almost perfectly, but Nova Scotia is his first test since the May 2013 provincial election in British Columbia, when his numbers did not match the results. “It’s a nervous night for pollsters—I know it is for you, David,” Akin says during rehearsal. He chuckles; he remembers B.C.

It’s also an important night for journalists. After the B.C. election, they faced harsh criticism from colleagues and comment-section dwellers for the way they covered public-opinion surveys. All firms’ polls were so inaccurate, in fact, that they nearly put Éric Grenier’s Canadian poll-analysis website, ThreeHundredEight.com, out of business. Like many, Grenier had projected an NDP victory; when the Liberals won, he asked himself, “Why run a site about polling when polling in Canada is so horrid?”

But with the polling industry under fire for poor performance, he thought there was still a place for his work. “In its own tiny little way,” he wrote, “ThreeHundredEight can be part of the solution.” Still, bad information and wonky predictions in Nova Scotia could mean further erosion of public trust in journalists. Akin isn’t nervous, though, because he accepts that the numbers might be off. “Polls aren’t necessarily predictive things,” he says. Snapshots aren’t crystal balls, and polls may or may not tell the future. “The last time we asked Nova Scotians was two days ago. Did they decide to do something different today? Well, they could have.”

He’s about to find out. The polls close in 20 minutes.

Thorough coverage from Battleground and a perfect showing from Abacus would be a good start, but fixing poll reporting will require much more. Over the past two decades, too many Canadian journalists have reduced the science of public opinion to a quick and catchy story at the expense of depth and nuance. Under constant pressure to produce content, they report on statistically insignificant shifts, ignoring margins of error and previous polls. Editors don’t carry out the due diligence that a poll’s numbers and methodology require, and newsrooms, with less room in their budgets than before for exhaustive, high-quality surveys, often settle for abbreviated freebies that pollsters use as advertising. A good poll, as Akin says, is a clear portrait of the public’s mood, but the stories these snapshots generate tend to crop and Photoshop the numbers beyond recognition.

None of this means journalists should give up trying. Despite lean budgets and tight news cycles, reporters can produce solid poll coverage—but it requires skepticism, scrutiny and a willingness to resist the temptation of a quick story with a clickable headline.

If you think the golden age of poll stories is over, please press 2

An unscientific survey of journalists and pollsters suggests that poll reporting wasn’t always bad. On January 19, 1984, The Globe and Mail did it right: a Globe-CROP poll, conducted by Environics Research and Montreal-based research agency CROP, ran on the front page, above the fold. The results were unremarkable: the federal Liberals were gaining on Brian Mulroney’s Tories. But compared to modern poll coverage, it was a 14-megapixel panorama. Apart from the statistical rigour—a sample twice as large as those of most polls today—the authors, who also oversaw the poll, thoroughly dissected the results, including an in-depth explanation of the methodology and the exact wording of the questions. A second article, published the same day, explained the difference between the Globe-CROP poll and a recent Gallup survey, and cautioned that polls conducted before election campaigns do not necessarily predict voting results. The page also displayed a table with party-support numbers dating back four years.

This close partnership between newspaper and pollster is a relic of a different time, when more surveys were conducted in person and Canadians actually answered their phones, rather than relying on caller ID to screen out strangers. It was also a time when pollsters went from being geeks to oracles, led by characters like Martin Goldfarb, whom Saturday Night dubbed “the most influential private citizen in Canada,” and Allan Gregg, whose signature long hair and diamond earring earned him the label “the punk pollster.” Journalists loved them: profiling Gregg in Saturday Night in 1985—a year after the magazine had flattered Goldfarb—Robert Fulford called him “the sort of man who makes you want to buy what he’s selling, whatever it is.”

Newspapers and broadcasters bought into Gregg and his ilk. According to Claire Hoy’s 1989 book, Margin of Error, CBC paid $167,000 for polls during the 1988 federal campaign. Other outlets paid up to $70,000, but that was only a fraction of market value (even the Globe-CROP poll covered only the pollsters’ costs). Then, as now, media polling was a small and unprofitable portion of the market researchers’ work—but it was worth it to get a firm’s name on the evening news.

Despite the money and care that went into them, stories based on thorough polls were not immune to criticism. Hoy wrote that polls are “not news in the sense that other campaign activities, such as speeches or announcements, are.” Stories about them are self-generated, created by the journalists, rather than by actual events. He also criticized Environics, whose pollsters had co-written the Globe-CROP stories, for “the same data distortions they have criticized other media outlets for in the past.”

If you are skeptical about Rob Ford’s popularity, please press 3

The man who co-wrote the Globe-CROP poll story in 1984 was Michael Adams, co-founder of Environics. At 67 years old, he doesn’t do much work for news outlets anymore—his new venture, the Environics Institute, works in “social research,” which suits his professorial style and sociology background.

But he hasn’t turned his back on media polls entirely. Today, he’s dissecting a recent Forum Research poll with his executive director, Keith Neuman, in a noisy bookstore-turned-Starbucks in midtown Toronto. The poll, which has made the news everywhere from Front Street to Fleet Street, suggests that after Toronto police Chief Bill Blair confirmed that his force had a video of Mayor Rob Ford appearing to smoke crack, the mayor’s approval rating went up, from 39 percent to 44. But in the same poll, 60 percent of Torontonians said he should resign.

“So I guess the question is, do you really conclude that people in Toronto are satisfied with his explanation or not?” Neuman says. “It’s not that conclusive.”

Neuman and Adams are graduates of the old school. At the Environics office, a half-block from the coffee shop, they have a copy of The Pulse of Democracy—George Gallup and Saul Rae’s sacred text on public opinion, published in 1940 and long out of print. Gallup and Rae warned that this might happen: “The answers may be inconsistent and confused,” they wrote. “But surely it is wise to know that such inconsistencies exist.” Was it wise, though, for journalists to elevate one answer over a contradictory one—especially when the jump in Ford’s approval rating was questionable?

“Was that the same poll?” Adams asks. “Yes.”

“Okay, and that he went up was compared to the same methodology?”

“Yeah,” Neuman says, “but it was a five-percentage-point difference.”

“Ohhh,” Adams says. There’s the rub.

“So, the margin of error is three or four percent,” Neuman continues.

“Okay. So, in fact, there’s no change,” Adams says. The Ford stories hinged on a bump that might not even exist.
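
The arithmetic behind that conclusion is worth spelling out. Below is a minimal sketch in Python of the standard margin-of-error calculation, assuming a 95-percent confidence level and a hypothetical sample of 1,000 respondents (the conversation doesn’t specify Forum’s exact sample size):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p estimated from a simple
    random sample of n, at 95 percent confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                     # assumed sample size
before, after = 0.39, 0.44   # Ford's approval, before and after the "bump"

moe_before = margin_of_error(before, n)  # ~3.0 points
moe_after = margin_of_error(after, n)    # ~3.1 points

print(f"before: {before - moe_before:.1%} to {before + moe_before:.1%}")
print(f"after:  {after - moe_after:.1%} to {after + moe_after:.1%}")

# The two confidence intervals overlap (roughly 36-42 and 41-47), so by
# this conservative check the five-point jump can't be told apart from
# sampling noise -- which is Adams's point: "in fact, there's no change."
print("intervals overlap:", after - moe_after <= before + moe_before)  # True
```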

But it didn’t take a man with 40 years of experience to notice this data distortion. Grenier, who has run ThreeHundredEight for five-and-a-half years, caught the problem too, and dismantled the “Ford’s numbers went up” narrative on his website. The mayor’s approval rating had hovered between 44 and 49 percent throughout the summer and dipped to 39 percent only in late October. “It seems much more likely,” Grenier wrote, “that his poll numbers decreased.” Within a day, it was the second-most-viewed post in the site’s history.

The first stories about Ford’s allegedly higher numbers were simple and rushed, like the poll itself. Forum’s robocall questionnaire was in the field within hours of Blair’s press conference, and the headlines (“Rob Ford’s approval rating ticks upward with news of crack video”) appeared the next day. Some of the stories—including those by the Toronto Star and CBC—also reported that 60 percent of respondents wanted the mayor to resign, but only further down the story. It’s hard to beat a headline that says voters like a crack-smoking mayor.

If you’re comfortable blaming technology, please press 4

“It’s basically a half-hour job,” says Gloria Galloway, a reporter with the Globe’s parliamentary bureau. She’s been covering elections since 1997 and knows how to avoid the common pitfalls of poll stories. “I tend to do them between actually doing real stuff.” (This is partly because some survey results are embargoed, so a journalist can’t talk about them to anyone but the pollster.) They are an easy, if low-reward, solution for slow news days, says Susan Delacourt, the Star’s senior political writer. “The temptation is, ‘Oh, crap, we’ve got nothing today,’” she says. “‘Let’s just throw a poll into that space.’”

And journalists will probably never lack surveys for that space. In the past 10 years, pollsters have increasingly relied on interactive voice response (IVR), or “robocalls,” and online polls. Both are faster and cheaper than “live” phone calls, because machines don’t take paid lunch breaks or call in sick. In the age of caller ID, response rates have plunged into the single digits, so it takes longer (and thus costs more) for a real person with a real voice to conduct a poll. A robocall, meanwhile, can easily reach a sample size of 1,000 in an evening. This is how, a day after Blair said he had a video of the mayor appearing to smoke crack, readers across Toronto knew what the city thought of that. In a 24-hour news cycle, this speed is invaluable for journalists.

Grenier often ignores the 24-hour cycle, despite the volume of available data. Instead of writing up each poll, he aggregates and analyzes them, looking for trends rather than insignificant shifts. It’s a model inspired by the work of Nate Silver, who rose from data-head to rock star in 2008, when he accurately predicted the winner of the U.S. presidential election in 49 of 50 states on his blog, FiveThirtyEight. Grenier—who, like Silver, has no formal training in polling but is a politics junkie—saw there was no Canadian equivalent and set out to become a neutral observer: his site is all about the numbers, rather than the people moving them. (ThreeHundredEight, which refers to the number of seats in the House of Commons, is a clear nod to FiveThirtyEight—the number of votes in the U.S. Electoral College.)

Five-and-a-half years later, Grenier is still analyzing polls in an attempt to cut through the simplistic “Party X is gaining on/losing ground to/neck-and-neck with party Y” stories. He believes there’s often too much focus on top-line numbers: “You get a couple of quotes from the pollster and there’s your article.” Facing problems in polling and poll reporting, he hopes that ThreeHundredEight—and the articles he writes for the Globe, The Hill Times and The Huffington Post Canada—are the way forward.

Grenier tries to determine a political party’s true support by weighting polls by factors such as when they were conducted and the research company’s track record. He believes this method is more accurate than individual polls—it’s the difference between looking at one snapshot of the Grand Canyon and flipping through an album. He has written at length about his methodology and those of marquee research firms, including thorough post-mortems of his own election forecasts after the votes have been tallied. “If Éric Grenier says the research is pretty good,” Neuman says, “I believe it and you should believe it.”
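
In rough outline, an aggregation like his can be sketched in a few lines. The weighting scheme below is hypothetical—Grenier publishes his actual decay rates and firm ratings on his site—but it shows the shape of the idea: newer polls and better-rated firms count for more.

```python
from dataclasses import dataclass

@dataclass
class Poll:
    support: float      # one party's support, e.g. 0.46 for 46 percent
    days_old: int       # days since the poll left the field
    firm_rating: float  # 0-to-1 score for the firm's track record (invented)

def weight(poll: Poll) -> float:
    recency = 0.5 ** (poll.days_old / 7)  # halve a poll's weight every week
    return recency * poll.firm_rating

def aggregate(polls: list[Poll]) -> float:
    """Weighted average of the party's support across all polls."""
    total = sum(weight(p) for p in polls)
    return sum(weight(p) * p.support for p in polls) / total

polls = [
    Poll(support=0.46, days_old=2, firm_rating=0.9),
    Poll(support=0.43, days_old=9, firm_rating=0.7),
    Poll(support=0.48, days_old=16, firm_rating=0.8),
]
print(f"weighted estimate: {aggregate(polls):.1%}")  # ~45.6%
```

A single outlying poll moves an average like this far less than it moves a one-poll headline—the album, not the snapshot.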

Not everyone is convinced, though. Nik Nanos of Nanos Research thinks Grenier does important work, but is concerned that he doesn’t have access to as much good data as Silver does. The technology that made polls faster has also thrown the quality of the results into question. The data’s speed is helpful for reporters; it also means they are publishing opinions about events that voters have had no time to digest—something Gallup and Rae warned about decades before the 24-hour news cycle became a reality.

That’s not the only hazard. No lunch breaks aside, robocalls have notoriously low response rates, produce unverifiable demographic data (“to lie about your income, press 2”) and skew toward people who are willing to talk to a machine. Michael Marzolini of Pollara, another research firm, says (half-jokingly) that the only people who finish robocall surveys are “shut-ins and convicts.”

Online polls may not reach the same demographics as IVR polls do, but they, too, present challenges to journalists. Many online poll respondents are self-selected—people who click on banner ads or sign up on a research company’s website. A pool of such respondents does not constitute a random sample, so a margin of error can’t be calculated, and the anonymity the web offers makes it easier to lie about age, sex and anything else. (Some polling firms verify their respondents’ profiles in order to have more confidence in their data. For example, Nanos Research recruits people to its online polls with a phone call.) Self-selection can also lead to journalistic slip-ups. In September 2013, Postmedia News reported on an online Environics Research Group poll that surveyed 807 women. The author noted that “a sample of this size would yield a margin of error of plus or minus 3.5 percent, 19 times out of 20.” The Environics poll was not a random sample—meaning that no margin of error should have been given.
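
The 3.5 figure itself is easy to reproduce—which is exactly the trap. A quick check in Python shows where the number comes from, and why it only means something for a random sample:

```python
import math

# What the standard formula yields for n = 807 *if* the sample were random:
n = 807
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)   # worst case, p = 0.5
print(f"+/-{moe:.2%}, 19 times out of 20")  # +/-3.45% -- the story's 3.5

# But the derivation assumes simple random sampling. A self-selected online
# panel has no defined sampling error, so attaching this number to the
# Environics poll lent it a precision it couldn't claim.
```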

That’s not to say online polls and IVR surveys inherently lack credibility. Both have scored victories: Abacus’s pre-election polls in the 2011 federal and Ontario elections, both conducted online, were close to the actual popular vote. Using robocalls, EKOS Research was within the margin of error in its final survey before the 2011 Ontario election. “People tend to hear these really vulgar generalizations about IVR polls or online polls,” says EKOS president Frank Graves. “There are good and bad examples of each.”

If you think thoroughness is underrated, please press 5

The problems with poll stories lie not only in the writing, but also in the reporting. When Nanos sent survey results to journalists 20 years ago, he’d get two calls: one from a reporter, looking for analysis, and a second from an editor, who’d ask, “Nik, are these all the questions in the study? Who paid for this poll? Was this a random sample or not?” Nanos was glad to take the calls. “We’re not talking about anything onerous,” he says, “but basically doing a quality check on the research.”

With less time to turn stories around and fewer eyeballs to vet them, this quality check has fallen by the wayside—and pollsters have noticed.

During the 2011 Ontario election campaign, Darrell Bricker and John Wright of Ipsos Public Affairs wrote an open letter calling for “better, more informed reporting” of polls, arguing journalists need to do a better job of “kicking the tires” on a survey before driving it into print.

Bricker was so passionate about it that, starting in 2012, he recorded five video tutorials that demonstrate proper tire-kicking technique and posted them to YouTube; he posted the sixth video (about how pollsters weight data to make their respondents look more like the general population) late last year. Meanwhile, Neuman says that while pollsters deserve some blame for bad coverage, journalists deserve the bulk of it “for not being cautious and applying some standards.”
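
The subject of that sixth video—weighting—is simple enough to sketch. In the most basic version, each respondent is weighted by their demographic group’s share of the population divided by its share of the sample. The numbers below are invented; real pollsters weight on several variables at once, often iteratively.

```python
# Census-style population shares versus who actually answered the poll
# (all figures invented for illustration; robocall samples tend to skew older).
population_share = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}
sample_share     = {"18-34": 0.10, "35-54": 0.30, "55+": 0.60}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # roughly {'18-34': 2.8, '35-54': 1.17, '55+': 0.62}

# In the weighted tally, each under-represented young respondent counts
# almost three times over, and each over-represented older one counts
# for less than two-thirds of a response.
```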

Standards do exist for reporting on polls, but they are weak. The Canadian Press Stylebook is an exception; it devotes four pages to polling. CBC’s polling guidelines occupy two small sections of its standards and practices. Jack Nagler, CBC News’s director of journalistic public accountability and engagement, says reporters also work with the research department, which applies rigorous standards, based on industry guidelines, when determining whether a poll is fit to print. But the broadcaster’s ombudsman, Esther Enkin, recently found that “there is a lack of rigour in the process to ensure a consistent adherence to CBC’s policies and high standards.” The Star’s standards, on the other hand, run only three paragraphs and contain the basics, like the requirement that a poll story include the sample size, margin of error and exact wording of the questions.

Better standards are easy to find. Outlets such as The New York Times, ABC News and The Washington Post are the ne plus ultra of poll journalism. Nanos insists that if Canadian outlets adopted those standards, fewer polls would make it out of the newsroom. He’s right: the Times’s news surveys division effectively bans reporting on political parties’ internal polls, online surveys and robocalls. (Unsolicited robocalls from pollsters to cellphones are banned in the U.S., which means any IVR poll misses a growing segment of the population.)

The Times also exercises more control over its surveys. “We don’t commission a poll,” says Marjorie Connelly, the paper’s head of news surveys. “We do a poll.” The Times does everything on its own or with CBS News (the outlets have a long-standing partnership that keeps costs down), except for the polling itself. Several Canadian news outlets have relationships with pollsters—Sun News and Abacus, CP and Harris/Decima, CTV and Ipsos Reid—but journalists aren’t as involved in survey creation. Some major outlets used to have reporters and editors with polling expertise on staff to work with pollsters on questionnaire design, but that kind of training costs money, which has evaporated over the past decade.

Canadian journalists, meanwhile, play a less regulated version of the numbers game. In the 2011 federal election, the Globe and CTV commissioned a tracking poll from Nanos and reported on the results almost every day. In the final week of the campaign, all of the Globe’s reports included the margin of error and sample size, but only two mentioned the methodology. An April 27 story noted that the NDP was “firmly in second place,” though the gap between it and the Liberal Party was well within the margin of error—it was a statistical dead heat.
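
Checking a claim like “firmly in second place” takes only a few lines, because the gap between two parties in the same poll is noisier than either party’s own number (the two estimates move against each other). A sketch with hypothetical figures:

```python
import math

def lead_moe(p1: float, p2: float, n: int, z: float = 1.96) -> float:
    """Margin of error on the gap between two parties in one poll.
    For multinomial proportions, Var(p1 - p2) = (p1 + p2 - (p1 - p2)**2) / n."""
    return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

# Hypothetical numbers in the spirit of that April 27 report:
ndp, lib, n = 0.24, 0.22, 400  # a two-point gap in one night's sample
gap = ndp - lib

print(f"gap: {gap:.0%}, margin on the gap: +/-{lead_moe(ndp, lib, n):.1%}")
# gap: 2%, margin on the gap: +/-6.6% -- a statistical dead heat,
# not a party "firmly in second place".
```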

If you’re tired of the horse race stories, please press 6

Journalists often use polls to explain political manoeuvres (if the Conservatives attack the Liberals instead of the NDP, check the numbers to find out why). But political parties rely on private polls; they pay top dollar for surveys that are more thorough and precisely targeted than the free polls that journalists usually get.

Marzolini says a typical media poll will ask five or six questions, and only the “horse race” question makes the evening news: “If an election were held tomorrow, who would you vote for?” An internal poll for a candidate, on the other hand, might ask as many as 250 questions.

In her new book, Shopping for Votes, Delacourt explains that over the last 10 years, parties have increasingly relied on “micro-targeting” specific ridings—and even specific neighbourhoods within those ridings—in their polling, rather than casting a wide net. The shift from politics by poll to “politics by postal code” is a serious obstacle for journalists, who are left looking at inferior data. Delacourt thinks that, absent a serious investment in a detailed poll, reporters should get off the campaign bus and talk to voters about why they’re changing their minds. The horse race numbers can’t tell that story, she says, “just like a snapshot can’t capture something as well as a video.”

For some good news from Nova Scotia, please press 7

There wasn’t much movement to capture in Nova Scotia. The opposition Liberals led in every poll between the writ drop and election day, and in its final survey, Abacus pegged their support at 46 percent to 27 each for the Tories and the NDP. An hour after the polls closed, that looked pretty bang-on; during a live hit from the Liberals’ party in Bridgetown, N.S., Sun News Network reporter Paige MacPherson told Akin she’d just seen a tweet praising Abacus for getting it right. “You heard it here first!” she said.

Akin replied with the same chuckle he’d used on Coletto: “Well, I hope the best for our friends at Abacus, but I am waiting until all the votes are counted.” He knew better than to call it early: one of the first elections the show covered was B.C., where the polls had suggested that the NDP would sweep to victory. The incumbent Liberals won.

But he couldn’t ignore the numbers for long. Seven minutes later, Akin, feigning uncertainty, said to Coletto, “I’m looking at the percentage vote there and—gee, where have I seen those numbers before? Let’s see. It might have been a certain poll there, David.”

When things wrap up, Abacus’s final numbers prove to be very close to the mark: 45.7 percent for the Liberals, 26.8 for the NDP and 26.3 for the Tories. Coletto was quick to note it. “Sun News/Abacus Data poll gets it right,” read the headline on his post-vote analysis for the Abacus website. A week after the election, Akin said on Battleground that Abacus’s final poll was “about as bang-on as you can get.” Grenier’s projections were also close, although the Tories’ vote share of 26.3 percent was just barely within his forecast.

One successful election call isn’t statistically significant, but another glimmer of hope appeared a month later in the Winnipeg Free Press. A Forum poll, released the day before a federal by-election in Brandon-Souris, suggested the Liberals held an astonishing 29-point lead over the Conservatives, who’d held the riding for all but four of the previous 60 years. But reporter Mary Agnes Welch raised questions about the survey’s methodology, spurred by residents claiming Forum had called them up to six times during the campaign. Her skepticism was vindicated on election night when the 29-point Liberal lead became a 1.4-point Conservative victory.

If you think poll stories can be rescued, please press 8.

If you remain undecided, please press 9 . . .

[beep]

This piece was published in the Spring 2014 issue of the Ryerson Review of Journalism.


About the author

Ronan O'Beirne was the Blog Editor for the Spring 2014 issue of Ryerson Review of Journalism.

