On Oct. 21, Gallup released a national poll that dropped like a neutron bomb: Mitt Romney had a whopping 7-point lead over President Barack Obama, according to the much-respected polling company whose operational headquarters is in Omaha.
Sixteen days later — on Election Night — that presidential bombshell looked very much like a dud. Obama coasted to a second term and ended up winning by more than three percentage points.
Now Gallup is taking incoming fire from both independent experts and the Obama campaign for what they see as the company's errant presidential polling. The criticism comes during a postelection reckoning inside Gallup, as its East Coast pollsters and Omaha statisticians grapple with a crucial question.
What, if anything, went wrong?
“It may be that the methods we use might be too oriented to the history of voting,” said Frank Newport, Gallup's editor in chief, during an interview this week. “The old patterns may be changing. ... We're looking at all of it very carefully.”
A look at the recent history of Gallup's national tracking poll shows something needs to change, say two high-profile critics.
One, statistician Nate Silver, has achieved national prominence by using polling data to predict almost perfectly the past two presidential elections. Silver, who runs the New York Times-affiliated blog FiveThirtyEight, crunched the numbers and determined that Gallup was the least accurate of 23 major pollsters in the three weeks preceding the 2012 presidential election.
And this isn't Gallup's first whiff, Silver wrote recently — its national tracking poll predicted too large an Obama win in 2008 and then overestimated the size of the Republican takeover of the House of Representatives in 2010.
“Perhaps it won't be long before Google, not Gallup, is the most trusted name in polling,” Silver wrote in a blog post about how online polling proved more accurate than traditional telephone polling in 2012.
Silver didn't return emails seeking comment.
Newport doesn't believe that Gallup's pre-election polling was as inaccurate as Silver claims. He points to Gallup's final poll, which showed Romney up a single point, 49 percent to 48 — a result within the margin of error.
And he argues that, while not perfect, Gallup's polls reflected a back-and-forth narrative: Obama started a bit ahead. Then, after the first debate, Romney surged ahead. Then, after Hurricane Sandy, Obama reversed Romney's momentum.
“Obama gained over that last week, and he gained in our final polls,” Newport says.
Another Gallup critic, Princeton University professor Sam Wang, sees only one problem with this narrative:
The president was never actually behind, he says.
“The overwhelming evidence is that there was not a single day when Romney led Obama, either in the popular vote or in the Electoral College,” Wang said, citing the aggregated data of hundreds of national and state polls done in October.
Wang, like Silver, made his name by averaging hundreds of different polls and then accurately predicting the national and state-by-state results of the past two presidential elections.
Silver predicted the presidential winner in all 50 states this year and has missed only one state during the past two elections. Wang has actually done a bit better: During the past two presidential elections, he's wrongly predicted only one electoral vote — he had John McCain, not Obama, winning Nebraska's 2nd District in 2008.
This time around, both Wang's analysis and Silver's forecast did show Romney gaining on Obama in October, but neither ever showed the Republican challenger pulling ahead. The reason is fairly straightforward: The majority of state polls always suggested an overall Obama lead, though the size of that lead fell a bit early in the month and then rose slightly in late October.
The headline-grabbing polls — like that October Gallup poll trumpeting Romney's 7-point lead — obscured the real story, Wang said. Beyond that, he says, many polls were simply wrong.
“I estimate that Gallup's results in October-November were biased by 4 to 8 points (in terms of the margin between the two candidates) from the voting public's true sentiments,” said Wang, a Princeton neuroscience professor, in an email to The World-Herald.
At the heart of the Gallup polling controversy is the company's “likely voter” model, which tries to determine who will actually head to the polls on Election Day.
Joel Benenson, Obama's lead pollster, blasted Gallup and that model in a postelection interview with the website Politico, suggesting that the model routinely misses voting blocs that tend to support Democrats.
“I think it's long overdue for an organization with a name as well known as Gallup to recognize what the demographics of the American electorate actually are and figure out why their model has continued to skew too old, too white and less likely to be college-educated than the nation's voters,” Benenson said, according to Politico.
Gallup doesn't actually try to make assumptions about how much of the electorate will be white or young on Election Day, Newport says. Instead, the Gallup telephone interviewer — often operating from a cubicle in Omaha or Lincoln — asks a series of questions meant to gauge if the survey respondent seems motivated enough to vote.
The problem with the likely voter model may be that it doesn't reflect the new world of campaign turnout operations, Newport says.
In 2012, both the Obama and Romney campaigns developed micro-targeting techniques meant to identify and then repeatedly make personal connections with registered voters who tended to agree with their candidate but may not be sufficiently fired up to cast a ballot. The Obama campaign in particular was renowned for registering and then turning out these seemingly unlikely voters.
“What if you tell a pollster some variation of 'No, I probably won't be voting,' and then on Election Day there's a 'knock, knock, knock' and it's someone from a campaign offering you a ride to the polls?” Newport asks. “We don't impose assumptions on our data, but the method of identifying likely voters may need to be adjusted.”
That's not the only adjustment Gallup is looking at, Newport says.
He said Gallup's pollsters and methodologists will meet in Omaha soon, look back at the 2012 election cycle and then peer forward.
Should Gallup start to use Internet polling? Should it send text messages in addition to making calls? Can these newer technologies coexist with — or become part of — Gallup's tracking poll, which is conducted by telephone?
“People in the 1980s said you just couldn't do a good poll by telephone. You had to knock on doors,” Newport says. “There's always resistance to new methods, but I can tell you that anything is possible for us in the future.”
Wang said that if Gallup doesn't change, it runs a real risk of slipping from its historic place atop the polling world.
To illustrate, he brings up the Literary Digest, an influential weekly magazine whose pre-election presidential poll was seen as the gold standard after it started correctly predicting elections in 1916.
Then came 1936. Literary Digest's poll predicted that Alf Landon would win 57 percent of the vote and blow out President Franklin Roosevelt in Roosevelt's bid for re-election.
Election Day did bring a blowout. Roosevelt won all but two states and defeated Landon by 24 percentage points.
Literary Digest was soon out of business, while a young statistician who made his name by correctly predicting Roosevelt's win began to reinvent and then refine modern polling in the United States.
That statistician's name: George Gallup.
Wang hopes today's Gallup heeds that cautionary tale.
“Gallup has so much good human capital,” Wang says. “I hope they take 2012 to heart as a lesson for regaining leadership in the field.”
Contact the writer: 402-444-1064, email@example.com