"Could we be on the edge of a Literary Digest moment, or a 'Dewey Defeats Truman' embarrassment for the industry? Certainly," I wrote here at the Examiner one week ago.

Turns out we were. I've been skeptical of traditional polling for some time now, but — columnist accountability time — I wasn't skeptical enough this year.

Looking at the statewide polling averages showing states such as Michigan and Wisconsin nearly out of reach for Donald Trump, I thought it was more likely than not that Hillary Clinton would be president, winning just over 320 electoral votes. In places that were toss-ups, I bought the Democrats' spin that their ground game would get out their vote. For Trump to win, I thought, he'd need states like Nevada, where early voting suggested he was weak, and thus an awful lot would have to break his way.

To their credit, Trump's team saw a different path, talked of an "undercover Trump vote," and went to upper Midwest states to find those voters, piecing together an Electoral College victory in the process.

And while neither the Clinton team, the Republican National Committee, nor even the Trump team predicted outright that Trump would win, it was Trump's folks who rightly saw an opportunity that everyone else thought was off the table.

How did Trump do it, and how did the polls miss it?

There are two ways you can "miss" Trump's voters: either you don't call them, or, when you do call them, you don't get the full story about their views.

I was dismissive of the idea that there was an "undercover Trump vote," in part because studies like this one from Morning Consult found that to the extent it existed, it probably wouldn't swing the election. I often raised the possibility of social desirability bias — the factor that makes someone less likely to give an answer in a poll they think is "wrong" or unacceptable — but noted that, short of reading minds, it can be hard to identify.

But Trump's pollsters picked up on hints that it could be there. Instead of reading minds, they asked questions that tried to achieve a similar end. While the public polling that folks like me were consuming focused on one number — the "topline" or "horserace" question — private polling by Trump's team saw hints of where voters might go based on answers to other questions that got at the heart of Trump's appeal, according to Trump pollster Adam Geller.

Different questions might elicit more illuminating answers that get around some of the dreaded "social desirability bias" that makes studying controversial topics so hard. And with nearly 60 percent of Clinton voters saying they'd find it hard to ever respect someone who voted for Trump, you can see why someone might just keep quiet. (One notable trend I saw in the exit polls: College-educated white women, a group thought to be ripe for Clinton's coalition, broke for her by only six points, despite pre-election polling suggesting she had them by historic margins. It's not hard to imagine these voters being more reluctant to weigh in for Trump.)

This suggests to me that pollsters need to think long and hard about the full suite of questions they are asking, and what questions in addition to the horserace can give a full picture. Polling is an art as well as a science, and the art of crafting good questions is still vital.

The other element of "missing" Trump voters comes down to sampling. Even assuming everyone was candid with pollsters, were pollsters simply sampling too many Democrats or Obama supporters?

With some votes still left to be counted, it appears possible that Trump will win the White House with fewer votes than Mitt Romney received in 2012. (Romney lost with 60.9 million votes; Trump is currently sitting at 59.7 million.) The story may be less that Trump people were being missed and more that people who were being considered "likely voters" for Clinton were not actually going to turn out at all.

Nowadays, the political analytics pros often build their models of the electorate based on the "voter file," the list of registered voters. They look at someone's past vote history and make an assumption about future likelihood to vote. Someone who voted in the last two presidential elections would probably be "scored" as a fairly likely voter.

But as Democrats are discovering, you can't take the customer base for Brand Obama, change up the formula, and expect the same level of sales. As a result, it may not just be that pollsters were ignoring or screening out Trump fans, but rather may have been allowing too many folks into their "likely voter" pools who were not actually going to turn out for Clinton.

Take the city proper of Philadelphia, where Hillary Clinton won nearly 563,000 votes and Donald Trump won 105,000. Trump improved slightly on Romney's haul, beating it by about 9,000 votes, but Clinton fell 25,000 votes shy of Obama's total there. Or take Milwaukee County, Wis., where Trump won 28,000 fewer votes than Mitt Romney, but Clinton won 43,000 fewer than Obama.

It's not just about Obama's coalition evaporating; turnout isn't down everywhere. Credit is due to Trump for boosting turnout in places like Luzerne County, Pa., where Clinton got 12,000 fewer votes than Obama but Trump got 20,000 more than Romney had.

The overall proportions of the electorate in some of these states may not have shown a surge in white voters. (The exit polls aren't the best tool for this, but until the voter file is available, they're what we've got.) Pennsylvania exit polls show a slight uptick in the white vote, but Michigan and Wisconsin's show no such thing.

It's not clear that some kind of unique demographic projection would have made a huge difference, which is why those other questions to which Geller refers are so important. Not all white voters, or all black voters, or all female voters, are alike, and even within the big demographic buckets pollsters weight on, it's clear that Clinton turnout within many of these groups was overestimated.

How can polling be fixed then? It's important to note that not all polls were off. In New Hampshire and Virginia, the polling averages were freakishly on the money, and in Florida, Colorado, Pennsylvania and North Carolina the misses were largely "within the margin of error."

But nationally, the polling average gave Clinton a three-point lead, and her likely final margin in the popular vote will not be that large. And in a handful of absolutely essential states, the polls simply blew it.

The big public polling misses in Wisconsin and Michigan, paired with the belief that Pennsylvania is always fool's gold for the GOP, led many — myself included — to think Trump's path was much narrower than it was. Kudos to Team Trump for seeing that those states were more in play than any of us realized.

There's a lot of work to be done in the polling world, and a need to continue to rethink how we do what we do. We also need to be more open to the idea that any one input — in this case, polls — may not be the only way to hear what people are saying. And while I don't advocate that we turn to tarot cards or mask sales as a replacement for our industry, we ought to be open to integrating more signals into how we learn. Whether it's digital insights such as search trends and social media chatter, or new ways to ask people for their opinions, we have to evolve as an industry.

I for one am looking forward to finding out what's next.

Kristen Soltis Anderson is a columnist for The Washington Examiner and author of "The Selfie Vote."