If it weren’t for the Brett Kavanaugh Thunderdome in the Senate Judiciary Committee last week, the national conversation surrounding social media’s rules of the road would have entirely consumed America’s attention. And rightly so.
Social media titans Sheryl Sandberg of Facebook and Jack Dorsey of Twitter came to the Hill and faced heavy criticism and hard questions from both chambers of Congress. As you’d expect, the questions were primarily technical and legal in orientation: What constitutes free speech and the public sphere? What policing do you do of the content you post, curate, and allow to be seen? What are your obligations to the customer and the marketplace?
Left unasked was a core human question: What is it about social media that provides such rich soil for vitriol and polarization? What, if anything, can be done to stabilize behaviors back toward the comparative civility of the tangible, actual world? (A depressingly low bar, I know.) In theory, the promise of having all the content and opinions in the world a follow or click away could be the rising tide that lifts all boats and promotes a solid dialogue, right?
Not quite yet.
Then, as if on cue, there were major press articles in the Washington Post and the New York Times over the weekend discussing how researchers are finding that being exposed to the other side of an issue actually inflames the partisan polarization of social media. So, the argument would contend, the marketplace of ideas was driving folks to the polar extremes of this Red/Blue New American Civil War.
According to the Washington Post piece about a sociologist named Christopher Bail:
Bail took a dose of his own medicine and followed both accounts — which retweeted people such as @LouDobbs and @EricTrump to Democrats or @SenKamalaHarris and @ariannahuff to Republicans. Bail said he was surprised to see how frequently the accounts weren’t in conflict but were simply tweeting about entirely different issues. He said he can’t be sure how the tweets had their effect in his study, but he pointed to a recent, counterintuitive body of research chronicling backfire effects.
Based on these and other findings, one of the takeaways from the research was that exposure to “the opposing viewpoint” makes readers dig in their heels and become more bellicose.
But that’s not how I read it. And that’s what led me to thinking of a new model for social media content navigation and curation.
My conclusion based on the media accounts of the research: exposure not just to garden-variety political opinions, but to the partisan firebrands you consider “the enemy” or “the opposition” (read: not most of us), makes average readers more entrenched in their partisan views. Which is quite different from a more overarching finding that “hearing the other side increases polarization.”
Naturally, an avid Fox News Channel viewer will have their hackles raised by a Rachel Maddow type, much like an MSNBCer would have their eyes roll into their skull after ten minutes of Sean Hannity. It’s part of these media personalities’ brands to provoke; they embrace it, and it is a component of their success.
Not so online, though. On social media, as any user knows, there is so much content that seems tugged towards the edges of the political spectrum, often in ugly or coarse ways. Living on Twitter is like existing in a perpetual political primary campaign, where the angriest and most bellicose voices are amplified and enabled.
But most Americans aren’t in the “political red zones” of a football field, meaning the areas farthest apart on the gridiron; they live between the 25-yard lines. Being so doesn’t make them “centrist” or “moderate.” It just makes them complicated, interesting, and willing to have conversations and hear a new or different take on a political issue.
Which takes me all the way back to the issue of social media polarization. When Facebook and Twitter appeared before Congress, they were asked about what views were permissible on their platforms — and which ones were too extreme. The entire premise of the dialogue was that social media needed to provide some guardrails against extremism, as well as verification that accounts belong to actual humans.
I’d propose another tool to curb some potential polarization: rethink the algorithms they deploy for profit.
Algorithms, as most of us know, are the technological tools that have Netflix make suggestions about “If you watched this, you may like this …” or allow Twitter to suggest accounts you may want to follow. They are based on your viewing habits, what you have clicked on, what you have bought, who else you may be following.
What if algorithms were used to bring new, alternate perspectives to political discourse — and not just “let’s lump people into sameness bubbles.” The same technologies which help advertisers and content companies distribute and market their product could help give users the option of another point of view.
Imagine a platform that could find out you were interested in a ballot initiative, tax legislation in Congress, or the Bob Woodward book, based on your clicking on a conservative or liberal link to content covering the issue from a certain point of view. Then, through a sophisticated, proactive algorithm, the social media platform would suggest “other related content” on the same issue from various perspectives. “If you read Paul Krugman, you may get additional information from Robert Samuelson,” and vice versa. And it could be configured to provide content that isn’t wholly confrontational to the original piece. So if you’re a center-right reader, there could be a centrist opinion at the start; if you’re solidly on the left, perhaps a center-left one. Over time, that gentle widening could begin to recondition our views of political reality.
The entire framework could also be made more comprehensive by adding in considerations like “values-oriented” versus “data-oriented.” If you are a reader who is particularly attuned to stats and indicators, you would benefit from additional voices who use that framework; if you respond to and appreciate principles and vision, the technology could seek out content that touched on the same issues in the format and tone you prefer.
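To make the idea concrete, the suggestion logic described above can be sketched in a few lines of code. This is purely illustrative: the `Article` fields, the numeric `lean` score, the `style` tag, and the function name are all assumptions of mine, not anything Twitter or Facebook actually exposes. The sketch simply shows how a recommender could rank same-topic pieces one notch toward the center, in the reader’s preferred format, rather than serving up the far pole.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topic: str
    lean: float   # hypothetical score: -1.0 (far left) to 1.0 (far right)
    style: str    # hypothetical tag: "data" or "values"

def suggest_adjacent(clicked: Article, catalog: list[Article],
                     step: float = 0.5) -> list[Article]:
    """Rank same-topic, same-style pieces by closeness to a lean
    one step toward the center from what the reader clicked."""
    # Move the target lean toward zero, not toward the opposite pole.
    target = clicked.lean - step if clicked.lean > 0 else clicked.lean + step
    candidates = [a for a in catalog
                  if a.topic == clicked.topic
                  and a.title != clicked.title
                  and a.style == clicked.style]
    return sorted(candidates, key=lambda a: abs(a.lean - target))
```

Fed a catalog of pieces on, say, the tax bill, a center-right reader’s top suggestion would be the centrist take rather than the far-left one, and a stats-minded reader would not be handed a values-driven essay. Real systems would need far richer signals, but the choice of ranking target is the whole point: nudge, don’t whipsaw.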
This wouldn’t be a silver bullet for America’s polarization problems, but it would be a tool to offer a sizable contingent of politically oriented but open-minded users. Much like the book Nudge encouraged something called “choice architecture,” in which solid options are laid out that gently guide people toward decisions they agree with and that will benefit them, it’s very possible that such a small technical feature could not only raise awareness of major issues but also expose interested social media users to the different pieces of each political puzzle.
If Netflix and its “If you watched this …” algorithms could change the way we watch TV, Twitter and Facebook could change the way we engage in the political dialogue by using their data processing to promote a wider understanding of the issues at play.
Matthew Felling (@matthewfelling) is a contributor to the Washington Examiner’s Beltway Confidential blog. He is a former print/TV/radio journalist, media critic, and U.S. Senate communications director, now serving as a public affairs and crisis consultant with Burson-Marsteller in Washington.