The term of art for when some speech is removed from a magazine or broadcast or book or web article is, depending on how and why it is done, “expurgation” or “redaction” or, more to the point, “censorship.” We give a bit of a different spin on it depending on the case. If it’s a network bleeping out swear words in a “live” news broadcast (which actually runs on a few-second delay for just this reason), that raises fewer concerns than if it’s pulling books out of stores because they criticize the president. It is, shall we say, a spectrum.
It’s a matter of judgment, and it can therefore be subject to abuse or simple incompetence. Today, though, censorship is increasingly outsourced to artificial intelligence. And this political choice is being pushed as inevitable and innocuous under the guise of euphemisms such as “content moderation.” People saying cruel, nasty, or misinformative things to one another predates the internet. But you’d never know that when reading about the crisis of “toxic misinformation” that isn’t being sufficiently “moderated” online.
Take, say, a Vice article titled “Intel’s Dystopian Anti-Harassment AI Lets Users Opt In for ‘Some’ Racism,” about Bleep, which is “an end-user application that uses AI to detect and redact audio based on your user preferences.” In other words, it will listen to your video-game chat or videoconference and bleep out any use of certain words or (if the marketing is to be believed) certain concepts. The Vice write-up complains that “Bleep feels like an attempt by Intel to twist the giant racism dial until it gets its levels just right.” Per Vice, if such filtering is possible at all, it is outrageous for the tool to bleep anything less than the maximum amount of arguably offensive audio, as though declining to let this surveillance robot edit your life in real time were the creepy thing.
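For concreteness, here is a crude, purely illustrative sketch in Python of what a preference-driven filter of this kind amounts to. The category names, scores, and thresholds are all invented for illustration; Intel’s actual product works on live audio with trained speech models, not on a toy word list.

```python
# A crude, purely illustrative stand-in for a preference-driven filter.
# Nothing here comes from Intel's implementation: Bleep works on live
# audio with trained models, while this toy operates on text with
# made-up categories, scores, and thresholds.

# Per-category user settings, loosely modeled on the marketed
# None / Some / Most / All sliders, mapped here onto 0.0-1.0.
USER_DIALS = {
    "profanity": 1.0,      # redact everything the model flags
    "name_calling": 0.5,   # redact only what the model scores as severe
    "misogyny": 0.0,       # user opted out of filtering this category
}

def toy_classifier(word):
    """Stand-in for a trained model: returns (category, severity score)."""
    lexicon = {
        "jerk": ("name_calling", 0.4),
        "idiot": ("name_calling", 0.7),
    }
    return lexicon.get(word.lower(), ("none", 0.0))

def redact(words, dials, classify):
    """Replace any word whose predicted severity clears the user's dial."""
    out = []
    for word in words:
        category, score = classify(word)
        dial = dials.get(category, 0.0)
        # A dial of 1.0 bleeps anything flagged; 0.5 bleeps only severe
        # hits; 0.0 means the category is never bleeped at all.
        if dial > 0.0 and score >= (1.0 - dial):
            out.append("[bleep]")
        else:
            out.append(word)
    return " ".join(out)

print(redact("you absolute idiot".split(), USER_DIALS, toy_classifier))
# -> "you absolute [bleep]"  ("jerk", scored 0.4, would have slipped through)
```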
In the Atlantic, Evelyn Douek enthuses over an analogous tech issue, what she calls a “toxic content dial” at Facebook. Facebook, she recounts, had introduced emergency AI “content moderation” measures during the last election and then after the Derek Chauvin trial verdict. Facebook’s censor-bot promoted “authoritative information” and reduced “problematic content.” But, she asks, “which level is [the toxic content dial] set at on a typical day? On a scale of one to 10, is the toxicity level usually a five — or does it go all the way up to 11?”
Douek’s thrust is: Why not make the emergency censorship measures permanent if they make sure people see the “authoritative information” she prefers and not the content they prefer? “If there’s a reason turning down the dials on likely hate speech and incitement to violence all the time is a bad idea, I don’t see it.” Notice the word “likely” there? The dial does not act on hate speech; it acts on what a model predicts might be hate speech, and turning it down sweeps up however much innocent speech the model happens to guess wrong about. This is broken-windows speech policing or a sort of stop-and-frisk approach to content moderation.
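To make that word concrete: a moderation “dial” of this kind is, under the hood, generally just a threshold applied to a model’s predicted probability that a post is hateful, so turning it down suppresses more genuine hate speech and more speech the model merely guessed wrong about. The sketch below is invented for illustration and says nothing about Facebook’s actual system.

```python
# Purely illustrative: the general shape of a "toxic content dial,"
# i.e., a threshold applied to a model's *predicted* probability that
# a post is toxic. The posts, scores, and numbers are invented; this
# is not Facebook's system.

posts = [
    ("You people are vermin.",         0.92),  # model's toxicity guess
    ("This policy is idiotic.",        0.55),
    ("The referee blew that call.",    0.30),
    ("Heated take on a heated topic.", 0.48),
]

def suppress_likely_toxic(scored_posts, dial):
    """Flag every post the model predicts is toxic above `dial`.

    Turning the dial down catches more genuinely hateful posts, but it
    also sweeps up more posts the model merely guessed wrong about;
    nothing in the score itself distinguishes the two cases.
    """
    return [(text, score >= dial) for text, score in scored_posts]

for dial in (0.9, 0.5):
    hits = [text for text, flagged in suppress_likely_toxic(posts, dial) if flagged]
    print(f"dial={dial}: suppresses {len(hits)} of {len(posts)} posts")
```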
I find the way this metaphor is being used when it comes to speech issues unsettling, frankly. It wasn’t long ago that people would argue some identity issue was a “spectrum” to stake out their right to live between its extremes. I liked that impulse. Now, if something is a “dial” for reducing expression, we are expected to crank it all the way to one side? This is an extremist approach, by definition, and I’d like to see a lot less of that intuition.