The “Geena Davis Institute on Gender in Media has partnered with Walt Disney Studios to deploy a new digital tool that uses AI technology to assess film and television scripts for gender bias,” according to The Hollywood Reporter.
So, what’s the plan here? A censor-bot: “Named ‘GD-IQ: Spellcheck for Bias,’ the new tool leverages patented machine learning technology … to rapidly analyze the text of a script to determine its number of male and female characters and whether they are representative of the real population at large. The technology also can discern the numbers of characters who are people of color, LGBTQI, possess disabilities or belong to other groups typically underrepresented and failed by Hollywood storytelling.”
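Since the Institute hasn’t published how GD-IQ actually works (the machine learning is, remember, patented), here is my own minimal sketch, in Python, of what a “representation counter” plausibly does. To be clear, everything in it is an assumption for illustration: the screenplay convention, the hand-labeled demographic table, and the census baseline are mine, not Disney’s or the Institute’s.

```python
# Toy sketch of a "representation counter." GD-IQ's real pipeline is
# patented and unpublished; the screenplay format, the demographic
# lookup, and the baseline below are illustrative assumptions only.
import re
from collections import Counter

# Hypothetical hand-labeled table; a real tool would need far richer
# inference than a static list of character names.
CHARACTER_DEMOGRAPHICS = {
    "SNOW WHITE": "female",
    "THE QUEEN": "female",
    "THE HUNTSMAN": "male",
    "THE PRINCE": "male",
}

# Assumed population baseline to compare against (roughly U.S. census).
POPULATION_BASELINE = {"female": 0.51, "male": 0.49}

def count_speaking_cues(script: str) -> Counter:
    """Tally dialogue cues, assuming the screenplay convention that a
    character's name appears in ALL CAPS on its own line before dialogue.
    Characters missing from the lookup table are silently ignored."""
    counts = Counter()
    for line in script.splitlines():
        name = line.strip()
        if re.fullmatch(r"[A-Z][A-Z .']+", name) and name in CHARACTER_DEMOGRAPHICS:
            counts[CHARACTER_DEMOGRAPHICS[name]] += 1
    return counts

def representation_report(script: str) -> dict:
    """Compare each group's share of dialogue cues to its population share."""
    counts = count_speaking_cues(script)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {
        group: {"share": counts[group] / total, "baseline": baseline}
        for group, baseline in POPULATION_BASELINE.items()
    }

if __name__ == "__main__":
    sample = """
    SNOW WHITE
    Oh, I'm sure I'll get along somehow.

    THE HUNTSMAN
    I can't do it! Forgive me.

    THE QUEEN
    Mirror, mirror on the wall...
    """
    print(representation_report(sample))
```

Notice what even this toy version requires: somebody has to decide the labels and somebody has to pick the baseline. Keep that in mind for what follows.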
Now, the problem this program aims to address seems to me to be a real one. Snow White, for example, is the Film Studies 101 case of screwed-up moral messaging about girls and women. The titular hero, when you think about it, basically succeeds whenever she passively (often literally unconsciously) looks pretty while the males around her do stuff, and bad things happen to her whenever she exercises any agency.
But good intentions don’t make for good ideas, and an algorithm that enforces representation through self-censorship is a bad one. Let’s put aside how instrumental and empty the conception of art at work here is. AI-policed onscreen representation raises serious practical questions.
For one thing, are we seriously supposed to believe that a computer program can discern the “appropriate” level of representation of Q folks when there’s no serious agreement on how to define “queer”? Indeed, according to the New York Times, it’s not even settled whether that letter in LGBTQIA+ stands for “queer” or for “questioning.”
For another, according to the best available data, Americans think there are roughly five times as many gay people as there actually are. Should the robot correct for that unconscious bias by dramatically reducing the number of gay characters? How can or should GD-IQ “count” the roughly half of Hispanic/“Latinx” Americans who identify as white? Seems tricky. Will it “spellcheck” a movie about great black women artists for failing to also represent those groups in STEM fields?
But the worst part of “Spellcheck for Bias” is the idea about language built into it: that bias is ungrammatical or misspelled, rather than wrong. GD-IQ is hardly the first example. In April, I covered an application called Catalyst, which autocorrects messages deemed to contain “offensive” ideas about demographic groups as though they were writing errors. Back then, I said “the language police got a Robocop.” Now it seems more like Skynet. Since April, Microsoft has announced a similar feature for Word, built on the same thinking: that “problematic” and prejudicial ideas about historically marginalized groups are a special category of error, one that can be run out of human thought if we treat it as a misuse of language rather than as a morally wicked and factually false category of belief.
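If you’re wondering what “treating bias as a misuse of language” amounts to mechanically, here’s a deliberately naive sketch. The real products’ rules are proprietary, so the flagged-term table below is invented for illustration; the point is the shape of the thing, not the specific entries.

```python
# Naive sketch of the "bias as misspelling" approach: a lookup table of
# flagged terms and suggested substitutes. The actual rules in products
# like Catalyst or Word are proprietary; this table is made up.
FLAGGED_TERMS = {
    "chairman": "chairperson",
    "manpower": "workforce",
}

def spellcheck_for_bias(text: str) -> list[tuple[str, str]]:
    """Return (flagged word, suggested replacement) pairs found in text."""
    suggestions = []
    for word in text.lower().split():
        cleaned = word.strip(".,;:!?")  # drop trailing punctuation
        if cleaned in FLAGGED_TERMS:
            suggestions.append((cleaned, FLAGGED_TERMS[cleaned]))
    return suggestions

print(spellcheck_for_bias("The chairman doubled our manpower."))
# [('chairman', 'chairperson'), ('manpower', 'workforce')]
```

Note what the program is doing: matching strings, not beliefs. That gap is the whole problem.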
Censor-bot technology promises that people can have unconscious biases; they just can’t message them to one another, record them in word processors, or include them in a script. The pitch is that eventually, without the words, the ideas will go away too. But this “theory of change,” to use Davis’s consultant-speak cliché, is a false promise. It’s certain to lead to more censorship but unlikely to lead to less bias.