Lawmakers are considering a bill to address the problem posed by video editing tools that can make anyone, including politicians, appear to say things they didn’t really say or do things they didn’t really do.
So-called deepfake videos use machine learning-powered editing software to paste one person’s face onto another person’s body or to alter a person’s mouth movements so that they appear to speak words they never said. The technology has grown more sophisticated in recent years, making faked videos increasingly difficult to spot.
Rep. Yvette Clarke, D-N.Y., introduced on June 12 the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act, which would require faked videos to be labeled with an irremovable watermark, allow victims of deepfakes to sue creators, and require social media outlets to deploy better detection tools.
Clarke said in a statement that the bill is designed to combat the use of deepfakes in disinformation campaigns during U.S. elections. The following day, the House Intelligence Committee began hearings on deepfake videos, during which some technology experts expressed skepticism about Clarke’s bill.
Armand Cucciniello III, a former senior strategist for the Defense Department, said the bill would be unlikely to deter amateur video creators living outside the U.S. or those looking to disrupt political campaigns.
“Demanding that the creators of deepfakes disclose that their work is fabricated is like Congress saying it will force a burglar to alert the homeowner before the break-in,” he said, adding that the proposed bill is “arguably the silliest proposal I’ve read thus far.”
“There’s no easy solution, and it’s likely to get much worse before it gets better,” acknowledged David Doermann, director of the Artificial Intelligence Institute at the University at Buffalo. Keeping up with the artificial intelligence technologies that create deepfakes, he said, is a “race that may never end.”
“Congress’ attempt to crack down on the use of deepfakes is the first sign that legislators are acknowledging the dangers that this technology poses,” Ray Walsh, a digital privacy expert at ProPrivacy.com, told the committee. “Deepfakes are improving at an alarming rate, and they have already begun to prove that they can fool both the general public and members of the press.”
Election hacks don’t need to target voting machines, he added. “Social engineering attacks carried out en masse using deepfake videos can easily cause scandal, and, if timed correctly to coincide with fake data leaks and the use of social media to distribute fake news, it could wreak havoc on the outcome of elections,” he said.
In late May, a doctored video of House Speaker Nancy Pelosi, D-Calif., altered to make her appear to slur her words, was shared and viewed millions of times on social media. After Facebook declined to remove the Pelosi video, a group of artists and an advertising company uploaded a fake video of Mark Zuckerberg to Instagram, showing the Facebook CEO claiming to control the future with the help of the fictional villainous organization SPECTRE.
Still, forcing creators to watermark faked videos will mainly affect joke or parody videos not meant to fool people, according to Walsh. “New regulations seem unlikely to deter fake videos created for criminal or malicious ends,” he said.
Adam Dodge, a speaker and attorney focused on helping victims of technology-related crimes, called the legislation a good first step. Legislation that criminalizes the creation of malicious deepfakes is an “important piece to the coordinated response that will work in concert with the public and private sectors to deter and address deepfake harm,” he said.
Dodge called for legislation to address pornography-related deepfakes specifically. “Any legislation should focus not just on the future threats to democracy and the electoral process, but on the abuse of today — deepfake pornography targeting women,” he said. “This type of harm is essentially revenge porn 2.0, allowing a bad actor to insert a victim at will into a pornographic movie.”