Meta will promote new standards for generative artificial intelligence that will allow the public and social media companies to identify AI-generated content.
Meta Vice President of Global Affairs Nick Clegg announced on Tuesday that the owner of Facebook and Instagram had been collaborating with several technology partners to create a set of common visible markers and invisible metadata that will identify images as AI-generated. Facebook and Instagram will now use these standards to determine automatically whether a post is AI-generated. The new feature is designed to help establish a normalized practice across social media companies that can be used to guard against falsified images as the 2024 election approaches.
“People value transparency around this new technology, so the more we can do to demystify, the better, especially as we head deeper into this election year,” Clegg wrote on Threads.
These markers were developed through the Partnership on AI, a professional forum featuring Big Tech companies such as Apple, Amazon, Google, and OpenAI, as well as several academic and advocacy institutions. Members of the partnership have agreed to adopt IPTC and C2PA, open standards that define the sort of metadata used to identify an image as AI-generated. These standards will allow Meta to quickly identify whether an image was generated by products such as DALL-E or Shutterstock.
However, this will not account for all AI-generated content. Most technology companies have not yet begun including invisible markers or metadata in video or audio at scale, which means Meta's platforms cannot detect such content automatically. The company has added a feature that lets users disclose when they share AI-generated content so that Meta can label it properly.
Meta is also working on ways to make it harder to strip the invisible markers from AI-generated images and to ensure that it can detect whether an image is AI-generated even when those markers have been removed.
Meta’s adoption of these standards reflects Silicon Valley’s efforts to rein in AI-generated misinformation in the absence of congressional legislation. While Big Tech CEOs such as Meta’s Mark Zuckerberg and OpenAI’s Sam Altman have called for legislation establishing guardrails for AI technology, Congress has been slow to address AI-generated misinformation ahead of the 2024 election.