Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, Meta’s president of global affairs, called a nascent effort to detect artificially generated content “the most urgent mission” facing the tech industry today.
On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.
The standards could allow social media companies to quickly identify AI-generated content posted on their platforms and to add labels to that material. If widely adopted, the standards could help identify AI-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that allow people to quickly and easily create artificial posts.
“While this is not a perfect answer, we didn’t want to let the perfect be the enemy of the good,” Mr. Clegg said in an interview.
He added that he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content is artificial, making it easier for everyone to recognize it.
As the United States enters a presidential election year, industry observers expect artificial intelligence tools to be widely used to post fake content that misinforms voters. Over the past year, people have used AI to create and spread fake videos of President Biden making false or inflammatory statements. The New Hampshire attorney general’s office is also investigating a series of robocalls that appeared to use an AI-generated voice of Mr. Biden urging people not to vote in the state’s recent primary.
Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position: it is developing technology to spur widespread consumer adoption of AI tools while operating some of the world’s largest social networks, which are capable of distributing AI-generated content at scale. Mr. Clegg said that position gave Meta particular insight into both the generation and distribution sides of the issue.
Meta is relying on a set of technological specifications known as the IPTC and C2PA standards. They are ways of recording, in a piece of digital media’s metadata, information that identifies whether the media is authentic. Metadata is the underlying information embedded in digital content that provides a technical description of that content. Both standards are already widely used by news organizations and photographers to describe photos or videos.
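As a rough illustration, here is a minimal sketch, in Python, of what a metadata check of this kind might look like. It assumes a simplified model in which the IPTC “Digital Source Type” property can be read from a file’s XMP packet; real C2PA Content Credentials are cryptographically signed manifests, and the function below is a hypothetical example, not any company’s actual detection code.

```python
# A simplified sketch: scan an image's XMP metadata for the IPTC
# "Digital Source Type" term that denotes generative-AI output.
# Real C2PA manifests are signed and tamper-evident; this example
# only inspects plain, unsigned XMP metadata.
from PIL import Image  # Pillow; getxmp() also needs the defusedxml package

# IPTC's controlled-vocabulary URI for media created by generative AI.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's XMP metadata declares an AI source type."""
    with Image.open(path) as img:
        xmp = img.getxmp()  # parsed XMP as a nested dict ({} if unavailable)
    # Crude scan of the nested structure; sufficient for a sketch.
    return AI_SOURCE_TYPE in str(xmp)

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))
```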
Adobe, which makes Photoshop editing software, and a number of other technology and media companies have spent years lobbying their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a collaboration among dozens of companies — including The New York Times — to fight misinformation and “add a layer of tamper-evident provenance to all types of digital content, starting with photos, video and documents,” according to the initiative.
Companies that offer AI generation tools could add the standards’ markers to the metadata of the videos, photos or audio files their tools helped create. That would signal to social networks such as Facebook, Twitter and YouTube that such content was artificial when it was uploaded to their platforms. Those companies, in turn, could add labels noting that the posts were generated by AI, to inform users who saw them on the social networks.
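On the distribution side, the flow described above could be sketched as follows. The Post type, label text and function names are hypothetical stand-ins for illustration, not any platform’s actual systems; the detector passed in could be a metadata check like the one sketched earlier.

```python
# A hypothetical sketch of the platform-side flow: when media is uploaded,
# run a provenance check and, if it indicates AI generation, attach a
# user-facing label to the post. Names here are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Post:
    media_path: str
    labels: list[str] = field(default_factory=list)

def process_upload(post: Post, is_ai_generated: Callable[[str], bool]) -> Post:
    """Label a newly uploaded post whose media declares AI provenance."""
    if is_ai_generated(post.media_path):
        post.labels.append("AI generated")  # shown to viewers with the post
    return post
```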
Meta and others will also require users who post AI-generated content to disclose that they have done so when they upload it to the companies’ apps. Users who fail to do so will face penalties, though the companies have not detailed what those penalties might be.
Mr. Clegg also said that if the company determined that a digitally created or altered post “creates a particularly high risk of materially misleading the public on an important matter,” Meta could add a more prominent label to the post to give the audience more information and context about its origin.
AI technology is advancing rapidly, and researchers have struggled to keep pace in developing tools that can detect fake content online. Although companies like Meta, TikTok and OpenAI have developed ways to flag such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even more difficult to detect than AI-generated photos.
(The New York Times Company is suing OpenAI and Microsoft for copyright infringement over the use of Times articles to train AI systems.)
“Bad actors will always try to circumvent the standards we create,” Mr. Clegg said. He described the technology as a “sword and shield” for the industry.
Part of the difficulty stems from the fragmented way tech companies have approached the problem. Last fall, TikTok announced a new policy requiring its users to label videos or photos they upload that were created using artificial intelligence. YouTube announced a similar initiative in November.
Meta’s new proposal would try to tie some of those efforts together. Other industry efforts, like the Partnership on AI, have brought together dozens of companies to discuss similar solutions.
Mr. Clegg said he hoped that more companies would agree to participate in the standard, especially going into the presidential election.
“We felt particularly strongly that during this election year, waiting for all the pieces of the puzzle to fall into place before acting would not be warranted,” he said.