Google, whose work in artificial intelligence has helped make it much easier to create and distribute AI-generated content, now wants to ensure that such content is also detectable.
The tech giant said Thursday that it would join an effort to develop credentials for digital content, a kind of “nutrition label” that identifies when and how a photo, video, audio clip or other file was created or modified, including with AI. The company will work with companies such as Adobe, the BBC, Microsoft and Sony to refine the technical standards.
The announcement follows a similar pledge made Tuesday by Meta, which, like Google, has made it easy to create and distribute artificially generated content. Meta said it would promote standardized tags that identify such material.
Google, which has spent years pouring money into its AI initiatives, said it would explore how to integrate digital certification into its products and services, though it did not specify timing or scope. Its Bard chatbot is connected to some of the company’s most popular consumer services, such as Gmail and Docs. On YouTube, which is owned by Google and will be included in the digital credentials effort, users can quickly find videos featuring lifelike digital avatars addressing current events with voices powered by text-to-speech services.
Identifying where online content comes from and how it has been altered is a high priority for policymakers and technology watchers in 2024, when billions of people will vote in major elections around the world. After years of misinformation and polarization, realistic AI-generated images and audio, along with unreliable AI detection tools, have deepened people’s doubts about the authenticity of what they see and hear online.
Formatting digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who support a universal certification standard. Google sits on the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. C2PA standards have been supported by news organizations such as The New York Times, as well as camera manufacturers, banks and advertising agencies.
Laurie Richardson, Google’s vice president of trust and security, said in a statement that the company hopes its work will “provide important context to people, helping them make more informed decisions.” She noted Google’s other efforts to give users more information about the online content they encounter, including flagging artificial intelligence material on YouTube and offering details about images in Search.
Efforts to attach credentials to metadata, the underlying information embedded in digital files, are not foolproof.
OpenAI said this week that its AI image-generation tools would soon add watermarks to images in accordance with C2PA standards. Starting Monday, the company said, images created by its online chatbot, ChatGPT, and its stand-alone image-generation technology, DALL-E, will include a visual watermark and hidden metadata designed to identify them as created by artificial intelligence. The move, however, “is not a silver bullet for dealing with provenance issues,” OpenAI said, adding that the tags “can easily be removed either accidentally or on purpose.”
(The New York Times Company is suing OpenAI and Microsoft for copyright infringement, accusing the tech companies of using Times articles to train AI systems.)
There is “a shared sense of urgency” to strengthen trust in digital content, according to a blog post last month by Andy Parsons, the senior director of Adobe’s Content Authenticity Initiative. The company released AI tools last year, including the AI art-generation software Adobe Firefly and a Photoshop tool known as Generative Fill, which uses AI to expand a photo beyond its borders.
“The stakes have never been higher,” Mr. Parsons wrote.
Cade Metz contributed reporting.