For nearly 30 years, Oren Etzioni has been one of the most optimistic researchers in artificial intelligence.
However, in 2019 Dr. Etzioni, a University of Washington professor and founding CEO of the Allen Institute for AI, became one of the first researchers to warn that a new breed of AI would accelerate the spread of misinformation online. And by the middle of last year, he said, he was distressed that AI-generated deepfakes could sway a major election. In January he founded a nonprofit organization, TrueMedia.org, hoping to combat that threat.
On Tuesday, the organization released free tools for spotting digital disinformation, with a plan to put them in the hands of journalists, fact-checkers and anyone else trying to figure out what’s real online.
The tools, available from the TrueMedia.org website to anyone approved by the nonprofit, are designed to detect fake and doctored images, audio and video. They review links to media files and quickly determine whether they should be trusted.
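As a rough illustration of that workflow, the sketch below shows what submitting a media link to such a detection service and reading back a verdict might look like. The endpoint, field names and labels are hypothetical placeholders, not TrueMedia.org’s actual interface, which the article does not describe.

```python
# Hypothetical sketch only: the endpoint, fields and labels below are invented
# to illustrate the general workflow (submit a media URL, get back a verdict).
import requests

API_URL = "https://api.example-detector.org/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def check_media(media_url: str) -> dict:
    """Submit a link to an image, audio or video file and return the verdict."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": media_url},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"verdict": "highly suspicious", "score": 0.93}

if __name__ == "__main__":
    result = check_media("https://example.com/suspect-clip.mp4")
    print(result["verdict"], result["score"])
```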
Dr. Etzioni sees these tools as an improvement over the patchwork of defenses currently used to detect misleading or deceptive AI content. But in a year when billions of people around the world are set to vote in elections, he continues to paint a bleak picture of what lies ahead.
“I’m terrified,” he said. “There’s a very good chance we’re going to see a tsunami of misinformation.”
In the first few months of the year alone, AI technologies helped create fake voice calls from President Biden, fake Taylor Swift images and audio ads, and an entire fake interview that appeared to show a Ukrainian official claiming credit for a terrorist attack in Moscow. Detecting such misinformation is already difficult, and the tech industry continues to release increasingly powerful AI systems that will create ever more convincing deepfakes and make detection even harder.
Many AI researchers warn that the threat is gathering steam. Last month, more than a thousand people — including Dr. Etzioni and several other prominent AI researchers — signed an open letter calling for laws that would hold developers and distributors of audio and video AI services liable if their technology was easily used to create harmful deepfakes.
At an event hosted by Columbia University on Thursday, Hillary Clinton, the former secretary of state, interviewed Eric Schmidt, the former CEO of Google, who warned that videos, even fake ones, could “drive voting behavior, human behavior, moods, everything.”
“I don’t think we’re ready,” Mr. Schmidt said. “This problem will get much worse in the coming years. Maybe not by November, but certainly in the next cycle.”
The tech industry is well aware of the threat. Even as companies race to develop artificial intelligence systems, they try to limit the damage these technologies can cause. Anthropic, Google, Meta, and OpenAI have all announced plans to limit or flag election-related uses of their AI services. In February, 20 tech companies — including Amazon, Microsoft, TikTok and X — signed a voluntary pledge to prevent misleading AI content from disrupting voting.
This could be a challenge. Companies often release their technologies as “open source” software, meaning that anyone is free to use and modify them without restrictions. Experts say the technology used to create deepfakes – the result of massive investment by many of the world’s biggest companies – will always outpace technology designed to detect disinformation.
Last week, in an interview with The New York Times, Dr. Etzioni showed how easy it is to create a deepfake. Using a service from a sister nonprofit, CivAI, which draws on artificial intelligence tools readily available online to demonstrate the dangers of these technologies, he instantly created photos of himself in prison, somewhere he has never been.
“When you see yourself being faked, it’s very scary,” he said.
Later, he created a deepfake of himself in a hospital bed, the kind of image he believes could sway an election if it were applied to Mr. Biden or former President Donald J. Trump just before the vote.
TrueMedia’s tools are designed to detect fakes like these. More than a dozen startups offer similar technology.
But Dr. Etzioni, while noting the effectiveness of his team’s tool, said no detector was perfect because it was driven by probabilities. Deepfake detection services have been tricked into declaring images of robots kissing and giant Neanderthals to be real photos, raising concerns that such tools could further damage society’s trust in facts and evidence.
When Dr. Etzioni gave TrueMedia’s tools a known deepfake of Mr. Trump sitting on a stoop with a group of young Black men, they labeled it “highly suspicious,” their highest level of confidence. When he uploaded another well-known deepfake of Mr. Trump with blood on his fingers, the tools were “uncertain” whether it was real or fake.
“Even with the best tools, you can’t be sure,” he said.
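As a loose illustration of why such probability-driven detectors hedge their answers, the sketch below buckets a model’s score into labels like the ones described above. The thresholds and label names are assumptions for illustration, not TrueMedia.org’s actual scoring rules.

```python
# Illustration only: thresholds and label names are assumed, not TrueMedia's.
# A detector outputs a probability that media is synthetic; labels simply
# bucket that probability, which is why a borderline score comes back as
# "uncertain" rather than a flat yes or no.
def label_from_score(p_fake: float) -> str:
    if p_fake >= 0.90:
        return "highly suspicious"
    if p_fake >= 0.70:
        return "suspicious"
    if p_fake >= 0.30:
        return "uncertain"
    return "likely authentic"

for score in (0.95, 0.55, 0.10):
    print(score, "->", label_from_score(score))
```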
The Federal Communications Commission recently banned AI-generated robocalls. Some companies, including OpenAI and Meta, now mark AI-generated images with watermarks. And researchers are exploring additional ways to separate the real from the fake.
The University of Maryland is developing a cryptographic system based on QR codes to authenticate unaltered live recordings. A study released last month asked dozens of adults to breathe, swallow and think while speaking so their speech pause patterns could be compared with the rhythms of cloned audio.
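As a very rough sketch of the underlying idea, not the Maryland system itself, the snippet below hashes a recording at capture time and signs the hash so any later copy can be checked for tampering. The shared-secret HMAC and key handling are simplifications chosen to keep the example self-contained; a real system would rely on public-key signatures.

```python
# Toy sketch: hash the captured bytes, sign the hash, and publish the
# signature (for example inside a QR code) so later copies can be verified.
# NOT the University of Maryland system; HMAC stands in for a real signature.
import hashlib, hmac

SECRET_KEY = b"recording-device-secret"  # placeholder key held by the recorder

def sign_recording(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_recording(data: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_recording(data), signature)

original = b"... raw audio/video bytes ..."
tag = sign_recording(original)                  # embedded at capture time
print(verify_recording(original, tag))          # True: unaltered copy
print(verify_recording(original + b"x", tag))   # False: content was changed
```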
But like many other experts, Dr. Etzioni cautions that image watermarks are easily removed. And although he has dedicated his career to fighting deepfakes, he acknowledges that detection tools will struggle to keep pace with new AI technologies.
Since he created TrueMedia.org, OpenAI has unveiled two new technologies that promise to make his job even harder. One can recreate a person’s voice from a 15-second recording. Another can generate full-motion videos that look like something out of a Hollywood movie. OpenAI is not yet sharing these tools with the public, as it works to understand the potential risks.
(The Times sued OpenAI and its partner, Microsoft, over claims of copyright infringement involving artificial intelligence systems that generate text.)
Ultimately, Dr. Etzioni said, combating the problem will require broad cooperation between government regulators, the companies that create artificial intelligence technologies and the tech giants that control the browsers and social networks where misinformation is spread. He said, however, that the likelihood of that happening before the fall election was slim.
“We’re trying to give people the best technical assessment of what’s in front of them,” he said. “They still have to decide if it’s real.”