Artificial intelligence companies are at the forefront of the development of transformative technology. Now they are also fighting to set limits on how artificial intelligence is used in a year filled with important elections around the world.
Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent its tools from being misused in elections, in part by banning their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would restrict its AI chatbot, Bard, from responding to certain election-related prompts “out of an abundance of caution.” And Meta, which owns Facebook and Instagram, promised to better flag AI-generated content on its platforms so voters could more easily distinguish which material was real and which was fake.
On Friday, 20 tech companies — including Adobe, Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, TikTok and X — signed a voluntary pledge to prevent misleading AI content from disrupting voting in 2024. The agreement, announced at the Munich Security Conference, included commitments by companies to cooperate on AI detection tools and other actions, but did not call for a ban on election-related AI content.
Anthropic also said separately on Friday that it would ban the use of its technology in political campaigns or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violate its rules. It added that it uses tools trained to automatically detect and block disinformation and influence operations.
“The history of artificial intelligence development has also been one full of surprises and unexpected results,” the company said. “We expect that in 2024 we will see amazing uses of artificial intelligence systems – uses that were not anticipated by their own developers.”
The efforts are part of a push by AI companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections worldwide, the largest concentration for at least the next 24 years, are expected this year, according to Anchor Change, a consultancy. In recent weeks, people in Taiwan, Pakistan and Indonesia have gone to the polls, with India, the world’s largest democracy, scheduled to hold its general election in the spring.
How effective the restrictions on AI tools will be is unclear, especially as tech companies move forward with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly create realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell which content is real.
AI-generated content has already appeared in US political campaigns, prompting regulatory and legal backlash. Some state lawmakers are drafting bills to regulate political content generated by artificial intelligence.
Last month, New Hampshire residents received robocalls discouraging them from voting in the state’s primary election, delivered in a voice that was most likely artificially generated to sound like President Biden. The Federal Communications Commission last week banned such calls.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities and misinform voters,” FCC Chairwoman Jessica Rosenworcel said at the time.
AI tools have also produced misleading or deceptive portrayals of politicians and political issues in Argentina, Australia, Britain and Canada. Last week, former prime minister Imran Khan, whose party won the most seats in Pakistan’s elections, used an AI voice to declare victory while in prison.
In one of the most consequential election years in memory, the misinformation and deception that artificial intelligence can create could be devastating for democracy, experts said.
“We’re behind the eight ball here,” said Oren Etzioni, a University of Washington professor who specializes in artificial intelligence and the founder of True Media, a nonprofit that works to identify online disinformation in political campaigns. “We need tools to respond to this in real time.”
Anthropic said in its announcement Friday that it planned tests to identify how the Claude chatbot could produce biased or misleading content related to political candidates, political issues and election administration. These “red team” tests, which are commonly used to probe a technology’s safeguards and identify its vulnerabilities, will also examine how the AI responds to harmful queries, such as prompts asking for voter-suppression tactics.
In the coming weeks, Anthropic is also rolling out a test that aims to redirect US users with questions about voting to authoritative sources of information such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its AI model was not trained frequently enough to reliably provide real-time information about specific elections.
Similarly, OpenAI said last month that it planned to point people to voting information through ChatGPT, as well as flag AI-generated images.
“Like any new technology, these tools come with benefits and challenges,” OpenAI said in a blog post. “It’s also unprecedented, and we’ll continue to evolve our approach as we learn more about how our tools are used.”
(The New York Times sued OpenAI and its partner Microsoft in December, alleging copyright infringement of news content related to AI systems.)
Synthesia, a start-up with an artificial intelligence video generator that has been linked to disinformation campaigns, also bans the use of its technology for “news-type content,” including false, polarizing, divisive or misleading material. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia’s head of corporate affairs and policy.
Stability AI, a start-up that makes an image-generation tool, said it prohibited the use of its technology for illegal or unethical purposes, worked to prevent the creation of unsafe images and applied an imperceptible watermark to all images.
The biggest tech companies have also gone beyond the joint commitment announced in Munich on Friday.
Last week, Meta also said it was working with other companies on technology standards to help identify when content was created with artificial intelligence. Ahead of European Union parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban misleading manipulated content and require users to label realistic AI-generated content.
Google said in December that it would also require YouTube video creators and all election advertisers to disclose digitally altered or created content. The company said it was preparing for the 2024 election by restricting its AI tools, such as Bard, from returning answers to certain election-related questions.
“Like any emerging technology, artificial intelligence presents new opportunities as well as challenges,” Google said. AI can help fight abuse, the company added, “but we’re also preparing for how the disinformation landscape may change.”