Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s AI technologies.
The hacker extracted details from discussions on an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.
OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed the board of directors, according to the two people, who discussed sensitive company information on condition of anonymity.
But executives decided not to share the news publicly because no customer or partner information had been stolen, the two people said. Officials did not consider the incident a national security threat because they believed the hacker was a private individual with no known ties to a foreign government. The company did not notify the FBI or anyone else in law enforcement.
For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal AI technology that, while now primarily a work and research tool, could eventually endanger U.S. national security. It also raised questions about how seriously OpenAI takes security, and exposed fractures within the company over the risks of artificial intelligence.
After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors arguing that the company was not doing enough to deter the Chinese government and other foreign rivals from stealing its secrets.
Mr. Aschenbrenner said that OpenAI fired him this spring for leaking other information outside the company and argued that his dismissal was politically motivated. He mentioned the breach in a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security was not strong enough to protect against the theft of key secrets if outside actors infiltrated the company.
“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” said OpenAI spokeswoman Liz Bourgeois. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added: “While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work.”
Fears that a hack of a US tech company may have links to China are not unreasonable. Last month, Microsoft chairman Brad Smith testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a broad attack on federal government networks.
But under federal and California law, OpenAI cannot prevent people from working at the company because of their nationality, and policy researchers have said that barring foreign talent from U.S. projects could significantly slow the advancement of artificial intelligence in the United States.
“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s chief security officer, told the New York Times in an interview. “It comes with some risks and we have to understand them.”
(The Times sued OpenAI and its partner, Microsoft, alleging copyright infringement of news content related to artificial intelligence systems.)
OpenAI isn’t the only company building increasingly powerful systems using rapidly improving AI technology. Some of them – notably Meta, the owner of Facebook and Instagram – freely share their designs with the rest of the world as open source software. They believe that the risks posed by today’s AI technologies are small and that sharing code allows engineers and researchers across the industry to identify and fix problems.
Today’s AI systems can help spread misinformation online, including text, still images and, increasingly, video. They are also starting to take away some jobs.
Companies like OpenAI and rivals Anthropic and Google are adding guardrails to their AI apps before offering them to individuals and businesses, hoping to prevent users from using the apps to spread misinformation or cause other problems.
However, there is little evidence that today’s AI technologies pose a significant national security risk. Studies by OpenAI, Anthropic and others over the past year have shown that AI was not significantly more dangerous than search engines. Daniela Amodei, Anthropic’s co-founder and the company’s president, said its latest AI technology wouldn’t be much of a risk if its designs were stolen or freely shared with others.
“If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is ‘No, probably not,’” she told The Times last month. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative.”
But researchers and tech executives have long worried that AI could one day fuel the creation of new bioweapons or help break into government computer systems. Some believe it could destroy humanity.
Some companies, including OpenAI and Anthropic, have already locked down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and U.S. Cyber Command. He has also been appointed to OpenAI’s board of directors.
“We started investing in security years before ChatGPT,” said Mr. Knight. “We are on a journey not only to understand risks and stay ahead of them, but also to deepen our resilience.”
Federal officials and state lawmakers are also pushing for government regulations that would prevent companies from releasing certain artificial intelligence technologies and fine them millions if their technologies cause harm. But experts say those risks are still years or even decades away.
Chinese companies are building their own systems that are nearly as powerful as the leading U.S. systems. By some measures, China has eclipsed the United States as the largest producer of AI talent, with the country producing almost half of the world’s top AI researchers.
“It’s not crazy to think that China will soon be ahead of the US,” said Clément Delangue, CEO of Hugging Face, a company that hosts many of the world’s open-source artificial intelligence projects.
Some researchers and national security leaders argue that the mathematical algorithms at the heart of today’s AI systems, while not dangerous today, could become dangerous, and are calling for tighter controls on AI labs.
“Even if the worst-case scenarios are relatively low-probability, if they have a big impact, it is our responsibility to take them seriously,” Susan Rice, former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, said at an event in Silicon Valley last month. “I don’t think it’s science fiction, as many like to claim.”