When Elon Musk sued OpenAI and its CEO, Sam Altman, for breach of contract on Thursday, he turned the claims of the start-up’s closest partner, Microsoft, into a weapon.
He repeatedly cited a controversial but highly influential paper written by researchers and top Microsoft executives about the power of GPT-4, the revolutionary OpenAI artificial intelligence system released last March.
In the “Sparks of AGI” paper, Microsoft’s research lab said that — although it didn’t understand how — GPT-4 had shown “sparks” of “artificial general intelligence,” or AGI, a machine that can do everything the human brain can.
It was a bold claim and came as the world’s biggest tech companies raced to introduce artificial intelligence into their own products.
Mr. Musk is now turning the tables on OpenAI, saying the paper shows how OpenAI backtracked on its promise not to commercialize truly powerful products.
Microsoft and OpenAI declined to comment on the suit. (The New York Times has sued both companies, alleging copyright infringement in the training of GPT-4.) Mr. Musk did not respond to a request for comment.
How did the research paper come about?
A team of Microsoft researchers, led by Sébastien Bubeck, a 38-year-old French expatriate and former Princeton professor, began testing an early version of GPT-4 in the fall of 2022, months before the technology was released to the public. Microsoft has committed $13 billion to OpenAI and has negotiated exclusive access to the underlying technologies that power its AI systems.
As they chatted with the system, they were amazed. It wrote a complex mathematical proof in the form of a poem, generated computer code that could draw a unicorn, and explained the best way to stack a random and eclectic collection of household items. Dr. Bubeck and his fellow researchers began to wonder if they were witnessing a new form of intelligence.
“I started out very skeptical — and that evolved into a sense of frustration, annoyance, maybe even fear,” said Peter Lee, Microsoft’s head of research. “You’re thinking: Where the hell is this coming from?”
What role does the paper play in Mr. Musk’s suit?
Mr. Musk argued that OpenAI had breached its contract because it had agreed not to commercialize any product that its board deemed to be AGI.
“GPT-4 is an AGI algorithm,” Mr. Musk’s lawyers wrote. They said this meant the system should never have been licensed to Microsoft.
Mr. Musk’s complaint repeatedly cited the Sparks paper to argue that GPT-4 was an AGI. His lawyers said: “Microsoft’s own scientists acknowledge that GPT-4 ‘attains a form of general intelligence,’” and that given “the breadth and depth of GPT-4’s capabilities, we believe it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
How was the paper received?
The paper has been hugely influential since it was published a week after the release of GPT-4.
Thomas Wolf, a co-founder of the high-profile AI start-up Hugging Face, wrote on X the day after the study was released that it had “absolutely stunning examples” of GPT-4.
Microsoft’s research has since been cited in more than 1,500 other publications, according to Google Scholar. It is one of the most cited articles on artificial intelligence in the last five years, according to Semantic Scholar.
It has also faced criticism from experts, including some within Microsoft, who worried that the 155-page paper lacked the rigor to support its claim and fueled an AI marketing frenzy.
The work was not peer-reviewed, and its results cannot be reproduced because the experiments were conducted on early versions of GPT-4 that are closely guarded by Microsoft and OpenAI. As the authors noted in the paper, they did not use the version of GPT-4 that was later released to the public, so anyone replicating the experiments would get different results.
Some outside experts said it was unclear whether GPT-4 and similar systems exhibited behavior that was anything like human logic or common sense.
“When we see a complex system or machine, we anthropomorphize it. Everyone does this — people who work in the field and people who don’t,” said Alison Gopnik, a professor at the University of California, Berkeley. “But thinking of this as a constant comparison between AI and humans — like some kind of game show competition — is not the right way to think about it.”
Were there any other complaints?
In the paper’s introduction, the authors initially defined “intelligence” by citing a 30-year-old Wall Street Journal opinion piece that, in defending a concept called the Bell Curve, argued that “Jews and East Asians” were likely to have higher IQs than “blacks and Hispanics.”
Dr. Lee, who is listed as an author on the paper, said in an interview last year that when researchers were looking to define AGI, “we took it from Wikipedia.” He said that when they later found out about the Bell Curve connection, “we were really disappointed by it and made the switch immediately.”
Eric Horvitz, Microsoft’s chief scientist and a lead contributor to the paper, wrote in an email that he personally took responsibility for inserting the reference, saying he had seen it cited in a paper by a co-founder of Google’s artificial intelligence lab DeepMind and had not noticed the racist references. When he learned about it from a post on X, he was shocked, he said, “as we were simply looking for a reasonably broad definition of intelligence from psychologists.”
Is this AGI or not?
When the Microsoft researchers originally wrote the paper, they called it “First Contact with an AGI System.” But some members of the team, including Dr. Horvitz, disagreed with the characterization.
He later told The Times that they were not seeing what he would “call ‘artificial general intelligence’ — but more so glimmers and surprisingly strong results at times.”
GPT-4 is far from doing everything the human brain can do.
In a message sent to OpenAI employees on Friday afternoon that was viewed by The Times, OpenAI’s chief strategy officer, Jason Kwon, explicitly said GPT-4 was not an AGI.
“It is capable of solving small tasks in many jobs, but the ratio of work done by a human to the work of GPT-4 in the economy remains surprisingly high,” he wrote. “Importantly, an AGI will be a highly autonomous system capable enough to devise new solutions to long-standing challenges — GPT-4 cannot do that.”
However, the paper fueled claims by some researchers and experts that GPT-4 represented a major step toward AGI, and that companies like Microsoft and OpenAI would continue to improve the technology’s reasoning skills.
The field of artificial intelligence remains deeply divided over how smart the technology is today — or will soon be. If Mr. Musk’s suit goes to trial, a jury may settle the argument.