It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.
Mr. Musk sued OpenAI this week, accusing the company of breaching the terms of its founding agreement and violating its founding principles. In his telling, OpenAI was established as a nonprofit that would build powerful artificial intelligence systems for the good of humanity and give its research away freely to the public. But Mr. Musk argues that OpenAI broke that promise by creating a for-profit subsidiary that took on billions of dollars in investment from Microsoft.
An OpenAI spokeswoman declined to comment on the lawsuit. In a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Mr. Musk’s claims, writing: “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today,” according to a copy of the memo I saw.
On one level, the lawsuit reeks of personal beef. Mr. Musk, who founded OpenAI in 2015 with a group of other tech heavyweights and provided much of its initial funding before leaving in 2018 amid disagreements with its leadership, resents being sidelined in discussions about AI. His own AI projects haven’t gained nearly as much traction as ChatGPT, OpenAI’s flagship chatbot. And Mr. Musk’s feud with Sam Altman, OpenAI’s chief executive, has been well documented.
But amid all the animus, there’s a point worth dwelling on, because it illustrates a paradox at the heart of today’s AI debate, one in which OpenAI really is talking out of both sides of its mouth: insisting both that its AI systems are incredibly powerful and that they are no match for human intelligence.
The claim centers on a term known as AGI, or “artificial general intelligence.” Defining what constitutes AGI is notoriously difficult, although most people would agree that it means an AI system that can do most or all of the things the human brain can do. Mr. Altman has defined AGI as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.”
Most AI company leaders claim that AGI is not only possible, but imminent. Demis Hassabis, the CEO of Google DeepMind, told me in a recent podcast interview that he believed AGI could arrive by 2030. Mr. Altman said AGI might be just four or five years away.
Building AGI is OpenAI’s explicit goal, and it has plenty of reasons to want to get there before anyone else. A true AGI would be an incredibly valuable resource, capable of automating vast amounts of human labor and making a lot of money for its creators. It’s also the kind of shiny, audacious goal that investors love to fund and that helps AI labs recruit top engineers and researchers.
But AGI could also be dangerous if it outwits humans, or if it becomes deceptive or misaligned with human values. The people who started OpenAI, including Mr. Musk, worried that an AGI would be too powerful to be owned by a single entity, and that if they ever got close to building one, they would need to change the control structure around it, to prevent it from doing harm or concentrating too much wealth and power in a single company’s hands.
That’s why, when OpenAI partnered with Microsoft, it specifically gave the tech giant a license that applied only to “pre-AGI” technologies. (The New York Times has sued Microsoft and OpenAI over the use of copyrighted work.)
Under the terms of the agreement, if OpenAI ever built something that met the definition of AGI (as determined by OpenAI’s nonprofit board), Microsoft’s license would no longer apply, and OpenAI’s board could do whatever it wanted to ensure that OpenAI’s AGI benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it off entirely.
Most AI commentators believe that modern state-of-the-art AI models do not qualify as AGI because they lack complex reasoning skills and often make boneheaded mistakes.
But in his legal filing, Mr. Musk makes an unusual argument: that OpenAI has already achieved AGI with its GPT-4 language model, which was released last year, and that the company’s future technology will qualify as AGI even more clearly.
“On information and belief, GPT-4 is an AGI algorithm and therefore expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint states.
What Mr. Musk is arguing here is a bit convoluted. Essentially, he’s saying that because OpenAI has achieved AGI with GPT-4, it is no longer allowed to license the technology to Microsoft, and its board should be required to make the technology and research freely available to the public.
His complaint refers to the now-infamous “Sparks of AGI” paper published last year by a team of Microsoft researchers, which argued that GPT-4 showed early hints of general intelligence, including signs of human-level reasoning.
But the complaint also notes that OpenAI’s board is unlikely ever to decide that its AI systems actually qualify as AGI, because once it did, it would have to make big changes to the way it develops and profits from the technology.
In addition, he notes that Microsoft (which now has a nonvoting observer seat on OpenAI’s board, following an upheaval last year that resulted in Mr. Altman’s temporary firing) has a strong incentive to deny that OpenAI’s technology qualifies as AGI: if it did, Microsoft would lose its license to use that technology in its products, and forfeit potentially huge profits.
“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new captured, conflicted, and compliant board will have every reason to delay finding that OpenAI has attained AGI,” the complaint states. “To the contrary, OpenAI’s attainment of AGI, like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”
Given his history of contentious litigation, it’s easy to question Mr. Musk’s motives here. And as the head of a rival AI start-up, it’s no surprise that he’d want to tie OpenAI up in messy litigation. But his lawsuit points to a real conundrum for OpenAI.
Like its competitors, OpenAI is keen to be seen as a leader in the race to build AGI, and has a vested interest in convincing investors, business partners and the public that its systems are improving at a breakneck pace.
However, due to the terms of its agreement with Microsoft, OpenAI’s investors and executives may be reluctant to admit that its technology qualifies as AGI, if and when it does.
This has put Mr. Musk in the odd position of asking a jury to rule on what constitutes AGI, and to decide whether OpenAI’s technology has met the threshold.
The suit has also put OpenAI in the odd position of downplaying the capabilities of its own systems while continuing to fuel anticipation that a major AGI breakthrough is just around the corner.
“GPT-4 is not an AGI,” Mr. Kwon wrote in his memo to employees. “It is capable of solving small tasks in many jobs, but the ratio of work done by a human to the work done by GPT-4 in the economy remains staggeringly high.”
The personal feud fueling Mr. Musk’s complaint has led some people to view it as a frivolous suit (one commentator compared it to “suing your ex because she renovated the house after your divorce”) that will quickly be dismissed.
But even if it is dismissed, Mr. Musk’s lawsuit raises important questions: Who gets to decide when something qualifies as AGI? Are tech companies exaggerating or sandbagging (or both) when it comes to describing how capable their systems are? And what incentives lie behind various claims about how close or far away from AGI we might be?
A lawsuit from a billionaire with a grudge is probably not the right way to resolve those questions. But they’re good ones to ask, especially as AI progress continues to accelerate.