The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Battle of the Tech Titans, Musk v. Altman
Musk is also represented by IP litigation titan Morgan Chu of Irell & Manella. You can read the Complaint; an excerpt:
Together with [Gregory] Brockman, [Musk and Altman] agreed that this new lab: (a) would be a nonprofit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons (The "Founding Agreement"). Reflecting the Founding Agreement, Mr. Musk named this new AI lab "OpenAI," which would compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity, not the shareholders of a private, for-profit company (much less one of the largest technology companies in the world)….
OpenAI's initial research was performed in the open, providing free and public access to designs, models, and code. When OpenAI, Inc. researchers discovered that an algorithm called "Transformers," initially invented by Google, could perform many natural language tasks without any explicit training, entire communities sprung up to enhance and extend the models released by OpenAI, Inc. These communities spread to open-source, grass-roots efforts and commercial entities alike….
In 2023, Defendants Mr. Altman, Mr. Brockman, and OpenAI set the Founding Agreement aflame. In March 2023, OpenAI released its most powerful language model yet, GPT-4…. At this time, Mr. Altman caused OpenAI to radically depart from its original mission and historical practice of making its technology and knowledge available to the public. GPT-4's internal design was kept and remains a complete secret except to OpenAI—and, on information and belief, Microsoft. There are no scientific publications describing the design of GPT-4. Instead, there are just press releases bragging about performance. On information and belief, this secrecy is primarily driven by commercial considerations, not safety. Although developed by OpenAI using contributions from Plaintiff and others that were intended to benefit the public, GPT-4 is now a de facto Microsoft proprietary algorithm, which it has integrated into its Office software suite.
Furthermore, on information and belief, GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft's September 2020 exclusive license with OpenAI. In this regard, Microsoft's own researchers have publicly stated that, "[g]iven the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Moreover, on information and belief, OpenAI is currently developing a model known as Q* (Q star) that has an even stronger claim to AGI.
As noted, Microsoft only has rights to certain of OpenAI's pre-AGI technology. But for purposes of the Microsoft license, it is up to OpenAI, Inc.'s Board to determine whether OpenAI has attained AGI, and a Board coup took place in November 2023. On November 17, 2023, OpenAI, Inc.'s Board fired Mr. Altman after losing "confidence in his ability to continue leading OpenAI" because "he was not consistently candid with the board." In a series of stunning developments spanning the next several days, Mr. Altman and Mr. Brockman, in concert with Microsoft, exploited Microsoft's significant leverage over OpenAI, Inc. and forced the resignation of a majority of OpenAI, Inc.'s Board members, including Chief Scientist Ilya Sutskever. Mr. Altman was reinstated as CEO of OpenAI, Inc. on November 21. On information and belief, the new Board members were hand-picked by Mr. Altman and blessed by Microsoft. The new Board members lack substantial AI expertise and, on information and belief, are ill equipped by design to make an independent determination of whether and when OpenAI has attained AGI—and hence when it has developed an algorithm that is outside the scope of Microsoft's license.
These events of 2023 constitute flagrant breaches of the Founding Agreement, which Defendants have essentially turned on its head. To this day, OpenAI, Inc.'s website continues to profess that its charter is to ensure that AGI "benefits all of humanity." In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft. Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity. Its technology, including GPT-4, is closed-source primarily to serve the proprietary commercial interests of Microsoft. Indeed, as the November 2023 drama was unfolding, Microsoft's CEO boasted that it would not matter "[i]f OpenAI disappeared tomorrow." He explained that "[w]e have all the IP rights and all the capability." "We have the people, we have the compute, we have the data, we have everything." "We are below them, above them, around them."
This case is filed to compel OpenAI to adhere to the Founding Agreement and return to its mission to develop AGI for the benefit of humanity, not to personally benefit the individual Defendants and the largest technology company in the world.
I don't know whether the allegations are sound, but the lawsuit certainly bears watching.
There is too much money in it to keep it non-profit.
Disagree. You can be non-profit and still have fabulous amounts of money. Many of the largest institutions in the world are non-profit.
Case in point: Harvard.
It's how you dodge taxes.
Or impose them....
Sorry, who imposes taxes by putting their billions in non-profits?
Sorry, don't you know that money is fungible, and if some people dodge paying their "fair share", others have to make up the balance?
"Charity is bad, really," is definitely a take. I assume you're an evangelical Christian.
Funnily enough it was rather my point.
Here's a question for you Nige.
What's the largest "non-profit" in the world?
Actually I was referencing Hillary's comment on legalizing drugs:
"It is not likely to work. There is just too much money in it"
If OpenAI kept the code open source, that would take a lot of the money out, but it would likely spawn companies like Red Hat, which monetized open-source Linux by maintaining it and adding value.
I guess I'm way too subtle sometimes.
A cynic would note that the exact same phrase also explains why criminalizing drugs hasn't worked.
There was a lot of money in slavery, but eradicating it (at least in this country) was still the right thing to do.
Yeah, that's way too subtle, given it's an entirely different field (AI versus drugs), different speakers (businessmen versus politicians), and not close to the original quote.
But it's not about "keeping the money out". It's about the fact that you can have non-profits in areas that still make fabulous amounts of money. Especially if you expand the definition of "nonprofit" to those organizations that are not for-profit companies.
The love of money is the root of all evil.
It's an interesting conundrum, and it's worth going back into the history of OpenAI, and why it exists.
The short story is this. Basic/applied research, like this, is most useful to society as a whole when it is published and put into the open domain. Then other researchers can look at the results and use them in their own research. Additionally, an "open" environment (as opposed to a closed corporate environment) encourages cooperation, innovative thinking, and more. A more corporate-type environment leads to siloing, replication of results, and situations where each party may have half of the answer but can't share it with the party holding the other half.
AI (LLMs) was a particularly troublesome area here. Top talent in the area couldn't be attracted by non-profit-type salaries. Microsoft was reporting that top talent was getting "NFL QB prospect" type salaries (i.e., low millions of dollars). And of course, all that research would be siloed.
Elon (bless his heart) recognized this issue and donated a large sum of money so that a real non-profit (OpenAI) could be developed that would actually pay corporate-type salaries to AI researchers. And it was a donation...not an investment (or at least not a traditional investment...it was more an investment in society).
Microsoft got a leg in however, and, well...
Musk's $50 million investment got turned into a $90 billion asset value, and Musk got zero. And a Delaware court nullified his Tesla compensation for the past 7 years. The poor guy is being cheated.
Do you really come here to simp for billionaires?
You sure got in a vicious personal attack on a straw man, without refuting anything anyone else said! This is quite the step up for you.
The poor guy
Simp-like posting detected
I seriously doubt any claims to artificial general intelligence. AGI could perhaps be mimicked well enough to fool people. But enough to be actually sentient? An independent mind? Nope.
I'm not exactly sure what is going on -- I don't even know who is on first base -- but this would be interesting even if it wasn't at the top of Drudge, which it is:
https://www.yahoo.com/entertainment/inside-crisis-google-203000169.html
If Musk wins, a great many non-profits are going to be in a lot of trouble.
Non-profit hospitals, universities, cultural organizations, and a great many other institutions also claim to operate for the general good of humanity, yet in practice they function and make decisions very much like businesses.