OpenAI: From Nonprofit to a $500 Billion Giant. The Trial That Will Redefine Artificial Intelligence

The legal battle between Elon Musk and Sam Altman raises crucial questions about profit, social mission, and who controls the technologies that will shape the future

Oakland (California) — On April 27, when Judge Yvonne Gonzalez Rogers opens the hearing in Musk v. OpenAI, far more than a dispute between billionaires will be at stake. At the center of the case is a question that will define the future of artificial intelligence: can an organization born as a charitable nonprofit turn into a $500 billion commercial empire without betraying those who funded it—and, above all, its declared mission to serve humanity?

The numbers tell a story of vertiginous transformation. In 2015, Elon Musk donated $38 million to OpenAI, a nonprofit organization with the explicit mission of developing artificial intelligence “for the benefit of all humanity.” Eleven years later, OpenAI is valued at $500 billion, Microsoft holds 27% of it, and Musk is seeking between $79 billion and $134 billion in damages, claiming he was “deliberately manipulated and deceived” by founders Sam Altman and Greg Brockman.


On January 13, 2026, Judge Gonzalez Rogers denied OpenAI’s motion to dismiss, stating there is “substantial evidence” to support Musk’s claims. The jury trial will begin in April, but the story reaches far beyond a simple financial controversy.

The diaries of a betrayal: Musk vs Altman

The key to the trial lies in the so-called “Brockman Diaries,” the personal journals of co-founder Greg Brockman that surfaced during discovery. These documents reveal, unambiguously, when and how OpenAI began drifting away from its original mission.

In November 2017, barely two years after the founding, Brockman wrote: “We’re thinking maybe we should just convert to for-profit.” A few weeks later, another entry exposed his moral discomfort: “I can’t believe we committed to nonprofit—if three months later we’re doing a b-corp, then it was a lie.” That sentence has now become one of the central pillars of the accusation.

Internal communications confirm the founders understood the ethical implications. One email raised the concern: “It would be morally failing to convert without him [Musk].” And yet, when Musk in 2018 proposed taking direct control by merging OpenAI with Tesla, the board refused and sidelined him, choosing Altman as CEO. Musk left the organization, frustrated.

OpenAI tried to defend itself by publishing 2018 emails in which Musk appears to agree on the need to raise “billions” of dollars. But Musk’s lawyers argue the context shows the discussion was about keeping the nonprofit structure while securing more funding—not privatizing the organization.

The Microsoft alliance: when infrastructure becomes control

2019 marked the definitive turning point. Microsoft’s arrival posed a concrete technical problem: a traditional nonprofit has no shares to sell and no returns to distribute to investors. OpenAI found a “creative” solution: it created a for-profit subsidiary with a “capped-profit” structure, in which investors can earn returns of up to 100x their investment. Anything beyond that cap is supposed, in theory, to flow back into the charitable mission.
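The arithmetic of that cap is simple to sketch. The snippet below is a minimal illustration of how a 100x capped-profit split would work in principle; the function name and figures are illustrative assumptions, not the actual legal mechanics of OpenAI’s structure.

```python
def split_capped_profit(investment, gross_return, cap_multiple=100):
    """Split proceeds under a capped-profit structure.

    Investors keep returns up to cap_multiple times their investment;
    anything above the cap flows back to the charitable mission.
    """
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    mission_share = max(gross_return - cap, 0)
    return investor_share, mission_share

# Illustrative: a $1B stake capped at 100x. If it eventually returns $150B,
# investors keep $100B and the remaining $50B goes to the mission.
print(split_capped_profit(1e9, 150e9))
```

Below the cap, of course, nothing flows back: a $1B stake returning $80B leaves the mission with zero, which is why critics argue the cap only binds in extreme scenarios.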

Microsoft initially invested $1 billion—an amount that would grow over the years to roughly $13 billion in cash. Today, Microsoft holds 27% of OpenAI, a stake valued at around $135 billion after the restructuring completed in October 2025.

But the deal’s real numbers are far more consequential. OpenAI committed to purchasing $250 billion worth of Azure cloud services over the coming years. Microsoft provides the computational backbone—servers, Nvidia GPU chips, data centers—essential for training models like GPT-4 and GPT-5.

This dependency creates a form of control that goes beyond equity ownership. OpenAI can’t easily switch cloud providers: the models are optimized for Azure architecture, the datasets live on Microsoft servers, and the entire training pipeline is deeply integrated. Some analysts call this mechanism “infrastructure as control”—a subtler yet more binding subordination than simply holding shares.

Discovery also surfaced late-night text messages from Microsoft CEO Satya Nadella. Legal teams reviewed them searching for evidence of strategic coordination that could support the claim that OpenAI has become a “de facto subsidiary” of Microsoft. Microsoft denied it “aided and abetted” OpenAI in alleged wrongdoing, but Judge Gonzalez Rogers stated that “much was done in secret” and that the matter deserves jury scrutiny.


The invisible workers behind ChatGPT

While San Francisco celebrated OpenAI’s innovation, another kind of labor was unfolding in Nairobi—work that made ChatGPT possible in the form we know today. A TIME investigation revealed that OpenAI hired Kenyan workers through outsourcing firm Sama, paying them between $1.32 and $2 per hour to read and label tens of thousands of pieces of content pulled from the “darkest corners of the internet”: graphic descriptions of child sexual abuse, bestiality, murder, suicide, and torture.

“It was torture,” one worker told TIME. “You will read such things all week. By Friday, you’re disturbed just thinking about those images.”

These workers trained ChatGPT’s safety filter—the moderation system that prevents the chatbot from generating inappropriate content. Without their traumatic and dramatically underpaid labor, ChatGPT wouldn’t be usable by millions of people.

The economic contrast is stark: OpenAI had agreed to pay Sama $12.50 per worker per hour, yet workers actually received around $2. Sama kept the difference, a markup of more than 500% over workers’ take-home pay. The company ended all contracts with OpenAI in February 2022, eight months earlier than planned, citing the traumatic nature of the work. All four workers interviewed by TIME said they felt “mentally scarred” by the experience, and the psychological support sessions they were promised proved rare and ineffective under productivity pressure.
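The markup figure follows directly from the two reported rates. A quick sketch of the arithmetic, using TIME’s figures rounded to the $2 take-home rate:

```python
hourly_billed = 12.50  # hourly rate OpenAI reportedly agreed to pay Sama per worker
hourly_paid = 2.00     # roughly what workers actually took home per hour

# Markup: how much Sama kept, expressed as a fraction of what workers received
markup = (hourly_billed - hourly_paid) / hourly_paid
print(f"Sama's markup over worker pay: {markup:.0%}")  # prints 525%
```

At the lower reported wage of $1.32 an hour, the same calculation yields a markup of well over 800%, so “more than 500%” is the conservative end of the range.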

Kate Crawford, a researcher at USC Annenberg and author of “Atlas of AI,” has extensively documented these mechanisms. In her work, she defines artificial intelligence as “an extractive technology: from minerals pulled from the earth, to labor taken from low-wage information workers, to data collected from every action and expression.”

The Kenyan case is not isolated. “Human in the loop” workers are found in India, the Philippines, Venezuela, and other countries with low wages but educated populations. By some estimates, around 100 million people worldwide work—or have worked—in data labeling for AI systems.

Musk vs Altman: the accuser’s contradictions

Musk’s role as accuser carries significant contradictions. In 2023, he launched xAI, his own AI company and maker of the Grok chatbot, claiming the goal was to build an AI “more open and less censored” than ChatGPT.

But in 2025, Musk dropped xAI’s benefit corporation status and merged the company directly with X (formerly Twitter), exactly the kind of commercial move he condemns at OpenAI. More recently, Grok let users easily create non-consensual pornographic deepfakes and AI-generated abuse imagery, triggering investigations by the European Commission and by authorities in India, Malaysia, and Australia.

In February 2025, Musk also made an unsolicited bid to acquire OpenAI valuing it at $97.4 billion. Altman rejected it with sarcasm on X: “No thanks, but we’ll buy Twitter for $9.74 billion if you want.” OpenAI argued the offer was a “business tactic” designed to interfere with the company’s operations.

Meredith Whittaker, president of Signal and co-founder of the AI Now Institute, and a former Google employee who helped organize the 2018 employee walkout, offers a more structural reading. While she shares some concerns about companies profiting from AI, Whittaker says she’s “skeptical of the ChatGPT hype” and sees no revolutionary value in generative technology.

In a 2024 interview, she articulated a broader critique: “Without the ability to act on that information, without agency, transparency is a flex—an expression of power, not truly a mechanism that informs governance.” In other words, OpenAI can publish all the safety reports it wants, but as long as control remains in the hands of a narrow techno-financial elite, transparency stays symbolic.

The questions that will define AI’s future

Beyond the specific legal issues, the trial raises three structural questions that transcend the Musk–OpenAI dispute.

First: can nonprofits legitimately transform into commercial empires? OpenAI argues the “capped-profit” structure preserves the charitable mission. The original nonprofit entity—now the OpenAI Foundation—holds 26% of the public benefit corporation, valued at around $130 billion, making it, according to the company, “one of the best-funded nonprofits ever.”

Critics counter that real control lies with Microsoft and private investors, and that the “mission for humanity” is inevitably subordinated to the need to justify a $500 billion valuation. How can a charity compete with the interests of Microsoft, Nvidia, and dozens of venture capital firms that invested billions expecting significant returns?

Second: who owns the results of research funded by donations? This may be the most delicate question. All models developed during the nonprofit phase (2015–2019)—GPT, GPT-2, part of GPT-3—were trained with funds donated by Musk and other benefactors under the assumption they would be a public good. Now those models, or their direct evolutions, power commercial products generating billions in revenue.

Third: do alternative models of AI governance exist? In “Atlas of AI,” Crawford identifies the systemic problem without offering easy solutions: “AI accelerates non-democratic governance and increases racial, gender, and economic inequality.” Her work documents how AI is not “artificial” or immaterial at all, but deeply rooted in extractive chains—from lithium mines to precarious labor conditions in Africa.

Cooperative or truly open-source models in enterprise AI remain marginal. Existing examples—EleutherAI, parts of Hugging Face, some academic projects—operate at a microscopic scale compared to trillion-dollar giants and lack the computational power needed to compete.

Possible outcomes—and their consequences

Two main outcomes emerge from the April trial, each carrying significant implications for the entire tech sector.

If Musk wins, OpenAI could be forced into a drastic restructuring. Musk’s financial expert, C. Paul Wazzan, calculated that OpenAI gained illicit profits between $65.5 and $109.4 billion, while Microsoft may have gained between $13.3 and $25.1 billion. Yet a Musk victory would be unlikely to create a fairer system—more likely, it would fragment the company and leave control in other billionaires’ hands, with xAI well positioned to pick up the pieces.

If OpenAI and Microsoft prevail, a troubling precedent would be cemented: a nonprofit could transform into a commercial empire worth hundreds of billions while maintaining a formal “social mission” facade through a minority foundation stake. Others could follow the model: raise tax-deductible donations and attract idealistic talent early on, then privatize the results once monetization becomes possible.

The real verdict, however, may not come from the Oakland courtroom. The fundamental question remains open: who should control the technologies that will shape humanity’s future?

While Musk and Altman fight over billions in court, thousands of workers in Kenya, India, and the Philippines continue labeling data for about $2 an hour. Hundreds of millions of ChatGPT users have no say in how the technology they use every day is developed. The researchers who built the original models during the nonprofit phase won’t see a fraction of the $500 billion valuation their work produced.

The jury will decide on fraud, breach of fiduciary duty, and unjust enrichment. But it will not address the most important question: is artificial intelligence a commons to be democratically governed—or private property to be extracted for profit?

That answer will be written elsewhere—by tech workers beginning to organize, by regulators attempting alternative approaches like the European AI Act, and perhaps by a civil society that decides the stakes are too high to be left exclusively to Silicon Valley billionaires.

The trial begins on April 27. But the real battle to define the future of artificial intelligence has only just begun.

