Inside OpenAI’s weird governance structure
“WHICH WOULD you have more confidence in? Getting your technology from a non-profit, or a for-profit company that is entirely controlled by one human being?” asked Brad Smith, president of Microsoft, at a conference in Paris on November 10th. That was Mr Smith’s way of praising OpenAI, the startup behind ChatGPT, and knocking Meta, Mark Zuckerberg’s social-media behemoth.
In recent days OpenAI’s non-profit governance has looked rather less attractive. On November 17th, seemingly out of nowhere, its board fired Sam Altman, the startup’s co-founder and chief executive. Mr Smith’s own boss, Satya Nadella, who heads Microsoft, was told of Mr Altman’s sacking only a few minutes before Mr Altman himself. Never mind that Microsoft is OpenAI’s biggest shareholder, having backed the startup to the tune of over $10bn.
By November 20th the vast majority of OpenAI’s 700-strong workforce had signed an open letter giving the remaining board members an ultimatum: resign, or the signatories would follow Mr Altman to Microsoft, where Mr Nadella had invited him to head a new in-house AI lab.
The goings-on have thrown a spotlight on OpenAI’s unusual structure, and its even more unusual board. What exactly is it tasked with doing, and how could it sack the boss of the hottest AI startup without any of its investors having a say in the matter?
The firm was founded as a non-profit in 2015 by Mr Altman and a group of Silicon Valley investors and entrepreneurs including Elon Musk, the mercurial billionaire behind Tesla, X (formerly Twitter) and SpaceX. The group collectively pledged $1bn towards OpenAI’s goal of building artificial general intelligence (AGI), the term AI experts use for a program that outperforms humans on most intellectual tasks.
After a few years OpenAI realised that in order to attain its goal, it needed cash to pay for expensive computing capacity and top-notch talent—not least because it claims that just $130m or so of the original $1bn pledge materialised. So in 2019 it created a for-profit subsidiary. Profits for investors in this venture were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025). Any profits above the cap flow to the parent non-profit. The company also reserves the right to reinvest all profits back into the firm until its goal of creating AGI is achieved. And once it is attained, the resulting AGI is not meant to generate a financial return; OpenAI’s licensing terms with Microsoft, for example, cover only “pre-AGI” technology.
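The cap arithmetic can be sketched in a few lines of code. This is a hedged illustration, not OpenAI’s actual formula: it assumes the 20% annual rise compounds, with the first increase taking effect in 2025.

```python
def profit_cap_multiple(year: int, base_cap: float = 100.0, growth: float = 0.20) -> float:
    """Assumed cap on investor returns, as a multiple of the original investment.

    Hypothetical sketch: assumes the 20% rise compounds annually,
    with the first increase applying in 2025.
    """
    if year < 2025:
        return base_cap
    return base_cap * (1 + growth) ** (year - 2024)
```

Under these assumptions, a $1m investment could return at most $100m through 2024, $120m in 2025, $144m in 2026, and so on; anything above the cap would flow to the parent non-profit.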
Determining whether and when AGI has been attained is down to OpenAI’s board of directors. Unlike at most startups, or indeed most companies, investors do not get a seat. Instead of representing OpenAI’s financial backers, the organisation’s charter tasks directors with representing the interests of “humanity”.
Until the events of last week, humanity’s representatives comprised three of OpenAI’s co-founders (Mr Altman, Greg Brockman and Ilya Sutskever) and three independent members (Adam D’Angelo, co-founder of Quora; Tasha McCauley, a tech entrepreneur; and Helen Toner, from the Centre for Security and Emerging Technology, another non-profit). On November 17th four of them—Mr Sutskever and the three independents—lost confidence in Mr Altman. Their reasons remain murky, but may have to do with what the board appears to have viewed as Mr Altman’s pursuit of new products with insufficient concern for AI safety.
The firm’s bylaws from January 2016 give its board members wide-ranging powers, including the right to add or remove board members, provided a majority concurs. The earliest tax filings, from the same year, show three directors: Mr Altman, Mr Musk and Chris Clark, an OpenAI employee. It is unclear how they were chosen, but thanks to the bylaws they could henceforth appoint others. By 2017 the original trio had been joined by Mr Brockman and Holden Karnofsky, chief executive of Open Philanthropy, a charity. Two years later the board had eight members, though by then Mr Musk had stepped down because of a feud with Mr Altman over the direction OpenAI was taking. Last year it was down to six. Throughout, it was answerable only to itself.
This odd structure was designed to ensure that OpenAI can resist outside pressure from investors, who might prefer a quick profit now to AGI for humankind later. Instead, the board’s amateurish ousting of Mr Altman has piled on the pressure from OpenAI’s investors and employees. That tiny part of humanity, at least, clearly feels misrepresented. ■