Good governance often calls for placing the values of the organisation ahead of short-term gains. This is particularly important in AI, where the demands of investors for quick returns can very easily incentivise companies to play fast and loose with safety.
This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here.
When OpenAI was founded in 2015, its primary objective was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” The path it chose to achieve this was to build large language models, a computationally intensive exercise that had only just become achievable at scale because of recent advances in modern chip design.
This, however, was going to require significant investment, and in order to raise these funds while staying true to its prime objective, OpenAI decided to put in place a complex corporate structure to separate ownership from control. Financial investors were made to invest in a for-profit company over whose governance they had no control. That would be determined by a different not-for-profit entity that was required to place human safety above all else, even if it was at the cost of profits or shareholder value.
In November 2023, the OpenAI board sacked Chief Executive Officer Sam Altman for, among other things, failing to provide the board with advance information on significant corporate developments, such as the launch of ChatGPT; at least one board member later claimed that she had first learned of its launch through social media. If this was true, and Altman had in fact not disclosed critical business information to the board before releasing it to the world, it seems clear that the board's ability to prioritise human safety had been severely compromised. By sacking him, it would seem the board was doing exactly what it was supposed to do.
Good Governance
Good governance is about placing the core values of an organisation ahead of short-term commercial imperatives. While it might seem that the only reason businesses exist is to maximise profits, sustained profitability requires a performance culture built on a set of core values that are constantly and consistently enforced.
Through much of my professional career, I have been called upon to handle many such instances, both as part of the management of my law firm and in the course of advising clients. Among the most difficult situations we have faced are those that required us to take decisive action in order to uphold core organisational values. In more than a few instances, this involved taking action against high-performing individuals, persons who either brought in significant revenue or controlled important client relationships. While considering the actions we had to take to uphold those values, we knew we could suffer an immediate loss of revenue, while running the risk that some or all of the clients serviced by these individuals would leave with them. These are consequences no commercial organisation wants to suffer. And yet, if that is what it takes to uphold the core values that define its culture, these are actions no firm can shy away from.
A well-governed organisation will uphold its values even when it is not in its immediate commercial interest to do so. From experience, I can say that this is extraordinarily difficult to do in the moment. Concerns abound about the immediate commercial consequences of such actions and the impact on the organisation's reputation. It is only when the management recognises that there is long-term value in preserving the culture of the organisation that it will be able to hold true to its values despite the cost. Only organisations that consistently do this can truly evolve into institutions.
Reinstated
Shortly after Altman was sacked as CEO of OpenAI, there was a widespread revolt in the company. Nearly 800 employees threatened to quit unless he was reinstated, pledging to follow him wherever he went. There was significant consternation in the industry over what this dismissal meant and how it would impact the future of AI. Eventually, all of OpenAI’s major investors had to step in to set things right.
OpenAI's board had been tasked with ensuring that the company remained true to its core values even if that came at a commercial cost. Its directors determined, for better or worse, that Altman remaining CEO was incompatible with those values, and so they ousted him from that role in the belief that doing so was aligned with their fiduciary obligation to uphold the core values of the organisation.
Within a week of being sacked, Altman was reinstated as CEO. The board that had terminated him was recast: key members involved in his dismissal were removed and new members more aligned with his vision were appointed. Three months later, Altman himself was back on the board of OpenAI.
Guardrails
It is impossible for any outsider, much less someone on the other side of the planet, to opine on whether the board was right to do what it did. What is beyond doubt, however, is that the guardrails OpenAI had put in place to ensure that no one, not even its CEO, could act in a manner inconsistent with its values had failed. Not only was the board unable to hold Altman accountable, its key members were removed for trying.
There is every likelihood that OpenAI will continue to go from strength to strength. However, there is little doubt that this growth will be driven by purely commercial incentives. Its prime objective is no longer to operate in the interests of humanity, but to protect the interests of its investors, who are now effectively in control.
Whether OpenAI will, under these new circumstances, become the institution that it could have been, only time will tell.