The events surrounding OpenAI and its CEO Sam Altman highlight how difficult it is to establish governance structures that can appropriately control AI development. Given that private enterprise is driven by the profit motive and constrained by other narrow commercial interests, we need robust alternative frameworks that can operate beyond the influence of private commercial entities.
This is a link-enhanced version of an article that first appeared in The Mint. You can read the original here.
All anyone was talking about last week was OpenAI.
Over the course of five short days, its chief executive officer Sam Altman was fired by the board, hired by Microsoft and reinstated as the head of OpenAI. The events of the week were largely reported from the perspective of the 700-odd employees who threatened to walk out if their CEO was not reinstated, of the tech giant whose $13 billion commitment to a company over whose board it had little control was imprudent to say the least, and of the 38-year-old CEO who remains the undisputed face of today’s Generative Artificial Intelligence (AI) revolution. But despite the drama, the long-term effects of the week’s events will be felt most deeply by the governance community, whose attempt at controlling the most transformative technology in over a century has truly failed.
Raison d'être
OpenAI was born out of a fear that commercially funded AI research labs—like Google’s DeepMind—were hidden from public gaze, which meant that the technologies they were creating could be dangerous and no one would be any wiser. It was to ensure that AI development proceeded in a safe and responsible manner that OpenAI was set up as a non-profit organisation with the objective of making sure “… artificial intelligence benefits humanity regardless of profit." Its original founders—Sam Altman and Elon Musk—committed up to $1 billion of their own money to a not-for-profit entity that had been established for that purpose.
Despite the generous initial commitment, it soon became clear that building a large language model was far more expensive than they had originally imagined. OpenAI was going to need far more capital than a non-profit would ordinarily be able to access. To reconcile the twin objectives of raising private capital and prioritising safety, OpenAI gave itself a somewhat unusual corporate structure in 2019—with a for-profit unit housed within an entity that was supervised by a not-for-profit board.
The not-for-profit board was vested with extraordinary powers in order to ensure that AI development proceeded safely. It was allowed to pull the plug if it believed that the company was going down a path that was harmful to society—even if that came at the cost of investments its shareholders had made. It was obliged to let nothing—neither the commercial interests of investors nor the hubris of the person at the helm of affairs—come in the way of ensuring that the AI that was being built was safe. The moment it believed that a line was about to be crossed, it was empowered to take extreme measures to prevent that.
What Went Down
This was the board that fired Sam Altman. It is still not clear, at the time of writing, what the exact reasons for his termination were. All that the board’s official statement said was that Altman had not been “consistently candid in his communications with the board."
What exactly was communicated or why the board believed he was not candid is still anyone’s guess—the board was under no obligation to provide reasons. Its singular mandate was to assure itself that the path along which the company was proceeding continued to be of benefit to humanity. If its directors believed for any reason that this was not the case, and that removing Sam Altman as CEO was necessary to set things back on track, they were well within their rights to show him the door.
This, as we all know, was not how things ended. In the face of vociferous protests from employees, pressure from its largest investor and a clamorous response from just about everyone in the tech community, Altman was reinstated as CEO five days after he was sacked. The board was also reconstituted with a group of individuals who, presumably, were less likely to depose him in the future. And with these changes, everyone seemed to relax, relieved that things had returned to what they should have been.
Responsible Data Governance
But surely, even those who are happy with the outcome must realise that the safety net we thought we had put in place to protect us from malevolent AI lies well and truly breached. Let us, for a moment, assume that Altman was working on technologies that were a threat to humanity, and, knowing that the board would shut these down, had been less than candid with the information he had provided them. If this was indeed the case, the board was right to sack him. But what was the point of such a decision if all it would take to reverse it was a weekend’s worth of protests? Is this really our protection against a Skynet future?
Whenever the responsibility of establishing governance frameworks for matters of societal importance is left to private enterprise, the systems it sets up inevitably fall short. Corporate entities, even those governed by not-for-profit boards, are driven by narrow commercial incentives. They are incapable of striking a balance between their financial imperatives and the larger societal interest.
As I have often said in this column and in my other writing, we need a different approach to technology governance—one that is capable of withstanding the pressures that OpenAI’s board clearly was not. This will require us to free AI from the control of a single organisation that seems to be consolidating its grip over it.
Maybe it is also time to see whether key elements of the techno-legal approach to data governance that we have perfected in relation to digital public infrastructure can be usefully applied to AI governance.