Opinion
Why the battle over OpenAI matters to everyone
November 22, 2023
Sam Altman’s abrupt firing and then reinstatement at OpenAI has plunged the once unstoppable, now troubled AI giant into chaos. Altman has survived, and predictions that the board which fired him would not last have proved correct. A new board has been appointed, including respected former US treasury secretary Larry Summers and former Salesforce boss Bret Taylor.
The former board’s position became untenable as it faced calls to resign from major investors, the former executive team and a significant majority of OpenAI staff – 710 of 770. Its members still haven’t offered a compelling justification for their decision.
Still, several internal leaks suggest that at the heart of this turmoil lies a critical debate: can a non-profit company originally dedicated to societal good maintain its ethical compass under significant, growing commercial pressure? And if it can’t – what does that mean for all of us?
OpenAI was founded to develop Artificial General Intelligence for the “best interests of humanity”. This soaring rhetoric isn’t unique in Silicon Valley, but OpenAI’s corporate structure is. Integrating a capped-profit business under a non-profit parent, the founders hoped to leverage the strength of both models.
The commercial arm would bring in the investment and opportunities needed to remain competitive and innovative in the rapidly evolving AI sector, while the non-profit parent would uphold the commitment to ethical and socially beneficial applications of AI technology.
Managing this tension between profit and societal good is always challenging and often messy, as is the tension between independent editorial and advertising in newspapers.
Still, it was a commitment felt so strongly that the founders baked it into the company’s charter, a constitution of sorts, which mandates that the benefits of AI be shared equally, and that long-term safety be a priority in all decision-making.
We know structures like this can be successful, too. Businesses such as Patagonia and Novo Nordisk have used similar governance structures to create a healthy tension that keeps commerce and ethics in balance and can lead to great profitability and success.
Using new governance structures like this is even more critical for pioneering businesses pushing on the absolute frontier of technology. All companies must act ethically, but as we venture into domains where the outcomes are unknown and potentially world-altering, we need an elevated sense of trust in the structures set up for making pivotal decisions. It’s a need that resonates widely; 87% of Australians want greater confidence that businesses will ethically implement AI technology.
We must trust that they’ll bring the right people into the room; that they’ll share information with our governments, our universities and with us about what their tools are capable of; and that they’ll light the way for others to follow – so Australian business leaders can make their own AI systems accurate, accountable, fair and fit-for-purpose too.
OpenAI hasn’t always got it right, and it won’t – far from it. But its structure sets it up to succeed more often than traditional businesses with shareholders looking over their shoulders.
Because at the end of the day, the only way to truly predict whether you can trust an organisation is by looking closely at its governance structure and processes. Who makes the decisions, in whose interest and where does the money go?
As Microsoft invested US$10 billion for a 49 per cent stake and customer interest in ChatGPT, DALL-E, and other tools surged, both external and internal critics questioned the sustainability of this balance. It’s telling, then, that barely ten days after the OpenAI DevDay, which included major consumer announcements such as the ability to build your own ChatGPT, the non-profit board removed the CEO who’d driven this strategy. Clearly, they felt he was no longer striking the right balance – and now they appear to have lost. The reinstated Altman says he will continue the relationship with Microsoft.
But this is more significant than an internal power struggle. After seeing the unaccountable actions of a non-profit board, investors will hesitate to give similar organisations the resources they need to compete, delaying attempts to create more socially accountable organisations just when we need them most – a void that will inevitably be filled by the existing tech giants, with all the baggage they carry.
This moment challenges us to ensure that technological advances serve us all, not just a few. The responsibility extends beyond industry; it requires governments, the institutions truly accountable to the people, to play an active role in safeguarding public interest in technology.
It’s no longer sufficient for private entities alone to champion social responsibility; regulatory oversight and the engagement of people from all walks of life will be crucial if we want to ensure these rapidly developing tools act in the interests of us all rather than for the profit of the few.