OpenAI case: the race for AI supremacy is no longer just between nations

The week just past has been a whirlwind for OpenAI. It now seems certain that an agreement has been reached for its ousted co-founder and CEO, Sam Altman, to return to the helm of the company.

This comes after several twists and turns, during which OpenAI lost its CEO, saw him join Microsoft, replaced its first interim CEO with a second, and faced a staff revolt.


No one knows at the moment whether Altman will get a seat on the board of directors, a seat he did not previously hold. In a June 2023 interview with Bloomberg on trust in AI, he had said: “You shouldn’t trust just one person… The board can fire me. I think that’s important.”


It is also not yet known whether OpenAI co-founder and chief scientist Ilya Sutskever will return. Sutskever sat on the previous board alongside Helen Toner, and both are suspected of having played a role in the decision to dismiss Altman. Sutskever, however, later expressed regret for his part in it.


Helen Toner, who had remained silent throughout the affair, finally posted on X after news broke that Altman would return: “And now, we all get some sleep.”

Helen Toner vs. Sam Altman


Helen Toner had co-authored a research paper that Altman considered critical of OpenAI. The paper suggested that the company’s efforts to ensure the safety of its AI systems fell short of those of its competitor Anthropic. According to the New York Times, the paper angered Altman enough that he lobbied for Toner’s removal from the board of directors.


In its initial statement announcing Altman’s dismissal, the OpenAI board of directors said it no longer had confidence in his ability to lead the company.


The board also noted that OpenAI, created as a non-profit organization in 2015, had been “structured to advance our mission”, which consists of ensuring that artificial general intelligence (AGI) benefits all of humanity. “The board remains fully committed to serving this mission… we believe that new leadership is needed to move forward,” the statement read.

The future of AI is in the hands of a very small group of actors


The company was restructured in 2019 so it could raise capital to pursue its mission, while “preserving” the non-profit’s governance and oversight. “Although the company has experienced dramatic growth, the fundamental governance responsibility of the board remains to advance OpenAI’s mission and preserve the principles of its charter,” the statement added.


Because the board, including Toner and Sutskever (both reportedly concerned that Altman favored expansion at the expense of AI safety), chose to remain largely silent about the reasons behind Altman’s dismissal, speculation multiplied on social media.


As accounts of tensions between Altman and the board multiplied, most observers concluded that the dispute most likely pitted AI safety against entrepreneurial profit. And therein lies the crux of the problem: these are still assumptions and speculation, because there is simply not enough information, if any, about the OpenAI board’s real concerns.


What facts did Altman omit or lie about? Is OpenAI’s research and development now close to artificial general intelligence (AGI)? Was the board unsure that the “whole of humanity” is ready for it? Should the general public and governments be worried as well?


If one thing has become even clearer in the past week, it is that the future of AI lies largely in the hands of a very small group of market players.

Big Tech has the resources to determine the impact of AI on society

Big Tech players have the resources needed to determine AI’s impact on society. Yet this technological elite represents only a tiny fraction of the population.


In the space of a few days, this elite managed to engineer Altman’s ouster, his hiring at Microsoft (albeit briefly), the potential transfer of almost the entire OpenAI workforce to another major market player, and, ultimately, Altman’s reinstatement.

And all of this was done without explaining why he was fired, and without confirming or refuting concerns that AI safety had been deprioritized in favor of OpenAI’s profits.


It has also been reported that the new OpenAI board will launch an investigation into the reasons behind Altman’s dismissal. But it will be an internal investigation.


Practicing what they preach on AI transparency


Transparency is essential to the development and adoption of any AI, whether generative, AGI, or otherwise. Transparency is the foundation of trust, and most agree that AI must be built on trust if it is to be accepted by humans.


Large technology companies also preach the importance of transparency in the implementation of responsible and ethical AI.


And where that transparency is lacking, it must be mandated by regulation. We need legislation that does not seek to stifle AI innovation in the market, but that focuses on requiring transparency in how that innovation is developed and advanced.


The OpenAI case should teach governments something about how the development of AI ought to be governed. It has also shown how complex overseeing AI can be, even when its development is anchored in a non-profit corporate structure.


The fact that a key employee had to resign in order to speak freely about the risks of AI makes clear that market players are unlikely to be fully transparent about their development work, even when they have committed to being so.


This underlines the need for strong governance to hold them to that commitment, and the urgency of putting such governance in place.


Lawmakers will have to act quickly. The Bletchley Declaration on AI safety, developed under the auspices of the United Kingdom, is a significant step forward. Twenty-eight countries, including China, the United States, Singapore, and the European Union, have agreed to collaborate on identifying and managing potential risks from “frontier” AI. The multilateral agreement reflects the signatories’ recognition of the “urgent need” to ensure that AI is developed and deployed in a “safe and responsible” way for the benefit of the global community.


The United Nations has also announced the creation of an advisory body to review the international governance of AI and mitigate potential risks, pledging a “globally inclusive” approach.


I hope the people in these organizations and governments are taking notes on the OpenAI case. The debate is no longer just about which country will dominate the AI race, but also about whether big tech companies will put the necessary safeguards in place.


Source: ZDNet.com
