AI · 6 min read · Ars Technica AI

“The problem is Sam Altman”: OpenAI insiders don’t trust their CEO

By the Pixelift editorial team

Photo: Bloomberg / Contributor

More than 100 people associated with OpenAI, including key former leaders such as Ilya Sutskever and Dario Amodei, point to one fundamental problem plaguing the AI giant: a lack of trust in Sam Altman. An investigation published by "The New Yorker" casts a shadow over the image of the CEO, whom associates describe as having an almost sociopathic tendency toward manipulation and prioritizing his own power over technological safety. Although OpenAI officially promotes a policy of "putting people first" and warns of the risks of losing control over superintelligence, internal voices suggest these declarations may merely be a smokescreen for aggressive expansion.

For global users and creators utilizing OpenAI tools, these reports have practical implications. They undermine the credibility of safety mechanisms implemented in GPT models, suggesting that the optimistic narrative of "prosperity for all" serves primarily to appease regulators and public opinion. In the face of a growing number of lawsuits and government scrutiny, the rift between ethical manifestos and Altman's actual management methods is becoming a key business risk.

Users must consider that the direction of AI development today depends not only on technical breakthroughs but, above all, on the personal ambitions of a leader whose credibility has been publicly challenged by his closest associates.

In the world of technology, it is rare for a company's official policy line and its internal leadership culture to stand in such stark contradiction. OpenAI, the organization behind the generative artificial intelligence revolution, has just published an ambitious political manifesto aimed at convincing the public that it strives to create "superintelligence that serves humanity." However, at the same time, details of an extensive investigation by "The New Yorker" are coming to light, casting a shadow on the man behind these promises. According to company insiders, Sam Altman may be the biggest obstacle to realizing a safe vision of AI.

The gap between what OpenAI declares publicly and how its CEO is perceived is becoming increasingly difficult to ignore. While the company promotes ideas such as public wealth funds or a 32-hour work week, former colleagues and board members paint a picture of a leader whose primary goal is the accumulation of power. One board member summarized Altman as a person combining two rare traits: an obsessive need to be liked and an almost sociopathic lack of concern for the consequences of misleading others.

Architecture of promises versus the reality of disinformation

The "The New Yorker" investigation, based on interviews with over 100 people and an analysis of internal notes, suggests that Sam Altman operates based on an "accumulation of alleged deceptions and manipulations." Documentation gathered by former chief scientist Ilya Sutskever and former head of research Dario Amodei indicates that these behaviors were not isolated incidents, but a systematic pattern of action. It was these observations that led Amodei to the conclusion that "the problem at OpenAI is Sam himself."

Sam Altman during a public appearance. Critics see him as a master of narrative who can tailor his arguments to the expectations of his interlocutor.

Altman defends himself against these allegations, claiming that he simply forgot some events and that shifts in his narrative reflect a rapidly evolving technological landscape. The only thing he admits to is having avoided conflict in the past. Nevertheless, for industry observers, this "flexibility" with the facts becomes problematic at a time when governments and key institutions are beginning to rely on OpenAI models in critical areas of state functioning.

It is worth noting that Sam Altman has recently changed the tone of his communication. From the position of a "savior" protecting the world from an AI apocalypse, he has moved toward "exuberant optimism." This image change coincides with growing public resistance to AI technology, fueled by concerns about child safety, job loss, and the massive energy consumption of data centers. OpenAI's new policy proposals may therefore be seen not as a real corrective plan, but as a PR smokescreen.

Industrial policy for the age of intelligence

In response to deteriorating public sentiment, OpenAI presented a set of policy recommendations it calls "industrial policy for the age of intelligence." The company proposes a series of solutions intended to mitigate the effects of the technological transformation:

  • Pilot programs for a 32-hour work week without a reduction in pay, to compensate for the productivity growth resulting from AI.
  • A public wealth fund that would provide every citizen with a share in the profits generated by the development of artificial intelligence.
  • Taxation of automated labor, the funds from which would finance social programs such as Social Security or housing assistance.
  • Investment in the care sector, encouraging people pushed out of the labor market by AI to retrain for professions related to healthcare and social work.

From the editorial perspective of Pixelift, these proposals sound extraordinarily progressive, even utopian. However, critics, including an OpenAI researcher quoted by "The New Yorker," warn: Altman has a tendency to create structures that on paper limit his power in the future, only to—when that future arrives—simply dismantle them. This calls into question the credibility of any declarations about "self-regulation" or "public-private cooperation."

Analysis of OpenAI policy: the proposals regarding public funds and worker protections are aimed at appeasing regulators and the public.

The game for dominance under the guise of safety

Analyzing OpenAI's regulatory proposals, one can see an attempt to lock down the market. The company suggests that only "companies with the most advanced models" should be subject to rigorous safety audits. While officially intended to shield smaller competitors, in practice it may mean that giants like OpenAI co-create entry barriers that make life difficult for new players, while legitimizing their own dominance under the banner of "responsibility for superintelligence."

The political context is particularly interesting. According to "The New Yorker" report, Sam Altman privately lobbies against strict AI safety regulations while publicly calling for "common-sense regulations." This dualism is key in the face of upcoming political changes in the US. If forces seeking stronger oversight of technology take control of legislation, Altman's current strategy of "persuading a skeptical public that their priorities are his priorities" may stop working.

"The problem is that Altman can convince people that he is striving for mutually exclusive goals, just to maintain the pace of development and funding," insiders note.

Currently, OpenAI is intensifying its lobbying efforts, offering research grants and API credits worth up to $1 million for projects supporting its political vision. This is a classic case of fleeing forward: trying to outrun criticism by accelerating. In an industry where some experts predict superintelligence in as little as two years, time works in favor of those who impose their narrative the fastest. However, without fundamental trust in the leader, even the most generous promises of a 32-hour work week or an AI dividend may be perceived only as an expensive marketing campaign by the "greatest huckster of his generation."

My thesis is as follows: OpenAI has entered a phase where technology has ceased to be their only product—now the product is trust. If Sam Altman cannot clear himself of allegations of manipulation and putting his own power above the safety of humanity, no policy recommendations will save the company from the growing skepticism of regulators. The AI industry stands at the threshold of a moment where operational transparency will become more important than GPT model parameters, and Altman's story shows that it is transparency that may be the most difficult challenge for OpenAI's current leadership.
