
AI News Stories of the Week

Sam Altman: “He Has a Sociopathic Indifference to the Consequences of Deceiving”

Annie Neal

Growth Advisor

The New Yorker has published what may be the most damaging profile of a technology executive since Elizabeth Holmes. An 18-month investigation by Ronan Farrow and Andrew Marantz, drawing on over 100 interviews and previously undisclosed internal documents, paints a portrait of OpenAI CEO Sam Altman as a serial deceiver whose pattern of behavior stretches back over a decade. The allegations are specific, documented, and devastating.

At the center of the report are the so-called “Ilya Memos,” approximately 70 pages of Slack messages, HR documents, and analysis compiled by OpenAI co-founder Ilya Sutskever in fall 2023. The first item on Sutskever’s list was a single word: “Lying.” According to the investigation, Sutskever documented instances where Altman “misrepresented facts to executives and board members” regarding safety protocols. The memos were sent via disappearing messages because Sutskever feared their discovery.

But the New Yorker piece goes far beyond the 2023 boardroom drama. The investigation traces a consistent pattern beginning at Loopt, Altman’s first startup, through his tenure at Y Combinator, and into OpenAI. At Y Combinator, co-founder Paul Graham is quoted as saying that Altman had been lying to the firm’s partners. Multiple YC partners confirmed that Altman was forced out, despite his public claims to the contrary.

The allegations regarding OpenAI’s safety commitments are particularly alarming. When Dario Amodei (now CEO of Anthropic) was still at OpenAI, he drafted a “merge and assist” clause as his top safety demand during Microsoft’s $1 billion investment in 2019, a provision committing OpenAI to stop competing with, and start assisting, any safety-conscious project that came closer to building AGI first. According to the report, Altman denied the provision existed, forcing Amodei to read it aloud from the contract. Amodei’s assessment was blunt: the vast majority of the charter had been betrayed.

The investigation also reveals that OpenAI’s much-publicized superalignment team, announced in 2023 with a commitment of “20% of compute secured to date” (valued at over $1 billion), received only 1 to 2 percent of actual compute resources. Four sources told the New Yorker that the team was assigned older hardware with inferior chips while superior resources went to revenue-generating products. Team leader Jan Leike called the announcement “a pretty effective retention tool.” The team was disbanded in 2024 without completing its mission.

Financial self-dealing also features prominently. The investigation documents roughly 400 personal investments by Altman in private companies. Multiple Silicon Valley investors reported that he selectively invested in the best startups while blocking outside investors. The report also details financial entanglements with numerous former romantic partners.

Perhaps most consequential for global politics, the report describes Altman’s continued pursuit of Saudi investment after the 2018 murder of journalist Jamal Khashoggi, with Altman asking advisers if he could proceed without backlash. He developed what is described as a close personal relationship with Sheikh Tahnoon of the UAE, visiting the Sheikh’s $250 million superyacht. A senior Biden Administration official stated that Altman was pushing transactional relationships that raised significant red flags.

Presented by: Dapta

For sales teams tired of cold leads, slow customer responses, and manual processes, Dapta is the ultimate tool.

Dapta is the leading platform for creating AI sales agents designed to increase inbound lead conversion. Respond to your leads in under a minute with voice AI and WhatsApp messaging that converts.

If you want your team to sell more while AI handles the complex stuff, you have to try it.

The timing of the article amplifies its impact. It was published just days after OpenAI secured a $50 billion partnership integrating its models into Amazon Web Services’ Pentagon digital infrastructure, a deal that came together after Anthropic was blacklisted by Defense Secretary Pete Hegseth for refusing to drop restrictions on autonomous weapons and domestic surveillance. The deal increased OpenAI’s valuation by $110 billion but triggered ChatGPT app deletions and employee departures.

OpenAI has disputed multiple allegations and stated that its mission has not changed. But the sheer volume of named and documented sources makes the investigation difficult to dismiss. When the New Yorker asked to interview OpenAI’s existential safety researchers, the company’s response was telling: hours after the article’s publication, OpenAI announced a new Safety Fellowship program.

The question now is whether any of this matters. OpenAI may simply be too large, and too deeply integrated into government infrastructure, for concerns about its CEO’s character to derail it. But as the New Yorker investigation makes clear, the organization that was supposed to prove AI could be built responsibly may have abandoned that mission long ago.

Link here.
