OpenAI launched GPT-5.5 on April 23, 2026, and for the first time in the company’s history, the launch did not center on chat. The model rolled out to ChatGPT Plus, Pro, Business, and Enterprise subscribers, with API access available the following day on April 24. President Greg Brockman, in a press briefing accompanying the release, called GPT-5.5 “a new class of intelligence” and described it as “a big step towards more agentic and intuitive computing,” framing the model as bringing OpenAI “one step closer to the creation of OpenAI’s super app.” The framing is unmistakable: this is a model built for autonomous work, not conversation.
The headline benchmark is Terminal-Bench 2.0, where GPT-5.5 achieves 82.7% accuracy, which OpenAI describes as state of the art for agentic coding. Terminal-Bench measures how well a model can autonomously complete real software engineering tasks in a terminal environment, including writing code, running tests, debugging failures, and iterating without human intervention. State of the art on this benchmark matters because it reflects the kind of work where AI agents are starting to deliver durable economic value: not answering questions but actually completing engineering tasks end to end.
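The loop Terminal-Bench exercises can be sketched in a few lines: run a command, check the exit code, feed the failure log back to the model, and repeat until the tests pass. This is an illustrative harness, not Terminal-Bench itself; `propose_fix` stands in for whatever mechanism lets the model edit the code.

```python
import subprocess

def run_step(command: str) -> tuple[int, str]:
    """Run one shell command, as a Terminal-Bench-style harness would."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    return result.returncode, result.stdout + result.stderr

def agent_loop(propose_fix, test_command: str, max_iterations: int = 5) -> bool:
    """Iterate: run the tests, feed failures back to the model, stop on success."""
    for _ in range(max_iterations):
        code, output = run_step(test_command)
        if code == 0:
            return True       # tests pass: task complete
        propose_fix(output)   # failure log becomes context for the next edit
    return False

# Toy usage: a "fix" that creates the file the test command expects.
if __name__ == "__main__":
    print(agent_loop(lambda log: open("ok.txt", "w").write("done"),
                     "test -f ok.txt"))
```

The point of the benchmark is exactly this shape of work: the model is graded on whether the loop terminates with passing tests, not on the quality of any single answer.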
Codex, OpenAI’s agentic coding application, is now powered by GPT-5.5 and runs on NVIDIA GB200 NVL72 rack-scale systems. The infrastructure choice is notable. NVIDIA’s GB200 NVL72 represents the highest-density compute platform currently shipping, designed specifically for trillion-parameter model inference at production scale. Pairing GPT-5.5 with this infrastructure allows OpenAI to handle the long-running, tool-heavy workloads that characterize agentic coding without the latency penalties that plagued earlier attempts at agent products.
OpenAI describes the gains as especially strong in four domains: agentic coding, computer use, knowledge work, and early scientific research. Each of these represents a direct competitive thrust against Anthropic, which has dominated the enterprise coding category through Claude Code. By leading with Codex updates rather than ChatGPT improvements, OpenAI is signaling that the company recognizes where the revenue is going. Enterprise customers paying for AI coding tools spend dramatically more per seat than consumer chat subscribers, and the addressable market scales with developer headcount globally.
OpenAI has left API pricing for GPT-5.5 unchanged, which means existing customers can upgrade with minimal procurement friction. Same latency, same pricing, better results, and lower token consumption per task. For teams already committed to OpenAI's stack, GPT-5.5 is effectively a free upgrade. For teams evaluating alternatives, it raises the bar that competitors need to clear.
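In practice, a drop-in upgrade of this kind usually amounts to changing one model string while the rest of the request payload stays identical. A minimal sketch of that payload shape; the model ids `gpt-5.1` and `gpt-5.5` are illustrative assumptions, and the actual client call (e.g. via the OpenAI SDK) is omitted:

```python
def build_request(prompt: str, model: str = "gpt-5.5") -> dict:
    """Build the kwargs for a chat-completion-style API call."""
    return {
        "model": model,  # assumed model ids, for illustration only
        "messages": [{"role": "user", "content": prompt}],
    }

old = build_request("Refactor this function.", model="gpt-5.1")  # previous model
new = build_request("Refactor this function.")                   # GPT-5.5

# Only the model field differs; the rest of the payload is identical.
assert {k: v for k, v in old.items() if k != "model"} == \
       {k: v for k, v in new.items() if k != "model"}
```

That one-field diff is what "minimal procurement friction" means operationally: no new endpoints, no new request schema, no re-integration work.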
The positioning shift away from chat is the most strategically significant element of this launch. ChatGPT remains OpenAI’s most visible product, but the company has clearly concluded that the future of AI revenue is not in better conversational interfaces. It’s in agents that complete work autonomously and bill for that work in volume. Greg Brockman’s super app framing reinforces this: OpenAI is building toward a single product that handles coding, research, document creation, and computer operation as one integrated agentic experience, rather than a chatbot that helps a human do those things.
For enterprise buyers in LATAM and globally, GPT-5.5’s launch creates a sharper choice between OpenAI and Anthropic for agentic coding workloads. Both companies now have flagship models specifically tuned for long-running, tool-heavy tasks. Both run on premium infrastructure (NVIDIA GB200 for OpenAI, Google TPUs for Anthropic). The differentiation increasingly comes down to enterprise integration, security posture, and pricing rather than raw capability. For developers and engineering leaders, the practical takeaway is that 2026 will be the year agentic coding moves from experimental to production. Choosing the right vendor will matter more than it did when AI merely helped people write code instead of writing it autonomously.