OpenAI introduces ChatGPT Pro $100 tier with 5X usage limits for Codex compared to Plus

OpenAI is moving to court more developers and vibe coders (those who build software using AI models and natural language) away from rivals like Anthropic.

Today, the firm arguably most synonymous with the generative AI boom announced a new mid-range subscription tier, a $100 ChatGPT Pro plan, which joins its free, Go ($8 monthly), Plus ($20 monthly) and existing Pro ($200 monthly) plans for individuals using ChatGPT and related OpenAI products.

OpenAI also currently offers Edu, Business ($25 per user monthly, formerly known as Team) and Enterprise (variably priced) plans for organizations.

Why offer a $100 monthly ChatGPT Pro plan?

So why introduce a new $100 ChatGPT Pro plan, then?

The big selling point from OpenAI is that the new plan offers five times greater usage limits on Codex, the company's agentic vibe coding application/harness (the name is shared by both, as well as a lineup of coding-specific language models), than the existing $20 monthly Plus plan, which seems fair given the math ($20 × 5 = $100).

As OpenAI co-founder and CEO Sam Altman wrote in a post on X: "It is very nice to see Codex getting so much love. We are launching a $100 ChatGPT Pro tier by very popular demand."

However, alongside this, OpenAI's official company account on X noted that "we’re rebalancing Codex usage in [ChatGPT] Plus to support more sessions throughout the week, rather than longer sessions in a single day."

That sounds a lot like OpenAI is simultaneously reducing how much ChatGPT Plus users can use its Codex harness and application per day.

What are the new usage limits for the new $100 ChatGPT Pro plan vs. the $20 Plus?

So, what are the current limits on the $20 Plus plan? The new Pro plan gives you 5X greater usage than… what?

Turns out, this is trickier than you'd think to calculate, because it actually varies depending on which underlying AI model you are using to power the Codex application or harness, and whether you are working on code stored in the cloud or locally on your machine or servers.

OpenAI’s Developer website underwent several updates today, so we've reflected only the latest pricing structure and offerings below, as of Thursday, April, at 10:45 pm ET. It notes that for individual users, Codex usage is categorized by “Local Messages” (tasks run on the user’s machine) and “Cloud Tasks” (tasks run on OpenAI’s infrastructure), and that those limits share a five-hour rolling window.

It also says additional weekly limits may apply. The current Codex pricing page now shows lower displayed usage ranges than the older version, and it measures Code Reviews in a five-hour window rather than per week. For Pro 5x specifically, OpenAI says the currently shown limits include a temporary 2x usage boost that ends May 31, 2026.
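For readers unfamiliar with the mechanics, a "five-hour rolling window" means each task counts against your quota for the five hours after it runs, rather than resetting at a fixed daily time. The sketch below illustrates the general idea; the class name, counting logic, and toy numbers are illustrative assumptions, not OpenAI's actual implementation.

```python
from collections import deque
import time

# Illustrative sketch of a rolling usage window, as described on
# OpenAI's developer site. All names and numbers here are assumptions.

WINDOW_SECONDS = 5 * 60 * 60  # five hours


class RollingQuota:
    def __init__(self, limit, window=WINDOW_SECONDS):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps of counted tasks

    def _evict(self, now):
        # Drop events older than the rolling window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()

    def try_consume(self, now=None):
        now = time.time() if now is None else now
        self._evict(now)
        if len(self.events) >= self.limit:
            return False  # quota exhausted until older tasks age out
        self.events.append(now)
        return True


# Toy demo: a limit of 3 tasks per 10-second window.
quota = RollingQuota(limit=3, window=10)
assert quota.try_consume(now=0)
assert quota.try_consume(now=1)
assert quota.try_consume(now=2)
assert not quota.try_consume(now=3)  # fourth task inside the window is blocked
assert quota.try_consume(now=11)     # the t=0 task has aged out, capacity frees up
```

Under this model, capacity is recovered gradually as old tasks fall out of the window, which is why OpenAI can talk about supporting "more sessions throughout the week, rather than longer sessions in a single day."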

ChatGPT Plus ($20/month)

GPT-5.4: 20–100 local messages every 5 hours.

GPT-5.4-mini: 60–350 local messages every 5 hours.

GPT-5.3-Codex: 30–150 local messages and 10–60 cloud tasks every 5 hours.

Code Reviews: 20–50 every 5 hours.

ChatGPT Pro 5x ($100/month)

GPT-5.4: 200–1,000 local messages every 5 hours.

GPT-5.4-mini: 600–3,500 local messages every 5 hours.

GPT-5.3-Codex: 300–1,500 local messages and 100–600 cloud tasks every 5 hours.

Code Reviews: 200–500 every 5 hours.

Note: The limits shown for Pro 5x include a temporary 2x usage boost that ends May 31, 2026.

ChatGPT Pro 20x ($200/month)

GPT-5.4: 400–2,000 local messages every 5 hours.

GPT-5.4-mini: 1,200–7,000 local messages every 5 hours.

GPT-5.3-Codex: 600–3,000 local messages and 200–1,200 cloud tasks every 5 hours.

Code Reviews: 400–1,000 every 5 hours.

Exclusive access: Includes GPT-5.3-Codex-Spark in research preview for ChatGPT Pro users only. OpenAI says it has its own separate usage limit, which may adjust based on demand.
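One way to sanity-check the published figures: every Pro range above is an exact multiple of the Plus baseline. Pro 5x shows 10x the Plus numbers (5x tier × the temporary 2x boost), and Pro 20x shows exactly 20x. A quick script, using only the ranges listed above, confirms this:

```python
# Cross-check the published Codex limits against the Plus baseline.
# Ranges are (low, high) per five-hour window, copied from the article.
plus = {
    "GPT-5.4 local": (20, 100),
    "GPT-5.4-mini local": (60, 350),
    "GPT-5.3-Codex local": (30, 150),
    "GPT-5.3-Codex cloud": (10, 60),
    "Code Reviews": (20, 50),
}
pro5x = {
    "GPT-5.4 local": (200, 1000),
    "GPT-5.4-mini local": (600, 3500),
    "GPT-5.3-Codex local": (300, 1500),
    "GPT-5.3-Codex cloud": (100, 600),
    "Code Reviews": (200, 500),
}
pro20x = {
    "GPT-5.4 local": (400, 2000),
    "GPT-5.4-mini local": (1200, 7000),
    "GPT-5.3-Codex local": (600, 3000),
    "GPT-5.3-Codex cloud": (200, 1200),
    "Code Reviews": (400, 1000),
}

for category, (low, high) in plus.items():
    # Pro 5x = Plus x 5, doubled by the temporary 2x boost through May 31, 2026.
    assert pro5x[category] == (low * 10, high * 10)
    # Pro 20x = Plus x 20, with no boost applied.
    assert pro20x[category] == (low * 20, high * 20)

print("All published ranges are exact multiples of the Plus baseline.")
```

In other words, once the 2x boost expires, the Pro 5x figures would presumably halve to a clean 5x over Plus, matching the tier's name and pricing math.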

To recap: OpenAI has revised the Codex usage table downward from the older numbers it previously showed, and it now lists Code Review capacity in the same five-hour framework used for local messages and cloud tasks, rather than as weekly pull-request limits. For Pro 5x users, the displayed limits are temporarily elevated by the 2x boost through May 31, 2026, so those figures may be reduced after that date.

And as OpenAI's Help documentation states:

"The number of Codex messages you can send within these limits varies based on the size and complexity of your coding tasks, and where you execute tasks. Small scripts or simple functions may only consume a fraction of your allowance, while larger codebases, long running tasks, or extended sessions that require Codex to hold more context will use significantly more per message."

The larger strategic implications and context

OpenAI’s sudden move toward the $100 price point and expanded agentic capacity comes amid the unprecedented financial ascent of its chief rival, Anthropic.

Just days ago, Anthropic revealed its annualized run-rate revenue (ARR) has topped $30 billion, surpassing OpenAI's last reported ARR of approximately $24–$25 billion.

This growth has been fueled by the massive adoption of Claude Code and Claude Cowork, products that have set the benchmark for enterprise-grade autonomous coding.

The competitive friction intensified on April 4, 2026, when Anthropic officially blocked Claude subscriptions from being used to provide the intelligence for third-party agentic AI harnesses like OpenClaw.

To be clear, Anthropic's Claude models themselves can still be used with OpenClaw; users must now simply pay for access through Anthropic's application programming interface (API) or extra usage credits, rather than as part of the monthly Claude subscription tiers. (Some have likened those tiers to an "all-you-can-eat" buffet, making the economics challenging for Anthropic when power users and third-party harnesses like OpenClaw consume more in tokens than the $20 or $200 users spend monthly on the plans.)

OpenClaw’s creator, Peter Steinberger, was notably hired by OpenAI in February 2026 to lead its personal agent strategy, and has, since joining, actively spoken out against Anthropic's limitations, noting that OpenAI's Codex and models generally don't have the same restrictions Anthropic is now imposing.

By hiring Steinberger and subsequently launching a Pro tier that provides the high-volume capacity Anthropic recently restricted, OpenAI is effectively courting the displaced OpenClaw community to reclaim the professional developer market.


