Anthropic on Thursday broadly released Claude Opus 4.7, its latest flagship model, framing it as a direct upgrade over Opus 4.6 with stronger performance in advanced software engineering, complex multistep tasks, and professional knowledge work.
The company said the model is available across Claude products and its API, as well as through Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, with pricing unchanged from Opus 4.6 at $5 per million input tokens and $25 per million output tokens.
Anthropic said Opus 4.7 improves instruction following, handles long-running tasks with greater rigor, and delivers better high-resolution vision support, allowing images up to 2,576 pixels on the long edge.
In its own testing, the company said the model posted stronger results than Opus 4.6 across coding, finance, document analysis, and agent-style workflows, while also introducing a new "xhigh" effort setting aimed at balancing reasoning depth and latency. Anthropic also warned that prompt behavior may change because Opus 4.7 follows instructions more literally than earlier Claude models.
The launch is especially notable because it comes just nine days after Anthropic introduced Claude Mythos Preview through Project Glasswing on April 7. Anthropic described Mythos Preview as its most capable model yet and said the initiative was built to help secure critical software because the model showed unusually strong cybersecurity capabilities.
In the Opus 4.7 release, Anthropic said Mythos Preview remains limited rather than broadly available, and that Opus 4.7 is the first model deployed with new safeguards designed to detect and block prohibited or high-risk cyber requests ahead of any wider Mythos-class release.
The release also lands amid a broader burst of frontier model launches from rivals. OpenAI introduced GPT 5.4 on March 5 and described it as its most capable and efficient frontier model for professional work, with state-of-the-art performance in coding, computer use, and tool search, along with a 1 million-token context window. Later in March, OpenAI also launched GPT 5.4 mini and nano as smaller, faster models optimized for coding and subagent workloads.
Google has been updating Gemini on a similarly rapid schedule. The company rolled out Gemini 3.1 Pro in February as its most advanced model for complex tasks, then followed with Gemini 3.1 Flash Lite and Gemini 3.1 Flash Live in March. On April 15, Google added Gemini 3.1 Flash TTS Preview, extending its recent push into lower-latency and multimodal use cases.