I can help with the latest on Claude Opus 4.7 xhigh, but I don't have real-time access to live feeds in this turn. Here's what was widely reported about Opus 4.7 and the xhigh tier as of mid-April through May 2026:
Key points
- Opus 4.7 is the latest major Claude Opus release from Anthropic. It introduces an "xhigh" effort level, reported to sit between high and max, intended to give deeper reasoning on tasks that warrant it while keeping cost and latency reasonable elsewhere (a minimal request sketch follows this list).[3][4]
- Visual capabilities are expanded in Opus 4.7: early coverage reports image input support up to roughly 2,576 pixels on the long edge, about triple the previous limit for some model variants.[2][3]
- Early community discussion suggested notable improvements on certain multimodal benchmarks, with claims of higher accuracy on vision-heavy tasks and better handling of dense visual inputs.[6][3]
- Industry reactions vary: some outlets highlight substantial gains in coding and multimodal tasks, while others (including social media and analyst commentary) note that real-world gains depend on the task, prompt design, and budget, since token and compute costs rise at higher effort levels.[4][5][10]
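For orientation, here is a minimal sketch of what selecting the higher effort tier might look like from the Anthropic Python SDK. The model id "claude-opus-4-7" and the "effort" field are assumptions inferred from the coverage above, not a confirmed API surface, so verify both against Anthropic's current documentation before relying on them.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-7",  # hypothetical model id, not confirmed
    max_tokens=4096,
    # extra_body forwards fields the SDK doesn't model natively; the
    # "effort" selector is an assumption based on reporting, not docs.
    extra_body={"effort": "xhigh"},
    messages=[{"role": "user", "content": "Review this module for race conditions."}],
)
print(response.content[0].text)
```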
What “xhigh” typically implies
- More internal thinking steps for deeper problem solving, traded against higher per-task cost and longer runtimes on some tasks.[5][3]
- A shift in when to use the higher tier: for complex engineering, code generation, or tasks requiring careful step-by-step reasoning, xhigh is intended to be more capable than the standard high setting.[3][5]
- Potentially larger output token counts at higher effort, reflecting more thorough reasoning and expanded search/decomposition within a task; the timing sketch after this list shows one way to measure both the token and latency effects.[3]
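To see how output volume and latency actually move between tiers on your own prompts, a rough timing harness is usually enough for first numbers. The sketch below reuses the same assumed model id and effort field as above; the usage accounting (`resp.usage.output_tokens`) is a documented part of the SDK's response object.

```python
import time

import anthropic

client = anthropic.Anthropic()
PROMPT = [{"role": "user", "content": "Plan a migration of a 50-table schema to Postgres."}]

# Compare output volume and wall-clock latency across effort tiers.
for effort in ("high", "xhigh"):  # tier names assumed from the coverage above
    start = time.perf_counter()
    resp = client.messages.create(
        model="claude-opus-4-7",        # hypothetical model id
        max_tokens=8192,
        extra_body={"effort": effort},  # assumed parameter, see earlier note
        messages=PROMPT,
    )
    elapsed = time.perf_counter() - start
    print(f"{effort:>5}: {resp.usage.output_tokens} output tokens, {elapsed:.1f}s")
```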
Cautions and mixed opinions
- Some reports question whether the gains justify the cost in every scenario, noting that existing 4.x tiers already cover practical needs for some workloads and that moving to xhigh may yield diminishing returns depending on the prompt and task.[7][10]
- Migration and evaluation are recommended for production environments: re-run evals, recheck budgets, and adjust prompts when upgrading to Opus 4.7, since model behavior and costs can change with the new tier and tokenizer adjustments; a minimal eval loop is sketched below.[5][3]
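A lightweight re-evaluation loop makes that migration advice concrete. In the sketch below, the model id and effort field are the same assumptions as earlier, the per-million-token prices are placeholders to be replaced with your actual rates, and each task supplies its own grading callable; none of these names or numbers come from Anthropic.

```python
import anthropic

client = anthropic.Anthropic()

# Placeholder per-million-token prices; substitute your contracted rates.
PRICE_IN, PRICE_OUT = 5.00, 25.00

def run_eval(model: str, effort: str, tasks: list[dict]) -> None:
    """Re-run a small eval set, reporting pass rate and mean cost per task."""
    passed, cost = 0, 0.0
    for task in tasks:
        resp = client.messages.create(
            model=model,                    # e.g. the hypothetical "claude-opus-4-7"
            max_tokens=4096,
            extra_body={"effort": effort},  # assumed parameter, see earlier note
            messages=[{"role": "user", "content": task["prompt"]}],
        )
        answer = resp.content[0].text
        passed += bool(task["check"](answer))  # user-supplied grading callable
        cost += (resp.usage.input_tokens * PRICE_IN
                 + resp.usage.output_tokens * PRICE_OUT) / 1e6
    n = len(tasks)
    print(f"{model}/{effort}: {passed}/{n} pass, ${cost / n:.2f}/task")
```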
Example takeaway
- If your use case involves heavy multimodal analysis or intricate code tasks with long input text and large images, Opus 4.7 xhigh may offer measurable benefits, but benchmark it against Opus 4.6 to confirm that cost-per-task and latency stay acceptable for your workflow.[6][3]
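As a worked example of the kind of delta to check, the community numbers quoted under Sources below (a 29-task suite comparing the high and xhigh tiers) reduce to a quick back-of-the-envelope comparison; treat it as a single anecdote, not a verdict on the model.

```python
# Figures taken from the news.ycombinator.com snippet in Sources below.
high  = {"pass": 26, "cost": 5.01, "secs": 716}
xhigh = {"pass": 25, "cost": 6.51, "secs": 804}

cost_delta = (xhigh["cost"] - high["cost"]) / high["cost"]
time_delta = (xhigh["secs"] - high["secs"]) / high["secs"]
print(f"xhigh vs high: {cost_delta:+.0%} cost, {time_delta:+.0%} latency, "
      f"{xhigh['pass'] - high['pass']:+d} tasks passed")
# -> xhigh vs high: +30% cost, +12% latency, -1 tasks passed
```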
Would you like a concise side-by-side summary of Opus 4.7 vs 4.6 on specific dimensions (vision capability, coding tasks, latency, cost), or help drafting a short evaluation plan to test xhigh with your own prompts and datasets? I can tailor benchmarks and sample prompts to your setup, and pull the latest article links and quotes for citation if that helps.
Sources
- www.binance.com: "Opus 4.7 vs 4.6: No meaningful performance gains detected. The "xhigh" tier is completely unnecessary - tier 4 already maxes out the useful capability ceiling. Worse, users had to endure a noticeable …"
- news.ycombinator.com: "High: 26/29 pass, 12/29 equivalent, 7/29 review-pass, custom avg 2.670, $5.01/task, 716s/task; Xhigh: 25/29 pass, 11/29 equivalent, 4/29 review-pass, custom avg 2.669, $6.51/task, 804s/task"
- www.techmeme.com: By Ashley Capoot / CNBC
- www.techmeme.com: From Anthropic