AI Engineering · llm-rankings · ai-models · openrouter

Top 10 LLMs of March 2026 — Usage Rankings & Pricing

By AI ChangeLog · March 1, 2026 · 9 min read
[Rankings snapshot — March 2026: 100+ models tracked · Top model: Claude Sonnet 4.6 · New this month: GPT-5.4 · Biggest mover: Gemini 3 Flash (↑2)]

Welcome to the March edition of our monthly LLM rankings. Each month, we pull real-world usage data from OpenRouter — a platform routing millions of API calls across every major LLM — and rank the models developers are actually paying for.

Not benchmarks. Not press releases. Real tokens, real money, real usage.
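As a sketch of what that aggregation looks like: the record shape below is purely illustrative (it is not OpenRouter's actual schema), but ranking models by total dollars spent is the idea behind the table that follows.

```python
from collections import defaultdict

# Hypothetical usage records of the kind a routing platform could expose:
# (model, input_tokens, output_tokens, cost_usd). Field layout is illustrative.
records = [
    ("anthropic/claude-sonnet-4.6", 1_200_000, 150_000, 6.75),
    ("openai/gpt-5.4", 900_000, 120_000, 3.00),
    ("google/gemini-3-flash", 5_000_000, 800_000, 0.82),
    ("anthropic/claude-sonnet-4.6", 800_000, 90_000, 3.75),
]

def rank_by_spend(rows):
    """Aggregate cost per model and rank by total dollars spent."""
    totals = defaultdict(float)
    for model, _in_tok, _out_tok, cost in rows:
        totals[model] += cost
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for pos, (model, spend) in enumerate(rank_by_spend(records), start=1):
    print(f"{pos}. {model}: ${spend:.2f}")
```

Note that ranking by spend (rather than by token count) naturally weights expensive flagship traffic and cheap bulk traffic differently; a token-count ranking would reorder this list.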

The Top 10 — March 2026

| Rank | Model | Provider | Move | Input ($/1K tokens) | Output ($/1K tokens) |
|---|---|---|---|---|---|
| 1 | Claude Sonnet 4.6 | Anthropic | ↑1 | $0.003 | $0.015 |
| 2 | Claude 3.5 Sonnet | Anthropic | ↓1 | $0.003 | $0.015 |
| 3 | GPT-5.4 | OpenAI | 🆕 | $0.002 | $0.01 |
| 4 | Gemini 3 Flash | Google | ↑2 | $0.0001 | $0.0004 |
| 5 | GPT-5.2 | OpenAI | | $0.00175 | $0.014 |
| 6 | DeepSeek V3 | DeepSeek | | $0.0003 | $0.0009 |
| 7 | Gemini 3.1 Pro | Google | 🆕 | $0.00125 | $0.005 |
| 8 | Claude Opus 4.6 | Anthropic | | $0.015 | $0.075 |
| 9 | GPT-5.4 Mini | OpenAI | 🆕 | $0.0003 | $0.0012 |
| 10 | DeepSeek R1 | DeepSeek | | $0.0006 | $0.002 |
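To turn these per-1K rates into a bill, multiply each side of your traffic by its rate. A minimal estimator with the table's prices hardcoded (the sample workload is just for illustration, not fetched live):

```python
# Input/output price per 1K tokens, taken from the March 2026 table above.
PRICES = {
    "Claude Sonnet 4.6": (0.003, 0.015),
    "GPT-5.4":           (0.002, 0.01),
    "Gemini 3 Flash":    (0.0001, 0.0004),
    "GPT-5.4 Mini":      (0.0003, 0.0012),
    "DeepSeek V3":       (0.0003, 0.0009),
}

def monthly_cost(model, input_tokens, output_tokens):
    """Estimated USD cost for a month's traffic on one model."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

# Example workload: 10M input + 2M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 10_000_000, 2_000_000):,.2f}")
```

On that workload, the spread is stark: roughly $60/month on Claude Sonnet 4.6 versus under $2 on Gemini 3 Flash, which is the whole "own the floor" story in one number.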

New This Month

GPT-5.4 (Feb 20) — OpenAI's spring flagship. 500K context window, native image understanding, and a step-change in reasoning that finally closes the gap with Claude on coding tasks. At $0.002/1K input, it's aggressively priced. The full family launched simultaneously: base, Pro ($0.01/$0.05), and Mini ($0.0003/$0.0012), plus a limited-access enterprise tier of 5.4 Pro.

GPT-5.4 Mini (Feb 20) — The price fighter. At $0.0003 input, it directly undercuts DeepSeek V3 while matching GPT-5.2 quality. This model is going to eat an enormous amount of low-cost inference volume.

Gemini 3.1 Pro (Feb 18) — Google's quiet upgrade. Improved multilingual performance, better structured output, and a 2M token context window that makes it the go-to for massive document processing. Slots in at #7 with room to grow.

Grok 4.20 (Feb 25) — xAI's latest didn't crack the top 10 but deserves a mention. Strong reasoning, real-time web access, and no content restrictions make it popular for specific use cases. Sitting at #12 with growing adoption.

The Story

The crown passes. Claude Sonnet 4.6 officially overtakes Claude 3.5 Sonnet to claim the #1 spot. It's the first time since mid-2025 that the top model has changed. The transition was remarkably smooth — same pricing, better capabilities, and Anthropic's migration tools made the switch painless for most developers.

But the real story is OpenAI's comeback. GPT-5.4 launched with four variants at once, and three of them appear in this month's rankings. OpenAI now holds positions #3, #5, and #9. Their strategy is clear: bracket the market. GPT-5.4 Pro for enterprises that need maximum quality. GPT-5.4 base for the mid-tier. GPT-5.4 Mini for price-sensitive, high-volume workloads. It's aggressive, and it's working.

Gemini 3 Flash continues its relentless climb — up two spots to #4. Google isn't trying to win the quality crown; they're trying to own the floor. At $0.0001/1K tokens, Flash is essentially free at scale. Combined with Gemini 3.1 Pro at #7, Google has a complete product lineup for the first time.

Claude 3.5 Sonnet drops to #2 but retains massive usage. Legacy integrations, corporate contracts, and risk-averse teams will keep it in the top 5 for months. But the trend is clear.

Market Share

| Provider | Share | Trend |
|---|---|---|
| Anthropic | ~32% | Stable |
| OpenAI | ~24% | Recovering |
| Google | ~22% | Growing |
| DeepSeek | ~10% | Stable |
| Alibaba | ~4% | Stable |
| Meta | ~3% | Declining |
| xAI | ~2% | New entry |
| Mistral | ~2% | Declining |
| Others | ~1% | |

The Price War in Numbers

The cost of intelligence is falling off a cliff. Here's what $1 buys you in March 2026 versus January:

| Model Tier | Jan 2026 (per $1 input) | Mar 2026 (per $1 input) | Change |
|---|---|---|---|
| Flagship | 200K tokens (GPT-4o) | 333K tokens (Claude Sonnet 4.6) | +67% |
| Mid-tier | 571K tokens (GPT-5.2) | 500K tokens (GPT-5.4) | -12%* |
| Budget | 3.3M tokens (DeepSeek V3) | 3.3M tokens (GPT-5.4 Mini) | Equal |
| Speed | 10M tokens (Gemini Flash) | 10M tokens (Gemini 3 Flash) | Equal |

*GPT-5.4 is slightly more expensive than 5.2 but significantly more capable — the value per token actually improved.
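The tokens-per-dollar figures in the table are just the reciprocal of the input price; a quick check of the arithmetic:

```python
def tokens_per_dollar(price_per_1k):
    """How many input tokens $1 buys at a given price per 1K tokens."""
    return round(1000 / price_per_1k)

# Input prices per 1K tokens from this month's rankings.
print(tokens_per_dollar(0.003))    # Claude Sonnet 4.6 -> ~333K
print(tokens_per_dollar(0.002))    # GPT-5.4           -> 500K
print(tokens_per_dollar(0.0003))   # GPT-5.4 Mini      -> ~3.3M
print(tokens_per_dollar(0.0001))   # Gemini 3 Flash    -> 10M
```

The flagship "+67%" in the table falls out the same way: 333K / 200K ≈ 1.67, i.e. $1 now buys about two-thirds more flagship tokens than it did in January.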

Looking Ahead

April should be quieter after March's avalanche. Watch for Claude's response to GPT-5.4 — Anthropic rarely lets a competitive gap persist for more than one cycle. DeepSeek V4 rumors continue to build. And Meta's Llama 4 is overdue — the open-source champion hasn't shipped a competitive model in months.


Monthly LLM rankings by Demand Signals, sourced from OpenRouter usage data. Subscribe to our blog for monthly updates.


