Jensen Huang says he'd be "very concerned" if an engineer earning $500,000 a year spent less than $250,000 on AI tokens. Andrej Karpathy says "the name of the game is tokens." Meta's CTO calls heavy AI spending a "no-brainer."
Silicon Valley has a new status metric: how many tokens you burn.
But wait — does spending more tokens actually make you a better engineer?
What is Tokenmaxxing?
Tokenmaxxing is the practice of treating AI token consumption as a benchmark for productivity. The idea: the more you use AI tools (Cursor, Claude, Copilot…), the more "AI-native" you are — and therefore, the more productive.
Companies are now tracking this. Meta reportedly built an internal leaderboard showing which engineers use the most AI. Databricks' CEO publicly praised an engineer who spent $7,000 on tokens in two weeks. The signal being sent: high token usage = high performer.
The Logic Behind It (And Why It's Not Crazy)
To be fair, the reasoning isn't completely wrong.
If you're a developer who never uses AI assistance, you're likely writing code more slowly, spending more time on boilerplate, and doing more manual research. In 2024–2025, not using AI tools at all puts you at a genuine productivity disadvantage.
And there's a real correlation: engineers who are comfortable using AI tend to delegate more tasks to it — code review, test generation, documentation, debugging — which does compound into real output gains.
So token usage as a rough proxy for AI adoption? Not unreasonable.
But Here's Where It Breaks Down
Token count is an input metric, not an output metric.
Consider these two engineers:
| | Engineer A | Engineer B |
|---|---|---|
| Token usage | 10,000/day | 500/day |
| What they're doing | Asking AI to rewrite the same buggy function 20 times | Asking AI one precise question, implementing the fix |
| Actual output | 1 half-working feature | 3 shipped features |
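The gap is easier to see if you normalize input by output. A toy sketch, using only the (illustrative) numbers from the table above; real tracking would pull from billing and issue-tracker data:

```python
# Toy comparison: token usage vs. shipped output. All numbers are the
# hypothetical figures from the table above, not real measurements.
engineers = {
    "A": {"tokens_per_day": 10_000, "features_shipped": 1},
    "B": {"tokens_per_day": 500, "features_shipped": 3},
}

SPRINT_DAYS = 14  # assume a two-week sprint

for name, stats in engineers.items():
    # Input metric alone says A is 20x more "AI-native" than B.
    # Dividing by output tells the opposite story.
    tokens_per_feature = stats["tokens_per_day"] * SPRINT_DAYS / stats["features_shipped"]
    print(f"Engineer {name}: {tokens_per_feature:,.0f} tokens per shipped feature")
```

By this (admittedly crude) ratio, Engineer A burns roughly 60x more tokens per delivered feature than Engineer B.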
Engineer A is "tokenmaxxing." Engineer B is being effective.
High token usage can mean:
- ✅ Deep AI collaboration on complex problems
- ❌ Poor prompting that requires many retries
- ❌ Using AI as a crutch instead of understanding the problem
- ❌ Generating code you don't understand and can't maintain
The metric measures consumption, not judgment.
Real Examples Where More Tokens ≠ Better Outcome
1. The Vibe Coding Trap
A developer uses Cursor Agent to build an entire feature end-to-end, accepting every diff without reading it. Token usage: massive. Code quality: unmaintainable spaghetti that breaks in production three weeks later.
2. The Prompt Loop
An engineer doesn't understand why their auth middleware isn't working. Instead of reading the docs, they paste the error into Claude 15 times with slight variations. Each attempt generates 2,000 tokens. None of them work because the real issue is a misconfigured environment variable.
3. The Documentation Dump
Someone asks AI to "write documentation for this entire codebase." AI generates 40 pages of auto-generated prose. Nobody reads it. Tokens burned: enormous. Value created: near zero.
What Should You Actually Measure?
If token usage is a flawed proxy, what's better?
- Shipped features per sprint — did AI help you actually deliver more?
- Bug rate in AI-assisted code — are you reviewing what AI produces?
- Time-to-first-working-version — is AI compressing your feedback loop?
- Code review pass rate — does your team trust what you ship?
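If you wanted to operationalize metrics like these, it takes only a few lines. A minimal sketch; every field name and number below is invented for illustration, and in practice you'd populate it from your issue tracker and CI:

```python
from dataclasses import dataclass

@dataclass
class SprintStats:
    # Hypothetical per-engineer, per-sprint data -- adapt field names
    # to whatever your tooling actually exposes.
    features_shipped: int
    bugs_in_ai_assisted_code: int
    hours_to_first_working_version: float
    reviews_passed: int
    reviews_total: int

def review_pass_rate(s: SprintStats) -> float:
    """Output metric: does the team trust what this engineer ships?"""
    return s.reviews_passed / s.reviews_total if s.reviews_total else 0.0

# Example sprint for one engineer (made-up numbers).
sprint = SprintStats(
    features_shipped=3,
    bugs_in_ai_assisted_code=1,
    hours_to_first_working_version=4.5,
    reviews_passed=9,
    reviews_total=10,
)
print(f"Review pass rate: {review_pass_rate(sprint):.0%}")  # prints "Review pass rate: 90%"
```

Note that none of these fields mention tokens at all: the point is to measure what was delivered, and let AI usage be a means rather than the score.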
The best AI-native engineers aren't the ones burning the most tokens. They're the ones who've learned when to use AI and when not to — and that judgment is invisible to a token leaderboard.
The Uncomfortable Truth About Leaderboards
When you measure token usage and make it visible, you create an incentive to maximize the metric — not the outcome.
Engineers will:
- Use AI for tasks they could do faster manually
- Generate more output than they can review
- Optimize for looking AI-native rather than being effective
This is Goodhart's Law applied to AI adoption: when a measure becomes a target, it ceases to be a good measure.
Conclusion: Use AI More — But Smarter
The CEO quotes aren't wrong in spirit. Engineers who are afraid to use AI, who treat it as a threat rather than a tool, will fall behind. That's real.
But "use more tokens" is not the same as "use AI well."
The engineers who will actually win in the next 5 years aren't the ones with the highest token count. They're the ones who've developed taste — knowing which problems are worth delegating to AI and which ones require human judgment.
Burn tokens. But know why you're burning them.