If you use Claude Code daily, you have probably had this moment: you open /usage, see a single aggregate number, and think "Where did all of that go?"
No breakdown by project. No session history. No way to tell if yesterday's refactoring marathon cost you $3 or $30. You are flying blind.
This is not a niche problem. Thousands of developers hit this wall, and two tools have emerged to solve it: ccusage and Vibenalytics. They approach the same problem from fundamentally different angles.
This post breaks down both honestly - what each does well, where each falls short, and which one actually fits your workflow. We even ran both tools against the same data to show you the numbers.
ccusage deserves its 12k stars
Let me be clear upfront: ccusage is a solid tool. 12,080 GitHub stars and 43,500 weekly npm downloads do not happen by accident. ryoppippi built something that fills a real gap, and the community responded.
What ccusage does well:
- Quick terminal reports. Run `npx ccusage daily`, get a cost breakdown. No signup, no cloud, nothing to configure.
- Six report types. Daily, weekly, monthly, session, blocks (5-hour billing windows), and statusline. The blocks command is particularly clever - it aligns with Claude's billing cycles and shows burn rate.
- Per-model cost tracking. It parses your local `~/.claude/` JSONL files and calculates costs per model using live LiteLLM pricing.
- JSON output with built-in jq. Pipe data anywhere: `ccusage daily --jq '.[] | select(.totalCost > 5)'`
- MCP server. `@ccusage/mcp` exposes daily, monthly, session, and blocks data as MCP tools.
- Multi-tool support. Companion packages for Codex, OpenCode, Pi, and Amp.
- Statusline integration. Shows real-time cost, burn rate, and context usage right in your Claude Code status bar.
- Zero external dependencies. Everything runs locally. Your data never leaves your machine.
If you are a solo developer on a single machine who wants quick cost checks in the terminal, ccusage delivers that cleanly.
So why would you need anything else?
We ran both tools against the same data
Before we get into feature differences, let's talk about the numbers. We built a benchmark script that runs both tools against the same ~/.claude/ session files and compares token counts side by side.
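If you want to reproduce this, the core of the comparison is straightforward. Here is a minimal TypeScript sketch of the matching step - the file names and JSON shapes are hypothetical stand-ins for the two tools' exports, not their actual formats:

```typescript
// Minimal sketch of the benchmark's matching step. File names and JSON
// shapes are hypothetical stand-ins for the two tools' exports, not
// their actual formats; the real script also matches per-session data.
import { readFileSync } from "node:fs";

interface ProjectTotals {
  project: string;
  inputTokens: number;
  outputTokens: number;
}

const vibe: ProjectTotals[] = JSON.parse(readFileSync("vibenalytics.json", "utf8"));
const ccu: ProjectTotals[] = JSON.parse(readFileSync("ccusage.json", "utf8"));

for (const v of vibe) {
  const c = ccu.find((p) => p.project === v.project);
  if (!c || c.inputTokens === 0) continue; // only compare matched projects
  // Relative difference (Vibenalytics vs ccusage) as a percentage.
  const diff = ((v.inputTokens - c.inputTokens) / c.inputTokens) * 100;
  console.log(`${v.project}: input tokens differ by ${diff.toFixed(1)}%`);
}
```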
The results are revealing.
The accuracy gap
Across 21 matched projects (197 Vibenalytics sessions vs ccusage's aggregated data):
| Metric | Vibenalytics | ccusage | Difference (Vibenalytics vs ccusage) |
|---|---|---|---|
| Input tokens | 555,088 | 1,297,283 | -57.2% |
| Output tokens | 3,567,024 | 3,362,050 | +6.1% |
| Cache read | 1,355,842,034 | 2,315,437,685 | -41.4% |
| Cache creation | 52,005,652 | 85,578,973 | -39.2% |
Only 2 out of 21 projects matched exactly. The remaining 19 showed mismatches, some massive.
What is going on?
The benchmark identified two distinct mismatch patterns:
Pattern A: Output-only divergence. Input and cache tokens match exactly, but output tokens differ. This happened with projects like martinvanco (+164.5% output difference) and other-tools (+3.7%). The cause: ccusage and Vibenalytics handle streaming response deduplication differently. When Claude streams output, multiple intermediate token counts are recorded. ccusage takes the last value; Vibenalytics takes the max. Small difference in logic, noticeable difference in numbers.
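To make the dedup difference concrete, here is a simplified sketch. The numbers are invented and neither snippet is either tool's actual source:

```typescript
// A streaming response logs several intermediate output-token counts
// for the same assistant message. Invented numbers, simplified logic -
// neither line is either tool's actual source.
const streamedCounts = [120, 480, 950, 890];

// ccusage-style: trust the last value recorded in the transcript.
const lastValue = streamedCounts[streamedCounts.length - 1]; // 890

// Vibenalytics-style: take the maximum value ever observed.
const maxValue = Math.max(...streamedCounts); // 950

// Same log, two answers. Multiplied across thousands of messages,
// the aggregates drift apart - which is exactly Pattern A.
console.log({ lastValue, maxValue });
```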
Pattern B: Everything differs. Input, output, and cache tokens are all off, sometimes by 50-80%. Projects like context-base showed Vibenalytics at 24.9M cache read tokens vs ccusage's 276.5M - an 11x gap. The cause: session grouping and counting differences. ccusage reads raw JSONL transcript files and must reconstruct sessions from conversation logs. Vibenalytics captures events in real time via hooks, so it knows exactly which events belong to which session.
The claudnalytics project itself showed a 428M token difference in cache reads (576M vs 1,005M). When two tools measuring the same usage cannot agree on the numbers, at least one of the measurements has a problem.
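Here is a caricature of the two session-attribution strategies behind Pattern B. The 30-minute gap heuristic and field names are illustrative, not either tool's real implementation:

```typescript
// Caricature of the two attribution strategies behind Pattern B.
// The gap threshold and field names are illustrative only.
interface UsageEvent {
  timestamp: string;
  tokens: number;
  sessionId?: string; // present when captured via hooks
}

// JSONL reconstruction: infer session boundaries from the transcript,
// e.g. by splitting on gaps between entries. Heuristics like this
// threshold are where counts can drift.
function groupByGap(lines: UsageEvent[], gapMs = 30 * 60 * 1000): UsageEvent[][] {
  const sessions: UsageEvent[][] = [];
  let current: UsageEvent[] = [];
  let prev = 0;
  for (const line of lines) {
    const t = Date.parse(line.timestamp);
    if (current.length > 0 && t - prev > gapMs) {
      sessions.push(current);
      current = [];
    }
    current.push(line);
    prev = t;
  }
  if (current.length > 0) sessions.push(current);
  return sessions;
}

// Hook-based capture: every event already carries its session ID,
// so attribution is a lookup, not a guess.
function groupById(events: Required<UsageEvent>[]): Map<string, UsageEvent[]> {
  const sessions = new Map<string, UsageEvent[]>();
  for (const e of events) {
    const bucket = sessions.get(e.sessionId) ?? [];
    bucket.push(e);
    sessions.set(e.sessionId, bucket);
  }
  return sessions;
}
```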
Why this matters
If you are using token counts to:
- Decide if your $200/month Max subscription is worth it
- Understand which project is eating your quota
- Split costs across a team
...then accuracy is not optional. Numbers that are off by an unknown margin undermine the whole exercise.
Feature-by-feature comparison
Here is the full breakdown. We are being honest about where ccusage wins too.
Data Collection
| | ccusage | Vibenalytics |
|---|---|---|
| Method | Parses ~/.claude/ JSONL files after the fact | Real-time hooks capture events as they happen |
| Token accuracy | Best-effort reconstruction (known issues #705, #313) | Direct from event metadata |
| Sub-agent tracking | Partial - tokens can slip through (#313) | Captured via hooks with subagent flags per request |
| Setup | npx ccusage - zero config | curl install + vibenalytics login (~2 min) |
| Data freshness | On-demand when you run the command | Auto-sync on session boundaries |
ccusage wins on: Zero-friction setup. No accounts, no auth, just run it.
Vibenalytics wins on: Accuracy and completeness. Hooks capture what JSONL parsing misses.
Reporting & Visualization
| | ccusage | Vibenalytics |
|---|---|---|
| Daily/monthly reports | Terminal tables with compact mode | Web dashboard with interactive charts |
| Session drill-down | ccusage session --id ID in terminal | Click-through session detail with per-prompt breakdown |
| Billing blocks | 5-hour block tracking with burn rate | Not available (planned) |
| Statusline | Real-time cost/burn in Claude Code status bar | Not available |
| Contribution heatmap | Not available | Full-year SVG heatmap (like GitHub contributions) |
| Hourly patterns | Not available | 15-minute slot activity heatmaps |
| Tool usage analytics | Not available | Donut chart + per-tool counts, skills tracking |
| Language distribution | Not available | Lines added/removed by programming language |
| Compaction tracking | Not available | Context window compaction event visualization |
| Output format | Terminal tables, JSON, jq | Web dashboard, embeddable SVG, API |
ccusage wins on: Billing block tracking and statusline integration. If you live in the terminal and want real-time burn rate, ccusage has this nailed. The --jq flag is also great for scripting.
Vibenalytics wins on: Depth of visualization. Contribution heatmaps, hourly patterns, tool usage analytics, language distribution, and compaction tracking give you a picture of how you code with AI, not just how much it costs.
Project & Session Intelligence
| | ccusage | Vibenalytics |
|---|---|---|
| Project-level breakdown | --instances flag groups by project dir | Native per-project attribution via hashed paths |
| Project filtering | --project NAME string match | Project pages with drill-down to sessions |
| Project grouping | Not available | Group multiple directories into logical projects |
| Session metadata | Session ID, model, timestamps, cost | + Claude version, permission mode, turn count, lines changed, CLI version |
| Per-prompt detail | Not available | Prompt type (prompt/command/compaction), individual token counts, skills used |
| Per-request detail | Not available | Per-API-call tokens, model, subagent/aside flags, lines changed per request |
ccusage wins on: Quick filtering. ccusage daily --project myapp is instant.
Vibenalytics wins on: Granularity. Tracking down to individual prompts and API requests, with code change metrics, gives you real observability. You can see not just that a session cost $4, but which prompt within that session drove the cost and how many lines of code it generated.
Multi-Machine & Cloud
| | ccusage | Vibenalytics |
|---|---|---|
| Multi-machine | Single machine only (#287 open) | Cloud-synced across all machines |
| Data storage | Local ~/.claude/ files (~30 day retention) | Permanent cloud storage |
| History | Limited to what Claude Code retains locally | Unlimited historical data |
| Web access | None - terminal only | Dashboard accessible from any browser |
| API | Library import (loadDailyUsageData()) + MCP | REST API for all data |
ccusage wins on: No cloud dependency. If you genuinely never want data leaving your machine, this is a feature, not a limitation.
Vibenalytics wins on: Unified view across machines, permanent history, browser access. If you work on a laptop and a desktop (most of us do), ccusage gives you two disconnected views. Vibenalytics gives you one. And without history, you do not have insight. You have a snapshot that expires.
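One row above worth expanding: the API access. ccusage's library import looks roughly like this - the import path follows ccusage's documented library usage, but treat the exact result shape as an assumption and check the docs (only totalCost is confirmed by the --jq example earlier):

```typescript
// Programmatic access via ccusage's library export, as named in the
// table above. Field names on `day` other than totalCost are assumptions.
import { loadDailyUsageData } from "ccusage/data-loader";

const daily = await loadDailyUsageData();
for (const day of daily) {
  console.log(day.date, `$${day.totalCost.toFixed(2)}`);
}
```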
Team Features
| | ccusage | Vibenalytics |
|---|---|---|
| Team dashboard | Not available | Per-member cost, activity, and session breakdown |
| Plan Share | Not available | Split shared Claude subscription by actual usage |
| Billing alignment | Not available | Per-member subscription start date tracking |
| Member hourly activity | Not available | 15-minute slot heatmap per team member |
| Daily cost per member | Not available | Daily reconciliation grid |
| Project groups | Not available | Team project groups with cross-member aggregation |
| Role-based access | Not available | Owner / Admin / Member permissions |
| Invite system | Not available | Email invites with token-based joining |
ccusage wins on: Nothing. This is not ccusage's domain.
Vibenalytics wins on: Everything. If you share a Claude Max subscription with 4 people and one of them generates $140 of usage against the $200 plan while another generates $3, the only fair way to split the cost is by actual usage. Plan Share does this automatically - no spreadsheets, no guesswork.
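The arithmetic behind a usage-proportional split is simple enough to sketch. This is an illustration, not Vibenalytics' actual Plan Share code, and the member names are made up:

```typescript
// Usage-proportional split of a shared plan. An illustration of the
// arithmetic, not Vibenalytics' actual Plan Share implementation.
function planShare(planCost: number, usage: Record<string, number>): Record<string, number> {
  const members = Object.entries(usage);
  const total = members.reduce((sum, [, used]) => sum + used, 0);
  const shares: Record<string, number> = {};
  for (const [member, used] of members) {
    // Fall back to an even split if nobody used anything this cycle.
    shares[member] = total === 0 ? planCost / members.length : (used / total) * planCost;
  }
  return shares;
}

// The example from the text: $140 and $3 of equivalent usage on a $200 plan.
console.log(planShare(200, { alice: 140, bob: 3, carol: 40, dave: 17 }));
// -> { alice: 140, bob: 3, carol: 40, dave: 17 }
```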
Privacy
| | ccusage | Vibenalytics |
|---|---|---|
| Data location | 100% local - nothing leaves your machine | Metadata synced to cloud |
| Prompts/code collected | Never (reads local files, does not transmit) | Never (hooks capture metadata only) |
| File paths | Visible locally (directory names) | SHA-256 hashed - raw paths never sent |
| What is synced | N/A | Token counts, tool names, timestamps, hashed project identifiers |
| Source code | Open source (MIT) | CLI open source, dashboard hosted |
Both tools take privacy seriously. The difference is architectural: ccusage achieves privacy by being local-only. Vibenalytics achieves it by design - the system literally cannot access your code because it is never sent. Only metadata (token counts, tool names, timestamps, hashed project paths) leaves your machine.
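Path hashing of this kind is a one-way transform. Here is a sketch with Node's built-in crypto - Vibenalytics' exact scheme (salting, truncation) may differ:

```typescript
// One-way project identifiers: only the digest leaves the machine, and
// the raw path cannot be recovered from it. Vibenalytics' exact scheme
// (salting, truncation) may differ from this sketch.
import { createHash } from "node:crypto";

function projectId(projectPath: string): string {
  return createHash("sha256").update(projectPath).digest("hex");
}

console.log(projectId("/Users/dev/work/secret-client-project"));
// -> a stable 64-character hex digest, usable for grouping sessions
//    by project without ever transmitting the path itself
```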
Cost & Ecosystem
| | ccusage | Vibenalytics |
|---|---|---|
| Pricing | Free, open source (MIT) | Free tier, Pro ~$5-8/mo, Team ~$12-15/dev/mo |
| Multi-tool support | Claude Code, Codex, OpenCode, Pi, Amp | Claude Code only (for now) |
| MCP server | Yes (@ccusage/mcp) | Not available |
| Config system | JSON config file with per-command defaults | Settings via CLI and web dashboard |
| Installation | `npx ccusage` / `bunx ccusage` | `curl -fsSL https://vibenalytics.dev/install.sh \| bash` |
| Embeddable widget | Not available | SVG widget for GitHub READMEs (dark/light theme) |
ccusage wins on: Free forever, multi-tool support, and the MCP server. If you use Codex or OpenCode alongside Claude Code, ccusage tracks all of them.
Vibenalytics wins on: The embeddable SVG widget is unique - show your Claude Code activity in your GitHub README. And the free tier covers basic tracking with 90 days of history.
Who should use what
I will be straight about this.
Use ccusage if:
- You are a solo developer on a single machine
- You want quick cost checks in the terminal with zero setup
- You use multiple AI coding tools (Codex, OpenCode, Amp)
- You want billing block tracking and statusline burn rate
- Free and open source matters more than visualizations
- You prefer everything local with zero cloud dependency
ccusage is a good tool for that use case. Genuinely. The billing blocks feature and statusline integration are things we do not have yet.
Use Vibenalytics if:
- You work across multiple machines and want one unified view
- You need to know which projects consume your AI usage
- You want usage history beyond Claude Code's ~30-day retention
- You share a Claude subscription with a team and need fair cost splitting
- You want a visual dashboard with heatmaps, charts, and drill-downs
- Accuracy of token/cost data matters to your decisions
- You want per-prompt and per-request granularity
- You want to track code generation impact (lines added/removed by language)
Use both if:
You can. They do not conflict. Use ccusage for quick terminal checks and billing blocks. Use Vibenalytics for long-term tracking, team analytics, and deep dives. They read different data sources (ccusage reads JSONL files, Vibenalytics uses hooks), so there is no interference.
The real question
The question is not "which tool is better." The question is: do you need observability, or do you need a quick check?
Token counts are just the surface. The real value is knowing where they come from - which project, which session, which prompt, which tool - and how that changes over time.
This is the dashboard Claude Code should have shipped.
Try Vibenalytics free at vibenalytics.dev. Install the CLI, connect your machines, and see where your AI usage actually goes. No credit card, no commitment.
