Vibenalytics vs ccusage: A Better ccusage Alternative?

An honest, data-backed comparison of Vibenalytics vs ccusage for Claude Code analytics, project attribution, persistent history, and team-ready usage visibility.

Martin Vančo

If you use Claude Code daily, you have probably had this moment: you open /usage, see a single aggregate number, and think "Where did all of that go?"

No breakdown by project. No session history. No way to tell if yesterday's refactoring marathon cost you $3 or $30. You are flying blind.

This is not a niche problem. Thousands of developers hit this wall, and two tools have emerged to solve it: ccusage and Vibenalytics. They approach the same problem from fundamentally different angles.

This post breaks down both honestly - what each does well, where each falls short, and which one actually fits your workflow. We even ran both tools against the same data to show you the numbers.


ccusage deserves its 12k stars

Let me be clear upfront: ccusage is a solid tool. 12,080 GitHub stars and 43,500 weekly npm downloads do not happen by accident. ryoppippi built something that fills a real gap, and the community responded.

What ccusage does well:

  • Quick terminal reports. Run npx ccusage daily, get a cost breakdown. No signup, no cloud, nothing to configure.
  • Six report types. Daily, weekly, monthly, session, blocks (5-hour billing windows), and statusline. The blocks command is particularly clever - it aligns with Claude's billing cycles and shows burn rate.
  • Per-model cost tracking. It parses your local ~/.claude/ JSONL files and calculates costs per model using live LiteLLM pricing.
  • JSON output with built-in jq. Pipe data anywhere: ccusage daily --jq '.[] | select(.totalCost > 5)'
  • MCP server. @ccusage/mcp exposes daily, monthly, session, and blocks data as MCP tools.
  • Multi-tool support. Companion packages for Codex, OpenCode, Pi, and Amp.
  • Statusline integration. Shows real-time cost, burn rate, and context usage right in your Claude Code status bar.
  • Zero external dependencies. Everything runs locally. Your data never leaves your machine.
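To make the JSONL-parsing approach concrete, here is a minimal sketch of totaling per-model tokens from local transcript files. This is not ccusage's actual code, and the field names (`message.model`, `message.usage.*`) are assumptions based on the public `~/.claude/` transcript format:

```python
import json
from collections import defaultdict
from pathlib import Path

def sum_tokens_by_model(claude_dir: str) -> dict:
    """Walk a Claude Code data directory and total tokens per model.

    Illustrative sketch only: field names are assumptions based on the
    transcript JSONL format, not ccusage's implementation.
    """
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for path in Path(claude_dir).rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            event = json.loads(line)
            message = event.get("message", {})
            usage = message.get("usage")
            if not usage:
                continue
            model = message.get("model", "unknown")
            totals[model]["input"] += usage.get("input_tokens", 0)
            totals[model]["output"] += usage.get("output_tokens", 0)
    return dict(totals)
```

A real implementation additionally has to price each model, deduplicate streamed events, and reconstruct session boundaries, which is exactly where the two tools diverge, as the benchmark below shows.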

If you are a solo developer on a single machine who wants quick cost checks in the terminal, ccusage delivers that cleanly.

So why would you need anything else?


We ran both tools against the same data

Before we get into feature differences, let's talk about the numbers. We built a benchmark script that runs both tools against the same ~/.claude/ session files and compares token counts side by side.
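The core of such a comparison is simple. A hedged sketch (not the benchmark script itself) of diffing two per-project token maps:

```python
def compare_counts(a: dict, b: dict) -> dict:
    """Percentage difference of tool `a`'s totals relative to tool `b`'s,
    for every project present in both maps.

    Illustrative only; the actual benchmark script is not shown here.
    """
    diffs = {}
    for project in a.keys() & b.keys():
        if b[project] == 0:
            continue
        diffs[project] = 100.0 * (a[project] - b[project]) / b[project]
    return diffs

# Example: a tool reporting half the tokens shows as -50%.
print(compare_counts({"app": 500}, {"app": 1000}))  # {'app': -50.0}
```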

The results are revealing.

The accuracy gap

Across 21 matched projects (197 Vibenalytics sessions vs ccusage's aggregated data):

| Metric | Vibenalytics | ccusage | Difference |
| --- | --- | --- | --- |
| Input tokens | 555,088 | 1,297,283 | -57.2% |
| Output tokens | 3,567,024 | 3,362,050 | +6.1% |
| Cache read | 1,355,842,034 | 2,315,437,685 | -41.4% |
| Cache creation | 52,005,652 | 85,578,973 | -39.2% |

Only 2 out of 21 projects matched exactly. The remaining 19 showed mismatches, some massive.

What is going on?

The benchmark identified two distinct mismatch patterns:

Pattern A: Output-only divergence. Input and cache tokens match exactly, but output tokens differ. This happened with projects like martinvanco (+164.5% output difference) and other-tools (+3.7%). The cause: ccusage and Vibenalytics handle streaming response deduplication differently. When Claude streams output, multiple intermediate token counts are recorded. ccusage takes the last value; Vibenalytics takes the max. Small difference in logic, noticeable difference in numbers.
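The divergence is easy to reproduce. Suppose one streamed response logs hypothetical intermediate output-token counts of 120, 480, and 450 (values invented for illustration); "take the last" and "take the max" give different totals:

```python
# Two deduplication strategies for a streamed response's
# intermediate token counts. Values below are hypothetical.
def dedup_last(counts: list[int]) -> int:
    return counts[-1]

def dedup_max(counts: list[int]) -> int:
    return max(counts)

stream = [120, 480, 450]
print(dedup_last(stream))  # 450  (ccusage-style: last recorded value)
print(dedup_max(stream))   # 480  (Vibenalytics-style: maximum recorded value)
```

Multiply one such disagreement across hundreds of responses and the per-project totals drift apart.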

Pattern B: Everything differs. Input, output, and cache tokens are all off, sometimes by 50-80%. Projects like context-base showed Vibenalytics at 24.9M cache read tokens vs ccusage's 276.5M - a 10x gap. The cause: session grouping and counting differences. ccusage reads raw JSONL transcript files and must reconstruct sessions from conversation logs. Vibenalytics captures events in real-time via hooks, so it knows exactly which events belong to which session.

The claudnalytics project itself showed a 428M token difference in cache reads (576M vs 1,005M). When two tools measuring the same usage cannot agree on the number, at least one of the measurements has a problem.

Why this matters

If you are using token counts to:

  • Decide if your $200/month Max subscription is worth it
  • Understand which project is eating your quota
  • Split costs across a team

...then accuracy is not optional. Numbers that are off by an unknown margin undermine the whole exercise.


Feature-by-feature comparison

Here is the full breakdown. We are being honest about where ccusage wins too.

Data Collection

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Method | Parses ~/.claude/ JSONL files after the fact | Real-time hooks capture events as they happen |
| Token accuracy | Best-effort reconstruction (known issues #705, #313) | Direct from event metadata |
| Sub-agent tracking | Partial - tokens can slip through (#313) | Captured via hooks with subagent flags per request |
| Setup | npx ccusage - zero config | curl install + vibenalytics login (~2 min) |
| Data freshness | On-demand when you run the command | Auto-sync on session boundaries |

ccusage wins on: Zero-friction setup. No accounts, no auth, just run it.

Vibenalytics wins on: Accuracy and completeness. Hooks capture what JSONL parsing misses.

Reporting & Visualization

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Daily/monthly reports | Terminal tables with compact mode | Web dashboard with interactive charts |
| Session drill-down | ccusage session --id ID in terminal | Click-through session detail with per-prompt breakdown |
| Billing blocks | 5-hour block tracking with burn rate | Not available (planned) |
| Statusline | Real-time cost/burn in Claude Code status bar | Not available |
| Contribution heatmap | Not available | Full-year SVG heatmap (like GitHub contributions) |
| Hourly patterns | Not available | 15-minute slot activity heatmaps |
| Tool usage analytics | Not available | Donut chart + per-tool counts, skills tracking |
| Language distribution | Not available | Lines added/removed by programming language |
| Compaction tracking | Not available | Context window compaction event visualization |
| Output format | Terminal tables, JSON, jq | Web dashboard, embeddable SVG, API |

ccusage wins on: Billing block tracking and statusline integration. If you live in the terminal and want real-time burn rate, ccusage has this nailed. The --jq flag is also great for scripting.

Vibenalytics wins on: Depth of visualization. Contribution heatmaps, hourly patterns, tool usage analytics, language distribution, and compaction tracking give you a picture of how you code with AI, not just how much it costs.

Project & Session Intelligence

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Project-level breakdown | --instances flag groups by project dir | Native per-project attribution via hashed paths |
| Project filtering | --project NAME string match | Project pages with drill-down to sessions |
| Project grouping | Not available | Group multiple directories into logical projects |
| Session metadata | Session ID, model, timestamps, cost | + Claude version, permission mode, turn count, lines changed, CLI version |
| Per-prompt detail | Not available | Prompt type (prompt/command/compaction), individual token counts, skills used |
| Per-request detail | Not available | Per-API-call tokens, model, subagent/aside flags, lines changed per request |

ccusage wins on: Quick filtering. ccusage daily --project myapp is instant.

Vibenalytics wins on: Granularity. Tracking down to individual prompts and API requests, with code change metrics, gives you real observability. You can see not just that a session cost $4, but which prompt within that session drove the cost and how many lines of code it generated.

Multi-Machine & Cloud

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Multi-machine | Single machine only (#287 open) | Cloud-synced across all machines |
| Data storage | Local ~/.claude/ files (~30 day retention) | Permanent cloud storage |
| History | Limited to what Claude Code retains locally | Unlimited historical data |
| Web access | None - terminal only | Dashboard accessible from any browser |
| API | Library import (loadDailyUsageData()) + MCP | REST API for all data |

ccusage wins on: No cloud dependency. If you genuinely never want data leaving your machine, this is a feature, not a limitation.

Vibenalytics wins on: Unified view across machines, permanent history, browser access. If you work on a laptop and a desktop (most of us), ccusage gives you two disconnected views. Vibenalytics gives you one. And without history, you don't have insight. You have a snapshot that expires.

Team Features

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Team dashboard | Not available | Per-member cost, activity, and session breakdown |
| Plan Share | Not available | Split shared Claude subscription by actual usage |
| Billing alignment | Not available | Per-member subscription start date tracking |
| Member hourly activity | Not available | 15-minute slot heatmap per team member |
| Daily cost per member | Not available | Daily reconciliation grid |
| Project groups | Not available | Team project groups with cross-member aggregation |
| Role-based access | Not available | Owner / Admin / Member permissions |
| Invite system | Not available | Email invites with token-based joining |

ccusage wins on: Nothing. This is not ccusage's domain.

Vibenalytics wins on: Everything. If you share a Claude Max subscription with 4 people and one of them generates $140 of the $200 plan while another generates $3, the only fair way to split is by actual usage. Plan Share does this automatically - no spreadsheets, no guesswork.
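The arithmetic behind a usage-based split is straightforward. A sketch (member names and numbers are illustrative, not the Plan Share implementation):

```python
def split_plan(plan_cost: float, usage: dict) -> dict:
    """Split a shared subscription cost proportionally to metered usage.

    `usage` maps member -> their metered cost (or token count).
    Illustrative sketch only, not Vibenalytics' implementation.
    """
    total = sum(usage.values())
    return {m: round(plan_cost * u / total, 2) for m, u in usage.items()}

# Four people on a $200 plan; one member drives most of the usage.
shares = split_plan(200, {"alice": 140, "bob": 3, "cara": 40, "dan": 17})
print(shares)  # {'alice': 140.0, 'bob': 3.0, 'cara': 40.0, 'dan': 17.0}
```

An even four-way split would charge bob $50 for $3 of usage; the proportional split charges each member exactly what they consumed.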

Privacy

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Data location | 100% local - nothing leaves your machine | Metadata synced to cloud |
| Prompts/code collected | Never (reads local files, does not transmit) | Never (hooks capture metadata only) |
| File paths | Visible locally (directory names) | SHA-256 hashed - raw paths never sent |
| What is synced | N/A | Token counts, tool names, timestamps, hashed project identifiers |
| Source code | Open source (MIT) | CLI open source, dashboard hosted |

Both tools take privacy seriously. The difference is architectural: ccusage achieves privacy by being local-only. Vibenalytics achieves it by design - the system literally cannot access your code because it is never sent. Only metadata (token counts, tool names, timestamps, hashed project paths) leaves your machine.
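To illustrate the hashed-path technique in general (a sketch, not Vibenalytics' exact scheme, which may add salting or path normalization): hashing a project path before sync means the server only ever sees an opaque but stable identifier, so per-project aggregation still works without the raw path ever leaving the machine.

```python
import hashlib

def project_id(path: str) -> str:
    """Derive a stable, opaque identifier from a local project path.

    Sketch of the general technique only; the production scheme
    may normalize or salt the path before hashing.
    """
    return hashlib.sha256(path.encode("utf-8")).hexdigest()

pid = project_id("/Users/dev/work/secret-client-app")
print(len(pid))  # 64 hex characters; same path always yields the same id
```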

Cost & Ecosystem

| | ccusage | Vibenalytics |
| --- | --- | --- |
| Pricing | Free, open source (MIT) | Free tier, Pro ~$5-8/mo, Team ~$12-15/dev/mo |
| Multi-tool support | Claude Code, Codex, OpenCode, Pi, Amp | Claude Code only (for now) |
| MCP server | Yes (@ccusage/mcp) | Not available |
| Config system | JSON config file with per-command defaults | Settings via CLI and web dashboard |
| Installation | npx ccusage / bunx ccusage | curl -fsSL https://vibenalytics.dev/install.sh \| bash |
| Embeddable widget | Not available | SVG widget for GitHub READMEs (dark/light theme) |

ccusage wins on: Free forever, multi-tool support, and the MCP server. If you use Codex or OpenCode alongside Claude Code, ccusage tracks all of them.

Vibenalytics wins on: The embeddable SVG widget is unique - show your Claude Code activity in your GitHub README. And the free tier covers basic tracking with 90 days of history.


Who should use what

I will be straight about this.

Use ccusage if:

  • You are a solo developer on a single machine
  • You want quick cost checks in the terminal with zero setup
  • You use multiple AI coding tools (Codex, OpenCode, Amp)
  • You want billing block tracking and statusline burn rate
  • Free and open source matters more than visualizations
  • You prefer everything local with zero cloud dependency

ccusage is a good tool for that use case. Genuinely. The billing blocks feature and statusline integration are things we do not have yet.

Use Vibenalytics if:

  • You work across multiple machines and want one unified view
  • You need to know which projects consume your AI usage
  • You want usage history beyond Claude Code's ~30-day retention
  • You share a Claude subscription with a team and need fair cost splitting
  • You want a visual dashboard with heatmaps, charts, and drill-downs
  • Accuracy of token/cost data matters to your decisions
  • You want per-prompt and per-request granularity
  • You want to track code generation impact (lines added/removed by language)

Use both if:

You can. They do not conflict. Use ccusage for quick terminal checks and billing blocks. Use Vibenalytics for long-term tracking, team analytics, and deep dives. They read different data sources (ccusage reads JSONL files, Vibenalytics uses hooks), so there is no interference.


The real question

The question is not "which tool is better." The question is: do you need observability, or do you need a quick check?

Token counts are just the surface. The real value is knowing where they come from - which project, which session, which prompt, which tool - and how that changes over time.

This is the dashboard Claude Code should have shipped.


Try Vibenalytics free at vibenalytics.dev. Install the CLI, connect your machines, and see where your AI usage actually goes. No credit card, no commitment.