How to Make Claude Code /Insights Actually Work


    Everyone writes about getting roasted by /insights. Nobody writes about making it useful. Seven practical techniques for getting actionable results from Claude Code's undocumented self-assessment tool.



    Seven techniques for getting actionable results, not just a roast

    Claude Code's /insights command has become a minor internet phenomenon. Run it, get an HTML report analyzing your last 30 days of sessions, share a screenshot of the part where it calls out your worst habits. The "I got roasted by /insights" post has become its own genre.

    The entertainment value is real, but so is the missed opportunity. Underneath the roast is a self-assessment tool that identifies recurring friction patterns in how you work with Claude, suggests changes to your CLAUDE.md configuration, and flags workflow habits that cost you time. No other AI coding tool offers anything like it. The problem is that most developers run it once, laugh, and never act on the results.

    This post is about making /insights consistently useful. It covers what you need to know about the tool's limitations (it has no official documentation, runs entirely on Haiku, and has known bugs that can fabricate statistics) and seven techniques for getting actionable output rather than a one-time novelty. If you want the full technical internals, Rob Zolkos wrote an excellent deep dive on the six-stage pipeline. This post is about what to do with that knowledge.

    Seven Ways to Get Better Results

    1. Talk to Claude Like It Is Listening (Because /Insights Is)

    The facet extraction system classifies your satisfaction based on explicit verbal signals. Saying "great, that works" registers as satisfied. Saying "that's wrong, try again" registers as dissatisfied. Saying nothing and silently accepting the output, or abandoning the session without comment, registers as "unsure."

    This matters more than it sounds. If you are the kind of developer who accepts output silently when it is correct and only speaks up when something breaks, /insights will classify most of your sessions as ambiguous. The friction patterns will be real, but the satisfaction data will be noise. The fix is simple: acknowledge good output when you get it. "That's perfect" takes two seconds and gives /insights the signal it needs to distinguish between sessions where Claude nailed it and sessions where you gave up.

    2. Stop Mixing Tasks in a Single Session

    The official best practices warn against what they call the "kitchen sink session," where you start one task, ask something unrelated, then circle back. The facet classifier tries to determine what you asked for, whether you got it, and what went wrong. A session that jumps between debugging a database migration, asking an unrelated CSS question, and then returning to the migration produces confused facets.

    Use /clear between unrelated tasks. If you have corrected Claude more than twice on the same issue, the context is cluttered with failed approaches. Start fresh with a better prompt. Clean session boundaries produce cleaner analysis.

    3. Start in Plan Mode

    "Wrong approach and premature action" is the single most common friction pattern identified by /insights across multiple independent sources. The Prosper in AI analysis found 46 instances in 38 sessions. The Blundergoat write-up documented Claude "jumping to fixes before understanding problems" as the primary issue. Boris Cherny, who created Claude Code, confirmed that most of his sessions start in Plan mode (Shift+Tab twice), iterating on the approach before switching to implementation.

    The /insights report itself suggests this: "front-load explicit instructions about approach before giving the go-ahead, or use plan mode to force a proposal step before execution." If you only change one thing about how you use Claude Code, this is the one with the most evidence behind it.

    4. Wait Until You Have 30+ Sessions

    Below this threshold, the Haiku model fills gaps with plausible-sounding but fabricated numbers. A Hacker News commenter had only 7 sessions, but the report claimed 336 database changes. When challenged, Claude itself admitted the number "appears to have been hallucinated" and "doesn't exist anywhere in the underlying facets data."

    The pattern is consistent across sources. Users with fewer than 30 sessions get unreliable quantitative data. Users with 38+ sessions (Prosper in AI) get actionable, granular friction counts. Users with hundreds of sessions (Jangwook's 1,042-session analysis) get highly detailed pattern recognition. If you just started using Claude Code last week, wait. The qualitative patterns become reliable only with enough data points.
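    As a rough sanity check before running the command, you can count how many session transcripts you actually have. This sketch assumes the default (undocumented) transcript layout, ~/.claude/projects/<project>/<session>.jsonl, which may change between versions:

```shell
# count_sessions: rough count of stored Claude Code session transcripts.
# Assumes the default layout: ~/.claude/projects/<project>/*.jsonl
count_sessions() {
  dir="${1:-$HOME/.claude/projects}"
  find "$dir" -name '*.jsonl' 2>/dev/null | wc -l | tr -d ' '
}

count_sessions   # if this prints fewer than 30, treat the numbers with suspicion
```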

    5. Verify Your Facets After Running

    This is the tip nobody writes about because it requires looking under the hood. After running /insights, check the directory ~/.claude/usage-data/facets/ and count the JSON files. Each file represents one session that was actually analyzed for qualitative patterns.

    Multiple GitHub issues (#22998, #23273) document a sampling bug where /insights claims to analyze thousands of sessions but generates facets for only 3 to 5. The aggregate statistics (total hours, messages, sessions) come from all your data, but the narrative sections (friction points, suggestions, impressive things) come from only the sessions with facets. If you have 200 sessions and 4 facet files, the qualitative analysis is based on 2% of your work.
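    A quick way to do that count from the terminal (a small sketch; the facets path is the one named above):

```shell
# count_facets: number of per-session facet files /insights actually generated.
# Compare this against your total session count to spot the sampling bug.
count_facets() {
  dir="${1:-$HOME/.claude/usage-data/facets}"
  [ -d "$dir" ] || { echo 0; return; }
  find "$dir" -name '*.json' | wc -l | tr -d ' '
}

count_facets
```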

    The workaround: delete the entire ~/.claude/usage-data/facets/ directory and re-run. This forces fresh analysis instead of relying on cached results. Since /insights processes up to 50 sessions per run, you may need multiple runs for large session counts. Running 2 to 3 times also helps filter non-deterministic noise, since repeat runs produce similar but differently-emphasized reports.

    6. Keep CLAUDE.md Lean and Convert Rules to Hooks

    The /insights suggestions prompt specifically flags instructions you give repeatedly across sessions as "PRIME candidates" for CLAUDE.md additions. This is useful advice, but it can lead to a bloated CLAUDE.md that hurts more than it helps. A quantitative analysis by Alex Op found that a 2,000-line CLAUDE.md consumes approximately 18% of your token budget before any work begins. A 50-line file consumes 1 to 2%.

    The better approach has two parts. First, keep CLAUDE.md under roughly 60 lines with only universal, always-relevant rules. Move domain-specific context into a /docs folder that Claude reads on demand based on a short instruction in the main file.
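    As an illustration, a lean CLAUDE.md might contain only universal rules plus pointers into /docs. The specific rules and file names below are hypothetical examples, not recommendations from the /insights output:

```markdown
# CLAUDE.md (keep under ~60 lines)
- Always run the test suite before declaring a task done.
- Never commit directly to main.
- For database conventions, read docs/database.md before touching migrations.
- For deployment details, read docs/deploy.md on demand.
```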

    Second, if a rule must happen every time without exception, such as formatting, linting, or type checking, make it a hook rather than a CLAUDE.md instruction. Hooks are deterministic. CLAUDE.md instructions are advisory, and Claude may skip them under token pressure. The Blundergoat implementation followed exactly this pattern: PostToolUse hooks for formatting after every file edit, Stop hooks for static analysis once per turn, both directly from /insights recommendations.
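    A sketch of what that looks like in settings: a PostToolUse hook that runs a formatter after file edits. The matcher and command here are assumptions for a JavaScript project using Prettier; adapt both to your stack:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```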

    7. Archive Your Report and Run Monthly

    Before running /insights again, rename ~/.claude/usage-data/report.html to something like report-2026-01.html. Delete the facets directory to force fresh analysis. Run /insights, then compare the new report to the old one.
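    The renaming and cleanup can be scripted in a few lines (a sketch assuming the default ~/.claude/usage-data layout described above):

```shell
# archive_insights: snapshot the current report under a dated name and
# delete cached facets so the next /insights run analyzes sessions fresh.
archive_insights() {
  data="${1:-$HOME/.claude/usage-data}"
  stamp=$(date +%Y-%m)
  [ -f "$data/report.html" ] && mv "$data/report.html" "$data/report-$stamp.html"
  rm -rf "$data/facets"
}
```

Run it once a month before invoking /insights, then diff the dated reports.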

    This is the feedback loop that nobody in the current wave of /insights content has explored. The value is not in any single report. It is in the delta between reports after you have implemented changes. Did the friction patterns decrease? Did new ones emerge? Are the same CLAUDE.md additions still being suggested, meaning you either did not implement them or they did not stick?

    The cadence matches the tool's design: /insights reads the last 30 days of sessions. Monthly runs cover your full rolling window without overlap. Implement the top 3 recommendations from each run, work normally for a month, measure again. Over a quarter, you build a compounding improvement cycle that most developers never establish because they treat /insights as a one-time novelty.

    The Bigger Picture

    No equivalent to /insights exists in Cursor, GitHub Copilot, or any other AI coding tool. The closest analogy is APM (application performance monitoring), but applied to the collaboration between a developer and an AI rather than the performance of a server. It is the first tool that treats human-AI collaboration as something worth measuring systematically.

    It is also imperfect. The sampling bugs are real. The Haiku model has limitations. The statistics can be fabricated. But the qualitative output (the friction types, the workflow suggestions, the CLAUDE.md recommendations) is consistently useful across every source we reviewed, from users with 38 sessions to users with over 1,600.

    For teams, the multiplier effect is significant. Run /insights across team members, identify shared friction patterns, and encode solutions in a shared CLAUDE.md checked into version control. Update it during code review. This is the approach used by the Claude Code team itself, where mistakes during development become encoded rules that prevent the same mistake from recurring across the entire team.

    No Vibe Coding Means No Vibe Metrics

    Disciplined AI-augmented development requires the same observability you would demand from any production system. You would not ship a service without monitoring. You should not run an AI development workflow without measuring how the collaboration actually performs, where it breaks, and whether your changes are making it better. The /insights command is the beginning of that measurement practice. The engineers and teams who build a feedback loop around it will compound their advantage, one month at a time.

    Carlos from Vindler

    Founder and AI Engineering Lead at Vindler. Passionate about building intelligent systems that solve real-world problems. When I'm not coding, I'm exploring the latest in AI research and helping teams leverage AWS to scale their applications.
