How to Actually Use Claude Code Without Losing Your Mind
Stop Treating CLAUDE.md Like a Junk Drawer
This was my biggest mistake. I threw every instruction I could think of into a massive CLAUDE.md file. Code style guidelines, bash commands, edge cases, debugging tips, everything. Thought I was being thorough. Turns out I was just creating noise.
The thing that cracked this open: CLAUDE.md goes into every single conversation. Every one. That means it's literally the highest leverage place you can put information. One bad line doesn't just fail once. It creates downstream problems in every session for weeks.
Anthropic figured this out, so Claude Code injects a system reminder telling Claude to ignore irrelevant stuff in CLAUDE.md. The problem is what happens when you keep piling instructions in. Claude doesn't just ignore the bad ones. It starts ignoring all of them uniformly. There's actual research on this. The instruction-following decay is exponential for smaller models, linear for bigger ones. Fun fact that nobody asked for but apparently is my life now.
Here's what actually needs to be in CLAUDE.md:
- Project structure. Map it out. Especially in monorepos. Just tell Claude what each folder does instead of making it guess
- Your stack. Be specific. 'We use Bun, not npm' matters. 'Import/export syntax, not CommonJS' matters
- How to verify work. What commands run tests? TypeScript? Linting? If Claude doesn't know how to validate itself, it'll just guess
- Genuinely important patterns. Not 'always use const over let' because that's what a linter is for. I mean 'we check git history to understand why APIs are designed this way' or 'we use monorepo patterns X, Y, Z'
Mine is less than 60 lines. If you're looking at a 300+ line CLAUDE.md, you've got a problem.
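To make that concrete, here's a sketch of a minimal CLAUDE.md along those lines. The project name, structure, and commands are all made up for illustration; the point is the shape, not the contents:

```markdown
# Project: acme-api (hypothetical example)

## Structure
- apps/api: the HTTP service
- apps/worker: background jobs
- packages/db: schema and migrations, shared by both apps

## Stack
- Bun, not npm. TypeScript, import/export syntax only, no CommonJS.

## Verifying your work
- `bun test` runs unit tests
- `bun run check-types` runs TypeScript across the monorepo

## Patterns that matter
- Check git history before changing an API shape; the reasoning lives in commit messages.
```

Every line earns its place: structure, stack, verification, and the one or two patterns a linter can't enforce.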
The Progressive Disclosure Thing Actually Works
Instead of cramming everything into one file, create a folder structure like this:
docs/
  database_schema.md
  service_architecture.md
  running_tests.md
  deployment.md

Then in CLAUDE.md, point Claude to these resources and tell it to read them when relevant. Don't include code snippets that'll go stale. Use file:line references instead. Let Claude pull what it needs when it needs it.
This is the progressive disclosure principle. You're telling Claude 'here's how to find important information' instead of 'here's all the information.' Your context window stays cleaner, your instruction count stays manageable, and Claude actually retains the stuff that matters.
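In practice, the pointer section of CLAUDE.md can be this small. The paths and line number below are invented for illustration, but the pattern is real: names and locations, never contents:

```markdown
## Deeper docs — read only when relevant
- Database schema: docs/database_schema.md
- Service boundaries: docs/service_architecture.md
- Deployment steps: docs/deployment.md
- Retry logic is subtle; read src/payments/retry.ts:40 before touching payments
```

A file:line reference stays useful even after the code changes slightly; a pasted snippet starts lying the day after you paste it.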
Use CLAUDE.md as a Tuning Knob, Not a Replacement for Tests
I've seen people try to use CLAUDE.md to fix behavior issues. Add more urgent-sounding instructions. Throw in ALL CAPS sections about what Claude MUST do.
Stop. That's not what it's for.
If Claude is consistently doing something you don't want, that's usually a signal that your workflow is broken, not your CLAUDE.md. The test-driven development approach works stupidly well because it gives Claude a clear target. Instead of 'please be careful and thorough,' you're saying 'here's the expected output, make it happen.'
Same thing with visual targets. If you're building UI, give Claude a screenshot and let it iterate to match it. Way more effective than prose instructions about 'please make it look polished.'
Tools, Tools, Tools
Claude Code inherits your shell environment. That's actually huge. You can set up bash functions and aliases for common patterns, and Claude just uses them.
Document your custom tools in CLAUDE.md though. Just mention them by name with usage examples. Don't dump 50 lines of bash functions into the file. Just do this:
npm run test:unit: Run unit tests for current service
check-types: Run TypeScript across the monorepo

If you're using MCP servers (Model Context Protocol), you can configure them at the project level or globally. I've got Puppeteer set up for browser interactions, GitHub CLI for repo stuff. Each one adds capabilities without cluttering CLAUDE.md.
The custom slash commands are underrated too. Create .claude/commands/ with markdown templates for repeated workflows. Want Claude to auto-fix GitHub issues? Template that. Running migrations? Template it. Makes work consistent and repeatable.
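A command template is just a markdown file, and `$ARGUMENTS` gets substituted with whatever you type after the command. A hypothetical `.claude/commands/fix-issue.md`, invoked as `/fix-issue 123`:

```markdown
Find and fix GitHub issue #$ARGUMENTS.

1. Read the issue with `gh issue view $ARGUMENTS`
2. Locate the relevant code and write a failing test that reproduces the bug
3. Fix it, make the test pass, and open a PR that references the issue
```

The template bakes your workflow into the prompt, so you stop re-typing the same three-step instructions every time.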
The Workflows That Actually Work
Explore, Then Plan, Then Code, Then Commit
This is the one I use most often now. You tell Claude to read files first (no implementation yet), then to make a plan, then to implement. Sounds tedious but saves so much wasted work.
The key part is the planning step. Use the word 'think' explicitly to trigger extended thinking mode. Claude gets more computation budget and usually catches gotchas before diving into code. If the plan looks wrong, you can reset there instead of ripping out half-finished code.
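The sequence looks roughly like this in practice. The wording is mine, not a required incantation; the structure is what matters:

```
1. "Read the files involved in payment retries. Don't write any code yet."
2. "Think hard about how to add exponential backoff here. Give me a plan."
3. "Implement the plan."
4. "Commit, with a message that explains the why, not just the what."
```

The explicit "don't write code yet" in step 1 is doing real work: without it, Claude tends to jump straight to implementation.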
Test-Driven Everything
Write tests first. Have Claude write them based on input and output pairs. Be explicit about the fact that you're doing TDD. Tell Claude not to mock out functionality that doesn't exist yet. Commit those tests. Then have Claude make the tests pass.
This gives Claude a target and a way to verify its own work. Iteration gets way faster. I've had Claude go from failing tests to green in like 3 or 4 attempts, all self-correcting.
Visual Iteration For UI Stuff
Paste a design mockup. Have Claude implement it. Ask for a screenshot. Compare to the mockup. Iterate. The delta between the mockup and the result usually tightens up after 2 or 3 iterations. This works because Claude can actually see what it did wrong.
Specificity Is Your Best Friend
'Add tests for foo.py' is vague garbage that produces vague garbage.
'Write test cases for foo.py covering the edge case where the user is logged out, avoid mocks' is specific and produces better work.
The gap between these two is huge. Claude performs better when it knows exactly what you want. I learned this the hard way by getting mediocre results until I started being stupidly explicit about constraints.
Same thing applies to asking Claude to research something. Don't say 'find information about X.' Say 'I need to understand how our payment processing works, especially the retry logic. Look at the service code, tests, and git history. Focus on edge cases.' Now Claude has direction.
When You Need to Course Correct
Claude Code's interrupt system (press Escape) is genuinely useful. You can pause Claude mid-execution, preserve the context, and redirect. Double-tap Escape lets you edit a previous prompt and explore a different approach. Use /clear between unrelated tasks to keep the context window from getting polluted.
But honestly, the best course correction happens upfront with a good plan. If you force Claude to explain its approach before diving into implementation, most bad directions get caught early.
The Multi-Claude Thing
I was skeptical about this but it actually works. Fire up Claude in one terminal for implementation, another for review. They end up finding issues the single instance would miss. Or use git worktrees to have multiple Claudes working on different features simultaneously, each with its own isolated workspace.
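Worktrees give each Claude its own checkout of the same repo, so the instances can't step on each other's edits. The paths and branch names here are illustrative:

```
git worktree add ../myapp-auth feature/auth        # Claude #1 works here
git worktree add ../myapp-billing feature/billing  # Claude #2 works here
# Then run claude in each directory, in separate terminals
```

Each worktree shares the same git history but has its own working directory, which is exactly the isolation you want.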
It's parallel processing for your coding. Feels weird but produces better results than serial execution.
Things That Sound Good But Aren't
Auto-generating CLAUDE.md: Don't. Seriously. Every line goes into every session. Run /init once to see what it generates, then throw it out and write your own. The auto-generated stuff tends to be way too verbose and includes 'helpful' instructions that create downstream problems.
Using Claude as a Linter: This one drove me crazy because I kept trying it. Linters are deterministic, fast, and cheap. Claude is expensive and slow. Use ESLint or Biome or whatever, set up a stop hook to run it automatically, and let Claude fix the errors it finds. Don't ask Claude to find the errors itself.
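Hooks are configured in `.claude/settings.json`. The shape below is my best recollection of the hooks format, so check the current hooks documentation before copying it; the Biome command is one option among many:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npx biome check --write ." }
        ]
      }
    ]
  }
}
```

The division of labor: the hook finds problems deterministically and for free, and Claude only spends tokens fixing what the hook surfaces.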
Code Style Guidelines in CLAUDE.md: Just no. Use a formatter. I like Biome because it auto-fixes by default. Claude will pick up your style from existing code if you show it a few examples. The moment you add 'always use const over let' to CLAUDE.md, you're just wasting tokens and context.
Model Selection Actually Matters
HAIKU ($1) — Low-stakes, self-contained tasks:
- Write a git commit message from your code diff
- Generate boilerplate: test file skeleton, API endpoint template, component wrapper
- Write a simple utility function (string formatter, array mapper, date helper)
- Rename variables/functions for clarity across a small file
- Write a single unit test for straightforward logic
- Create a README or documentation template
- Add comments/docstrings to existing code
- Fix an obvious syntax error
- Generate a simple SQL query or database migration
SONNET ($3) — Real feature work:
- Build a feature end-to-end: "Create a login form with validation, API endpoint, and database schema"
- Debug a complex issue: "Race condition in my async state updates, help me trace it"
- Refactor a module: break a 200-line file into testable pieces
- Write comprehensive test coverage for a feature
- Integrate two libraries that need custom glue code
- Optimize a slow query—redesign the schema or indexing
- Code review: "Look at this codebase and suggest improvements"
- Build an authenticated API endpoint with error handling
OPUS ($5) — Architectural reasoning:
- Redesign system: "Move from monolith to microservices, map out dependencies"
- Implement a non-trivial algorithm with constraints
- Large-scale refactor across multiple files with dependencies
- Security review: reason through attack vectors
- Resolve a Sonnet attempt that didn't work
Also worth noting, smaller models degrade faster under high instruction counts. If your CLAUDE.md is 200 lines, Haiku gets way worse. Sonnet stays more consistent. Plan accordingly.
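Switching models is cheap. The flag and command names below are from memory, so confirm them with `claude --help`:

```
claude --model sonnet   # start the session on the workhorse model
claude --model opus     # spin one up just for the architectural step
/model                  # switch models mid-session without losing context
```

Matching the model to the task is mostly about not paying Opus prices for commit messages and not asking Haiku to reason about your architecture.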
The Reality Check
Claude Code is genuinely powerful, but it's not a replace-your-brain tool. It's a force multiplier. The better your workflows are, the more Claude can amplify them. The messier your setup, the more Claude amplifies that too.
I've seen people get amazing results and people complain that it's 'broken' when really they were just asking it to work blind without giving it context. CLAUDE.md is the diff between those two groups.
Start with a minimal CLAUDE.md under 100 lines. Document your project's WHY, WHAT, and HOW in plain English. Give Claude specific instructions and clear targets. Use tests or visual mocks as verification. Iterate. That's it. That's the whole thing.
The magic part isn't Claude. It's the discipline of having a good setup and the patience to actually use it correctly. Sounds unglamorous but it's honestly where most of the value is.