
The TOOLS.md Hack — 6 Months Later: From a Single File to an AI-Native Infrastructure

My most-read article got a reality check. What happened when a simple markdown file evolved into a full infrastructure system — including a €90 incident, MCP integrations, and 4 competing AI CLIs.

Dr. Florian Steiner

Claude AI Consultant & Trainer

10 min read

Six months ago, I published what I thought was a neat productivity hack: a single markdown file called TOOLS.md that gave my AI assistant persistent context about my tool stack. That article became my most-read piece on Medium.

Since then, the hack evolved into something I didn't expect. What started as one file turned into an entire infrastructure system. And honestly, the original article only scratched the surface.

Here's what happened next.

What Changed: From File to System

The original TOOLS.md was a flat list. One file, 40+ tools, all in one place. It worked — but it didn't scale.

After three months of daily use with Claude Code, I hit problems:

  • The file got too long. At 800+ lines, my AI was burning context-window tokens on tool entries irrelevant to the current task.
  • Details were never deep enough. Knowing I had "Supabase Free" didn't help when I needed the exact CLI commands to push a migration.
  • Updates were painful. Changing one tool meant scrolling through a wall of text.

The Hub & Spoke Architecture

So I restructured. The single file became a system:

```
your-project/
├── TOOLS.md              # Index — quick lookup table
├── tools/
│   ├── ai-ml.md          # Claude, ChatGPT, Perplexity
│   ├── dev-tools.md      # IDEs, GitHub, CLI tools
│   ├── design.md         # Figma, Canva, Descript
│   ├── productivity.md   # Linear, Obsidian, Granola
│   ├── infrastructure.md # Vercel, Railway, Supabase
│   └── ...               # More categories as needed
├── CLAUDE.md             # Project-level AI instructions
└── scripts/hooks/        # Security automation
```

The index file is a compact lookup table — tool name, category, CLI command, MCP status, and a link to the detail file. Under 100 lines. The AI loads it instantly, then pulls detail files only when needed.

The result: Context loading dropped from ~2,000 tokens (full flat file) to ~400 tokens (index only) for most conversations. The AI reads the detail file only when it actually needs to interact with that specific tool.
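
For illustration, a few index rows with the columns described above. The tool entries and paths here are examples, not my full table:

```markdown
| Tool     | Category       | CLI        | MCP | Details                 |
| -------- | -------------- | ---------- | --- | ----------------------- |
| Supabase | Infrastructure | `supabase` | Yes | tools/infrastructure.md |
| Linear   | Productivity   | -          | Yes | tools/productivity.md   |
| Figma    | Design         | -          | Yes | tools/design.md         |
```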

Want to try this yourself? I published an open-source scaffold that includes TOOLS.md and the full project structure: vibe-coding-scaffold on GitHub.

The MCP Revolution: From Documentation to Direct Control

This is the biggest evolution since the original article — and something I didn't see coming.

When I wrote the first article, my AI could read about my tools. Now, through MCP (Model Context Protocol), my AI can use them directly.

Here's what that looks like in practice:

| Tool | Before (TOOLS.md v1) | After (MCP Integration) |
| --- | --- | --- |
| Linear | "I have Linear Pro for project management" | AI creates issues, updates status, manages projects — directly |
| Figma | "I use Figma for design" | AI reads design files, extracts components, gets screenshots |
| Supabase | "I have Supabase Free tier" | AI runs migrations, deploys functions, manages database |
| Canva | "I have Canva Pro" | AI creates designs, exports assets, manages brand kits |
| Docker | "I use Docker Desktop" | AI manages containers, starts services, reads logs |

My current setup has 9 MCP-connected tools. The documentation layer (TOOLS.md) tells the AI what I have and what it costs. The MCP layer lets the AI actually do things.
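
As a sketch of the wiring: Claude Code can read MCP servers from a project-level `.mcp.json`. The entries below are illustrative, not my exact configuration; check each vendor's MCP documentation for the real endpoint or launch command:

```json
{
  "mcpServers": {
    "linear": {
      "type": "sse",
      "url": "https://mcp.linear.app/sse"
    },
    "supabase": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase"]
    }
  }
}
```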

A Real Example

Before MCP, a conversation like "Create a ticket for the workshop landing page" meant:

  1. AI writes the ticket description
  2. I copy it
  3. I open Linear
  4. I paste it and adjust

Now:

Me: "Create a Linear ticket for the workshop landing page on produktentdecker.com"

AI: Creates the issue directly in Linear with title, description, labels, and project assignment. Returns the issue URL.

One sentence. Done. No context switching.

Lessons Learned the Hard Way

The original article was optimistic. Six months of daily use taught me things I wish I'd known earlier.

Lesson 1: Cost Awareness Needs Teeth

Documenting costs in TOOLS.md was a good start. But real cost control requires automation.

The GitHub Actions Incident: A backend test workflow ran for 6 hours — four times. Cost: ~€90 in a single day. My monthly budget is €30.

What I added after that:

```yaml
# EVERY workflow now requires both guards
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true  # Kill duplicate runs

jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 15  # Non-negotiable
```

This isn't in TOOLS.md. It's in CLAUDE.md — project-level instructions that the AI reads before writing any workflow file. The AI now refuses to create a GitHub Actions job without a timeout.
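
In CLAUDE.md, the rule itself is only a few lines. The phrasing below paraphrases the idea rather than quoting my actual file:

```markdown
## GitHub Actions cost controls
- Every job MUST set `timeout-minutes` (default: 15)
- Every workflow MUST declare `concurrency` with `cancel-in-progress: true`
- Refuse to create a workflow file that violates either rule
```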

The CodeRabbit CLI Surprise: CodeRabbit switched their CLI to usage-based pricing: $0.25 per file. In an AI coding loop reviewing 10 files across 5 iterations, that's $12.50 for one feature. We documented the decision and moved to free alternatives (Claude Code, Gemini CLI) for local reviews. CodeRabbit stays active for PR reviews — that's included in the Pro plan.

Lesson 2: Security Can't Be Optional

The original TOOLS.md had no security layer. Now, every tool that touches secrets is documented with its 1Password path:

```shell
# 1Password CLI with structured taxonomy
op item list --vault "Development Keys" --tags "service:supabase"
```

Pre-commit hooks automatically scan for leaked API keys, tokens, and PII before any commit reaches the repository.
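
The article doesn't pin down the scanner, but a typical setup runs gitleaks through the pre-commit framework. A minimal `.pre-commit-config.yaml` might look like this (pin `rev` to a current release):

```yaml
# .pre-commit-config.yaml — illustrative secret-scanning setup
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks   # scans staged changes for API keys, tokens, and other secrets
```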

Tools with secrets get a 1Password column in the TOOLS.md index:

| Tool | CLI | MCP | 1Password |
| --- | --- | --- | --- |
| Supabase | `supabase` | Yes | `service:supabase` |
| Railway | `railway` | No | `service:railway` |
| Attio CRM | `attio` | Yes | `service:attio` |

The AI knows exactly where to find each key — without the key ever appearing in a file.

Lesson 3: One File Isn't Enough — You Need a CLAUDE.md Ecosystem

TOOLS.md tells the AI what you have. But it doesn't tell the AI how you work.

I now maintain a layered instruction system:

```
~/.claude/CLAUDE.md          # Global: language, workflow, security rules
project/CLAUDE.md            # Project: hooks, cost controls, structure
project/TOOLS.md             # Tools: what's available and what it costs
```

The global CLAUDE.md contains rules like:

  • Never commit directly to main
  • Never bypass pre-commit hooks
  • Filter API responses with jq — never load raw JSON into context
  • Secrets only via .env or 1Password — never hardcode
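
The jq rule is worth a concrete sketch. Instead of piping a raw API response into the conversation, the AI extracts only the fields it needs. The payload and field names here are illustrative:

```shell
# Simulated API response; in practice this comes from curl or a CLI tool.
response='[{"id":"ABC-1","title":"Workshop landing page","description":"...thousands of tokens..."}]'

# Keep only id and title; the bulky description never enters the context window.
echo "$response" | jq -c '[.[] | {id, title}]'
# → [{"id":"ABC-1","title":"Workshop landing page"}]
```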

This layered approach means every conversation starts with full context: what tools I have, how I work, what the rules are, and what the current project needs.

The AI CLI Wars: A New Dimension

When I wrote the original article, Claude Code was my only AI CLI. Now I run four AI coding CLIs side by side:

| CLI | Use Case | Cost Model |
| --- | --- | --- |
| Claude Code | Primary development, complex tasks | Subscription (~€185/month) |
| Gemini CLI | Code review, second opinion | Free (Google Workspace) |
| Codex CLI | Quick tasks, OpenAI ecosystem | Pay-per-use |
| Grok CLI | Experimental, X integration | Free CLI (X Premium) |

Each CLI reads the same TOOLS.md and CLAUDE.md. The documentation layer is tool-agnostic — it works for any AI that can read markdown files.

This created an unexpected benefit: AI-assisted code review without vendor lock-in. When CodeRabbit's CLI became too expensive, I already had three alternatives ready to go.

What the Numbers Say

Here's the honest accounting after 6 months:

Time Investment

| Activity | Time |
| --- | --- |
| Initial TOOLS.md (original article) | 3 hours |
| Hub & Spoke restructure | 2 hours |
| Adding MCP configurations | 4 hours |
| Security hooks setup | 3 hours |
| CLAUDE.md ecosystem | 2 hours |
| Ongoing maintenance (~15 min/week x 26 weeks) | 6.5 hours |
| Total | ~20 hours |

Monthly Cost Overview (Current)

| Category | Cost |
| --- | --- |
| Claude 20x | ~€185 |
| Google Workspace (includes Gemini) | €16 |
| Linear Pro | ~€15 |
| Figma Pro | ~€14 |
| SessionLab | €9 |
| 1Password Family | ~€5 |
| Fixed Total | ~€244/month |
| Variable (Lovable, Bolt, Railway, etc.) | ~€50-100 |

What I Saved

  • Context-setting time: ~5 minutes per session x ~100 sessions/month = 8+ hours/month
  • Tool discovery: AI recommends the right tool from my stack in seconds, not minutes
  • Cost incidents: The GitHub Actions controls alone prevented at least 2 more runaway workflows
  • Security: Zero leaked secrets since implementing the hooks

The Updated Implementation Checklist

If you read the original article and built a basic TOOLS.md, here's how to level up:

Level 1: Hub & Spoke (1-2 hours)

  • Split your flat TOOLS.md into category files
  • Create a compact index with CLI commands and links
  • Add a cost overview table

Level 2: CLAUDE.md Ecosystem (2-3 hours)

  • Create a global ~/.claude/CLAUDE.md with your workflow rules
  • Add project-level CLAUDE.md files for each repo
  • Document security rules and cost controls

Level 3: MCP Integration (3-4 hours)

  • Identify which tools support MCP
  • Configure MCP connections (Linear, Supabase, Figma, etc.)
  • Document MCP tools in your detail files

Level 4: Security & Automation (2-3 hours)

  • Set up pre-commit hooks for secret detection
  • Document 1Password paths for all secrets
  • Add GitHub Actions cost controls to CLAUDE.md

Starting from scratch? Clone the vibe-coding-scaffold — it includes TOOLS.md, CLAUDE.md, session handover templates, and code review configuration out of the box.

What's Next: Where This Is Heading

Three trends I'm watching:

1. Agent-native tools will replace documentation layers. When every SaaS ships an MCP server, the AI won't need TOOLS.md to know what you have — it'll discover it at runtime. We're maybe 12-18 months from this.

2. Persistent memory systems are emerging. Claude Code already has a memory system that persists across sessions. Combined with TOOLS.md, my AI remembers not just what tools I have, but decisions I made, mistakes I learned from, and patterns I prefer.

3. Multi-agent orchestration needs shared context. When I run Claude Code, Gemini CLI, and CodeRabbit in the same workflow, they all need the same context layer. TOOLS.md and CLAUDE.md provide that — they're the shared source of truth that any AI can read.

The Bottom Line

Six months ago, TOOLS.md was a clever hack. Today, it's the foundation of an AI-native development infrastructure.

The file itself is the least interesting part. What matters is the principle: your AI assistant is only as effective as the context you give it. And that context needs to be structured, layered, maintained, and — increasingly — actionable through direct integrations.

If you built a TOOLS.md after reading the first article: congratulations, you've got the foundation. Now it's time to build the system around it.

If you haven't started yet: clone the vibe-coding-scaffold and start today. The system will grow naturally from there.

This is a follow-up to The TOOLS.md Hack: How I Supercharged My AI Assistant with Context Awareness.

The scaffold is open-source under MIT license: github.com/ProduktEntdecker/vibe-coding-scaffold

Dr. Florian Steiner

Claude AI Consultant, Trainer and Speaker. Anthropic Community Ambassador Munich. I help product teams adopt Claude Code productively.
