Introducing BLZ: Blazing-fast llms.txt docs search for agents and humans
Fast 5–50ms doc search CLI tool that caches, parses, and indexes llms.txt files
⛺ BLZ is my first release as part of this experiment I’m calling “Outfitter”
Bring llms.txt local, results in 5–50ms
BLZ (pronounced Blaze) is a CLI search tool that brings documentation on‑device, making it instantly searchable—and I mean instantly. Think ripgrep, purpose-built for documentation. It parses and indexes llms.txt and llms-full.txt files (single‑file, AI‑readable mirrors of docs sites) and returns line‑accurate results in 5–50ms depending on corpus size and query complexity (most warm‑cache queries land around ~6–10ms). After the first download of a source’s llms.txt file, its docs can be searched fully offline.
Source available on GitHub: github.com/outfitter-dev/blz (MIT License)
I built BLZ after watching coding agents search for docs the same way humans do—lots of web crawling, repeated queries, wasting time and tokens from their precious context window. To me, it seemed inefficient to mostly copy human behavior when agents might be more unlocked with something more befitting their capabilities.
💾 Install BLZ with this Guide or 🖥️ Run this in your terminal:

```
curl -fsSL https://blz.run/install.sh | sh
```

See BLZ in action in this demo video:
How BLZ works
📚 Add sources easily:
```
blz add bun https://bun.sh/llms.txt
```

blz add looks for llms-full.txt (full docs), caches it locally, parses it, then indexes it. Sources are instantly available for searching. Frequently changing source docs aren’t a problem either; they’re just an update away.
🔎 Search specific sources, or all of them:
```
blz "dependency management" --source bun
```

Ranked results are returned with inline snippets and line-accurate citations. Oh, and it’s crazy fast.
📑 Deterministic citations:
```
blz get bun:304-324
```

BLZ returns stable spans like `bun:304-324` that you can reuse later. Including the flag `--context all` will expand a hit to its full heading-bounded section when you need the surrounding info.
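To show why spans like `bun:304-324` stay deterministic, here’s a minimal sketch of what parsing one might look like. This is purely illustrative (the function name and types are mine, not BLZ’s): the citation is just an alias plus an inclusive line range, so any tool can re-resolve it against the cached file.

```rust
// Hypothetical sketch of parsing a BLZ-style citation span ("alias:start-end").
// Not BLZ's actual code; shown to illustrate why the format is deterministic.
fn parse_span(citation: &str) -> Option<(String, usize, usize)> {
    let (alias, range) = citation.split_once(':')?;
    let (start, end) = range.split_once('-')?;
    let start: usize = start.parse().ok()?;
    let end: usize = end.parse().ok()?;
    // Reject empty aliases and inverted ranges.
    if alias.is_empty() || start > end {
        return None;
    }
    Some((alias.to_string(), start, end))
}

fn main() {
    assert_eq!(
        parse_span("bun:304-324"),
        Some(("bun".to_string(), 304, 324))
    );
    assert_eq!(parse_span("bun:324-304"), None); // inverted range is invalid
    println!("ok");
}
```

Because the span is just line numbers into a cached file, the same citation returns the same text until the source is explicitly updated.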
💡 Tip:
Get available commands and options with `blz --help`.
Agent-specific guidance is available if they run `blz --prompt`.
What it’s good for
- Agents that want line‑accurate documentation search without eroding context
- Devs who live in the terminal and want fast docs search with better ergonomics than `rg` for llms.txt
- Teams that care about offline, private, up-to-date, reproducible lookups
Where it’s limited
- Onboarding a new llms.txt source is entirely manual. You or your agent will need to find the doc first before using `blz add` (though we’re working on making this much better)
- BLZ is meant to search docs only. It doesn’t (and won’t) replace the kinds of info you might find on Stack Overflow, Reddit, X.com, etc.
- For those who prefer non-CLI apps, well, that’s still a ways off (demand dependent)
👉 Bench notes: MacBook Air M2, warm cache; typical keyword queries ~6–20ms. Complex boolean queries and larger corpora trend toward ~50ms, though often faster on my Mac Studio M1 Ultra.
On Building BLZ
I’ve been building alongside agents for a while now, but BLZ is the first thing I’ve actually shipped that feels real—a tool that stands on its own. I’ve co‑founded five startups, led product, design, and everything in between, but the engineering? Always in someone else’s hands.
A while back, I accepted that it made more sense to double down on my strengths than start from scratch learning to code. Then ChatGPT came along. Then Cursor. Then Claude Code. Suddenly, I wasn’t just brainstorming; I was building. Pairing with agents turned out to be the most productive apprenticeship imaginable.
The result? I built something end‑to‑end. A CLI tool. Written in Rust. By a guy whose last serious coding was HTML + CSS. Wild.
The Problem: Agents Search Like Humans
When agents code, they constantly look up docs for SDKs, APIs, and frameworks—just like we do. But their searches are slow, noisy, and context‑hungry. A single documentation lookup can chew through 60,000 to 100,000 tokens, leaving barely any room to reason about code.
Watching this happen again and again, I couldn’t shake the thought: why are these brilliant machines still searching like us? Shouldn’t they have better tools by now? BLZ became my answer.
Bringing Docs to Agents (Instead of Agents to Docs)
The spark came from watching an agent use Ripgrep. It was so fast. Type, result, done. I wondered: what if documentation worked like that?
llms.txt and llms-full.txt files already existed—AI‑friendly, single‑file mirrors of docs—but they were unwieldy. Just grepping through them worked, but you’d lose crucial structure: headings, hierarchy, and relationships. Searching docs isn’t the same as searching code.
So I built BLZ to do both. It downloads llms-full.txt files, parses their sections, indexes them with Tantivy, and caches them locally. The result: fast lookups and structured context. Search results come back in milliseconds with line‑accurate references and full sections when you need them.
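The “structured context” part is the interesting bit: instead of treating llms-full.txt as a flat blob, each heading gets mapped to the line span its section covers, so a hit can later be expanded to its whole section. Here’s a deliberately simplified sketch of that idea in Rust. It ignores heading nesting and is not BLZ’s real parser; it just shows the heading-to-span mapping the post describes.

```rust
// Illustrative sketch (not BLZ's actual implementation): walk a markdown
// document and record each heading with the 1-based line span its section
// covers, so a search hit can be expanded to its heading-bounded section.
fn heading_spans(doc: &str) -> Vec<(String, usize, usize)> {
    let lines: Vec<&str> = doc.lines().collect();
    let mut spans = Vec::new();
    let mut open: Option<(String, usize)> = None; // (heading text, start line)

    for (i, line) in lines.iter().enumerate() {
        if line.starts_with('#') {
            // A new heading closes the previous section on the prior line.
            if let Some((title, start)) = open.take() {
                spans.push((title, start, i));
            }
            let title = line.trim_start_matches('#').trim().to_string();
            open = Some((title, i + 1));
        }
    }
    // The last section runs to the end of the document.
    if let Some((title, start)) = open {
        spans.push((title, start, lines.len()));
    }
    spans
}

fn main() {
    let doc = "# Bun\nintro text\n## Install\nsteps here";
    let spans = heading_spans(doc);
    assert_eq!(spans[0], ("Bun".to_string(), 1, 2));
    assert_eq!(spans[1], ("Install".to_string(), 3, 4));
    println!("{:?}", spans);
}
```

In the real tool, spans like these are what the Tantivy index points back into, which is how results carry line-accurate citations rather than bare snippets.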
Building BLZ
It started as a Friday afternoon brainstorm. I studied a few llms.txt examples, mapped out an architecture with ChatGPT, wrote up a quick PRD, and sent my agents off to build while I made dinner for the family.
After the kids were in bed, I checked back in expecting a stub. Instead, I saw a working CLI. I ran `blz add bun https://bun.sh/llms-full.txt` and it downloaded, parsed, and indexed in under a second. Then `blz search "package manager"` returned results before I could lift my finger off Return. 6 milliseconds flat.
I didn’t believe it. Asked Claude if it was faking it. It wasn’t. Even Anthropic’s massive llms-full.txt—40k lines, 300k tokens—searched just as fast. I actually laughed out loud. What the hell.
That was the moment I knew I had to keep going.
Why Speed Isn’t the Real Win
Speed was the hook, but the real win is efficiency. Agents using BLZ don’t waste their whole context window reading docs—they get exactly what they need. Most sessions use under 10k tokens instead of 60k+. That’s not just cheaper—it’s smarter.
That’s what I call context engineering: giving agents the right data, at the right time, in the right shape. Not more, not less. BLZ was built for that.
Offline, Private, Update-able
Once you’ve added a source (say https://bun.sh/llms-full.txt), every search runs entirely offline. Zero network calls, zero privacy trade‑offs. Updates use ETags and conditional requests—blz update bun only fetches new data when something’s actually changed. Fast, private, reproducible. In a world full of non‑determinism, that reliability feels like magic.
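The update logic described above boils down to one decision: reuse the cache, or fetch and re-index. Here’s a hedged sketch of that decision in Rust. The names and types are my own (BLZ’s internals may look nothing like this); the point is that a 304 Not Modified, or an unchanged ETag, means no work to do.

```rust
// Hedged sketch of conditional-update logic (not BLZ's real code).
// The client sends its stored ETag as If-None-Match; the server replies
// 304 Not Modified when nothing changed, so the cache can be reused as-is.
#[derive(Debug, PartialEq)]
enum UpdateAction {
    ReuseCache, // 304 or unchanged ETag: nothing to fetch or re-index
    ReindexNew, // fresh body arrived: cache it and rebuild the index
}

fn decide(status: u16, stored_etag: Option<&str>, new_etag: Option<&str>) -> UpdateAction {
    match (status, stored_etag, new_etag) {
        (304, _, _) => UpdateAction::ReuseCache,
        // Some servers always return 200; an identical ETag still means "unchanged".
        (200, Some(old), Some(new)) if old == new => UpdateAction::ReuseCache,
        _ => UpdateAction::ReindexNew,
    }
}

fn main() {
    assert_eq!(decide(304, Some("\"abc\""), None), UpdateAction::ReuseCache);
    assert_eq!(decide(200, Some("\"abc\""), Some("\"abc\"")), UpdateAction::ReuseCache);
    assert_eq!(decide(200, Some("\"abc\""), Some("\"def\"")), UpdateAction::ReindexNew);
    println!("ok");
}
```

This is why `blz update bun` is cheap to run often: the common case is a single conditional request and zero re-indexing.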
Iterating With Agents
I built BLZ with agents—and tested it the same way. Claude, Codex, Gemini, AMP, Factory—they all took turns using it directly. It was like doing user research at 100x speed: clear feedback, detailed reports, and surprising insights.
The best ideas came from them:
- Every agent added `--json` instinctively, so I made it official.
- `--help` wasn’t enough, so I built `--prompt` to return agent‑specific guidance.
- They wanted full sections instead of snippets, so I added `--context all`.
One review still makes me smile:
> The `--context all` flag is transformative—it turns documentation search from a multi‑step process into a single command that returns ranked results with full context. This is exactly what agents need.
>
> Rating: 4.8/5 — One of the best CLI tools I’ve tested. The 0.2 deduction is purely for discoverability of the `--context all` feature.
Fair point. They dinged it because their favorite feature wasn’t high enough in the help text.
CLI tools used to intimidate me. Now I prefer them—and so do agents. Turns out they’re the perfect test subjects for tools like this. No interface needed.
Still an Experiment
BLZ isn’t a totally polished or fully baked product yet. So far it’s mostly been a fun experiment that also happens to work remarkably well, at least for my agents and me. As far as who’s building it: right now it’s just me and my agent crew (Claude Code, Codex, Amp, and newcomer Factory). I’ve had it reviewed repeatedly (shout‑out to 🐰 CodeRabbit for being delightfully pedantic), but I won’t pretend it’s idiomatic Rust. It’s intended to be maintainable, and I’d love any critiques, suggestions, or PRs to help make it better.
What It Means
A year ago, I’d have laughed if you told me the first tool I’d build and ship would be a Rust CLI. But here we are. BLZ isn’t perfect, and I’m certainly not the one to judge its engineering purity. But it’s real, it works, and it’s fast—really fast.
If you try it, treat it like I do: an ongoing experiment in what happens when you stop worrying about experience and lean into curiosity—and work side‑by‑side with some very capable, very eager robots.
Got feedback? I’d love to hear it: @mg on X/Twitter
Install BLZ:
```
curl -fsSL https://blz.run/install.sh | sh
```

Then try a search:

```
blz add bun https://bun.sh/llms.txt
blz "test runner"
```

When you lift your finger from Return and see results already there, you’ll know it’s working.


