What's the thing you've been thinking about for years?
Because that is what your AI tools should be helping you with — and right now they can't, because they don't remember a single thing you've ever told them about it.
Your space is in here somewhere — or it isn't, and that's the point.
Every conversation you've had with an AI about your thing — remembered, then gone. Every nuance you taught it — re-derived, badly, next time. That's you, doing unpaid labor to keep your own context alive.
The memory problem hiding inside every AI tool you use isn't a feature gap. It's a tax on the people whose work compounds.
If you care about a space, the AI you use should accumulate your perspective on it.
One database. Any AI plugs in. Your context lives in your brain, and the model — Claude, ChatGPT, a local Llama, whatever comes next — borrows it on the way to answering you.
The pattern is documented and shareable — see the working spec. The substrate is opinionated where it matters (capture grammar, sensitivity tiers, project-scoped domains) and boring everywhere else (Qdrant, FastAPI, MCP). Anyone can build one.
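The pattern above can be sketched in miniature. This is a toy, stdlib-only stand-in, not the spec's implementation: the real substrate uses Qdrant vector search behind a FastAPI/MCP endpoint, and the names here (`MemoryStore`, `capture`, `recall`, `Sensitivity`) are hypothetical illustrations of project-scoped domains and sensitivity tiers:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):  # stand-in for the pattern's sensitivity tiers
    PUBLIC = 1
    PRIVATE = 2

@dataclass
class Memory:
    domain: str   # project-scoped domain, e.g. "garden"
    text: str
    tier: Sensitivity

class MemoryStore:
    """Toy stand-in for the Qdrant-backed store; word overlap instead of vectors."""
    def __init__(self):
        self._memories: list[Memory] = []

    def capture(self, domain: str, text: str,
                tier: Sensitivity = Sensitivity.PRIVATE) -> None:
        self._memories.append(Memory(domain, text, tier))

    def recall(self, domain: str, query: str,
               max_tier: Sensitivity = Sensitivity.PRIVATE) -> list[str]:
        """Memories for one domain, ranked by naive word overlap with the query."""
        words = set(query.lower().split())
        scored = [
            (len(words & set(m.text.lower().split())), m.text)
            for m in self._memories
            if m.domain == domain and m.tier.value <= max_tier.value
        ]
        return [text for score, text in sorted(scored, reverse=True) if score > 0]

store = MemoryStore()
store.capture("garden", "Tomatoes failed in the north bed two years running")
store.capture("garden", "The soil test showed low nitrogen", Sensitivity.PUBLIC)
print(store.recall("garden", "why do tomatoes keep failing"))
# → ['Tomatoes failed in the north bed two years running']
```

The point of the shape: any model can call `recall` before answering and `capture` after, so the context outlives the conversation — swap the toy ranking for embeddings and the list for Qdrant and you have the substrate.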
What we do at killercatfish is the next layer: spawning brains for specific spaces, on the right substrate for the job — cloud when speed matters, local when privacy does, edge when the brain needs to live where the work happens. We call that layer Nodal. This page is how you find out if your space wants one.
Tell us a space. We'll spawn a brain that holds it for thirty days.
Three ways to bring a brain home.
Do it yourself
The full pattern, written down. Paste a prompt into your own Claude Code (or Cursor, Cline, Aider — tool-agnostic) and it interviews you, then builds a brain on your machine in about thirty minutes. Yours forever.
Read the pattern →

We'll run it for you
You don't want to manage a database. We host the brain on our infrastructure, you get the dashboard and the MCP endpoint, your data is yours. Waitlist open.
Join the waitlist →

On your hardware
Your data never leaves home. We install the brain on a local model — Ollama, llama.cpp, or whatever fits your box. White-glove setup, you own everything. For privileged contexts.
Talk to us →