ai-cortex
local intelligence layer for ai agents — context, memory, and continuity without touching the repo
v0.5.1 — actively used. Available on npm.
# what it does
Three layers, one substrate: structural (rehydrate, suggest, blast-radius), continuity (history capture + memory extraction), integration (33 MCP tools, briefing injection). Cache lives at ~/.cache/ai-cortex/ — never inside the target repo.
Make agents start fast, stay grounded, and never lose context.
# features
- rehydrate — every agent session starts warm: project structure, key files, pinned memories, in milliseconds
- suggest — find the right files for any task with three ranking modes, plus blast-radius to map callers before you refactor
- history — never lose a correction, decision, or referenced file when the harness compacts the conversation away
- remember — persistent memory of decisions, gotchas, how-tos; agents recall them, apply them, cite the why
- multi-language — typescript, javascript, python, c, c++ indexed with tree-sitter
- agent-agnostic — 33 mcp tools, cli, and library; local-first; zero writes into your repo
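The trigram ranking mode mentioned above can be sketched in a few lines. This is an illustrative toy, not ai-cortex's actual implementation: it splits the task string and each candidate path into 3-character shingles and ranks by Jaccard overlap.

```typescript
// Toy trigram ranker — illustrative only, not ai-cortex's real algorithm.
function trigrams(s: string): Set<string> {
  const norm = s.toLowerCase();
  const t = new Set<string>();
  for (let i = 0; i + 3 <= norm.length; i++) t.add(norm.slice(i, i + 3));
  return t;
}

// Jaccard similarity between two trigram sets.
function similarity(a: Set<string>, b: Set<string>): number {
  let inter = 0;
  for (const t of a) if (b.has(t)) inter++;
  return inter / (a.size + b.size - inter || 1);
}

// Rank candidate paths by trigram overlap with the task description.
function rankByTrigram(task: string, paths: string[]): string[] {
  const q = trigrams(task);
  return [...paths].sort(
    (a, b) => similarity(trigrams(b), q) - similarity(trigrams(a), q)
  );
}
```

A task like "fix auth token parsing" shares trigrams (`aut`, `tok`, `ken`) with `src/auth/token.ts` and almost none with unrelated paths, so the relevant file surfaces first even without semantic search.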
# a session, end to end
agent → rehydrate_project
ai-cortex · v0.5.1 · 330 files · pinned: "never write into the target repo"
memory digest: 16 decisions · 8 gotchas · 3 how-tos
agent → recall_memory { query: "auth" } # browse — no signal
agent → get_memory { id: "..." } # apply — counts as used
agent → suggest_files_deep { task: "..." } # path + trigram + content
agent → blast_radius { qualifiedName: "parserForExt", file: "..." }
recall_memory is browse; get_memory(id) is the use signal — the pattern that makes memory usefulness measurable per record, not in aggregate.
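The browse/use split can be made concrete with a minimal sketch. Field and class names here are hypothetical, but the invariant is the one described above: searching never touches counters, while fetching a record by id does.

```typescript
// Minimal sketch of the browse-vs-use distinction; names are hypothetical.
type MemoryRecord = { id: string; text: string; useCount: number };

class MemoryStore {
  private records = new Map<string, MemoryRecord>();

  add(r: MemoryRecord): void {
    this.records.set(r.id, r);
  }

  // recall = browse: returns matches, touches no counters.
  recall(query: string): MemoryRecord[] {
    return [...this.records.values()].filter((r) => r.text.includes(query));
  }

  // get = use: fetching by id is the signal that the memory was applied.
  get(id: string): MemoryRecord | undefined {
    const r = this.records.get(id);
    if (r) r.useCount++;
    return r;
  }
}
```

Because only `get` increments `useCount`, each record carries its own usage history, which is what lets cleanup and promotion decisions run per record rather than over the whole store.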
# memory
The most valuable context is nowhere in the repo: “we tried X, it broke under load, we use Y now”, “this module is owned by team Z”, “always use POST for create endpoints — agreed last quarter”. Code shows the what; memory shows the why and the what we tried that didn’t work. cortex captures it once, surfaces it forever.
user → "should I write tests with mocks here?"
agent → recall_memory { query: "mock database tests" }
→ "integration tests must hit a real database, not mocks"
type: decision · confidence 0.95 · pinned
agent → get_memory { id: "mem-2026-03-15-..." }
← rule: integration tests MUST hit a real database
why: prior incident — mocked tests passed, prod migration broke
when: any test under tests/integration/
agent → "no — your project rule says integration tests use a real db.
here's the reasoning [cites memory]. want me to set up a test container?"
The agent didn’t re-derive. It recalled, applied, and explained the why.
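A decision record like the one in the transcript might be shaped as follows. The field names are illustrative (the real schema may differ), but they capture the three parts the agent cited: the rule, the why, and the scope it applies to.

```typescript
// Hypothetical shape of a decision memory; field names are illustrative.
type Decision = {
  id: string;
  type: "decision" | "gotcha" | "howto";
  rule: string;
  why: string;
  when: string; // path prefix the rule applies to
  confidence: number;
  pinned: boolean;
};

const realDbRule: Decision = {
  id: "mem-example",
  type: "decision",
  rule: "integration tests MUST hit a real database",
  why: "prior incident: mocked tests passed, prod migration broke",
  when: "tests/integration/",
  confidence: 0.95,
  pinned: true,
};

// An agent can gate behavior on scope: does the rule cover this file?
function applies(d: Decision, file: string): boolean {
  return file.startsWith(d.when);
}
```

The `when` field is what lets the agent answer "should I mock here?" differently for a unit test than for a file under tests/integration/.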
# session history
Long agent sessions hit the context window and compact away the conversation. Corrections you gave, decisions made mid-task, file paths referenced once — all gone. cortex keeps every session searchable so the agent can recover what it was about to forget.
user → "wait, what did we decide about the auth token?"
agent → search_history { query: "auth token decision" }
→ [session 7df1c · correction · turn 42 · score 0.91]
"no, keep the token in httpOnly cookie — we agreed on that"
→ [session 7df1c · userPrompt · turn 38 · score 0.84]
"localStorage is fine for the dev demo but not for prod"
agent → "you decided to keep auth tokens in httpOnly cookies after
ruling out localStorage for prod. that was 4 sessions ago."
Context the harness threw away, recovered in one tool call.
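In spirit, history search looks something like the sketch below: score every stored turn against the query and return matches sorted by score. This is a toy token-overlap scorer, not the actual ranking cortex uses, but it shows how a compacted-away correction stays one lookup away.

```typescript
// Toy history search: token-overlap scoring over stored turns.
// Illustrative only — not ai-cortex's actual ranking.
type Turn = { session: string; kind: string; turn: number; text: string };

function searchHistory(
  query: string,
  turns: Turn[]
): (Turn & { score: number })[] {
  const q = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return turns
    .map((t) => {
      const words = new Set(t.text.toLowerCase().split(/\W+/).filter(Boolean));
      let hit = 0;
      for (const w of q) if (words.has(w)) hit++;
      return { ...t, score: hit / (q.size || 1) };
    })
    .filter((t) => t.score > 0)
    .sort((a, b) => b.score - a.score);
}
```

Each hit carries its session id, kind, and turn number, which is how the agent can report not just the answer but where and when it was decided.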
# why it compounds
- integration — repo + memory + history share storage, audit, vectors, lifecycle. Cloning one layer doesn’t reproduce the value.
- improvement — re-extraction bumps confidence, rewrite cleanup auto-promotes strong records, aging trashes stale ones, access counters gate cleanup.
- substrate, not agent — agent-agnostic via MCP. Same store serves Claude Code, Codex, Cline, and whatever ships next.
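The improvement loop above can be sketched as two small functions. The decay curve and every threshold here are invented for illustration; the point is the shape of the lifecycle: reinforcement moves confidence toward 1, and cleanup spares anything pinned or actually used.

```typescript
// Sketch of a memory lifecycle pass; thresholds are invented for illustration.
type Mem = { confidence: number; pinned: boolean; useCount: number; ageDays: number };

// Re-extraction bumps confidence toward 1, never past it.
function reinforce(m: Mem): Mem {
  return { ...m, confidence: Math.min(1, m.confidence + (1 - m.confidence) * 0.2) };
}

// Aging pass: stale, unused, unpinned records fall below the keep threshold.
function keep(m: Mem): boolean {
  if (m.pinned || m.useCount > 0) return true; // access counters gate cleanup
  const decayed = m.confidence * Math.exp(-m.ageDays / 90);
  return decayed >= 0.3;
}
```

This is why the per-record use signal matters: a memory that was fetched and applied survives aging, while one that was only ever browsed eventually decays out.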
# scope
TS/JS/Python/C/C++ via tree-sitter. Node ≥ 20. Cache is local and per-worktree at ~/.cache/ai-cortex/. No writes into the target repo, ever.
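One way a per-worktree cache under ~/.cache/ai-cortex/ could be keyed is by hashing the worktree path. This is a guess at the scheme, not ai-cortex's documented behavior; it just shows how two checkouts of the same repo get independent caches without anything landing inside either.

```typescript
import { createHash } from "node:crypto";
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical derivation of the per-worktree cache directory;
// ai-cortex's actual key scheme may differ.
function cacheDirFor(worktreePath: string): string {
  const key = createHash("sha256").update(worktreePath).digest("hex").slice(0, 12);
  return join(homedir(), ".cache", "ai-cortex", key);
}
```

The derivation is deterministic, so every session over the same worktree rehydrates from the same directory, and deleting one worktree's cache never touches another's.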
Cortex makes agents understand your project and remember it.
# install
npm i -g ai-cortex