I've been using AI tools (Claude, Copilot, Cursor) to build software for the past year. They're incredible at writing code. They're terrible at remembering what depends on what.

Here's a scenario every AI-assisted developer knows:

You ask the AI to refactor your user model. It does a great job. Clean code, good types, well-structured. What it doesn't know is that six other files import from that model. Your auth service, your cart controller, your API docs, your validation middleware. All of them still reference the old interface. The AI has no idea they exist.

You don't notice either. Not until something breaks in production three sessions later.

The pattern I kept seeing

After about 40 development sessions, I started tracking these failures. They fell into three buckets:

Anchor drift. A core file changes, but everything that depends on it stays stale. The AI modified the source of truth without updating the consumers.

Context loss. Session crashes, token limits hit, or I simply started a new day. The AI had zero memory of what we decided yesterday, what was half-finished, or which files were in a fragile state.

Silent regressions. Tests quietly removed. Files growing past 500 lines. Documentation falling weeks behind the code. Nobody checking because there was no system to check.

These aren't bugs in the AI. They're structural problems. The AI is stateless. It can't track cross-file dependencies across sessions. That's not its job. But someone needs to.

So I built TRACE

TRACE is a CLI tool that enforces structural coherence in AI-augmented codebases. It's not a linter. It's not a test runner. It's a system that understands which files are your sources of truth (anchors), which files depend on them (consumers), and whether everything is still in sync.

npm install -g trace-coherence

Here's what a typical session looks like:

$ trace gate start

━━━ Start Gate — MyProject ━━━

✓ TRACE state exists
✓ Baseline tests passing (37/37)
✓ No unresolved debt (0/5)
✓ Integrity checksums verified
✓ Config validation passed
✓ AI context generated

GATE PASSED — Session open.

That last check, "AI context generated", writes a file called .trace/AI_CONTEXT.md containing only what's relevant to this session: which anchors exist, which consumers depend on them, any outstanding debt, and current plan items. Your AI tool reads this and gets focused context instead of fumbling through the whole project.
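The exact contents vary by project, but a generated context file might look something like this (the layout here is illustrative, not TRACE's literal output):

```markdown
# AI Context — MyProject

## Anchors
- user_model → src/models/user.ts (4 consumers)

## Outstanding debt
- none

## Plan
- [in progress] Auth refactor (priority: high)
```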

The anchor-consumer model

The core concept is simple. In trace.yaml, you declare which files are anchors and which files consume them:

anchors:
  - id: user_model
    file: src/models/user.ts
    consumers:
      - src/services/auth.service.ts
      - src/controllers/user.controller.ts
      - src/middleware/validation.ts
      - docs/api/users.md

Now TRACE knows the dependency graph. When user.ts changes but auth.service.ts doesn't, that's a coherence violation:

$ trace gate end

━━━ End Gate — MyProject ━━━

✗ Consumer sync: user_model anchor modified,
  but 3 consumers not updated:
  - src/services/auth.service.ts
  - src/middleware/validation.ts
  - docs/api/users.md

GATE BLOCKED — Fix consumer drift before closing.

This is what makes TRACE different from a linter. A linter checks syntax. TRACE checks whether your changes are structurally complete.
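Under the hood, a check like this can be as simple as comparing content hashes captured at gate start and gate end: if the anchor's hash changed but a consumer's didn't, that consumer is stale. Here's a minimal sketch of the idea (my approximation, not TRACE's actual implementation; the types and function names are mine):

```typescript
import { createHash } from "crypto";

interface Anchor {
  id: string;
  file: string;
  consumers: string[];
}

// path -> sha256 of file contents, captured at gate start / gate end
type Snapshot = Record<string, string>;

const sha256 = (content: string) =>
  createHash("sha256").update(content).digest("hex");

// Report every anchor that changed between snapshots while at least
// one of its consumers did not change at all.
function findConsumerDrift(anchors: Anchor[], before: Snapshot, after: Snapshot) {
  const changed = (path: string) => before[path] !== after[path];
  return anchors
    .filter((a) => changed(a.file))
    .map((a) => ({ id: a.id, stale: a.consumers.filter((c) => !changed(c)) }))
    .filter((r) => r.stale.length > 0);
}

// Simulated session: the anchor and one consumer changed, two did not.
const anchors: Anchor[] = [
  {
    id: "user_model",
    file: "src/models/user.ts",
    consumers: [
      "src/services/auth.service.ts",
      "src/middleware/validation.ts",
      "docs/api/users.md",
    ],
  },
];
const before: Snapshot = {
  "src/models/user.ts": sha256("interface User { name: string }"),
  "src/services/auth.service.ts": sha256("auth v1"),
  "src/middleware/validation.ts": sha256("validation v1"),
  "docs/api/users.md": sha256("docs v1"),
};
const after: Snapshot = {
  ...before,
  "src/models/user.ts": sha256("interface User { name: string; email: string }"),
  "src/services/auth.service.ts": sha256("auth v2"),
};

const drift = findConsumerDrift(anchors, before, after);
// drift: [{ id: "user_model",
//           stale: ["src/middleware/validation.ts", "docs/api/users.md"] }]
```

A real tool would hash files on disk rather than strings, but the core logic is the same: drift is a property of pairs of files over time, which is exactly what linters don't look at.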

Two ways to start

New project: trace init creates everything with full enforcement from day one.

Existing project: trace scan does a 4-phase analysis. It scans files, identifies likely anchors from import graphs, detects existing test infrastructure and quality tools, and auto-calibrates complexity thresholds. Gates default to "warn" mode. Pre-existing issues get a baseline pass; only new code is enforced.
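To get a feel for the anchor-identification phase: inferring anchors from an import graph amounts to counting in-degrees, since a file that many others import is a likely source of truth. A rough sketch of that idea (again my approximation, not TRACE's code; a real scan would also resolve relative specifiers to file paths):

```typescript
// Count how many files import each module specifier; anything imported by
// at least `minConsumers` files is flagged as a likely anchor.
function inferAnchors(
  files: Record<string, string>, // path -> source text
  minConsumers = 2,
): string[] {
  const inDegree = new Map<string, number>();
  const importRe = /from\s+["']([^"']+)["']/g;
  for (const source of Object.values(files)) {
    for (const match of source.matchAll(importRe)) {
      const target = match[1];
      inDegree.set(target, (inDegree.get(target) ?? 0) + 1);
    }
  }
  return [...inDegree.entries()]
    .filter(([, count]) => count >= minConsumers)
    .sort((a, b) => b[1] - a[1])
    .map(([specifier]) => specifier);
}

const likely = inferAnchors({
  "src/services/auth.service.ts": `import { User } from "../models/user";`,
  "src/controllers/user.controller.ts": `import { User } from "../models/user";`,
  "src/middleware/validation.ts":
    `import { User } from "../models/user";\nimport { log } from "../util/log";`,
});
// likely → ["../models/user"]  (imported by 3 files; the logger only by 1)
```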

This is the "Clean as You Code" approach borrowed from SonarQube. It turns out it works just as well for structural coherence.

What else it does

22 commands total. The ones I use most:

trace impact user_model     # Blast radius before coding
trace checkpoint            # Crash recovery + auto-observation
trace plan add "Auth refactor" --priority high
trace ci --json             # PR-scoped analysis for CI/CD
trace license               # Dependency license compliance
trace validate              # Config check with typo detection

The planning system is a YAML Kanban board. No infrastructure, no SaaS. Just a file in your repo. trace plan release turns completed items into formatted release notes.
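For a sense of what "no infrastructure" means in practice, a plan file in this style could look roughly like the following (field names are illustrative, not TRACE's actual schema):

```yaml
plan:
  - title: Auth refactor
    priority: high
    status: in_progress
  - title: Update cart controller for new user model
    priority: medium
    status: done
```

Because it's just YAML in the repo, it's diffable, reviewable in PRs, and readable by the AI as part of the generated session context.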

The numbers

4,379 lines across 18 source files. One dependency (yaml). 37 tests. 48 KB compressed. Zero network calls. Runs entirely offline.

Token overhead: about 1,000 to 2,000 per session. But it prevents the 50,000+ token debugging sessions that happen when drift goes undetected.

What I learned building it

The biggest insight: AI tools are excellent at local reasoning (this file, this function) but have no mechanism for global reasoning (across files, across sessions). TRACE fills that gap.

The second insight: "Clean as You Code" is the only adoption strategy that works for existing projects. Nobody will stop everything to retrofit coherence checks. But if you only enforce rules on new code, people adopt it naturally.

If you're using AI tools to write code and have ever been burned by cross-file drift, give it 5 minutes.