Last month I asked Claude to refactor the user model in a project I was building. It did a fantastic job. New type-safe interfaces, clean separation of concerns, proper validation. I committed the changes and moved on to the next feature.

Two sessions later, the auth service broke. Then the cart controller. Then the API documentation started returning descriptions of fields that no longer existed.

The AI had done exactly what I asked. It changed the user model. What it couldn't possibly know was that six other files depended on that model. It couldn't know because AI tools don't maintain a map of your codebase's structural dependencies. They see the file you're working on. Everything else is invisible.

This wasn't a one-time thing. I started paying attention, and the same pattern showed up in almost every multi-session project.

Three failure modes, one root cause

After I tracked this across 40+ development sessions, the failures clustered into three categories.

Files that depend on each other get out of sync. You change a source of truth. The files that consume it don't update. They silently break or, worse, silently produce wrong results.

Context vanishes between sessions. Your session crashes, you hit a token limit, or you simply come back the next day. The AI doesn't remember what you decided yesterday, what was half-finished, or which part of the codebase is in a fragile state. You spend the first 20 minutes of every session re-explaining where you left off.

Quality erodes invisibly. A test gets removed during a refactor. A file grows from 200 lines to 600. Documentation falls a week behind the code. There's no mechanism to catch these because nobody is tracking the trajectory of the project's structural health.

The root cause is the same for all three: AI tools are stateless. They have no memory, no dependency graph, and no awareness of your project's structural contracts. That's not a criticism. It's an architectural reality. But it means the structural integrity of your codebase depends entirely on you remembering everything. And you won't.

What would actually fix this

I spent a while thinking about what a solution would look like. Not a better AI. Not a smarter prompt. A system that sits alongside the AI and handles the things AI can't.

It would need to know which files are sources of truth and which files depend on them; check that changes are structurally complete before you close a session; survive crashes and token limits without losing project state; give the AI focused context instead of making it guess; work with any AI tool, any language, and any CI system; and add minimal overhead.

So I built it.

TRACE: structural coherence for AI-augmented codebases

TRACE is a command-line tool. You install it with npm, declare your project's structural relationships in a YAML file, and then start and end your development sessions through TRACE's gates.
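To make "declare your project's structural relationships" concrete, here is a sketch of what such a YAML file might look like. Every field name below is illustrative, invented for this example, and not TRACE's actual schema; the file paths come from the story at the top of this post.

```yaml
# trace.yaml — illustrative sketch only, not TRACE's real schema.
# Anchors are sources of truth; consumers must be revisited when an anchor changes.
anchors:
  - path: src/models/user.ts
    consumers:
      - src/services/auth.ts
      - src/controllers/cart.ts
      - docs/api/user.md
thresholds:
  max_file_lines: 400
```

The point of a declaration like this is that it is machine-checkable: a gate can diff it against what actually changed in a session.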

Running gate start verifies your project's integrity, checks for unresolved issues, and generates a focused context file for your AI tool. Running gate end checks whether your session's changes are structurally complete: did every modified source of truth have its dependents updated? Did quality metrics stay within thresholds? Were docs updated?
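The core of that end-of-session check is simple enough to sketch in a few lines. This is my own minimal illustration of the staleness detection described above, not TRACE's internals; the function name and data shapes are invented.

```javascript
// Illustrative sketch of a gate-end staleness check (not TRACE's actual code).
// `anchors` maps each source-of-truth file to the files that consume it.
// `modifiedFiles` is the list of files touched during the session.
function findStaleConsumers(anchors, modifiedFiles) {
  const modified = new Set(modifiedFiles);
  const stale = [];
  for (const [anchor, consumers] of Object.entries(anchors)) {
    if (!modified.has(anchor)) continue; // anchor untouched: nothing to check
    for (const consumer of consumers) {
      // A changed anchor with an untouched consumer is a structural gap.
      if (!modified.has(consumer)) stale.push({ anchor, consumer });
    }
  }
  return stale;
}

const anchors = {
  "src/models/user.ts": ["src/services/auth.ts", "src/controllers/cart.ts"],
};
// The session changed the model and auth, but forgot the cart controller.
console.log(findStaleConsumers(anchors, ["src/models/user.ts", "src/services/auth.ts"]));
```

"Stale" here doesn't mean the consumer is definitely broken, only that nobody looked at it after its source of truth moved. That's exactly the gap the opening anecdote fell into.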

The most important part was getting the adoption story right. Nobody is going to stop their work and retrofit structural checks into an existing codebase. So TRACE has two tracks.

For new projects, you get full enforcement from the start. For existing projects, TRACE scans your codebase, auto-detects dependencies and quality tools, and creates a baseline. Only new and modified code is enforced. Over time, as your team naturally touches old files, they come into compliance organically.
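The "only new and modified code is enforced" rule also reduces to a small predicate. This sketch is my own illustration of the baselining idea, with invented names, assuming the baseline is simply the set of files that existed when TRACE was adopted.

```javascript
// Illustrative sketch of baseline-based enforcement (not TRACE's actual code).
// `baseline` is the set of files present at adoption time; they are
// grandfathered until someone touches them.
function shouldEnforce(file, baseline, modifiedFiles) {
  const grandfathered = baseline.has(file);
  const touched = modifiedFiles.includes(file);
  // New files are always enforced; legacy files only once they change.
  return !grandfathered || touched;
}

const baseline = new Set(["legacy/old.js"]);
console.log(shouldEnforce("legacy/old.js", baseline, []));                // false
console.log(shouldEnforce("legacy/old.js", baseline, ["legacy/old.js"])); // true
console.log(shouldEnforce("src/new.js", baseline, []));                   // true
```

This is what makes adoption incremental: compliance spreads along the edges your team was going to touch anyway.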

The overhead question

TRACE adds about 1,000 to 2,000 tokens of overhead per session. But one undetected drift incident costs you an entire debugging session: 30,000 to 50,000 tokens of the AI trying to figure out why things don't work, re-reading files, tracing dependency chains, fixing cascading breakage. TRACE's overhead pays for itself the first time it catches a stale consumer.

What surprised me

Two things surprised me while building this.

First, the act of declaring your anchors and consumers forces you to think about your architecture in a way that AI-assisted development usually skips. When you're building fast with an AI tool, you don't naturally stop and ask "what depends on this?" TRACE makes that question part of the process.

Second, crash recovery turned out to be more valuable than I expected. TRACE takes checkpoints that capture which files changed, classifies them, and writes it all to a log. When your session crashes, the next gate start knows exactly where you were.

22 commands. One dependency. Runs entirely offline. Works with any language. If you're building with AI tools and your projects span more than a single session, you've probably experienced the problems TRACE solves.