
What breaks when you apply AI development patterns to a codebase nobody fully understands
The emerging patterns for AI-assisted development are written for teams with clean codebases and good documentation. Most Australian enterprises don't have that. Here's how to adapt the patterns when the system is eight years old and the original engineers are gone.
Thoughtworks recently published a useful series on AI-assisted development through Martin Fowler's site. Knowledge priming, context anchoring, design-first collaboration. Worth reading. Also written for a codebase that knows what it is.
Most Australian enterprises don't have that codebase.
One organisation we work with has over a thousand production systems. Average age: five to seven years. Average engineer tenure: shorter than that. The software is being maintained by people who didn't write it. In some cases, by engineers who were in high school when the original architectural decisions were made. Nobody is embarrassed about this. It is simply the condition of having been in business long enough.
So when you sit down to apply these patterns to a system like that, things break in specific and predictable ways. Here is what we have learned.
Knowledge priming hits a wall immediately
The premise of knowledge priming is that you create versioned context files (architecture overview, naming conventions, anti-patterns) and load them before each session. The problem is that nobody has written this down. Or what has been written down describes a system that no longer exists. Or it describes what the architect intended, not what the developers actually built.
The first time a team tries to build a priming document on a legacy system, they discover the archaeology project they are actually in. This is valuable. It is also not what anyone planned for in the sprint.
What works better: build the priming document from the tests, not the documentation. If the codebase has reasonable test coverage, the tests describe the actual behaviour as it exists today. Feed the model the test suite and ask it to generate the priming document from there. The result is grounded in what the system does rather than what someone hoped it would do.
Context anchoring goes from productivity tool to survival mechanism
In a well-understood codebase, keeping a context anchor is a nice habit. In a legacy system, it is essential. Without it, every new session starts with the model defaulting to generic patterns from its training data, and you spend the first half hour unwinding suggestions that contradict decisions the team made last week.
More importantly: in a legacy system, the reasoning behind architectural decisions often exists only in the head of one or two senior engineers. When those engineers leave, the reasoning leaves with them. The codebase becomes progressively more fragile as subsequent decisions are made without it.
A context anchor that captures the why behind legacy decisions, even retrospectively inferred from the code, is a form of institutional memory the organisation has never had before. That is worth treating seriously.
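One workable shape for that anchor is a versioned file of decision entries, each recording what the code does, the inferred why, and how confident anyone is in that inference. The entry below is entirely hypothetical content, sketched to show the shape rather than prescribe a format:

```markdown
<!-- CONTEXT_ANCHOR.md (excerpt -- hypothetical example) -->

## Decision: orders service keeps its own copy of customer addresses
- What the code does: duplicates address rows at order creation time
  instead of joining against the customer service.
- Why (inferred, low confidence): order records must stay immutable for
  audit purposes; the duplication predates the customer service API.
- Do not: "fix" the duplication without resolving the audit requirement.
- Source: inferred from code, 2019 migration scripts; no surviving author.
```

Loading a file like this at the start of each session is what keeps the model from re-litigating settled decisions, and the confidence labels tell future readers which entries are institutional memory and which are archaeology.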
Design-first conversation becomes a diagnosis
The original pattern is about aligning on design before touching implementation. In a legacy context, the design conversation tends to surface something uncomfortable: nobody in the room has a coherent mental model of the system they are about to modify.
This is actually the most valuable thing the pattern does in a legacy context. The model will find inconsistencies the team has been stepping around for years. The discipline is to stay in that conversation rather than rush to implementation once it gets awkward.
The teams that have done this well went in expecting a long investment phase. They planned for understanding before productivity. The teams that struggled expected output from the first sprint and got frustrated when the early sessions produced more questions than code.
That frustration is information. It is telling you something about the system that was always true and that you were not previously equipped to see.
InfoQ's piece on the oil and water moment in AI architecture frames this well: deterministic legacy systems and non-deterministic AI behaviour create a genuine architectural mismatch that cannot be papered over. The patterns that work are the ones that acknowledge the mismatch and work around it rather than pretending the codebase is something it is not.