
Non-engineers are in your codebase now. Here's what actually happens.
A major Australian retailer gave marketing and purchasing staff AI access to the codebase. Not to write code. To ask questions. What happened next was not what anyone planned for.
The use case seemed modest. The retailer started issuing AI coding tool licences to non-engineering staff. Marketing analysts. Purchasing managers. The idea was simple: let them ask questions about the codebase in plain English and reduce the volume of interruptions landing on engineering teams.
How does the search ranking work? What factors drive product recommendations? Why does the price for this SKU display differently in two places?
The interruption volume dropped. That was expected. Everything else was not.
The first surprise was that marketing staff started finding inconsistencies the engineering team had either not noticed or had quietly parked. When an analyst can ask the codebase why two product categories have different pricing logic and get a coherent answer in seconds, they ask. When the answer is "because these two features were built three years apart by different teams and were never reconciled," they escalate. Several issues surfaced this way that had existed for years without anyone on the business side having the language to raise them precisely.
The second surprise was better requirements. Purchasing managers who could inspect the logic behind inventory alerting wrote sharper briefs for changes to that logic. They knew what the system already did. They knew what they were actually asking for. The clarification cycle between business and engineering shortened significantly.
Neither of these was in the business case.
Then the things that did not go to plan.
The assumption that non-engineers would only read and not write did not hold. Some staff members, having understood the system well enough to describe a change in plain English, asked the model to generate the code. Some of them submitted it as pull requests. The review process caught most of it, but it created a question the team had not worked through: what is the policy for AI-generated code submitted by someone without engineering accountability?
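One way to make that policy concrete rather than aspirational is a merge gate that routes any pull request from a non-engineering account to an engineer who explicitly takes accountability for it. The sketch below is hypothetical, not a description of what the retailer built: the account roster, the approval threshold, and the `pr_merge_allowed` function are all placeholder names for illustration.

```python
# Hypothetical CI merge gate: a minimal sketch, assuming a placeholder
# roster of engineering accounts. PRs from anyone else cannot merge
# until an engineer approves, making that engineer accountable.

ENGINEERING_ACCOUNTS = {"alice", "bob"}   # placeholder roster
REQUIRED_ENGINEERING_APPROVALS = 1

def pr_merge_allowed(author: str, approvers: set[str]) -> bool:
    """Return True if the PR may merge under this policy.

    PRs authored by engineers fall through to the normal review rules.
    PRs authored by anyone else need at least one approval from an
    engineering account.
    """
    if author in ENGINEERING_ACCOUNTS:
        return True  # normal review rules apply elsewhere in the pipeline
    engineering_approvals = approvers & ENGINEERING_ACCOUNTS
    return len(engineering_approvals) >= REQUIRED_ENGINEERING_APPROVALS

# Example: a purchasing manager's AI-generated PR, approved only by a peer
print(pr_merge_allowed("carol", {"dave"}))   # False: no engineering sign-off
print(pr_merge_allowed("carol", {"alice"}))  # True: alice is now accountable
```

The point of the gate is not to block contributions. It is to make sure every merged change has a named engineer behind it, whoever drafted the code.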
There is also a real security surface here. An AI tool with codebase access can surface implementation details, data structures, and business logic that are commercially sensitive. Read access via a supervised AI interface is not the same as unrestricted repository access, but that distinction needs to be actively designed. Assuming it holds by default is not a governance strategy.
The teams handling this well approached it as an access design problem from the start. Which systems are in scope? What can the model surface? What happens when a question reveals something the person asking was not supposed to know? These are solvable questions, but they require the same rigour as any other access control decision.
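Treating it that way means the scope is written down and testable. Here is a minimal sketch of that idea, assuming hypothetical repository names and path patterns, none of which come from the retailer's setup: every file is checked against an explicit allowlist and denylist before it can enter the model's context.

```python
# Hypothetical access-scope filter: a minimal sketch under assumed names.
# Before any file reaches the model's context, check it against an explicit
# allowlist of in-scope repositories and a denylist of sensitive paths.

import fnmatch

IN_SCOPE_REPOS = {"storefront", "recommendations"}   # placeholder scope
DENYLIST_PATTERNS = [                                # placeholder patterns
    "*/secrets/*",
    "*.env",
    "*/payments/*",   # commercially sensitive logic stays out of scope
]

def model_may_read(repo: str, path: str) -> bool:
    """Return True if the AI tool may surface this file's contents."""
    if repo not in IN_SCOPE_REPOS:
        return False
    return not any(fnmatch.fnmatch(path, p) for p in DENYLIST_PATTERNS)

# A question about search ranking resolves to in-scope code...
print(model_may_read("storefront", "src/search/ranking.py"))   # True
# ...while payment internals are filtered before the model sees them.
print(model_may_read("storefront", "src/payments/fees.py"))    # False
```

The design choice that matters is the default: anything not explicitly in scope is out, so a new repository stays invisible to the tool until someone decides otherwise.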
What is clear is that the boundary between people who understand a system and people who do not is moving. Not because non-engineers are becoming engineers. Because the tools that previously required engineering expertise to operate are becoming usable without it.
The question is not whether this happens in your organisation. It is already happening. The question is whether someone is designing it.
The broader shift has a name. InfoQ's work on decentralising architectural decisions argues that architecture needs to move closer to the people doing the work, with a structured advice process rather than centralised gatekeeping. Giving non-engineers AI access to the codebase is, in practice, a step in that direction, whether it was designed as one or not. The organisations that treat it as an architecture decision tend to fare better than the ones that treat it as a tooling rollout.