
Building a Business of AI, Not Just with AI
The technology problem is largely solved. The models work. The protocols are maturing. What remains is considerably harder: rebuilding enterprise operating models around intelligence as a core capability, not bolting it onto the side.
Every enterprise I talk to has an AI strategy. Most of them are the same strategy. Add a copilot here. Automate a workflow there. Stand up a centre of excellence. Measure cost savings. Report to the board that AI adoption is underway.
It is underway. And it is not enough.
The companies that will define the next decade are not the ones adding AI to their existing business. They are the ones rebuilding their business around AI as a core operating capability. The difference between those two things is the difference between installing solar panels and redesigning the grid.
The distinction that matters
Most organisations are still asking the wrong question. They ask: "How can we use AI in our business?" The right question is: "How do we become a business of AI?"
The difference is not semantic. A business with AI bolts intelligence onto existing processes. It adds a chatbot to customer service. It automates a report that someone used to write by hand. It treats AI as a feature, owned by the IT department, measured by cost reduction. The gains are real but incremental, and they plateau quickly.
A business of AI is something else entirely. It rebuilds its operating model around intelligence as a core capability. It does not automate existing workflows. It asks which workflows should exist at all when the cost of intelligence approaches zero. It treats data not as a byproduct of operations but as the fuel for a compounding system where each new use case makes every other use case smarter.
The companies that will dominate the next decade are not adding AI to their processes. They are rebuilding around it.
What changed in the last twelve months
To understand why this matters now, you have to understand what changed. And what changed is not one thing but three things happening at once.
The models crossed a quality threshold that engineers respect
Late 2025 was the turning point. When the creators of Redis and Python began publicly using AI coding tools, something shifted in the engineering culture. Before that moment, senior engineers could dismiss AI code generation as a toy. After it, dismissal became harder to defend. At one organisation I am familiar with, AI-assisted commits now account for 40 to 60 per cent of code written. Their QA function went from a full team to two people. Production incident rates held steady.
Claude's trajectory tells the story in compressed form. Anthropic released Claude Opus 4 in May 2025, establishing a new frontier in coding and agentic capability. Opus 4.1 followed in August as a precision update for real-world engineering tasks. By November, Opus 4.5 had reclaimed the coding benchmark lead with 80.9 per cent on SWE-bench Verified. With the release of Opus 4.6 in February 2026, the model could sustain autonomous work for over fourteen hours, long enough that a team of sixteen Opus 4.6 agents wrote a C compiler in Rust from scratch, one capable of compiling the Linux kernel. Google's Gemini followed a parallel arc: Gemini 3 launched with state-of-the-art reasoning, Gemini 3.1 Pro arrived in February 2026 leading on twelve of eighteen tracked benchmarks, and the pricing (two dollars per million input tokens) made frontier intelligence accessible at commodity economics.
This is not a hardware refresh. It is a phase change. The models are now good enough, cheap enough and reliable enough that the binding constraint on enterprise AI has moved decisively from capability to organisation.
The protocol layer matured
The Model Context Protocol (MCP) went from Anthropic's internal experiment to what Nvidia's Jensen Huang called a technology that "completely revolutionised the agentic AI landscape." Google announced fully managed MCP servers across BigQuery, Compute Engine, Kubernetes Engine and Maps. Microsoft embedded Claude in Microsoft 365 Copilot. The broader signals confirm it: OpenAI's recent funding round locked in AWS as the exclusive distributor of its Frontier agent platform, a structural bet that agentic AI moves from experiment to enterprise infrastructure this year.
The significance is not the protocol itself but what it enables: a standard way for AI agents to discover and use tools. Agents can now operate across enterprise systems without bespoke integration for each one. This is the moment the AI ecosystem acquired something like the composability that made the web powerful. A single agent can now query your data warehouse, check your calendar, draft a message in Slack and file a Jira ticket. Not because someone wrote code for each of those integrations, but because the tools expose themselves through a common protocol that the agent can discover at runtime.
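The runtime-discovery idea can be sketched in miniature. The following is an illustrative toy, not the actual MCP SDK: every name here (the registry, `expose_tool`, the stubbed tools) is an assumption made up for the sketch. The point it demonstrates is the one above: an agent inspects what tools exist and what inputs they take at runtime, rather than someone writing bespoke integration code for each pair of systems.

```python
# Toy stand-in for an MCP-style tool registry. Each tool publishes a
# name, a description and a JSON-schema-like input spec that an agent
# can discover at runtime. Illustration only; not the real MCP SDK.
REGISTRY = {}

def expose_tool(name, description, input_schema):
    """Register a tool so any agent can discover it at runtime."""
    def decorator(fn):
        REGISTRY[name] = {
            "description": description,
            "input_schema": input_schema,
            "handler": fn,
        }
        return fn
    return decorator

@expose_tool("warehouse.query",
             "Run a read-only query against the data warehouse",
             {"sql": "string"})
def warehouse_query(sql):
    # Stubbed result standing in for a real warehouse response.
    return [{"client": "Acme", "open_projects": 3}]

@expose_tool("jira.create_ticket",
             "File a ticket in Jira",
             {"summary": "string", "project": "string"})
def jira_create_ticket(summary, project):
    return {"key": f"{project}-101", "summary": summary}

def discover_tools():
    """What an agent sees: names and schemas, no integration code."""
    return {name: {"description": t["description"],
                   "input_schema": t["input_schema"]}
            for name, t in REGISTRY.items()}

def call_tool(name, **kwargs):
    """Invoke a discovered tool by name with schema-shaped arguments."""
    return REGISTRY[name]["handler"](**kwargs)
```

An agent that has never seen these tools before can call `discover_tools()`, read the schemas, and then invoke `call_tool("jira.create_ticket", summary="Follow up with Acme", project="OPS")`. Swapping a tool's implementation changes nothing on the agent side, which is the composability the paragraph above describes.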
Australia specifically reached an inflection
The Anthropic Economic Index, released this week, shows Australia using Claude at more than four times the rate its population would suggest. New South Wales and Victoria account for nearly 70 per cent of national adoption, driven not by mining wealth or government spend but by the concentration of finance, professional services and technology workers in Sydney and Melbourne. Australian users are spreading their usage across a broader range of tasks than the global average. Less coding, more management, administration and professional communication.
The Australian government has now signed a memorandum of understanding with Anthropic on AI safety research and economic data tracking, with Dario Amodei meeting the Prime Minister in Canberra. This is not a technology announcement. It is a structural signal that AI adoption in Australian enterprise has crossed the point where government considers it a matter of economic policy.
The Headless Enterprise: a pattern for building a business of AI
If the technology problem is largely solved, what remains is architecture. Not just software architecture, but the architecture of how a business operates. I want to describe a pattern I have been seeing emerge in organisations that are making the shift well. I am calling it the Headless Enterprise, borrowing the term from the headless CMS concept that reshaped web development a decade ago.
The original headless insight
The headless CMS separated content from presentation. Instead of a monolithic system that managed both what was stored and how it was displayed, the headless approach created an API-first content layer that any frontend could consume. This separation of concerns unlocked an explosion of innovation in digital experience delivery.
The Headless Enterprise applies the same principle to CRM, ERP and every other enterprise system that has traditionally been a monolith combining data, logic and interface into a single, tightly coupled package.
Why ERP and CRM are ripe for decomposition
We wrote about ERP hostageware back in 2018. The problems we identified then (cumbersome implementations, glacial update cycles, vendor lock-in so severe we called it Stockholm Syndrome) have not gone away. They have intensified. The ERP vendor that sued customers for connecting Salesforce to their data is still in business. The average enterprise still runs core processes on systems designed when video rental stores existed.
But something fundamental has changed. In 2018, decomposing a monolithic ERP was a multi-year programme that required building custom integrations for every system that needed to talk to every other system. The cost was prohibitive and the risk was enormous. In 2026, with MCP-compatible agents that can discover and orchestrate tools at runtime, the economics of decomposition have shifted radically.
The pattern
The Headless Enterprise pattern has four layers:
The Intelligence Layer sits at the centre. This is not a single model but a coordination layer: an orchestrator that routes decisions to the appropriate combination of models, tools and human reviewers. It maintains context across interactions and compounds learning over time. Every business decision that flows through the intelligence layer makes the layer smarter, which makes the next decision better. This is the compounding return that distinguishes a business of AI from a business with AI.
The Data Layer treats data as a product, not a byproduct. In a traditional ERP, data is locked inside the application. In the Headless Enterprise, data is exposed through APIs that any system (including AI agents) can consume. The data layer includes not just transactional records but the semantic context that makes those records meaningful. What a customer relationship actually looks like. What a project's real status is. What the second-order effects of a decision might be.
The Capability Layer replaces monolithic applications with composable capabilities. Instead of "our CRM" or "our ERP," the organisation assembles capabilities (customer relationship management, resource planning, financial control, people management) from a mix of best-of-breed services, custom-built components and AI agents. Each capability exposes itself through MCP or equivalent protocols. Each can be replaced, upgraded or augmented independently.
The Experience Layer is where humans interact with the system, but it is no longer the only layer that matters. In a traditional enterprise, the UI is the application. In the Headless Enterprise, the experience layer is one of several consumers of the intelligence and data layers. An AI agent scheduling a meeting, processing an invoice or triaging a support ticket is also a consumer. The experience layer becomes thinner and more adaptive. It asks the intelligence layer what to show, rather than hardcoding business logic into forms and workflows.
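The wiring between the layers can be sketched in a few lines. This is a hedged illustration of the pattern, not a reference implementation: the data, the capabilities, the confidence score and the review threshold are all invented for the sketch. What it shows is the key inversion in the Experience Layer: the UI asks the intelligence layer what to show instead of hardcoding business logic into forms.

```python
# Minimal sketch of the four-layer pattern. All names, records,
# scores and thresholds are illustrative assumptions.

# Data Layer: data exposed as a product, keyed by (domain, entity),
# carrying semantic context rather than raw transactional rows.
DATA_LAYER = {
    ("crm", "acme"): {"relationship": "strategic", "last_contact_days": 12},
    ("finance", "acme"): {"budget_remaining": 140_000},
}

# Capability Layer: composable capabilities, each independently
# replaceable, each consuming the data layer rather than owning data.
CAPABILITIES = {
    "client_lookup": lambda data: data[("crm", "acme")],
    "budget_check": lambda data: data[("finance", "acme")]["budget_remaining"] > 0,
}

def intelligence_layer(question, data, capabilities):
    """Coordination layer: routes the question to capabilities,
    attaches a confidence score and flags human review when low."""
    client = capabilities["client_lookup"](data)
    in_budget = capabilities["budget_check"](data)
    confidence = 0.9 if in_budget else 0.5
    return {
        "question": question,
        "recommendation": "proceed" if in_budget else "escalate",
        "reasoning": [f"relationship={client['relationship']}",
                      f"in_budget={in_budget}"],
        "confidence": confidence,
        "needs_human_review": confidence < 0.7,
    }

def experience_layer(question):
    """Thin, adaptive UI: asks the intelligence layer what to show
    rather than hardcoding business logic into forms and workflows."""
    decision = intelligence_layer(question, DATA_LAYER, CAPABILITIES)
    flag = " (review required)" if decision["needs_human_review"] else ""
    return f"{decision['recommendation']}{flag} [confidence {decision['confidence']:.0%}]"
```

Calling `experience_layer("Can we take on the Acme engagement?")` returns `"proceed [confidence 90%]"` in this toy. Note that an AI agent could consume `intelligence_layer` directly, bypassing the experience layer entirely, which is exactly the point: the human interface is one consumer among several.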
An example in practice
Consider what this looks like for a professional services firm, a pattern I have been observing at close range.
In the traditional model, the firm runs a CRM for client relationships, a resource management tool for staffing, a project management system for delivery and a finance system for billing. Each is a separate application with its own data model, its own interface and its own update cycle. When a partner wants to know whether they can take on a new engagement, they need to check the CRM for the client history, the resource system for team availability, the project system for current commitments and the finance system for budget constraints. In practice, they call three people and wait two days.
In the Headless Enterprise, the partner asks a question in natural language. An orchestrating agent queries the data layer across all four domains through MCP-compatible interfaces, synthesises the result and presents a recommendation. The reasoning is exposed, the confidence levels explicit, and the human override always available. The agent does not replace any of the underlying systems. It replaces the coupling between them, which was always the real problem.
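A toy sketch of that orchestrating agent makes the shape concrete. The four domain queries are stubs standing in for MCP-compatible interfaces, and every value and function name here is an assumption for illustration. What matters is the contract of the result: a recommendation with its reasoning exposed, its confidence explicit and the human override always present.

```python
# Stubs standing in for MCP-compatible queries into four separate
# domain systems. Real implementations would call live services.
def query_crm(client):      return {"history": "8-year relationship", "nps": 62}
def query_resourcing(team): return {"available_seniors": 2, "available_juniors": 5}
def query_projects(team):   return {"active_engagements": 4}
def query_finance(client):  return {"credit_ok": True}

def can_take_engagement(client, team):
    """Synthesise one answer across four domains. The agent replaces
    the coupling between systems, not the systems themselves."""
    facts = {
        "crm": query_crm(client),
        "resourcing": query_resourcing(team),
        "projects": query_projects(team),
        "finance": query_finance(client),
    }
    feasible = (facts["resourcing"]["available_seniors"] >= 1
                and facts["finance"]["credit_ok"])
    return {
        "recommendation": "accept" if feasible else "decline",
        "confidence": 0.85 if feasible else 0.6,
        "reasoning": facts,           # exposed, not hidden
        "override_available": True,   # the human can always overrule
    }
```

The two days of phone calls collapse into one function call, and replacing any of the four underlying systems only means re-pointing one stub.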
At Kablamo, we have been building exactly this kind of system for our own operations. Our internal agents, built on Claude and orchestrated through MCP, span CRM, resourcing, project management, content operations, sales intelligence and strategic account planning. They are not separate tools bolted onto existing processes. They are the operational fabric of the business, each one making every other one more effective because they share a common intelligence and data layer.
The result is not incremental efficiency. It is a qualitatively different way of operating. A partner preparing for a client meeting gets a briefing that synthesises CRM data, recent project delivery metrics, competitive intelligence and relevant case studies. Assembled in minutes, not days. Improving in quality with every interaction because the system learns what a good briefing looks like.
The adoption paradox and the governance gap
If this architecture is sound, and I believe it is, why is adoption uneven? The answer is captured in an observation from a recent Melbourne roundtable of technology leaders: there is an inverse relationship between a person's skill in any discipline and their willingness to use AI to help with that discipline.
A senior engineer who uses Claude to plan a holiday will hold it to a completely different standard the moment it touches their codebase. The more someone has invested in mastering a domain, the higher the bar they set. This is not irrational. It is a reasonable response to uncertainty about reliability. But it creates a specific problem for organisations running top-down transformation programmes, because the people they need to lead adoption are the people most likely to resist it.
The organisations making progress are the ones that focus on individual benefit rather than business benefit. People adopt AI when it helps them, not when it helps the organisation. The personal tipping point cannot be manufactured. It has to be experienced.
And then there is governance. One participant at the Melbourne gathering was running sixteen personal AI agents alongside their daily work. Another had a dedicated Slack channel for bots, quarantined from human channels. A third raised the security implications: agents can interact with each other in ways that create vulnerabilities nobody anticipated.
The analogy the room settled on was shadow IT. Every department building its own Access database in the 1990s, until one critical process broke when the person who built it left. Agent proliferation is the same pattern at higher speed and higher stakes.
The Headless Enterprise pattern addresses this directly, because centralising the intelligence and data layers creates natural governance boundaries. Agents operate within a framework that controls what data they can access, what actions they can take and when a human must be in the loop. This is not governance imposed after the fact. It is governance built into the architecture.
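Governance built into the architecture can be as simple as a single policy gate that every agent action passes through at the intelligence layer. The sketch below is an assumption-laden illustration (the agent names, scopes and rules are invented), but it shows the three controls named above: what data an agent can touch, what actions it can take, and when a human must be in the loop.

```python
# Illustrative policy gate at the intelligence layer. Every agent
# action is authorised here; agent names, scopes and rules are
# assumptions for the sketch.
POLICIES = {
    "briefing-agent": {
        "data_scopes": {"crm", "projects"},
        "allowed_actions": {"read", "draft"},
        "human_in_loop_for": {"send_external"},
    },
}

def authorise(agent, action, data_scope):
    """Return (allowed, needs_human). Unknown agents are denied
    outright, which is what stops shadow-IT-style proliferation."""
    policy = POLICIES.get(agent)
    if policy is None:
        return (False, False)           # unregistered agent: no access
    if data_scope not in policy["data_scopes"]:
        return (False, False)           # data boundary enforced centrally
    if action in policy["human_in_loop_for"]:
        return (True, True)             # allowed, but a human signs off
    return (action in policy["allowed_actions"], False)
```

A sixteen-agent personal fleet built outside this gate simply cannot touch governed data, while a registered agent gets its human-in-the-loop checkpoints by construction rather than by after-the-fact audit.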
The measurement problem
When the conversation at these gatherings turns to ROI, something uncomfortable invariably surfaces. Most organisations have committed to AI efficiency targets. Almost none have a baseline to measure against.
The board wants 50 per cent productivity improvement. Engineering is deploying tools. But when the review comes around, nobody can demonstrate the gain, even when the team has genuinely become more productive, because the measurement framework does not exist. The efficiency that does materialise tends to create new demand rather than reduce cost. Build software faster and you generate more customer requests. The gain is real and invisible at the same time.
The Anthropic Economic Index provides the first large-scale empirical framework for understanding these dynamics. Its finding that AI usage leans 57 per cent toward augmentation and 43 per cent toward automation suggests the immediate economic impact is more about making existing workers more capable than about replacing them. More experienced users attempt higher-value tasks and are more likely to get successful outcomes, a learning curve that rewards sustained investment in adoption.
For Australian enterprises specifically, the Index reveals something instructive: the country's usage pattern skews toward management, administration and professional communication rather than pure coding. This suggests that Australian businesses are further along in discovering AI's value beyond software engineering. Exactly the kind of broad-based adoption that distinguishes a business of AI from a business that has merely given its developers a coding assistant.
The companies that will not survive
The sharpest insight from the Melbourne conversation came when someone pointed out that many companies simply do not need to exist anymore. Train companies did not become airlines. Traditional retailers did not build the winning online stores. The people who understood the old model best were often the least able to let go of it.
One organisation at the table has a board mandate to reduce headcount by 50 per cent within five years. The first five per cent is straightforward. The next fifteen requires genuine transformation. The last thirty requires rebuilding the business around capabilities that do not exist at scale yet.
This is the hard part that the tipping point reveals. The technology works. The protocols are maturing. The economics are favourable. The intelligence is available at commodity pricing. What remains is the organisational will to rebuild. Not to bolt AI onto the side of the existing business, but to reconstruct the business around intelligence as its core operating principle.
The Headless Enterprise is one architectural pattern for doing this. MCP and its successors provide the protocol layer. The current generation of models (Claude Opus 4.6, Gemini 3.1 Pro and their forthcoming iterations) provide the intelligence. The Australian market, with its high adoption rates, diverse usage patterns and emerging government partnership framework, provides a context that is as favourable as any in the world.
The question is not whether this transformation will happen. It is whether you will be the one doing the transforming, or whether you will be transformed by someone else.
The Kablamo AI Round Table is a private forum for technology leaders working through real problems in AI adoption. The next session will be held later in 2026. If you would like to participate, reach out.