
Kablamo launches dedicated AI practice to meet enterprise demand
We're formalising what our teams have been doing for years: helping Australian organisations move from AI experimentation to production systems that deliver real value.
Every system Kablamo has delivered has been, at its core, a data and intelligence project. Not "AI-enhanced." Not "with an AI module." Built around data, machine learning, and intelligent automation from the foundations up. The AI practice we are formalising in 2025 is not new capability. It is a decade of this work given a name and a sharper focus as enterprise demand shifts from experimentation to transformation.
It is worth being specific about what "a decade of AI work" actually means, because the claim is easy to make and hard to back up.
The track record
Bushfire prediction and decision intelligence. When the 2019/2020 Black Summer fires pushed existing emergency technology to its limits, Kablamo built Firestory, a cloud-native platform that ingests vast quantities of geospatial and sensor data and turns them into operational decisions. The platform uses Amazon SageMaker for compute and ML, Bayesian network models for prediction through Phoenix RapidFire, and automated data pipelines built on Amazon Athena and S3. It achieved a 70x improvement in modelling resolution for the Victorian Government. The NSW Rural Fire Service subsequently commissioned Athena, a custom implementation built on this platform (distinct from the AWS service of the same name), to fundamentally shift how incident information is gathered, validated, and turned into decisions. There is nothing else like it in Australia. The platform won the 2021 AWS Global Public Sector Partners Award for Most Innovative AI and ML Solution.
Petabyte-scale data platforms. For a major infrastructure group, Kablamo designed and delivered an AWS data lake engineered to ingest petabytes of data from thousands of real-time feeds. The platform used AWS Textract to extract text and data from over seventy years of documentation, event-driven serverless workloads for processing, and a custom UI and data catalogue to make it usable. This was not a dashboard bolted onto a legacy database. It was a ground-up intelligent system designed to create new revenue streams from previously inaccessible data.
Media archive intelligence. CoDA, built for the ABC, consolidated ninety-one years of broadcast content from five legacy systems into a single searchable platform. It reduced archive retrieval time from weeks to milliseconds, grew to over six terabytes of audio, video, and image content, and has been in continuous production for over seven years. Originally built on Elasticsearch with a Go backend and S3 storage, the system now uses Google Gemini to power next-generation search and content understanding across the archive. CoDA is a living example of a system that was built for intelligence from day one and has continued to evolve as the AI landscape has advanced.
ML-powered transcription for government. For a government agency, Kablamo built DAME, a machine learning-powered transcription platform for investigative interview data. It reduced administrative workload by 50 per cent or more while maintaining the security and access controls that sensitive investigative material demands.
Voice AI for children with cerebral palsy. My Voice Library, built with the Cerebral Palsy Alliance, is a world-first platform for collecting dysarthric voice data to develop assistive technology. Fully serverless, cost-effective, and designed to handle sensitive voice recordings from children under strict security controls. This is AI in the service of people who need it most.
Intelligent financial platforms. For a major Australian bank, Kablamo delivered a fintech platform grounded in research spanning more than 500 user interviews, an intelligent data platform with machine learning capabilities, and the bank's first production AWS workload. From kickoff to closed pilot in twelve months.
Digital asset management. Kablamo's own DAM platform was built as a modular, cloud-based system for streaming, broadcast, corporate enterprise, and law enforcement. Not a licensed product with features bolted on. A system designed around the data and how it moves.
Every one of these projects was, in the language we now use, a business of AI rather than a business with AI. The intelligence was not a feature added to an existing system. It was the reason the system existed.
What changed
The models changed. Claude Opus 4.6 can sustain autonomous work for over fourteen hours. Gemini 3.1 Pro leads on twelve of eighteen tracked benchmarks at two dollars per million input tokens. The Model Context Protocol has given agents a standard way to discover and use tools across enterprise systems. The cost of intelligence has dropped to commodity economics.
This means the kind of work we have been doing for a decade, connecting data to the people and systems that consume it, is now the thing every enterprise needs. The systems we build today use the same architectural instincts as CoDA and Firestory: composable, API-first, designed for change. But they now have access to foundation models that make capabilities possible which would have been science fiction five years ago.
What the practice focuses on
Agentic systems that orchestrate across enterprise tools and data sources. We build these for our own operations, with agents spanning CRM, resourcing, project management, content operations, and strategic account planning, all orchestrated through MCP. The same patterns apply to client work.
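To make the pattern concrete, here is a minimal sketch of the tool-discovery-and-dispatch loop that MCP standardises. This is illustrative only: the `Tool` and `ToolRegistry` classes and the `crm_lookup` tool are hypothetical stand-ins, not the MCP SDK or Kablamo's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[dict], dict]

class ToolRegistry:
    """Hypothetical registry mimicking MCP-style tool discovery."""

    def __init__(self):
        self._tools = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list:
        # An agent calls this first to discover what it can do.
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, args: dict) -> dict:
        # The agent then invokes a chosen tool by name.
        return self._tools[name].handler(args)

registry = ToolRegistry()
registry.register(Tool(
    name="crm_lookup",
    description="Fetch an account record by client name.",
    handler=lambda args: {"account": args["client"], "status": "active"},
))

available = registry.list_tools()
result = registry.call("crm_lookup", {"client": "Acme"})
```

The point of the pattern is that the agent never hard-codes its capabilities: it discovers tools at runtime, which is what lets one orchestration layer span CRM, resourcing, and project-management systems.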
Production RAG and knowledge systems that make enterprise data accessible, searchable, and actionable. Not chatbots. Systems with governance, observability, and the engineering rigour that production demands.
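The retrieval half of a RAG system can be sketched in a few lines. The example below uses bag-of-words cosine similarity as a stand-in for a real embedding model, and the documents and query are invented for illustration; a production system would add governance, provenance logging, and observability around this core.

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    # Toy stand-in for an embedding model: token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Rank documents by similarity to the query, keep the top k.
    q = vectorise(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorise(d)), reverse=True)
    return ranked[:k]

docs = [
    "Quarterly revenue report for the infrastructure division",
    "Onboarding guide for new engineers",
    "Revenue forecast and division budget for next quarter",
]
context = retrieve("division revenue forecast", docs)

# Retrieved passages are injected into the model prompt; logging which
# passages were used is what makes the answer auditable.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Swapping the toy vectoriser for a real embedding model changes the quality of retrieval, not the shape of the system: retrieve, ground, generate, and log every step.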
AI-augmented engineering workflows where AI changes not just what gets built but how. QA functions moving upstream into specification-writing that agents test against continuously. Non-engineers gaining AI-mediated access to codebases. Legacy system comprehension as a prerequisite for transformation, not an afterthought.
Model evaluation and monitoring so AI systems remain accurate as data, requirements, and models change beneath them.
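A minimal version of that monitoring is a rolling accuracy check against a threshold. The window size, threshold, and alerting behaviour below are illustrative assumptions, not a production configuration.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy check; flags drift below a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, label) -> None:
        self.window.append(prediction == label)

    @property
    def accuracy(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def degraded(self) -> bool:
        # Only alert once the window is full, to avoid noisy early signals.
        return (len(self.window) == self.window.maxlen
                and self.accuracy < self.threshold)

monitor = AccuracyMonitor(window=5, threshold=0.8)
for pred, label in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    monitor.record(pred, label)
# Three of five correct: accuracy 0.6, below the 0.8 threshold.
```

In practice the same window pattern applies to proxy signals too, such as retrieval hit rates or user corrections, since labelled ground truth often arrives late.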
The partnerships
This formalisation comes alongside two significant partnerships. Kablamo's long-standing AWS Advanced partnership, underpinned by the Global Public Sector Award, continues to anchor much of our cloud and AI delivery. Our new Google Cloud partnership, formalised in March 2026, extends our capability across GCP's AI, data, and Kubernetes infrastructure, including Vertex AI, BigQuery, and Gemini.
Most of our clients do not live on a single cloud. The ability to operate across both platforms with the engineering depth to make real architectural decisions, rather than vendor-driven ones, is the practical differentiator.
Why engineering, not consulting
There is no shortage of firms offering AI strategy. The gap is in engineering: connecting models to real data, real workflows, and real users inside organisations with legacy systems, regulatory requirements, and the accumulated complexity of having been in business for a long time.
That is where Kablamo has always operated. Every programme we have delivered has been built around intelligence, not decorated with it. The AI practice formalises this because the demand now justifies a name, and because the organisations that move in the next eighteen to twenty-four months will build advantages that compound for years.
If you are looking to move from experimentation to production, get in touch.