What every CIO should know about Cloud

Watch Angus and Allan talk about CIOs, Cloud and oranges or read the full transcript below.

What should every CIO know about Cloud? Well, every CIO should know something about Cloud. I think we can agree on that part.

(laughing).

But, look, I don't have time for... I shouldn't put it that way. I think the era of the CIO who declared "we're a Cloud-first business" just because everybody else was doing it, I think that's finished. What we're seeing in Australia, and from my experience internationally as well, in some of the more advanced Cloud markets, is people with a technical understanding of what Cloud and other advances in technology can do for their business, and that's what CIOs really need.

They need to understand how Cloud gives them and their businesses the freedom to be remarkable, not just we're gonna go there because everybody else is and that might deliver us some cost savings.

It's a variable topic, but I think, first of all, the move to Cloud is actually a move to make your business more nimble. That's the opportunity that lies in front of you. It's not always about cost savings. It's great that cost savings are a by-product, but that's exactly what it is: a by-product of making a business more nimble.

I think the agile model enables you to be more nimble. You need to be okay with value in some states, and it's more about getting in there and trying, doing some proofs of concept, and just starting to build trust and practise the ways to make your business more nimble. It's not just about moving applications into any ecosystem.

It's always a great first step, but it has to be in the back of your mind: how do we transform this business into something that is going to adapt to oncoming competition?

Yeah, and the new generation of CIOs coming in, I think, also need to realise that the partners that have got them to this point can't get them to where they need to go next.

Yeah.

And it gets back to that conversation around the inter priority service delivery model. You can't get what you need from one or two big technology sourcing partners, I think. The best CIOs now understand that they need to engage specialists to deliver that transformation in their organisation. That's a very different perspective to what a CIO would have had in the past.

Yeah. Yeah, and like I said, if I bought apples today, 10 apples isn't gonna equal one orange. You could have a billion apples, and you're still not gonna get the orange they need to operate at the level they need to.

I'm gonna have to get you to explain to me what that actually means later.

I don't know. I don't know. I just like apples. (laughing).

Build Culture Around X-Factor Humans

Allan talks about the secret to building culture and business around X-Factor Humans. Watch below or read the full transcript.

There's something about how an x-factor human relates to a client, as well as what they can do technically. Over the years, we've come across very few in the market globally who actually have that gravitas that enables them to engage a client in a way that gets them really material outcomes, and when they have that x-factor technical capability, the outcomes they can drive are amazing. I think that's what we like to build our culture around [is it surrounds us 00:04:46]: having those really specialised, great engineers who have that gravitas, because there's a perpetual business model in growing that culture behind team members like that, and then also extending out their horizons and their sphere of influence into the enterprise space.

In ERP No One Can Hear You Scream

[Image: Edvard Munch, The Scream, 1893, National Gallery, Oslo]

Perhaps no enterprise technology is as divisive as Enterprise Resource Planning (ERP). Given the critical role ERP plays in an organisation - driving the core functions of business intelligence, CRM, accounting, and HR (to name a few) - it isn’t surprising that popular opinion toward the technology is a mix of boundless love and seething hatred. To call the relationship ‘complicated’ is an understatement; a more accurate description would be ‘Stockholm Syndrome’.

Stockholm Syndrome describes the feelings of affection a hostage can develop for their captor. In ERP’s case, there seems to be little other choice for enterprises who have invested hundreds of thousands of dollars in ERP technologies over the past decade (or decades). The technology has become so ingrained in how enterprises do business, the thought of leaving - of decommissioning - these investments seems like pure fantasy.

Welcome to the world of ERP hostageware.

In the 1990s, when this technology first saw widespread adoption, ERP systems were a way for businesses to replace multiple aging back-office systems. For the first time, there was one ring to rule all the critical business functions - payroll, invoicing, logistics, supply chain - and enterprises couldn’t get enough.

With this central repository of all a business's key data and functions, anything that could be handled by the ERP was migrated to the ERP.

What could possibly go wrong?        

ERP is a product of its time: massive on-site implementations, expensive maintenance and sporadic updates. When everything the business does runs off this central system, the thinking is generally "once it's up, don't touch it".

While this might have served its purpose during the Y2K era and the early 2000s, it has inevitably led to two key drawbacks: business strategy is limited by the capabilities of the ERP, and so much reliance is placed on the system it seems impossible to ever leave.

So, despite the business world now being infinitely more dynamic than it was 15-20 years ago, enterprises are stuck using technologies designed for times when video rental stores still existed. 

Imagine using the same hardware from more than a decade ago in just about any other context. It's inconceivable in any other part of the business, but not when it comes to ERP.

This is despite clear failures of the ERP industry to change with the times.

Last year, one of the world’s largest ERP vendors started suing organisations for using any software that connected with data stored on the ERP platform. When just about every business function touches the ERP system, this is almost impossible to avoid. One firm was ordered to pay more than £54 million (USD $70.4 million) after implementing Salesforce in its environment.

The blowback resulted in updates to ‘indirect access’ rules and the introduction of a new pricing model to bring more transparency to customers.

Still, the same organisations that once rejoiced in securing one-size-fits-all software are now frustrated, and cautious to admit those one-size-fits-all solutions no longer fit.

Perhaps it’s nostalgia. Perhaps it's the comfort of the fact that everyone else is in the same boat. Perhaps it’s about money, a clear-eyed awareness that dropping an ERP vendor can be incredibly expensive and disruptive to business.

Many organisations today still use the ‘hulking’ on-prem ERP systems of yester-decade, and those who have invested in these technologies are reluctant to explore new solutions.

Regardless, customers often seem chained to these legacy vendors. Businesses are being held hostage and the only remedy in the past has been to accept it as a fact of life. To focus on what the vendors can do, not what they can’t. To develop Stockholm Syndrome.

Perhaps I’m naive, but I believe businesses should be able to make decisions based on what is the best fit for their current and future needs. They shouldn’t be shackled to decisions and investments made decades ago. They shouldn’t be forced to empathise with their captors just because the alternative seems too daunting.

So here are three simple pointers that can help counter this cycle of dependency: build so you can get the data out; drive requirements from your customer downwards, not from the ERP upwards; and always keep an eye on the exit (many ERP vendors are counting on lock-in).

AWS Security: Rule-based approvals for CloudFormation


AWS CloudFormation is a great tool for developers to provision AWS resources in a structured, repeatable way, with the added benefit of making updates and teardowns far more reliable than performing them through the individual resource APIs. It is the recommended approach: teams should use stacks for the majority of their workloads, and they integrate easily into CI/CD pipelines.

Because CloudFormation is so powerful, the complexity of its templates can quickly become overwhelming, and shortcuts or mistakes can occur. One common solution to this problem is to define a set of rules against which stack resources are evaluated. These rules can be generic, or specific to a particular team's needs. For example, a common issue teams face is S3 buckets being incorrectly exposed. A rule may be defined that prevents this from occurring, or that notifies security teams.

Pre-Deploy vs. Post-Deploy Analysis

There are two distinct approaches to performing CloudFormation analysis: pre-deploy and post-deploy. Pre-deploy analysis reviews the content of templates before they are created or updated, whereas post-deploy analysis looks at the resultant state of the resources created or updated by the CloudFormation action.

Pre-deploy analysis catches problems before they have a chance to manifest in the environment. It is the more security-conscious approach, but has the drawback that the deployed result is significantly more difficult to predict or simulate.

Post-deploy analysis has a much clearer picture of the state of resources and the result of the stack's actions; however, some damage may already have been done the moment the resources are placed in that state. Amazon GuardDuty, which alerts on an Amazon-managed, pre-defined set of rules across the resources in your account, is an example of a post-deploy analysis and alerting tool.

Validating templates before deployment

Let's discuss how a pre-deploy tool might work. The following example is written in Python 3:

def processItem(item):
  # Recurse into maps and lists; evaluate every primitive value
  # (string, integer, boolean) against the ruleset.
  if isinstance(item, dict):
    for k, v in item.items():
      processItem(v)
  elif isinstance(item, list):
    for listitem in item:
      processItem(listitem)
  else:
    evaluate_ruleset(item)

processItem(template_as_object)

This is a very rudimentary evaluator that finds all primitive values (strings, integers, booleans) within the template and evaluates each against the ruleset.
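To make that concrete, here is a minimal, self-contained sketch pairing the evaluator with a toy ruleset; the banned-value list and the `findings` collection are illustrative assumptions, not part of any real tool:

```python
# Toy ruleset: flag any primitive value matching a banned keyword.
# BANNED_VALUES and findings are illustrative only.
BANNED_VALUES = {"PublicRead", "PublicReadWrite"}
findings = []

def evaluate_ruleset(value):
    # Record any primitive that exactly matches a banned keyword.
    if value in BANNED_VALUES:
        findings.append(value)

def processItem(item):
    # Walk maps and lists; evaluate every primitive leaf value.
    if isinstance(item, dict):
        for k, v in item.items():
            processItem(v)
    elif isinstance(item, list):
        for listitem in item:
            processItem(listitem)
    else:
        evaluate_ruleset(item)

template_as_object = {
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"AccessControl": "PublicRead"},
        }
    }
}

processItem(template_as_object)
```

Running this leaves `findings` containing the offending `"PublicRead"` value, regardless of where in the template it appears.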

Diving in

Consider the following template:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A public bucket",
  "Resources" : {
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : "PublicRead"
      }
    }
  }
}

A rule that prevents S3 buckets from being publicly exposed might interrogate the AccessControl property of any AWS::S3::Bucket resource for a public ACL, and alert or deny based on that. This is how the majority of pre-deployment analysis pipelines work. Things get tricky, though, when you involve the CloudFormation intrinsic functions, like Ref. Now consider the following template:
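A resource-aware version of such a rule might look like the following sketch; the function name and findings format are hypothetical, chosen for illustration:

```python
# Hypothetical rule: flag any AWS::S3::Bucket whose AccessControl
# property uses a public canned ACL.
PUBLIC_ACLS = {"PublicRead", "PublicReadWrite"}

def check_public_buckets(template):
    findings = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue  # only interrogate S3 bucket resources
        acl = resource.get("Properties", {}).get("AccessControl")
        if acl in PUBLIC_ACLS:
            findings.append(f"{name}: public ACL '{acl}'")
    return findings
```

Run against the first template above, this flags the `S3Bucket` resource; a pipeline could then deny the deployment or notify a security team.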

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A public bucket",
  "Resources" : {
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : { "Fn::Join" : [ "", [ "Publi", "cRead" ] ] }
      }
    }
  }
}

You'll quickly notice that even if a tool were to iterate through all properties in every map and list, it would never find the "PublicRead" keyword intact. It's very common to join strings or reference mappings in templates, so a string-matching approach would be fairly ineffective.
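One way around this is to resolve statically-computable intrinsics before evaluating the ruleset. The sketch below handles only Fn::Join over literal strings; a real simulator would need to cover many more functions:

```python
def resolve_intrinsics(node):
    """Recursively resolve statically-computable intrinsics.
    Only Fn::Join over literal strings is handled here; anything
    that cannot be computed statically is left untouched."""
    if isinstance(node, dict):
        if "Fn::Join" in node and len(node) == 1:
            delimiter, parts = node["Fn::Join"]
            resolved = [resolve_intrinsics(p) for p in parts]
            if all(isinstance(p, str) for p in resolved):
                return delimiter.join(resolved)
            # Some parts (e.g. Ref) are unresolved; keep the structure.
            return {"Fn::Join": [delimiter, resolved]}
        return {k: resolve_intrinsics(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_intrinsics(n) for n in node]
    return node
```

After this pass, the joined `["Publi", "cRead"]` collapses back to the literal "PublicRead", and the property-matching rule above works again.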

CloudFormation resource specification

AWS publishes a JSON-formatted file called the AWS CloudFormation Resource Specification. This file is a formal definition of all the resource types CloudFormation can process. It includes every resource, its properties, and information about those fields, such as whether modifying them forces the resource to be recreated.
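As a sketch of how a tool might consume the specification, the fragment below mirrors the shape of the real file (the full, region-specific document is downloaded from AWS; only this tiny excerpt is inlined here):

```python
import json

# A tiny fragment in the shape of the published Resource Specification.
SPEC_FRAGMENT = json.loads("""
{
  "ResourceTypes": {
    "AWS::S3::Bucket": {
      "Properties": {
        "AccessControl": {
          "PrimitiveType": "String",
          "Required": false,
          "UpdateType": "Mutable"
        },
        "BucketName": {
          "PrimitiveType": "String",
          "Required": false,
          "UpdateType": "Immutable"
        }
      }
    }
  }
}
""")

def property_info(spec, resource_type, prop):
    # Look up the specification entry for a single resource property.
    return spec["ResourceTypes"][resource_type]["Properties"][prop]
```

An "Immutable" UpdateType, for instance, tells the analyser that changing that property forces resource replacement, which a ruleset can treat differently from an in-place update.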

We can use this file to evaluate the properties of each resource and apply rulesets directly to individual properties, rather than to the template as a whole. With logic around the processing of the intrinsic functions, we have created an open-source CloudFormation template simulator that is easily deployable in any environment. The simulator can evaluate most intrinsic functions, like the example above, and properly evaluate templates against our ruleset.

The template simulator can be found at https://github.com/KablamoOSS/cfn-analyse.

Thinking maliciously

Now consider the following template:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A public bucket",
  "Resources" : {
    "TopicLower" : {
        "Type" : "AWS::SNS::Topic",
        "Properties" : {
            "TopicName" : "a-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t-u-v-w-x-y-z-0-1-2-3-4-5-6-7-8-9"
        }
    },
    "TopicUpper" : {
        "Type" : "AWS::SNS::Topic",
        "Properties" : {
            "TopicName" : "A-B-C-D-E-F-G-H-I-J-K-L-M-N-O-P-Q-R-S-T-U-V-W-X-Y-Z"
        }
    },
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : { "Fn::Join" : [ "", [
            { "Fn::Select" : [ 15, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicUpper", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 20, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 1, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 11, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 8, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 2, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 17, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicUpper", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 4, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 0, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 3, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] }
        ] ] }
      }
    }
  }
}

The above template will actually evaluate to produce a public S3 bucket, because the S3 bucket's AccessControl property is assembled from characters of the TopicName attributes of the SNS topics. The format of these attributes is not formally documented in the CloudFormation resource specification, nor anywhere else. This means there is currently no effective way to truly verify that resources using the intrinsic functions Ref or Fn::GetAtt are valid against the defined ruleset.

By the nature of CloudFormation, there isn't a perfect solution for static analysis, which is why a multi-layered strategy is the most effective defence. If your organisation needs help protecting its environment, get in touch with us to find out how we can help you better protect yourself against these threats.

What is one simple thing you can do to improve UX?

Watch Allan Waddell, co-CEO of Kablamo, and Victoria Adams, Kablamo’s UX/UI Lead, tackle a thorny UX issue that comes up again and again, but is there to be solved. And Freddy, the DevOps dog, makes a special appearance.

If I were to pick one thing to improve user experience, it's honestly involving the user. User experience: it's in the name, but I see it time and time again that the user is left out of the conversation at the table. We talk about features, we talk about what's in scope and what's out of scope, and the user's voice is left behind. So even if they can't be at the table, having that research done by the user experience person means they can be that voice.
Isn't that ironic? In user experience, the user is left out.
(laughs)
Yeah, that's often the case. I think--I don't think it's--sorry. It's Freddy. 
(laughs)
Um. Yeah. I can't take anything seriously with him sitting on my lap. I'm sorry but like it's--
It's a 'blamo dog.
Yeah, it's a DevOps dog. I think, uh ... yeah, in user experience the number one thing that doesn't happen is users getting involved, and I think it's not necessarily the fault of the customers. It takes experience to know how to engage customers in the right way. It's not as simple as just putting a wireframe in front of a bunch of people and getting a result. You have to know what your tests are for. You have to know what hypotheses you're validating, specifically, and you have to get those answers, be able to measure them in the right way, and then understand those results. It's like the difference between data and information: you get a whole bunch of data back, but it doesn't really mean what you think it means unless you thought about that pre-emptively. That's what Freddy thinks, anyway. (laughs)

Is Cloud All or Nothing?


When it comes to cloud adoption, there are largely two schools of thought on the best approach. The first involves migrating most - if not all - business functions in a large-scale transformation project. The second involves shifting functions piece by piece, as and when the need or desire to do so becomes apparent.

You want to enable organisations to pursue both strategies. Perhaps most importantly, however, you want to take the time to listen to the organisation and understand which approach would be the best fit for their unique business circumstances - even if your advice runs counter to their initial thinking.

Regardless of which strategy is pursued, making the choice to move business functions to the cloud provides multiple benefits to an organisation. Whether it’s an improved security posture, better disaster recovery and backup processes, increased flexibility or lower IT overheads, embracing the cloud can bring huge competitive advantages.

When it comes to SMEs, perhaps the greatest benefit of cloud computing is that a smaller company can leverage all the tools larger enterprises use without the upfront investment needed for on-premise enterprise computing equipment.

The flip side of this coin is that larger enterprises can use new-generation tools to either replace or complement their current investments. This helps large organisations compete with newer, more nimble entrants to the market - assuming, of course, they’re willing to make the leap.

No matter the size of an organisation, a full-scale transformation project can be daunting. Many organisations can fall into the trap of rushing into a complete IT overhaul without being completely aware of the internal skill sets or retraining required to make the project a success.   

It is in these cases we’d advise a client to test the cloud waters first. Rather than migrate all functions at once, it could be more beneficial to embrace the cloud application by application. Below we’ve listed the most common business processes our clients migrate to the cloud, and outline some of the benefits of doing so. For the cloud-curious, these applications make the most sense to migrate first, introducing an organisation to cloud before it explores a wider-scale transformation:

Web Facing Applications: Websites, content management systems, mobile apps and online commerce sites should be the first applications to consider migrating to cloud. Not only are these typically more modern, meaning migration is much simpler, but they’re also less essential to the business than applications like ERP - if there are any teething issues during the project, there is less disruption to business. Beyond being some of the simplest to migrate, their performance can be greatly improved by shifting them to cloud, as they typically require scalability to balance unpredictable online volumes - scalability that is both difficult and expensive to achieve on-premise.

Customer Relationship Management: CRM software keeps track of every aspect of the customer relationship from first contact throughout the entire lifecycle. A robust cloud-based CRM improves a business’ knowledge of its customers as all interactions are recorded and easily accessible; whenever a customer gets in contact, that customer’s history is available to the agent at the click of a button. Critically, cloud-based CRMs can extend this functionality to agents in the field as a smartphone with an internet connection can access all the same data as a computer terminal at HQ.  

Human Resource Management System: HRM systems focus on the human component of your business, encompassing everything from payroll and benefits planning to talent acquisition and reporting. A cloud-based HRM system enables an organisation to confidently manage the changing nature of work - for example, as more staff wish to work remotely and gig-economy workers become more prevalent. A responsive system enables employees to track and log their time more efficiently, while the inbuilt analytics of these systems gives the HR team better insight into employee productivity.

The most important takeaway, though, is that cloud computing has something to offer almost every business. For some, it is leveraging powerful hardware, software, and services for a pay-as-you-go price. For others, it is a level of data security they could not achieve on their own. For developers, it is remote collaboration and multi-platform tools for creating, testing, and deploying highly available applications.

When considering cloud, it can seem intimidating. Not every organisation is ready to shift every application all at once, and there’s nothing wrong with that. Each business is unique and at a different stage of their cloud journey. Regardless of whether you’re ready to go all-in on cloud or would prefer to dip your toe in the water first, it’s important to find a partner who understands the specific outcomes your business wants to achieve and has the technical ability to help you achieve them.  



Bring Your Humans...


If you want a tech transformation.

Transformations are disruptive. I don’t mean this in the lazy, clichéd, sigh-inducing sense of “company X is the Uber of industry Y, poised to disrupt it”; I mean it is a process of change that requires some adaptation.

Digital transformations involve overhauling processes and reimagining how an organisation does business. It’s not as simple as calling in a carpenter to renovate the office kitchen - although this too can cause some disruption - because transformations are underpinned by an aspirational vision of what the organisation could be.

This desired future state, by its very nature, is disconnected from the organisation’s current reality. It is this disconnect that is the cause of most of the disruption. Humans, being creatures of habit, become accustomed to doing things in a certain way - in a business context, this means using certain programs, processes, or resources to achieve a particular task. Transformation projects aim to overhaul this status quo and ultimately give a workforce access to tools to make it more efficient, more collaborative, and more responsive to change.  

While these are all noble aims, an organisation’s humans must be brought along for the journey so they understand why this transformation is taking place and what that desired future state looks like.

A people-first focus enables you to really listen, to ask the right questions and discover exactly what an organisation needs. This open and frank communication - devoid of any preconceptions - allows you to intimately understand what the organisation actually desires to achieve.

This free flow of information is something we encourage our clients to undertake with their staff during a digital transformation. It is a disruptive time for any organisation, but below are a few tips to ensure employees understand what changes are coming and - most importantly - why they’re coming:

  • Collaborative Enthusiasm. During a transformative project, every employee has a role to play and needs to be ready to collaborate across teams and disciplines. For example, the marketing team may need to start promoting the tech transformation before it’s implemented, and needs to mesh with the IT team to make sure their message is accurate and timely. Make these roles clear, and ensure the teams understand what they need to do and why they need to do it.

  • Common Vision. Building enthusiasm and cross-department collaboration is far more successful when the entire enterprise shares a common vision and understanding of the project. Outlining the project and its goals in a product development framework document is one important way for key stakeholders to gain an overview of the project and to communicate the cogent information effectively to employees.

  • Technical Skill. This encompasses not only the skills and knowledge of your employees but also your managers’ ability to evaluate potential vendors and the tech they’re providing. Sometimes the “best-dressed” vendor isn’t the best choice for a project; do your people have the knowledge to determine this? It is important to have an honest conversation with your technical team before evaluating any transformation initiative - what skills do they have and where are the blind spots?

These components won’t fall into place overnight. They require planning and a clear view of the organisation’s strategy and desired future state, but each needs to be addressed. Ask the hard questions: Do your people have the technical skill required for their particular piece of the project? Are they excited to join in the process of transforming your enterprise’s technology? Does everyone share the vision of the enterprise’s future that this technology will usher in?

Also apply these concepts when choosing vendors or consultants, who each contribute a piece to the overall project. How well do they know the product or technology they’re working with? Are they enthusiastic about the project and able to collaborate effectively with your humans? Inasmuch as you can share the details of the project with them, do they understand their role in making the company’s vision become a reality?

Evaluate the answers to these questions before launching a project, and return to them periodically throughout the process to make sure your people’s skill sets, enthusiasm, knowledge and collaboration - as well as those of your technology providers - are on track.

After all, technology is only as effective as the people using it. Bringing your people onboard early in the process ensures your organisation can navigate the coming changes as seamlessly as possible.  

What matters to you when you think about hiring an IT Consultant?


Hiring an outside IT consultant is obviously a common business decision, but the experience can be complicated, costly and frustrating. How has it worked for you? We’ve turned to the monkey that surveys to learn a little bit more. Please take our brief survey and share your experience.

The (Human) Ethics of Artificial Intelligence


Remember those three laws of robotics penned by Isaac Asimov?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

As far back as 1942, we were wrestling with the ethical implications of thinking machines. We are rapidly approaching the intersection of silicon and cognition, and it is more important than ever that we update Asimov’s laws to ensure that humans are the beneficiaries, rather than the victims, of artificial intelligence: a concept that is no longer the domain of science fiction.

Our first brush with these concerns at a practical level surfaced a few decades ago, when wide-scale automation became a reality. The fear was that hard-working humans would lose their jobs to less expensive and more efficient robot workers. Companies were focused more on short-term profits than on the long-term effects on millions of newly unemployed factory workers who were unprepared to re-enter the labour force.

This ethical dilemma has not been resolved. And with the advent of AI, it might well have been exacerbated. Now that we are that much closer to producing machines that can think for themselves, we must consider ethical implications that even Asimov couldn’t fathom. Before jumping on the AI bandwagon, consider the following:

A Proper Education

Intelligence without ethics is dangerous. Human intelligence is tempered with human empathy. This is not a bug, but a feature. We do not want people making laws that affect us who demonstrate a lack of human empathy. And we should be just as wary of machines that wield intelligence sans empathy.

Part of the way humans learn empathy is via the process of education. From the time a baby is placed in a mother’s arms, she is engaged in the process of education. Something very similar happens with AI.

Ethical AI begins with ethical training. AI is not merely programmed, it is trained like a human. It learns like a human.

To learn the right lessons, AI has to be trained in the right way, or ethical problems will inevitably arise. They already have. Recently, facial recognition systems used by law enforcement have come under fire for their tendency to misidentify people of colour as criminals based on mugshots.

This is a training issue. If AI is predominantly trained on Euro-American white faces, it will disadvantage ethnic groups and minorities. As a startup, you cannot settle for the first off-the-shelf solution that gives you a short-term advantage. You have to vet your AI solutions like you do employees, ensuring to the best of your ability that they have received a proper, ethical training.

When Better Is Worse

Every company wants the best, most efficient, most productive processes possible. But there are times when better is worse. One example is customer service. There is a good chance that in time, an AI solution such as Google Assistant will do a better job at making appointments, answering questions, and making outgoing sales calls. When that time comes, AI still might not be the right solution.

The simple fact is that humans want to talk to other humans. They do not want to talk to technology when they are stressed. If a person is calling customer service, something has gone awry, and they are experiencing a heightened state of stress.

What they need is human contact. They want their problem resolved. But they also want to feel like they have been heard. That is something AI cannot, and possibly should not attempt to do. The decision to eliminate humans from call centres has ethical implications.

You have to consider the ethical implications of every AI deployment in your business. If there is one thing we have learned about customer behaviour, it is that customers repeatedly bypass better systems for more human-friendly ones.

The Final Analysis

AI is inevitable. You might already be using it right now without being aware of it. There is no doubt that the proper application of AI will make you more efficient and save you money. It will even help you avoid blunders that would have put an end to your venture without it.

That presents us with the temptation to rely on AI as the final arbiter of all matters relating to our business ventures. What we have to remember is that all business is a human to human enterprise. AI can make it better. But you should always reserve the final analysis for yourself: the human with everything on the line.



ERP is dead . . . Long live ERP

Dinosaurs. Edsels. Consumer-grade Betamax players. All long gone. Is ERP about to wind up on the dust heap of history?

For executives disillusioned with the problems created by cumbersome enterprise resource planning (ERP) systems, it certainly seems that these platforms – once hailed as work-saving technology must-haves by the IT world – are on their way out.

What will replace the traditional ERP model? Nothing less than a completely reimagined ERP – one that takes full advantage of advances in cloud services and artificial intelligence to maximise resources, provide actionable data, and restore confidence in the ability of ERP to truly smooth out business processes across departments.

Customisation: ERP’s double-edged sword

ERP’s biggest benefit lies in providing integrated applications for common back-office functions such as technology, human resources, and finance, as well as for production processes and manufacturing. All the facets of a business’ operations – such as project planning, product development, and sales and marketing – are part of a single database tied to an ERP application accessible through a user interface. Best of all, the enterprise has total governance of the system, rather than entrusting its critical data to servers on the public internet.

The ability to customise ERP to a specific enterprise’s operational needs is its most attractive feature, and one on which the biggest ERP vendors have built billion-dollar businesses.

That customisation, however, is also the traditional ERP system’s biggest weakness. Making sure an implementation meets a company’s requirements takes careful planning and a measured approach, which increases the turnaround time on new implementations, maintenance and updates.  Most legacy ERP systems average just two updates per year. A decade ago, that pace may have been fine for most enterprises. In 2018, however, it’s far too slow to keep up with advances in technology.

That makes many traditional ERP systems nothing more than “hostageware” – software that holds a company hostage because a lot of money has already been sunk into it. An ERP system that can’t be updated quickly or cost-effectively can become a bottleneck, making it difficult to update the entire process and endangering future implementations.

What’s at stake?

Faced with the stark cost of current ERP systems, enterprises may be tempted to abandon or forego implementation and turn to a patchwork of third-party back-office management software, where they may have less control of their data and only surface-level business insights.

That in turn can put the enterprise at risk of more costly scenarios: misuse of customer data or data breaches, issues with government-sponsored contracts, and more.

For SMEs especially, making the right choice is important. That realisation can paralyse many small business owners, leaving them indecisive about which ERP to go with. And businesses need to be prepared to take the leap, or they’ll land short of their goal.

The future of ERP

Knowing what’s at stake has led a number of ERP providers to use the most promising advances in cloud technology and AI to solve many of the issues dogging traditional ERP.

These new generation platforms can reside securely on remote servers and take advantage of the increased computing power offered by dedicated facilities to give enterprises a real-time look at their data, along with AI-powered business intelligence.

And it's these next-gen ERP systems to which businesses are turning. A 2016 study by Panorama Consulting Solutions found 46 per cent of organisations were implementing new ERP systems to replace out-of-date ERP software, and 20 per cent were implementing ERP for the first time.

ERP providers are continually adding new features to make their systems easier and more attractive to use:

  • More device integration: ERP systems will be accessible by smartphone

  • Better business intelligence: Modules will not only store data, but provide deeper insights into that data

  • Internet of Things integration: The addition of IoT sensors will bring a raft of new data into ERP systems

  • Better automation: Repetitive, time-consuming tasks can be automated more quickly with next-generation ERP

  • Fragmented implementation: Multiple-point ERP solutions can be implemented in a shorter time, at lower cost and offer lower risk because of their modular nature

Many next-gen ERP system providers make the transition less painful by migrating the system as separate components – for example, separating finance, HR, and sales and marketing functions – shifting each component into a cloud-based system over time. This can cut CAPEX and allow the enterprise to amortise the cost of the shift over several months, if they desire.

To top it off, the best talent in the industry is bringing innovation to cloud-based ERP systems. Many are part of smaller teams where they can use precision skills to tackle difficult problems in just one component of cloud ERP, rather than general solutions at the enterprise level. All in all, the next generation of ERP may indeed deliver on its predecessors’ glowing promise, and that’s great news, but implementation and the technical competency around ERP will be more critical than ever.

 


ENGINEERING STABILITY INTO KOMBUSTION'S PLUGIN SYSTEM

Kombustion is a CloudFormation tool that uses plugins to pre-process templates. It enables DevOps teams to re-use CloudFormation best practices and standardise deployment patterns.

Kombustion's intent is to provide a reliable enhancement to CloudFormation. Starting with native CloudFormation templates, Kombustion uses plugins to enable reliable, offline preprocessing transformations.

When you start using a plugin in your template, Kombustion relies on the following formula to guarantee stability of the generated template.

(SourceTemplate, Plugins) => Generated Template

Given the same SourceTemplate, and the same Plugins you will always get the same Generated Template.

To get this stability you need to commit kombustion.yaml, kombustion.lock and the .kombustion folder to your source control. These are created when you initialise Kombustion and install a plugin.

It's best practice for Plugins to be pure functions without side-effects. That is, with the same input they will always have the same output.
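That pure-function guarantee can be pictured as simple function composition. The sketch below is an illustrative Python model, not Kombustion's actual implementation (Kombustion itself is written in Go), and the `vpc_plugin` here is entirely hypothetical:

```python
# Illustrative model only: each plugin is a pure function from template to
# template, and generation is a left-to-right fold over the plugin list.
def apply_plugins(source_template, plugins):
    template = dict(source_template)  # never mutate the caller's template
    for plugin in plugins:
        template = plugin(template)
    return template

# A hypothetical plugin that expands a shorthand key into a full resource.
def vpc_plugin(template):
    expanded = dict(template)
    if expanded.pop("NeedsVpc", False):
        expanded.setdefault("Resources", {})["Vpc"] = {"Type": "AWS::EC2::VPC"}
    return expanded

source = {"NeedsVpc": True}
first = apply_plugins(source, [vpc_plugin])
second = apply_plugins(source, [vpc_plugin])
assert first == second  # same inputs, same generated template, every time
```

Because no plugin reaches outside its inputs, regenerating the template on a different machine (or in CI) produces byte-for-byte the same result.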

Kombustion makes an effort to prevent lock-in, providing a way to "eject" via kombustion generate. This saves the template after it has been processed with plugins. With the generated template, you can use the AWS CLI to upsert it.

You don't need to, though, as Kombustion has a built-in upsert function with carefully chosen exit codes to make CI integration much easier.

In general, when calling upsert, if the requested changes (for example, Create Stack or Update Stack) are not applied cleanly, an error is returned.

And when calling Delete Stack, if the stack is not fully deleted, an error is returned.

In addition, Kombustion prints the stack event logs inline, so you have all the information you need to debug a failed upsert or delete from within your CI log.
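To illustrate why deterministic exit codes matter for CI, here is a toy mapping from a final stack status to a process exit code. The convention below is hypothetical, not Kombustion's actual one, though the status names mirror CloudFormation's:

```python
# Hypothetical convention: map the final stack status to a process exit code,
# so a CI pipeline only has to check for zero.
SUCCESS_STATES = {"CREATE_COMPLETE", "UPDATE_COMPLETE", "DELETE_COMPLETE"}

def exit_code_for(final_status):
    # 0 means the change applied cleanly; anything else fails the CI step.
    return 0 if final_status in SUCCESS_STATES else 1

assert exit_code_for("UPDATE_COMPLETE") == 0
assert exit_code_for("ROLLBACK_COMPLETE") == 1  # the upsert was not clean
```

With a convention like this, a pipeline can treat the tool like any other well-behaved Unix command and fail the build the moment a stack change doesn't land cleanly.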

Without using any plugins, Kombustion will happily upsert a CloudFormation template. So you can start using it with your existing templates, and add plugins when you need to.

Download Kombustion from kombustion.io.

Follow our guide to writing your first plugin.


A BUSINESS-PRACTICAL WAY TO THINK ABOUT AI

A salesperson representing a company that deploys artificial intelligence solutions walks into a CTO’s office one day and starts talking about the company’s product. You’re the executive assistant observing the conversation. What do you see? An animated individual talking incessantly about how great his company and its products are, and another nodding and looking knowledgeable while not seeing the relevance of any of it to his business.

This scenario plays out every day somewhere in the world, and it’s not uncommon for such meetings to end with the prospect holding an even bigger bag of questions than before. Questions like “How appropriate is this product or service to my business, specifically?” are often left unanswered. Nothing like “It will reduce your manpower requirement by 32 FTEs” or “This will speed up your average response time by up to 23%” ever breaches the surface.

If you’re the business owner or executive responsible for the decision, questions like that might leave you wondering whether it’s worth the effort at all. More often than not, the salesperson will gloss over many of the challenges, which only makes the decision even harder.

To help put your train of thought on the right track, we’ve identified some key elements that any business eyeing AI deployment needs to think about no matter what its size. Hopefully, these points will help clarify your position on AI and whether it’s really viable or even necessary for your organization.

First things first.

Quantify It

A lot of businesses fail to calculate the benefits of new AI tech at work in a tangible way.

For example, if you’re responsible for customer support at your company, you need to ask how much AI chatbots will help reduce your issue resolution time.

If you operate an online store like Amazon.com, you need to know whether a machine-learning-based inventory management system will bring down your backorder levels or prevent the system from displaying out-of-stock items. Will customer ratings go up as a result? That’s the sort of tangible measurement that will help you develop your digital work.

It’s the same when adopting any new technology, like moving to a cloud computing environment. Moving to the cloud is a good thing for the most part, but unless you know exactly what workloads should be moved and how that will materially impact your revenue or other key metrics, it’s only going to be a trial and error exercise.

Ask metrics-related questions to help you pinpoint the areas that can positively impact your business. If you’re measuring a specific metric like backorder levels, the AI system you’re considering should ideally move the needle for that metric in the right direction, and considerably so.
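To make "move the needle" concrete, here is a trivial sketch with made-up numbers – the kind of before/after calculation worth doing for any metric you care about:

```python
def relative_improvement(before, after):
    # Percentage improvement for a metric where lower is better.
    return (before - after) / before * 100

# Hypothetical numbers: average backorder level before and after deploying
# a machine-learning inventory system.
improvement = relative_improvement(before=120, after=84)
assert round(improvement) == 30  # a 30% drop is a needle that clearly moved
```

If the honest before/after numbers for your metric round to low single digits, that is a signal the deployment may not be worth its cost.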

Just as you quantified the benefits, you also need to consider the cons.

Understand the Downside

AI is a sensitive topic because of the perceived threat to jobs. What’s good for your business metrics might not be too great for your company when it comes to attracting new talent. If a lot of your business depends on bringing in the right people and your company is known for rapidly deploying efficient automated systems that result in job cuts, you might end up facing an HR crunch or labor unrest at some point. Here are some examples:

In early June 2018, casino workers in Las Vegas threatened a city-wide strike against companies like MGM Resorts International. One of their sticking points was the increased level of automation threatening their jobs.

Dock workers around the world regularly call for strikes because of the rampant automation efforts by freight companies and port authorities. In reference to the labor strike in Spain on June 29, 2017, the following comment was made by a leading port technology portal:

“Given the contemporary state of globalised business, it also means we inhabit a world of globalised unions, and with Spain seeking to gain Europe-wide support for its tangle with the EU, it is not impossible to imagine a much larger response to the burgeoning trend of AI automation in the near future.”

After a conversation with several human resources department heads, Deloitte vice chairman and London senior partner Angus Knowles-Cutler said, “The general conclusion was that people have got to (come to) grips with what the technology might do but not the implications for workforces.”

You need hard data to help you make the final decision, especially when facing strong opposition from the company’s stakeholders. It makes it easier to filter out unwanted investments that could hurt you versus those that can take your business to the next level in a positive way. In a future post, we'll explore an example of how one company implemented automation too quickly and too widely, and finally called the whole thing off.

 

Effective AWS IAM


Know, Don't Slow, Your Users

Ian Mckay, DevOps Engineer for Kablamo, weighs in on the effective use of AWS IAM:

When creating a secure environment in AWS, IAM is a critical part of the security of the solution. It's used to control how users, compute resources and other services interact with each other; however, it can also be a critical hole if you don't properly secure your policies.

For users (and sometimes administrators), security is hard. It can slow you down with authentication and throw up roadblocks when you are denied access to the resources you need. Many customer environments have responded by creating dangerous shortcuts for users: an overzealous firewall rule, a shared credential, or excessive privileges granted so people can perform the actions they need to. It's worth considering the risks a business takes on with these shortcuts, as more and more companies are suffering big public security failures that crush reputations.

How IAM policies are evaluated

For developers that are new to working with IAM, the logic can be confusing at first, especially since the Amazon documentation is massive.

Here's how I explain to others how IAM policy rules are applied:

1.  If a permission isn't mentioned, it's not given

2.  If a permission is given, that action is allowed UNLESS any other policy explicitly denies that permission.

With the way rules are evaluated, the order in which they are applied does not matter.
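Those two rules are small enough to model directly. The sketch below is simplified Python (actions only – it ignores resources, conditions, NotAction and resource policies), not the real evaluation engine:

```python
# Simplified model of the two rules: default deny, and explicit deny wins.
def is_allowed(action, statements):
    allowed = False
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return False  # rule 2: an explicit deny always wins
            allowed = True    # allowed, unless some other statement denies it
    return allowed            # rule 1: unmentioned permissions are not given

policies = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:ListBucket"]},
    {"Effect": "Deny",  "Action": ["s3:GetObject"]},
]
assert is_allowed("s3:ListBucket", policies)         # allowed, never denied
assert not is_allowed("s3:GetObject", policies)      # deny overrides allow
assert not is_allowed("ec2:RunInstances", policies)  # not mentioned anywhere
```

Note that reversing the statement list gives exactly the same answers, which is the point: evaluation order never matters.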

Open IAM Roles

Let's look at an Amazon-provided role, ReadOnlyAccess.

This role gives a user read-only access to all resources within an AWS account, a privilege often granted so that users can view the resources in an account. Though such users have no access to perform any modifications directly, the scope of this role can unintentionally reveal information (unless explicitly denied elsewhere).

For example, this role grants permission to download all objects in every bucket in the account. Often, S3 objects may contain configuration information or even credentials that the developer may have thought secure. The role can also allow users to intercept SQS messages, get EC2 console output or get items from Kinesis or DynamoDB.
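A common mitigation is to pair ReadOnlyAccess with an explicit deny over the riskiest read actions. The policy below is illustrative only (it is not an AWS-managed policy); because an explicit deny always wins, it carves these actions out regardless of what else is allowed:

```json
{
    "Version": "2012-10-17",
    "Statement": [ {
        "Effect": "Deny",
        "Action": [
            "s3:GetObject",
            "sqs:ReceiveMessage",
            "ec2:GetConsoleOutput"
        ],
        "Resource": "*"
    } ]
}
```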

If you're looking for a role which further restricts users' access to the above resource, the ViewOnlyAccess role can alternatively be used, though you may find this to be too restrictive in some environments.

Conditional IAM Policies

One of the more powerful features of IAM policies is the ability to conditionally provide access to resources. This can help teams separate themselves from other workloads or prevent unwanted actions. Here are some examples:

Tag-based access

The below policy grants access to perform all actions, so long as the request has the "Department" tag set to "Finance". This is an easy way to segregate different parts of the business within the same account. Remember, not all services support tagging, and account-wide limits still apply to everyone.

{
    "Version": "2012-10-17",
    "Statement": [ {
        "Effect": "Allow",
        "Action": [
            "*"
        ],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:RequestTag/Department": "Finance"
            }
        }
    } ]
}

Timeframe restriction

The below policy grants access to perform all actions only within the timeframe shown. This is useful when users, such as contractors, are only permitted to have access during certain periods.

{
    "Version": "2012-10-17",
    "Statement": [ {
        "Effect": "Allow",
        "Action": [
            "*"
        ],
        "Resource": "*",
        "Condition": {
            "DateGreaterThan": {"aws:CurrentTime": "2018-01-01T00:00:00Z"},
            "DateLessThan": {"aws:CurrentTime": "2018-02-28T23:59:59Z"}
        }
    } ]
}

IP-based restriction

The below policy grants access to perform all actions only when the request is made from the IP addresses specified. This can help restrict calls to only occur from within a corporate network, as an extra layer of security. Note that calls made by AWS services, such as CloudFormation when it creates resources, cannot be restricted in this way - however the call to create the stack could be.

{
    "Version": "2012-10-17",
    "Statement": [ {
        "Effect": "Allow",
        "Action": [
            "*"
        ],
        "Resource": "*",
        "Condition": {
            "IpAddress": {
                "aws:SourceIp": [
                    "1.2.3.4/24"
                ]
            }
        }
    } ]
}

Using Permission Boundaries

As of July 2018, IAM permission boundaries may be used to restrict the maximum permissions a user (or in some cases, a resource) can be assigned. If a permission boundary is set on an IAM user, the effective permissions that user has will always be the intersection of the permission boundary and their IAM policies. Here's an example of how this works in practice:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SimpleUserPermissions",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}

The above policy is a simple example of what permissions a user might have. In this case, users can only perform S3 actions. Let's say this policy was created with the name SimpleUserPolicy. Within this account, there is a person assigned to administer the creation of users. The policy assigned to their IAM user is as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SimpleUserPermissions",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "CreateorChangeUser",
            "Effect": "Allow",
            "Action": [
                "iam:CreateUser",
                "iam:DeleteUserPolicy",
                "iam:AttachUserPolicy",
                "iam:DetachUserPolicy",
                "iam:PutUserPermissionsBoundary"
            ],
            "Resource": "*",
            "Condition": {"StringEquals": 
                {"iam:PermissionsBoundary": "arn:aws:iam::111122223333:policy/SimpleUserPolicy"}
            }
        },
        {
            "Sid": "IAMPermissions",
            "Effect": "Allow",
            "Action": [
                "iam:Get*",
                "iam:List*",
                "iam:Simulate*",
                "iam:DeleteUser",
                "iam:UpdateUser",
                "iam:CreateAccessKey",
                "iam:CreateLoginProfile"
            ],
            "Resource": "*"
        }
    ]
}

This policy grants the IAM user the same permissions as the other users (via the SimpleUserPermissions statement), as well as the ability to browse through and update user details via the IAMPermissions statement. Also granted is the ability to create users or change their assigned policies with the CreateorChangeUser statement. Crucially, this statement has a condition that applies a permission boundary to the create/update process. The created user must be assigned the SimpleUserPolicy permission boundary, or the create user call will fail.

With this, we can ensure that the created users' permissions will never be escalated past the permission boundary set by the IAM user administrator.
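A useful mental model: the effective permissions are the set intersection of the identity policy and the boundary. A deliberately over-simplified Python sketch (action names only, ignoring resources and conditions):

```python
# Over-simplified model: effective access is the intersection of what the
# identity policy grants and what the permission boundary allows.
def effective_permissions(policy_actions, boundary_actions):
    return policy_actions & boundary_actions

boundary = {"s3:GetObject", "s3:PutObject", "s3:ListBucket"}  # the boundary
granted = {"s3:GetObject", "iam:CreateUser"}                  # identity policy
assert effective_permissions(granted, boundary) == {"s3:GetObject"}
# iam:CreateUser is silently capped: the boundary wins, so no escalation.
```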

Summary

IAM roles and policies are an important piece of every AWS environment's security and, when done correctly, can be a very powerful tool. However, these policies can very easily get out of control and have unexpected consequences. If you are having trouble managing IAM, get in touch with us to find out how we can help you master your AWS environment security.


AI - The Outer Reaches


We'll be the first to admit that AI is scary.  

Recently Ibrahim Diallo, a software developer, was fired by AI.  He wrote a blog post about the experience.  It's an eye-opener, and a reminder of something that we talk about a lot – AI without the right implementation doesn't instantly solve problems and can easily create more.

But there's a larger story worth exploring here, and it's about the kind of world AI may eventually bring us.  From visions of super-intelligent machines seizing control of the internet, to robot overlords with little regard for human life, to complete human obsolescence, there are an unthinkable number of ways this whole AI thing could go horribly, horribly wrong.  And yet, many scientists, futurists, cyberneticists, and transhumanists are downright excited for the coming dawn of the post-information age lying just beyond the invention of the first superhuman, general AI.  To get in on the hype (and maybe quell some worries), let’s take a look at what exactly has all these smart folks giddy.  What are the possibilities of a world with supergenius silicon titans?  Here are some possibilities, roughly in order of future development – although general AI technology is so powerful, there are certainly many more mind-bending possibilities than these.

Filtering noise, making connections, and expanding knowledge (5-10 years away)

What do vitamins, horse racing, and the price of bread have in common?  Possibly, nothing.  But a general AI can figure it out for sure, and also find the other hidden connections between, essentially, everything.  After all, as was put by naturalist John Muir: “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.”  This sentiment is often repeated, but harnessing the power of a general AI offers a real, viable path to discovering the depth of connections present in the world.  

Today, machine-learning algorithms often work by looking at data, making some tweaks, and charting responses to determine how one variable affects another.  Interpreting these changes allows the AI to predict outcomes, determine how to act, or simply classify information.  And when a general AI gets to work on the huge amounts of data that already exist (and will continue to be generated), we as people will be able to learn so much.  A growing percentage of scientific research centers on secondary data analysis, or searching for new trends in data that have already been collected.  Secondary data-driven research is extremely low-cost, efficient, and accessible, in fields ranging from sociology to medicine.  A sophisticated general AI could conduct millions of these studies, 24 hours a day, every day, in every field, at astounding speeds.  And, with advancements in natural language processing, the AI could publish the research for people to look at and understand.  Of course, this sort of connection-searching is problematic; after all, correlation does not equal causation (here’s a website that does a great job of pointing out examples of false-causation trends).  However, the benefit of a true general AI is that it will be able to discern whether or not the correlations it spots are true connections or merely coincidences, at least as well as a human researcher, and much, much faster.  Because testing connections and processing data are the current uses for artificial intelligences, you can be sure there will be rapid development in this field in the coming years.

The end of work (10-20 years away)

Yes, this has been promised before, and is often promised again after new scientific breakthroughs, but it could really be it this time.  Automation in factories didn’t free us from work, exactly, but it did take over jobs within its domain, such as mass-production and precision-assembly positions.  Self-driving cars are on the verge of swooping up 5 million transportation jobs in the United States alone.  And for a general AI, its domain bridges both computation and creative thought.  So, just as automation has taken over factory positions, general AI could replace skilled cognitive workers like programmers, engineers, mathematicians, and even artists like poets, painters and musicians, leaving a whole lot of nothing for people to do all day (for some cool examples of AI-created art, look here, or here, or here).

Even if work doesn’t end entirely, though, count on your workload and life changing for the better.  Have you ever wished for a great personal assistant, someone who can scan through your email and shoot back answers to basic questions automatically, someone who will schedule and manage your appointments, someone who can pick up a little slack when you’re feeling slow?  Google is on it.  Already, Google is rolling out a “basic” reservation-making AI which can call restaurants, make reservations, and ask for hours – and the restaurants don’t even know they’re talking to a machine.  Seriously, natural language processing has become sophisticated enough to pass for human in some standard cases, like asking for a table for 5 at your favorite Chinese restaurant (you can see a video of the announcement here).  Soon enough, the AI will function as a full-on secretary, available to everyone, and some of your daily work headache will be alleviated by a tireless computer assistant.

Intelligence upgrades (and AI camouflage, too?) (20-30 years away)

People have been trying to directly interface with computers since the 1970s, when the first brain-machine interface (BMI) was invented.  So far, the development has been therapeutic, alleviating symptoms of neurological disorders like epilepsy and ALS, restoring hearing with cochlear implants, and helping quadriplegics move mouse cursors and robotic arms directly with their thoughts.  It’s only a matter of time before enhancements are developed for neurologically-healthy people.  Elon Musk has already thrown his hat in the ring with Neuralink, a company aiming to develop the first surgically-implanted, whole-brain computer interface, for the express purposes of enhancing human computational limits and, secondarily, connecting human intelligence with artificial intelligence (a great, really long write-up for the interested here).  Not only does Musk hope that such a system could allow for offloading mental tasks to computers (would you like to be able to compute 196003x3313 in your head?), he also hopes it’ll give us a lifeline when the AIs rise up.  From Musk’s perspective, if you can’t beat them, why not become so intricately intertwined that destroying one would destroy the other?  It’s a pretty neat survival strategy, blurring the line between us and the machine so any self-preservation instincts in the new machine-consciousnesses would automatically extend to us people, too.  A hard pill to swallow, sure, but if we really become second best in the face of general AI, mutually-assured destruction can be a good deal for the little guy (us).

Human immortality (30-??? years away)

Here’s a biggie: general AI could offer a viable path to massive extensions in human life, depending on how far one is willing to stretch the concept of “life.”  Is a mind (and presumably, a consciousness) without a body “alive”?  If you say yes, you could be the first in line to get your brain uploaded.  By encoding your specific neural pathways into a general AI, it’s possible you could continue life, free from physical ailments, disease, and accidents, snugly inside a computer.  And if your new computer architecture allows connections to form within itself, and disconnects old connections no longer considered useful, well, have you lost much besides your squishy, meaty body and brain?  Many techies say no, and amazingly, the first companies promising brain uploads are already starting to crop up.  A particularly grisly startup called Nectome has developed an embalming procedure so advanced that every synapse in your brain is identifiable under an electron microscope afterward, and will remain perfectly preserved for hundreds of years.  The kicker?  The process is 100 percent fatal.  In order to embalm the brain so efficiently, the preservation fluids need to be pumped in while you’re still alive, euthanizing you but preserving the brain.  Then your perfect brain can sit around for any amount of time until brain-uploading technology is developed, and Nectome will resurrect you in a computer.  Not surprisingly, their target market is terminally-ill patients.  And who knows?  It just might work.

Not only could life be extended by physical protections and material upgrades, it could also be extended, at least perceptually, by upgrading processing speeds (Warning: far-fetched sci-fi logic incoming).  The human brain has a few fundamental frequencies, called brain waves, that seem to dictate perceived consciousness.  These waves range in frequency from 0.5 Hz when sleeping deeply to 30 Hz when absolutely alert, and other estimates put the maximum possible neuron firing rate at about 1000 Hz.  Now, consider that the 8th generation Intel i7 processor released last year is capable of pulling 4.7 gigahertz (that’s 4,700,000,000 Hz!), and try to imagine what it would be like to live in one of those.  Would you think 4 billion times faster?  Would you perceive time as passing 4 billion times slower?  And if you, a mere mortal human, could pack 4 billion seconds (that’s nearly 127 years) into every second, would you?  Even if you lived a normal 70 years, a little back-of-the-envelope math puts your perceived lifespan at roughly 330 billion years – about 24 times the 13.8 billion years that have elapsed since the Big Bang.  And all this has been calculated using processor speeds that already exist.  Who knows what our processors will be capable of in 2045 (Ray Kurzweil’s estimation for the first human-computer merger)?
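It's worth actually running that back-of-the-envelope arithmetic. A quick sketch (the 4.7 GHz-versus-roughly-1 Hz speedup is, of course, wildly hand-wavy):

```python
# Back-of-the-envelope only: treat 4.7 GHz as a 4.7-billion-fold speedup over
# a ~1 Hz stream of conscious "moments", and scale a 70-year life by it.
speedup = 4.7e9
perceived_years = 70 * speedup
assert perceived_years == 3.29e11  # about 330 billion perceived years

age_of_universe_years = 13.8e9
assert round(perceived_years / age_of_universe_years) == 24
```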

The takeaway

Obviously, the possibilities of generally-intelligent AI are enormous.  As AI technology rapidly progresses, the future is looking more and more like a wild ride.  No matter who you are, AI will have both something tempting to offer and something appalling that will make your skin crawl.  Whether general AI will provide lightning-speed research or human-computer cyborgs is still unclear, but we can be sure the artificial intelligence future holds some drastic changes to human work, health, and the world as we know it.  And look out; it’s all coming sooner than you think.  One thing’s certain, though, especially if you are an enterprise in 2018 – you still need to get your own IT house in order; AI won’t be up to that job for a while.

Security at the Centre


The following was adapted from a recent sit-down chat with our team.  Security is a big challenge in the enterprise. You have legacy security teams that have traditionally worked in a checkpoint-style model: we have a six-month product plan, on these dates we're going to do penetration tests, and on this date we'll do the security architecture review. That all flies out the window when you have digital teams deploying to production multiple times a day.

We've worked pretty closely with some enterprise security teams, helping to change their workflow and disseminate ownership of security back into the teams that are actually deploying these services. So you no longer have an ivory-tower security team saying, "This is bad." The way to secure the Cloud is by asking how, not by saying no.

You need to enable teams to have ownership of their own security model, or at least a significant part of it. It is the only way security can really scale. You can take what was traditionally, say, a security audit, write automation around it, and have compliance checks doing what we like to call continuous compliance.  We're running these checks (you can run them every five minutes if you like) and you have a dashboard of high-risk items in your Cloud.
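As a rough illustration of what one such automated check might look like (the function name and the example are our own invention; the input mirrors the shape the AWS EC2 API returns for security groups), here is a sketch that flags ingress rules open to the whole internet:

```python
# A minimal sketch of one "continuous compliance" check: flag security groups
# that allow ingress from anywhere (0.0.0.0/0). In a real setup this would run
# on a schedule against live data pulled from the AWS APIs, feeding a
# dashboard of high-risk items.

def find_open_ingress(security_groups):
    """Return (group_id, port) pairs whose ingress rules are open to the world."""
    findings = []
    for group in security_groups:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((group["GroupId"], rule.get("FromPort")))
    return findings

# Example data in the same shape the EC2 API returns:
groups = [
    {"GroupId": "sg-web", "IpPermissions": [
        {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-db", "IpPermissions": [
        {"FromPort": 5432, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]},
]
print(find_open_ingress(groups))  # [('sg-web', 443)]
```

The point is less the specific rule than the workflow: checks like this run continuously, and the deploying team owns the results.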

But that only goes so far. You also need to instill in teams the habit of thinking about security when they're designing systems.

 

Introducing Kombustion: Our Open Source AWS Developer Tool


The team is proud to announce the launch of Kombustion.  Here's the media release. Want to give Kombustion a try?  Visit: www.kombustion.io

Australia, August 15, 2018 – Kablamo has released its most significant open source software project to date, Kombustion. The AWS plugin provides an additional layer of intelligence for AWS CloudFormation, reducing the time and complexity of managing thousands of lines of code across AWS environments of any size. 

The tool provides benefits for developers and engineers who use AWS, as tasks that previously took days or weeks can now be completed in minutes or a few hours.  For example, setting up a new Virtual Private Cloud in an AWS cloud account has typically required significant work to define and manage up to 30 different AWS resources. With Kombustion, a best practice AWS Virtual Private Cloud can be set up with a small amount of configuration to an existing plugin.

“We developed Kombustion to help solve a common challenge for all AWS CloudFormation users. It was built in-house, and we’d been using it ourselves, but after seeing the benefits Kombustion delivered to our team, we decided to open source the project and share it with everyone,” said Allan Waddell, Founder and Co-CEO of Kablamo. “Our Kablamo values align strongly with the open source software community and we are proud to play our part in making AWS an even better experience for its users.”

CloudFormation is a native AWS service, which provides the ability to manage infrastructure as code. Kombustion is a CloudFormation pre-processor, which enables AWS users to codify and re-use best practices, while maintaining backwards compatibility with CloudFormation itself.
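To make the pre-processor idea concrete, here is a toy sketch of the pattern described above: walk a template, and wherever a custom resource type matches a registered plugin, expand it into plain CloudFormation resources. (This is not Kombustion's actual code or plugin format, and the `Example::Network::VPC` type name is invented; it only illustrates the core idea.)

```python
# Illustrative only: a toy CloudFormation pre-processor. Custom resource types
# registered by "plugins" are expanded into plain CloudFormation resources;
# everything else passes through unchanged, preserving backwards compatibility.

def expand_vpc(name, props):
    """A pretend plugin: expand one custom resource into several real ones."""
    cidr = props.get("Cidr", "10.0.0.0/16")
    return {
        name + "Vpc": {"Type": "AWS::EC2::VPC",
                       "Properties": {"CidrBlock": cidr}},
        name + "Igw": {"Type": "AWS::EC2::InternetGateway", "Properties": {}},
    }

PLUGINS = {"Example::Network::VPC": expand_vpc}  # hypothetical type name

def preprocess(template):
    out = dict(template)
    out["Resources"] = {}
    for name, res in template.get("Resources", {}).items():
        plugin = PLUGINS.get(res.get("Type"))
        if plugin:
            out["Resources"].update(plugin(name, res.get("Properties", {})))
        else:
            out["Resources"][name] = res  # plain CloudFormation passes through
    return out

template = {"Resources": {"Main": {"Type": "Example::Network::VPC",
                                   "Properties": {"Cidr": "10.1.0.0/16"}}}}
expanded = preprocess(template)
print(sorted(expanded["Resources"]))  # ['MainIgw', 'MainVpc']
```

The output is an ordinary CloudFormation template, which is what lets a tool like this stay compatible with CloudFormation itself.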

Kombustion is especially useful where multiple CloudFormation templates are required. It enables developers, DevOps engineers and IT operations teams to reduce rework and improve reusability in the management of CloudFormation templates, whilst also enabling access to best practices via freely available Kombustion plugins.

Liam Dixon, Kablamo Cloud Lead and Kombustion contributor, said that while the core functionality has been built, it is essentially a foundation, and he hoped the wider AWS community would help make the tool even better.

“Different AWS users have different ways of pre-processing CloudFormation templates, but we saw the opportunity to develop a freely available tool with the potential to become widely used in Australia and overseas,” Dixon said. “Kombustion’s publicly available, plugin-based approach, means that the AWS developer community can reduce rework and share best practices in areas such as security, network design and deploying serverless architectures.”

In addition to reducing the time and complexity of managing multiple AWS instances, Kombustion's benefits include:

  • Adoption can be incremental, so there is no need to completely rewrite current CloudFormation templates;
  • Kombustion plugins can be installed from a GitHub repository;
  • Cross-platform functionality means Kombustion works on Linux, FreeBSD and macOS; and
  • Kombustion is completely free for both personal and commercial use.

The first release of Kombustion is available for download today at www.kombustion.io. Kablamo is calling on the AWS community to test and provide feedback on Kombustion, and to contribute towards future iterations of the project.

Buzzword: DevOps

Liam: DevOps does get thrown around a lot, and I think it's actually one of the phrases in modern-day IT that gets misused, or probably misappropriated; everyone has their own opinion on it.

And when we think about it, people define it as a job title. So, for example, site reliability engineering, or even traditional operations, just gets labelled, "You're a DevOps engineer," when it's like, "Mm, not really." It's meant to be an ethos around development and operations collaborating.

Allan: Collaborating. Culture.

Liam: And again, it folds into the security space, where DevSecOps is a role. How do you feel about this? I mean, you've moved more into the DevSecOps space...

Marley: Look, it definitely is just a chaining of words together, like you said. With DevOps you have that crossover between developers and operators working closer together, but you could also take it as operations done through development, where you're just talking about infrastructure automation and not even talking about cross-team cooperation. I mean, look, it is relevant in the security space.

Allan: Mm-hmm. I think it's relevant in the transformation space. And transformation is probably another buzzword, but for an organization that doesn't blend those teams and still has really rigid silos, it doesn't make sense to set up DevOps as a "center of excellence," as AWS would call it. Obviously it never is on the first go, a center of excellence, but there it kind of makes sense. The trouble is it's become not a culture word but a role, a DevOps person, which almost in and of itself defeats the purpose of why DevOps was created.  You're meant to have the culture of DevOps across your business, not be a DevOps person.

Liam: It’s not hire the DevOps, and then that problem is solved.  

Marley: Yeah, I feel like automation engineer is a better term. Because no matter whether you're doing software development or infrastructure or security, automating those workflows is the outcome you're trying to achieve, right?
 


PODCAST: Allan on Walk the Tech Talk

Kablamo's Allan Waddell recently appeared on the industry podcast Walk the Tech Talk.  He was grateful for the opportunity, especially since the host, Harvey Nash's Anna Frazzetto, was just as interested in tackling some of the biggest ideas in tech as he was.  Here's the blurb from the show.  Enjoy!

On this episode of Walk The Tech Talk, Anna interviews Allan Waddell, Founder of Kablamo, a human-centered, cloud-based software company helping businesses make efficient end-to-end use of the cloud. Anna and Allan discuss how AI is becoming a key factor in digital transformation and break down which AI initiatives are having an impact and which are just hype. Allan also discusses the main drivers pushing businesses towards AI, his thoughts on whether AI will actually affect jobs, how AI, machine learning and neural networks all fit together, and much more. Join Anna and learn from the strategies and accomplishments of this episode’s tech trailblazer.

Buzzword: AI


In this week's buzzword chat the team tackles how machine learning and AI aren't going to solve your IT woes; robust discussion ensues.

Ben: Artificial intelligence, yeah?

Liam: Yeah, that and machine learning probably. The coupling of those two as a discrete entity. I need to do the AI.

Marley: A lot of businesses seem to have convinced themselves that they need AI and machine learning when it’s really just a bad process issue.

Liam: The number of times we see companies with a ridiculous amount of data saying, "We've got so much data, we don't really know how to use it."  And they look at some of the technologies people are building around machine learning models and the idea of artificial intelligence, because there are a lot of groups publishing about it, when realistically what they need to do is just start performing data transformation over the top of it, standard data science pieces.

Not every solution needs to be a machine-learning model. A lot of the questions they're actually looking to answer, about their data, about their audience, about gaps in their product or their marketing or their pitch, don't need copious amounts of machine learning, let alone a purpose-built AI engine or compute layer. Those are probably the two words that get misused the most, if we're honest.
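For what it's worth, the "standard data science pieces" Liam mentions are often as simple as a group-by and a ratio. A hypothetical example (the data and the business question are invented): answering "which marketing channel converts best?" with plain aggregation rather than a model:

```python
# Many "we need AI" questions are answered by plain aggregation. Hypothetical
# example: conversion rate per marketing channel from raw event records.
from collections import defaultdict

events = [
    {"channel": "email", "converted": True},
    {"channel": "email", "converted": False},
    {"channel": "search", "converted": True},
    {"channel": "search", "converted": True},
    {"channel": "email", "converted": False},
]

totals = defaultdict(lambda: [0, 0])  # channel -> [conversions, visits]
for e in events:
    totals[e["channel"]][0] += e["converted"]  # True counts as 1
    totals[e["channel"]][1] += 1

rates = {ch: conv / visits for ch, (conv, visits) in totals.items()}
print(rates)  # email ~ 0.33, search = 1.0
```

No training, no model, no GPU; just a transformation over the data you already have.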

Ben: It's not the future everyone thinks it is. Everyone thinks that you're going to ask AI, "How do I..." I don't know, "talk to me in a natural way." And every chatbot that I've ever seen is utterly terrible. They're not very intuitive. And everyone's going, "Oh, we want that fantastic chatbot experience," but what they really should be aiming for in AI is something that's, you know, solving the customer problem. And yeah, I think it's just been oversold. Really, really oversold.

Allan: Yeah, but it's rapidly changing. I mean, I think we're rapidly moving towards a place where machines will pass the Turing test. And we're talking about 44% of jobs across Australia being replaced by AI in the next 20 years.

Liam: That’s more around robotics and machine learning…

Allan: Actually, no. It’s actually not.

Liam: Okay.

Allan: It's not the blue collar workers that are going to get hit by AI role-replacement first; it's going to be the white collar ones. It’s going to be banking, finance in general, and legal.

Liam: What, accountants, really?

Allan: Yes, effectively.  And software developers. Software development as it's practiced today is going to change rapidly. We deal a lot with the chat services that would apply at a call center or contact center. And I think you're right. It does need to solve the customer problem first, and where most companies are really behind is the detailed workflows, the consistent form, that AI would need in order to be effective. That's going to take some time just to map out. To be fair, it's like training your replacement; people are going to resist that change. But in the background there's also the evolution of the humanization of those technologies. I can imagine a time, inside our lifetimes, when you're going to pick up the phone, speak to a machine, and think you're speaking to a person, and if it doesn't solve the problem it's moot. So yeah, I completely agree.

Ben: The classic example is I know a guy whose name is Paris, and the chatbot says, "What's your name?" He types in "Paris" and it goes, "Oh, you want to go to France?" And he's like, "No, my name is Paris. You just asked me that." They can’t even get that right, so I’m very skeptical.

Allan: It’s got a way to go.

Liam: I think that’s more an implementation issue there. In terms of the context of what it’s capturing or otherwise.
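Liam's point, that the "Paris" failure is an implementation bug rather than a limit of the idea, can be made concrete. A toy sketch (all names and behaviour invented for illustration): if the bot tracks which question it just asked, the reply binds to that slot instead of being misread as a new request.

```python
# Toy illustration: the "Paris" failure is a context bug. If the bot remembers
# which slot it just asked about, the reply "Paris" fills that slot instead of
# being misread as a travel request.

KNOWN_CITIES = {"paris", "london", "sydney"}

def handle(message, state):
    """Route a user message using simple dialogue state."""
    expecting = state.pop("expecting", None)
    if expecting:                        # we just asked a question: bind the answer
        state[expecting] = message
        return f"Thanks, {message}!"
    if message.lower() in KNOWN_CITIES:  # no pending question: treat as an intent
        return f"Looking up trips to {message}..."
    return "Sorry, I didn't catch that."

state = {"expecting": "name"}            # the bot just asked "What's your name?"
print(handle("Paris", state))            # Thanks, Paris!  (not a travel search)
print(handle("Paris", state))            # Looking up trips to Paris...
```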

Ben: Ah, maybe. Maybe...

Liam: If you take Alexa and Google Home, their chatbot integration, and some of their machine learning work in the NLP space, has done reasonably well. Not to say they’re the be-all and end-all.

Ben: Just saying, I’m yet to see any chatbot that impresses me. And I’ve tried them all.