
Importing Custom Findings into AWS Security Hub

Organizations often have a suite of security products to help maintain their on-premises and cloud networks and to enforce the best practices they put in place. The AWS Security Hub service was announced at re:Invent 2018 and gives security administrators a centralized view of all of these tools by aggregating their findings in a common format, either within the current account or using a master account.

Though Security Hub is in preview, you can access it in your console now, and it comes with out-of-the-box support for AWS services such as GuardDuty, Macie and Inspector, as well as for 3rd party providers like Rapid7, Qualys, Splunk, Twistlock and many more.

In addition to the officially supported providers, you can also build your own providers that import findings directly into the Security Hub findings dashboard.

Writing a Custom Finding Provider

Troy Hunt created the Have I Been Pwned service, which allows you to receive notifications whenever your e-mail address is detected in a publicly disclosed data breach. In addition to the e-mails, an API is freely available to query all breaches a particular identity has been involved in. Here is a template that regularly polls the API for a number of e-mail addresses and produces findings in the Security Hub findings dashboard: https://github.com/KablamoOSS/Security-Hub-Custom-Provider-Demo

The template creates a Lambda which is triggered periodically by CloudWatch Events. The Lambda iterates through a list of e-mail addresses and queries the Have I Been Pwned API to determine whether any breaches have been detected. If they have, they are included in a batch_import_findings call to import the findings. Findings included in subsequent calls that share a common ID will be updated.
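As a rough sketch of that flow (get_breaches and build_finding are hypothetical stand-ins for the template's actual HIBP query and ASFF construction code):

import boto3

securityhub = boto3.client("securityhub")

EMAIL_ADDRESSES = ["user@example.com"]

def handler(event, context):
    findings = []
    for email in EMAIL_ADDRESSES:
        for breach in get_breaches(email):                 # hypothetical HIBP query helper
            findings.append(build_finding(email, breach))  # hypothetical ASFF builder
    if findings:
        # BatchImportFindings accepts up to 100 findings per call
        response = securityhub.batch_import_findings(Findings=findings)
        print(response["FailedCount"], "findings failed to import")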

The findings must conform to the AWS Security Finding Format, which has mandatory and optional fields, including fields representing the severity of the finding and finding- or resource-specific custom properties. Normally, findings are required to target a specific AWS or 3rd party product; however, each account comes with a default product which you can use to import your custom findings.
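For illustration, a minimal finding against the default product might look like the sketch below; REGION, ACCOUNT_ID and the breach details are placeholders, and the full set of fields is in the ASFF reference:

REGION = "ap-southeast-2"    # placeholder
ACCOUNT_ID = "123456789012"  # placeholder

finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "hibp/user@example.com/ExampleBreach",  # a stable ID lets later imports update this finding
    "ProductArn": f"arn:aws:securityhub:{REGION}:{ACCOUNT_ID}:product/{ACCOUNT_ID}/default",
    "GeneratorId": "hibp-custom-provider",
    "AwsAccountId": ACCOUNT_ID,
    "Types": ["Sensitive Data Identifications/PII"],
    "CreatedAt": "2018-12-01T00:00:00Z",
    "UpdatedAt": "2018-12-01T00:00:00Z",
    "Severity": {"Normalized": 60},
    "Title": "Identity found in breach: ExampleBreach",
    "Description": "user@example.com appeared in the ExampleBreach data breach.",
    "Resources": [{"Type": "Other", "Id": "user@example.com"}],
}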

Findings shown in the dashboard can be expanded to see the full description and additional detail. Custom Actions can also be created and executed against findings; each execution triggers a CloudWatch Event, which can in turn invoke any supported target, such as Lambda or Step Functions.
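As a rough sketch, a Lambda target for a Custom Action might read the selected findings out of that event; the event fields used here are assumptions about the CloudWatch Events payload rather than a documented contract:

def custom_action_handler(event, context):
    # Assumed shape: the event detail carries the custom action's name
    # and the findings that were selected when the action was executed.
    detail = event.get("detail", {})
    action = detail.get("actionName")
    for finding in detail.get("findings", []):
        print(f"Custom action {action} invoked for finding {finding.get('Id')}")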

Once you have findings in the system, you can construct your own Insights, which are views that give you an overview of findings matching your predefined filters and grouping. For example, an Insight can show an overview of the breaches across all Have I Been Pwned findings.
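Insights can also be created programmatically with boto3; in a sketch like the following, the filter values and grouping are assumptions, and the Filters syntax should be checked against the CreateInsight documentation:

import boto3

securityhub = boto3.client("securityhub")

securityhub.create_insight(
    Name="Have I Been Pwned breaches",
    Filters={
        # Only findings produced by our custom provider (illustrative generator ID)
        "GeneratorId": [{"Value": "hibp-custom-provider", "Comparison": "EQUALS"}],
    },
    GroupByAttribute="ResourceId",  # group breaches by the affected e-mail address
)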

Gotchas

There are some issues I found when developing for Security Hub which are important to note if you plan on building your own integrations. Since the service is still in Preview as of writing, these issues may or may not be fixed by the time it reaches General Availability.

Findings cannot be deleted, but are retained for 90 days

There is currently no functionality to permanently delete a finding; instead, findings are retained for 90 days, after which they are purged. You can archive findings during this period, but unfiltered searches will still return them.

Updateable attributes may not update

The AWS Security Finding Format states which attributes can be updated by subsequent import calls; however, some of these fields, such as "Title", do not seem to update despite successful return codes from the API.

Date formats are sensitive

Though the format specifies that any ISO 8601-formatted string can be used in its date fields, any value with timezone information is rejected unless it is formatted in Zulu (UTC) time. Additionally, contrary to what the documentation states, errors for failed findings are not returned in the response; you must execute the call with a debugging log level to actually see the error messages.
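In practice that means producing timestamps like the following; in my testing only the trailing-Z form was accepted:

from datetime import datetime, timezone

# Accepted: Zulu time with a trailing "Z"
updated_at = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
# e.g. "2018-12-01T03:15:42Z"

# Rejected: the same instant expressed with a numeric offset, e.g.
# datetime.now(timezone.utc).isoformat() -> "2018-12-01T03:15:42.123456+00:00"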

Fields have undocumented maximum lengths

Also undocumented is that some fields have a maximum length. The "Description" field, for example, has a maximum length of 1024 characters, so you'll need to trim values which are longer than that.
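A simple guard before importing is enough, for example (reusing the finding dict from earlier):

MAX_DESCRIPTION_LENGTH = 1024  # observed limit; not stated in the documentation

finding["Description"] = finding["Description"][:MAX_DESCRIPTION_LENGTH]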

tl;dr

Despite the issues, AWS Security Hub is a great offering for organizations looking to build their own SIEM and/or consolidate their findings into a centrally managed account. If you would like to hear more about how your organization might benefit from this new service, get in touch with us to find out more.

AWS Security: Rule-based approvals for CloudFormation


AWS CloudFormation is a great tool for developers to provision AWS resources in a structured, repeatable way, with the added benefit of making updates and teardowns far more reliable than performing them with the individual resource APIs. Using CloudFormation stacks is the recommended approach for the majority of a team's workloads, and it integrates easily into CI/CD pipelines.

With that power, the complexity of CloudFormation templates can quickly become overwhelming, and shortcuts or mistakes can occur. One common solution to this problem is to define a set of rules against which stack resources are evaluated. These rules can be generically defined, or specific to a particular team's needs. For example, a common issue that teams face is S3 buckets being incorrectly exposed; a rule may be defined that prevents this from occurring or notifies security teams.

Pre-Deploy vs. Post-Deploy Analysis

There are two distinct approaches to performing CloudFormation analysis: pre-deploy and post-deploy. Pre-deploy analysis reviews the content of the templates before they are created or updated, whereas post-deploy analysis looks at the resultant state of the resources created or updated by the CloudFormation action.

Pre-deploy analysis will catch problems before they have a chance to manifest themselves in the environment. It is a more security-conscious approach, but has the drawback that the end result of a template is significantly more difficult to predict or simulate.

Post-deploy analysis has a much clearer picture of the state of resources and the result of the stack's actions; however, some damage may already have been done the moment the resources are placed in that state. Amazon GuardDuty, which alerts on an Amazon-managed, pre-defined set of rules across all resources within your account, is an example of a post-deploy analysis and alerting tool.

Validating templates before deployment

Let's discuss how a pre-deploy tool might work. The following example is written in Python 3:

def processItem(item):
    # Recurse through dicts and lists until we reach primitive values,
    # then evaluate each primitive against the ruleset.
    if isinstance(item, dict):
        for k, v in item.items():
            processItem(v)
    elif isinstance(item, list):
        for listitem in item:
            processItem(listitem)
    else:
        evaluate_ruleset(item)

processItem(template_as_object)

This is a very rudimentary evaluator that walks the template, finds every primitive value (strings, integers, booleans) and evaluates each one against the ruleset.
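The evaluate_ruleset function is left undefined above; a minimal sketch that simply flags public S3 ACL values (the banned list is purely illustrative) could look like this:

# Purely illustrative: a real ruleset would be configuration-driven and
# would know which resource and property the value came from.
BANNED_VALUES = {"PublicRead", "PublicReadWrite"}

def evaluate_ruleset(value):
    if isinstance(value, str) and value in BANNED_VALUES:
        raise ValueError(f"Disallowed value found in template: {value}")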

Diving in

Consider the following template:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A public bucket",
  "Resources" : {
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : "PublicRead"
      }
    }
  }
}

A rule that prevents S3 buckets from being publicly exposed may choose to interrogate the AccessControl property of any AWS::S3::Bucket resource for a public ACL and alert or deny based on that. This is how the majority of pre-deployment analysis pipelines work. Things can get tricky, though, when you involve the CloudFormation intrinsic functions, such as Ref and Fn::Join. Now consider the following template:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A public bucket",
  "Resources" : {
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : { "Fn::Join" : [ "", [ "Publi", "cRead" ] ] }
      }
    }
  }
}

You'll quickly notice that even if a tool were to iterate through all properties in every Map and List, it would never find the "PublicRead" keyword intact. It's very common to join strings or reference mappings in templates, so a pure string-matching approach would be fairly ineffective.
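One way to catch this particular case is to partially evaluate intrinsic functions before applying the ruleset. A minimal sketch that resolves nested Fn::Join calls, and nothing else (Ref, Fn::GetAtt and the rest need their own handling), might look like this:

def resolve(node):
    # Recursively resolve Fn::Join so joined literals can be checked as a
    # single string; every other node passes through untouched.
    if isinstance(node, dict) and "Fn::Join" in node:
        delimiter, parts = node["Fn::Join"]
        return delimiter.join(str(resolve(part)) for part in parts)
    if isinstance(node, dict):
        return {key: resolve(value) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve(item) for item in node]
    return node

# resolve({"Fn::Join": ["", ["Publi", "cRead"]]}) == "PublicRead"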

CloudFormation resource specification

AWS produces a JSON-formatted file called the AWS CloudFormation Resource Specification. This file is a formal definition of all the resource types that CloudFormation can process. It includes every resource, its properties, and information about those properties, such as whether modifying them requires the resource to be recreated.
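As a sketch, the specification can be loaded and inspected like this; the local file path is a placeholder for the region-specific file linked from the CloudFormation documentation:

import json

# Placeholder path: download the region-specific specification file first.
with open("CloudFormationResourceSpecification.json") as spec_file:
    spec = json.load(spec_file)

bucket_spec = spec["ResourceTypes"]["AWS::S3::Bucket"]
for name, prop in bucket_spec["Properties"].items():
    # UpdateType indicates whether changing the property forces replacement.
    print(name, prop.get("PrimitiveType", prop.get("Type")), prop.get("UpdateType"))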

We can use this file to evaluate the properties of each resource and apply rulesets directly to individual properties, rather than to the template as a whole. With added logic for processing the intrinsic functions, we have created an open-source CloudFormation template simulator that is easily deployable in any environment. The simulator is able to evaluate most intrinsic functions, like the example above, and properly evaluate the result against our ruleset.

The template simulator can be found at https://github.com/KablamoOSS/cfn-analyse.

Thinking maliciously

Now consider the following template:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A public bucket",
  "Resources" : {
    "TopicLower" : {
        "Type" : "AWS::SNS::Topic",
        "Properties" : {
            "TopicName" : "a-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t-u-v-w-x-y-z-0-1-2-3-4-5-6-7-8-9"
        }
    },
    "TopicUpper" : {
        "Type" : "AWS::SNS::Topic",
        "Properties" : {
            "TopicName" : "A-B-C-D-E-F-G-H-I-J-K-L-M-N-O-P-Q-R-S-T-U-V-W-X-Y-Z"
        }
    },
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : { "Fn::Join" : [ "", [
            { "Fn::Select" : [ 15, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicUpper", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 20, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 1, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 11, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 8, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 2, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 17, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicUpper", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 4, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 0, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] },
            { "Fn::Select" : [ 3, { "Fn::Split" : [ "-", { "Fn::GetAtt" : [ "TopicLower", "TopicName" ] } ] } ] }
        ] ] }
      }
    }
  }
}

The above template will actually evaluate to produce a public S3 bucket, because the S3 bucket's "AccessControl" property is assembled from characters selected out of the "TopicName" attributes of the SNS topics. The format of these attributes is not formally documented in the CloudFormation resource specification, nor anywhere else. This means there is currently no effective way to truly verify that resources using the intrinsic functions "Ref" or "Fn::GetAtt" are valid against the defined ruleset.

By the nature of CloudFormation, there isn't a perfect solution for static analysis, which is why a multi-layered strategy is the most effective defense. If your organisation needs help protecting its environment, get in touch with us to find out how we can help you better protect yourself against these threats.


Engineering Stability into Kombustion's Plugin System

Kombustion is a CloudFormation tool that uses plugins to pre-process templates. It enables DevOps teams to re-use CloudFormation best practices and standardise deployment patterns.

Kombustion's intent is to provide a reliable enhancement to CloudFormation. Starting with native CloudFormation templates, Kombustion uses plugins to enable reliable, offline preprocessing transformations.

When you start using a plugin in your template, Kombustion relies on the following formula to guarantee stability of the generated template.

(SourceTemplate, Plugins) => Generated Template

Given the same SourceTemplate and the same Plugins, you will always get the same Generated Template.

To get this stability you need to commit kombustion.yaml, kombustion.lock and .kombustion to your source control. These files and this folder are created when you initialise Kombustion and install a plugin.

It's best practice for plugins to be pure functions without side-effects. That is, given the same input, they will always produce the same output.

Kombustion makes an effort to prevent lock-in, providing a way to "eject" from using it with kombustion generate. This will save the template after it has been processed with plugins. With the generated template, you can use the aws cli to upsert it.

You don't need to, though, as Kombustion has a built-in upsert function with carefully chosen exit codes to make CI integration much easier.

In general, when calling upsert, an error is returned if the requested changes (for example, Create Stack or Update Stack) are not cleanly applied.

Similarly, when calling Delete Stack, an error is returned if the stack is not fully deleted.

In addition, Kombustion prints the stack event logs inline, so you have all the information you need to debug a failed upsert or delete from within your CI log.

Without using any plugins, Kombustion will happily upsert a CloudFormation template, so you can start using it with your existing templates and add plugins when you need to.

Download Kombustion from kombustion.io.

Follow our guide to writing your first plugin.

Introducing Kombustion: Our Open Source AWS Developer Tool


The team is proud to announce the launch of Kombustion. Here's the media release. Want to give Kombustion a try? Visit: www.kombustion.io

Australia, August 15, 2018 – Kablamo has released its most significant open source software project to date, Kombustion. The AWS plugin provides an additional layer of intelligence for AWS CloudFormation, reducing the time and complexity of managing thousands of lines of code across AWS environments of any size. 

The tool provides benefits for developers and engineers who use AWS, as tasks that previously took days or weeks can now be completed in minutes or a few hours.  For example, setting up a new Virtual Private Cloud in an AWS cloud account has typically required significant work to define and manage up to 30 different AWS resources. With Kombustion, a best practice AWS Virtual Private Cloud can be set up with a small amount of configuration to an existing plugin.

“We developed Kombustion to help solve a common challenge for all AWS CloudFormation users. It was built in-house, and we’d been using it ourselves, but after seeing the benefits Kombustion delivered to our team, we decided to open source the project and share it with everyone,” said Allan Waddell, Founder and Co-CEO of Kablamo. “Our Kablamo values align strongly with the open source software community and we are proud to play our part in making AWS an even better experience for its users.”

CloudFormation is a native AWS service, which provides the ability to manage infrastructure as code. Kombustion is a CloudFormation pre-processor, which enables AWS users to codify and re-use best practices, while maintaining backwards compatibility with CloudFormation itself.

Kombustion is especially useful where multiple CloudFormation templates are required. It enables developers, DevOps engineers and IT operations teams to reduce rework and improve reusability in the management of CloudFormation templates, whilst also enabling access to best practices via freely available Kombustion plugins.

Liam Dixon, Kablamo Cloud Lead and Kombustion contributor, said while the core functionality has been built, it was essentially a foundation and he hoped the wider AWS community would help make the tool even better.

“Different AWS users have different ways of pre-processing CloudFormation templates, but we saw the opportunity to develop a freely available tool with the potential to become widely used in Australia and overseas,” Dixon said. “Kombustion’s publicly available, plugin-based approach means that the AWS developer community can reduce rework and share best practices in areas such as security, network design and deploying serverless architectures.”

As well as reducing the time and complexity of managing multiple AWS instances, other Kombustion benefits include:

  • Adoption can be incremental so there is no need to completely rewrite current CloudFormation templates;
  • Kombustion plugins can be installed from a Github repository;
  • Cross-platform functionality means Kombustion works on Linux, FreeBSD and MacOS; and,
  • Kombustion is completely free for both personal and commercial use

The first release of Kombustion is available for download today at www.kombustion.io. Kablamo is calling for the AWS community to test and provide feedback on Kombustion, and to contribute towards future iterations of the project.