There’s a lot of noise about penetration testing cloud workloads, much of it from people who don’t really understand cloud security. I thought I’d lay out my thoughts on the matter. This post isn’t about the skills needed, or typical findings. This is here to cover what the point of a penetration test against an AWS workload is, what a penetration testing program should look like, and how to make it a success. Hopefully, it helps people make better decisions when buying and running engagements like these.

As a disclaimer, at the time of writing I’m a principal consultant and the cloud security consulting team lead at WithSecure. This post is informed by my experiences there, but it’s purely an opinion piece and should not be construed to represent either the official position of my employer, or a sales pitch.

What is Penetration Testing?

Penetration testing seems to be a bit of an overloaded term, so for the purposes of this post, I’m going to use the UK’s National Cyber Security Centre’s definition: “A method for gaining assurance in the security of an IT system by attempting to breach some or all of that system’s security, using the same tools and techniques as an adversary might.” Penetration testing validates that systems meet expected security standards, supports regulatory or compliance requirements for security testing, and can demonstrate to customers and clients that an organisation is safe to trust with their data.

The key difference compared to a vulnerability assessment is the focus on human-driven activity. It involves offensive security specialists using their knowledge, experience and intuition to hunt down issues that a vulnerability scanner won’t find. Most testers will make use of vulnerability scanners to quickly identify simple issues, but if the end results you receive are just a re-badged vulnerability scan report, then you should probably select a better vendor next time.

It’s also important to note that, whatever you do, it all needs to happen within the bounds of the AWS penetration testing guidelines. Note that not all of AWS’ compute services are listed there - you may need to reach out to your AWS account manager for support and confirmation on some points.

Basic Penetration Testing

Most organisations approach penetration testing piecemeal. For each workload, a discrete engagement will be scoped and commissioned. It’s common to see penetration testing on a fixed schedule, perhaps every six or twelve months, or on the release of a significant update. This makes it easy to demonstrate to clients and auditors that you’re performing security testing, and lines up with the “industry best practices” most vendors will shout about. If your workplace falls into this bucket, each test of a single workload should include:

  • A security assessment of the application. This will look for common application and API vulnerabilities, such as those discussed in the OWASP Top Ten
  • A review of the configuration of the contents of any relevant AWS accounts, including:
    • Automated scanning for common configuration mistakes
    • Human-driven reviews of IAM, the network layout, security groups, secrets management and other components that require understanding of context
  • An unauthenticated assessment looking to exploit any internet-facing components. This often uses data gathered in the AWS configuration review detailed above, and will typically include:
    • Port scanning, DNS reconnaissance and other enumeration to identify exposed services
    • Manual investigation of any identified systems and services to look for vulnerabilities or misconfigurations
    • Exploitation of any identified vulnerabilities to demonstrate impact and find paths further into the environment.
  • Some work to tie those components together, map out and validate any attack paths that combine issues in the app and infrastructure.
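To make the enumeration step concrete, here’s a minimal TCP connect scan in Python - a toy stand-in for the real scanners testers use (nmap and friends), with the host and port list as placeholders you’d replace with your actual scope:

```python
import socket

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`.

    Illustrative only: real engagements use far more capable scanners
    that handle timing, UDP, service fingerprinting and so on.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success rather than raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Only ever point something like this at systems you’re authorised to test, per the AWS guidelines discussed above.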

Why There Are Better Options

In my opinion, this approach falls flat for a few reasons. Some of these are cloud-specific, while others are driven by how the security landscape and industry have evolved.

Driven by Audits, Not Threats

This approach is mostly tied into compliance-driven security, where there’s a need to demonstrate that each app has seen sufficient security testing in isolation. It misses the broader point, which is that many breaches do not start in a particular application. Many attackers will look to breach your AWS estate through phishing, stealing credentials or other vectors orthogonal to the security of any individual workload.

Cloud Development Moves Too Fast

A penetration test is out of date the second you receive the report. It serves as a point-in-time assessment, but cannot be relied on as a statement of the current security of a given platform. This has always been true, but in legacy environments this was a lot less of an issue, as changes were shipped to production far less frequently. With many cloud-native organisations shipping multiple changes to production per day, this is a much bigger problem, and risks leaving significant vulnerabilities deployed in production workloads for long periods of time.

Low Return on Investment

Penetration testing is an expensive way to catch a lot of basic AWS mistakes. High-end cloud-native application protection platform (CNAPP) or cloud security posture management (CSPM) tooling may be out of your budget, but you can get a long way with free or very cheap tools and some time from your cloud engineers. This should be considered before commissioning any penetration testing.

Ignores Critical Supporting Systems

Penetration testing workloads in isolation means that many of the factors that directly affect their security are ignored or not properly contextualised. In my experience, it’s common to find that an organisation’s individual workloads are relatively well secured, but critical supporting systems are not as robust. In particular, the following should always be assessed, but frequently are not:

  • Continuous Integration and Delivery (CI/CD) systems, such as Jenkins, Concourse, GitHub Actions and so on
  • Source Code Management (SCM) systems, like GitHub, Gitlab etc.
  • Single Sign-On and identity management systems such as Active Directory, Okta, Ping, OneLogin and similar
  • Monitoring and Observability tools with significant privileges in the environment. This is common to see with older tools such as Nagios and Zabbix.

The Better Approach

What should we do instead, if periodic workload-specific penetration testing isn’t actually as useful as people think? There are three key pieces here - automation, human-led reviews of aspects that automation struggles with, and objective-driven assessments.

Automation

If you’ve got the budget, you should invest in a proper CNAPP/CSPM platform, but any competent engineer can make use of AWS Security Hub or open source tools to find common misconfigurations. With periodic or continuous monitoring using open-source and AWS-provided tools, an organisation can react much faster to any issues introduced. There are also open source tools to help identify and understand the security weaknesses in your IAM configuration, and secrets scanners to help identify potentially exposed credentials. Some of these will fit into your development processes and CI/CD pipelines, so you can catch issues before they’re even deployed.
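As a flavour of the kind of check this tooling runs, here’s a minimal sketch that flags security group rules exposing sensitive ports to the world. It operates on the dict shape returned by boto3’s `describe_security_groups`, but the input here is hand-built so it runs anywhere; a real scanner implements hundreds of checks like this:

```python
# Ports that should rarely, if ever, be open to 0.0.0.0/0
SENSITIVE_PORTS = {22, 3389, 3306, 5432}

def world_open_findings(security_groups: list[dict]) -> list[str]:
    """Flag ingress rules exposing sensitive ports to any IPv4 address.

    `security_groups` follows the shape of
    ec2.describe_security_groups()["SecurityGroups"] from boto3.
    """
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            # Rules with no port range (e.g. all traffic) cover everything
            from_port = rule.get("FromPort", 0)
            to_port = rule.get("ToPort", 65535)
            world = any(r.get("CidrIp") == "0.0.0.0/0"
                        for r in rule.get("IpRanges", []))
            if not world:
                continue
            for port in sorted(SENSITIVE_PORTS):
                if from_port <= port <= to_port:
                    findings.append(f"{sg['GroupId']}: port {port} open to the world")
    return findings
```

Wire output like this into a ticket queue or chat channel and you’ve got the beginnings of the fast feedback loop described above.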

A basic level of security automation, combined with a process to get identified issues in front of the right devs quickly, should be in place before commissioning a penetration test. Some suggestions for open source tools to look into: Prowler and ScoutSuite for configuration scanning, Cloudsplaining and PMapper for IAM analysis, Gitleaks and TruffleHog for secrets scanning, and Checkov or tfsec for catching infrastructure-as-code issues before deployment.

Human-Led Reviews

There are areas where automation does a poor job, or where some human intelligence is needed to help contextualise and prioritise issues. Bringing in a security specialist is a great way to augment all the automation listed above, and is particularly helpful when trying to get a grip on an existing estate. Particular areas of focus for this kind of work should include:

  • IAM reviews focused on an entire AWS Organization. This might include:
    • Looking for over-permissioned roles that are meaningfully dangerous (no sense in wasting developers’ time fixing things that aren’t really a problem, after all)
    • Mapping out cross-account trust relationships to validate segregation between separate environments (e.g. Dev vs Prod)
    • Understanding how external systems are accessing AWS, mapping and validating use cases for any remaining IAM users, etc.
    • Reviewing SCPs and looking for areas to lock them down further
  • Single Sign-On/identity management/privileged access management (PAM) assessments, to look for privilege escalation vectors or ways to bypass PAM controls
  • CI/CD pipeline and SCM assessments, to ensure they’re adequately hardened against both internal and external attackers
  • Testing bastion hosts and other administrative access systems used to access and manage workloads

The other use case for this type of work is a broad-scope assessment of an entire AWS Organization, or large parts of one, hunting for the most critical issues and the attack paths most likely to be exploited. It’s common for organisations to struggle to prioritise between the thousands of supposedly high-risk findings their automation has reported, and this can be a great way to pull out the most critical things to fix. The methodology is likely to look similar to the basic penetration testing approach covered above, only time-limited, focused solely on the AWS infrastructure, and ignoring the findings penetration testers commonly report that won’t actually affect a breach.
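As an illustration of the cross-account trust mapping described above, here’s a minimal sketch that pulls trusted account IDs out of a role’s trust policy document. Real policies also involve conditions, service principals and federation, all of which this deliberately ignores:

```python
def trusting_accounts(trust_policy: dict) -> set[str]:
    """Extract AWS account IDs trusted to assume a role from its trust policy.

    `trust_policy` is the AssumeRolePolicyDocument of an IAM role,
    as returned by iam.get_role() in boto3 (already JSON-decoded).
    """
    accounts = set()
    for stmt in trust_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        aws_principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(aws_principals, str):
            aws_principals = [aws_principals]
        for arn in aws_principals:
            # ARNs look like arn:aws:iam::123456789012:root
            # or arn:aws:iam::123456789012:role/SomeRole
            parts = arn.split(":")
            if len(parts) > 4 and parts[4].isdigit():
                accounts.add(parts[4])
    return accounts
```

Run something like this across every role in an Organization and you can quickly see which accounts trust which, and where Dev-to-Prod segregation breaks down.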

Objective-Driven Assessments

Once the automation’s taken care of the basics, and we’ve had a good stab at hardening key aspects of the environment, it makes sense to focus on ways a real-world attacker is likely to cause significant damage at the organisation level. These engagements typically have some business-level objectives defined for the testers to target, and some initial starting points given to the testers.

Business-level objectives mean actions that have an impact on the business, rather than anything purely technical: compromising confidential data, for instance, as opposed to, say, getting access to the Organization management AWS account. Other common objectives include:

  • Exfiltrating sensitive data from key data stores
  • Demonstrating an ability to execute key business processes, such as moving money between bank accounts
  • Pushing malicious code out to production systems

Common starting perspectives include:

  • “Leaked” AWS access keys
  • A compromised developer’s system
  • A compromised business user
  • A compromised application server, or the credentials associated with one

A Note on “Red Team” engagements

It’s common for organisations to procure objective-driven engagements as “red teams”: adversarial, “stealthy” attack simulations designed to test both preventative controls and the organisation’s ability to detect and respond to a real attack. In my opinion, a red team should be the very last thing you purchase. Red teams unearth far fewer problems than more collaborative engagements, and are only worth commissioning once you’ve done a lot of work to harden your organisation and built up your detection and response capabilities. They’re an exercise in validating that all your security capabilities work as expected, not in finding all the problems you have. If a red team is your goal, you should first:

  • Perform a collaborative objective-driven assessment, and use the learnings here to inform further hardening and detection development.
  • Commission a “purple team” exercise, or attack detection capability assessment, to validate your ability to detect malicious activity across your AWS estate.
  • Ensure your incident response plans are up to scratch, and commission a tabletop exercise or two to ensure the processes work in the way that you want them to.

The Outcomes

As someone on the receiving end of a penetration test, all that should really matter is the quality of the outputs, such as a report detailing all the findings. This is in stark contrast to what your average consultant wishes mattered - the awesomeness of their testing skills. The quality of the output varies enormously from vendor to vendor, as do the formats in which it is delivered.

At a bare minimum, output should provide:

  • A clear breakdown of any identified vulnerabilities, including:
    • Exactly where the vulnerability was found
    • A vulnerability risk rating, and explanation of the business impact, as well as any technical impacts
    • Precise steps to reproduce the vulnerability
    • Advice on how to fix the vulnerability
  • A high-level summary of the engagement and results suitable for presentation to less technical staff, such as product managers and similar

Great output will also cover:

  • Demonstrated attack paths or vulnerability chains, showing how a real-world attacker might combine the findings to achieve greater impact
  • Root cause analysis on the vulnerabilities identified - why were these present, what systemic failures led to their introduction?
  • Advice on improving your security processes to prevent similar vulnerabilities being introduced in future
  • Vulnerability remediation advice tailored to your specific situation - the parameters to change in your Terraform configuration if that’s how your workload is deployed, for example

Traditionally, penetration testing vendors ship a PDF or similar documenting the engagement and all the findings. While suitable for many use cases, it’s worth thinking about how you’ll use the test results. If you’re going to load them into an issue tracker or similar, it’s worth asking what your vendor can do to make that process easier for you.
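If the results do arrive in a machine-readable form, getting them into a tracker can be a few lines of glue. This sketch assumes a hypothetical findings schema (the `title`/`severity`/`description`/`remediation` keys are invented for illustration) and shapes it into generic issue payloads:

```python
def findings_to_issues(findings: list[dict], project: str) -> list[dict]:
    """Map pentest findings onto generic issue-tracker payloads.

    The finding fields used here are a made-up schema; adapt them to
    whatever format your vendor actually delivers.
    """
    return [
        {
            "project": project,
            "title": f"[{f['severity'].upper()}] {f['title']}",
            "body": f"{f['description']}\n\nRemediation: {f['remediation']}",
            "labels": ["pentest", f"severity:{f['severity']}"],
        }
        for f in findings
    ]
```

From there it’s one API call per payload to GitHub, Jira or whatever tracker you use.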

Picking a Vendor

There are thousands of penetration testing vendors in the market globally. The quality of what you buy will range from utterly atrocious to exceptional, and it can be very hard to work out whether a vendor is any good before you engage with them. Vendors all have their own specialisms too, so a vendor that did a great job with an on-premises focused red team may be no good for AWS assessments. My recommendations for finding good vendors:

  • Talk to your personal contacts who’ve bought AWS penetration testing services and get their recommendations on which vendors they’ve had good experiences with.
  • Ask around in cloud security industry spaces (like the Cloud Security Forum Slack workspace, for instance). Plenty of people are usually happy to share their experiences.
  • Look for who’s publishing novel research on AWS, releasing relevant open source tooling, and speaking at recognised industry conferences (both those local to you and the big names like Black Hat, DEF CON, etc.)
  • Look for an organisation who’s transparent about who’ll be delivering the assessment, their qualifications and so on, to ensure your work isn’t being farmed out to a cheap sweatshop of poorly trained junior consultants.
  • Make sure you have engineers in all the discussions with vendors, to help weed out those who don’t understand AWS well enough to do a good job.

Running an AWS Penetration Test

When it comes to the logistics of managing a penetration test, there are a number of key things to get right to make your life (and the testers’ lives) easier.

Granting Access

Typically, testers will require a read-only view into any AWS accounts in scope, to understand how the resources have been configured. The SecurityAudit and ViewOnlyAccess AWS managed policies, or something equivalent, are the usual starting point.

Frequently, testers will ask for access equivalent to a regular developer, in order to better understand the impact of an insider threat (or a developer compromised via a phishing campaign or similar).

Any access to an AWS account should be granted via role assumption, ideally managed by your standard single sign-on systems, and the testers will need to be able to use that access to generate temporary access keys. Avoid IAM users like the plague - the associated access keys last forever, and in my experience organisations forget to clean them up afterwards.
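As a concrete illustration, access along these lines is often wired up as a named CLI profile that assumes the testing role on demand - the profile names, account ID and role name below are placeholders:

```ini
# ~/.aws/config - names and ARNs are illustrative placeholders
[profile pentest-readonly]
role_arn = arn:aws:iam::111111111111:role/PentestReadOnly
source_profile = tester-base
# STS vends short-lived credentials on each use, unlike the
# forever-valid access keys attached to IAM users
duration_seconds = 3600
```

With this in place, the AWS CLI and SDKs call sts:AssumeRole automatically whenever the profile is used, so the testers only ever hold temporary credentials.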

Testing Location

It’s common to hear organisations pushing external testers to do all the testing from systems the organisation owns or controls. They see benefits to keeping any data and results entirely within their control, and enforcing any restrictions they see as necessary on testers.

Avoid this where at all possible. You’ll want the testers to be able to complete the assessment from their own systems, because they’re familiar and effective with whatever they’ve got set up, and it will give them access to their full suite of tools. Good testers write a lot of their own automation to improve their ability to do their job, and so cutting them off from that (and any open source tools they might have) does little more than reduce the quality of the results they’ll produce for you.

If you have good reasons for why that’s not possible, then the next best thing is to set up systems that the testers have administrative access to, and where they can install their tools of choice.

Providing IaC or Application Source Code

In my opinion, source code audits are usually an expensive way to get lower-quality results than a penetration test. That said, there are good reasons for a tester to look at the source code for whatever they’re testing, even if it’s not a full code review.

Testers will frequently ask for source code, both for your infrastructure (if using Terraform, CloudFormation or similar) and the application it’s hosting. You should make this available to them, within reason. Having it available brings several benefits, including:

  • Faster root cause analysis of any identified vulnerabilities
  • Better and more precise advice on fixing any identified vulnerabilities
  • Discovering variants of vulnerabilities, or similarly vulnerable code paths

Conclusions

Penetration testing is an important part of any effective security strategy, but it’s common to see a compliance-led approach drive up the cost and reduce the effectiveness of a penetration testing program. Most organisations would make better use of their funds if they were to:

  • Invest in security automation
  • Use human-driven penetration testing and security assessments to fill the gaps left by automation, and add context
  • Conduct periodic collaborative objective-driven assessments

There are a lot of vendors out there, of very varying qualities. Look for word of mouth recommendations, and failing that look for firms that are active in the cloud security community, publish their own research and open source tools etc. Once you’ve found a good vendor, work with them, treat them as a partner, and give them the room and tools to do their job properly for you, if you want to get the most out of your investment.