Over the last few years, it’s become apparent that traditional on-premises security policies are not a good fit for newer cloud-native environments. Even though the writing has been on the wall for a long time, the brewing security crisis still hasn’t been resolved. Far too many organizations continue to use security measures that are ineffective for cloud-native environments, along with inconsistent policy enforcement that leads to huge oversights and gaping organizational loopholes. The sad truth is that today, it doesn’t take a master hacker to infiltrate most cloud-native deployments.
Many of the most infamous security breaches would have been extremely difficult to prevent even with an ideal security environment, but the weakness of the system is endemic, reaching far beyond a handful of vulnerable organizations. Even if some malicious actors could still get through, there’s no reason for organizations to leave their attack surface so blatantly unprotected.
Here are increasingly common cloud security mistakes DevOps teams still make, and our suggestions for how you can stop them in your organization.
Ignore Zombie Workloads
Many organizations ignore zombie workloads that run on their architecture. When your team is stretched and you’re facing a number of serious security issues, it’s easy to dismiss zombie programs as nothing more than a harmless irritation.
In actuality, zombie resources scattered around your deployment obscure your ability to spot serious threats such as cryptojackers. Zombie workloads are often both an indication of, and an invitation to, more significant intrusions into your infrastructure. A 2018 report by Skybox Security found that cryptojacking was the leading attack vector that year. DevOps teams need to recognize the threat of hosting a program that mines your resources for cryptocurrency, and take immediate steps to shut it down.
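A periodic sweep for idle resources is a good first step. The sketch below shows one way that might look, assuming you can export per-workload utilization metrics into plain records; the field names and thresholds are illustrative, not from any particular monitoring tool.

```python
# Flag likely zombie workloads: resources that are still provisioned
# but show near-zero activity over the observation window.
# Field names and thresholds are illustrative assumptions.

def find_zombies(workloads, cpu_threshold=1.0, net_threshold_kb=10):
    """Return names of workloads with near-zero CPU and network use."""
    zombies = []
    for w in workloads:
        if (w["avg_cpu_percent"] < cpu_threshold
                and w["net_io_kb"] < net_threshold_kb):
            zombies.append(w["name"])
    return zombies

# Sample inventory (made-up data)
inventory = [
    {"name": "api-prod", "avg_cpu_percent": 42.0, "net_io_kb": 9000},
    {"name": "old-batch-job", "avg_cpu_percent": 0.2, "net_io_kb": 1},
    {"name": "forgotten-test-vm", "avg_cpu_percent": 0.0, "net_io_kb": 0},
]
print(find_zombies(inventory))  # ['old-batch-job', 'forgotten-test-vm']
```

Anything flagged should be either decommissioned or explained; a workload nobody can account for deserves a closer look, not a shrug.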
Turn a Blind Eye to Leaky S3 Buckets
Amazon S3 is one of the oldest cloud-native storage services. As a result, many S3 buckets are still governed by very old rules and configurations that leave many ways for someone to accidentally make a bucket public, open up access to unauthorized users and expose sensitive data.
A study by HTTPCS reports that 58% of AWS S3 buckets are accessible to the public, and 20% of them aren’t write-protected. Not only can malicious actors get to your sensitive customer data through S3 buckets, but often they can access your cloud credentials as well. Some of the most disastrous data breaches began with unrestricted access to S3 buckets, and it’s one of the most common causes of data breaches overall. It’s vital to regularly check your AWS console for public buckets.
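Beyond eyeballing the console, you can script the check. The sketch below inspects a bucket ACL’s grant list for the two S3 grantee groups that mean "everyone"; the grant structure mirrors what boto3’s `get_bucket_acl` returns, but the sample data is made up so the example runs without AWS credentials.

```python
# Check an S3 bucket ACL for public grants. The two group URIs below are
# the real S3 "AllUsers" and "AuthenticatedUsers" grantees; the sample
# grant list is illustrative.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_permissions(grants):
    """Return the set of permissions granted to the public."""
    found = set()
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            found.add(grant["Permission"])
    return found

sample_grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(public_permissions(sample_grants))  # {'READ'}
```

In a real audit you would feed this function the `Grants` list from `get_bucket_acl` for every bucket, and also check bucket policies and the account-level Block Public Access settings, which can override ACLs.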
Allow Runtime Updates to Bypass the CI/CD Pipeline
Every DevSecOps team agrees that a deployment is more secure when runtime updates pass through the CI/CD pipeline, but that doesn’t mean the policy is always enforced. Developers repeatedly sidestep security policies, for example by pulling in open-source libraries directly, to avoid the security checks in the CI/CD pipeline and speed up deployment.
While this saves developers some time on releasing updates, it places a heavier burden on security teams, who have to run extra scans for these rogue workloads. What’s more, it builds complacency among DevOps teams, who assume there is no way to prevent unauthorized workloads and end up simply accepting them. Eventually, your security posture erodes to the extent that malicious actors can run harmful workloads without attracting your attention until it’s too late.
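One way to enforce the policy rather than merely stating it is a provenance gate: the pipeline records the digest of every image it builds, and nothing else is admitted at runtime. Here is a minimal sketch of that idea; the digests and the allow-list mechanism are illustrative assumptions, not a specific admission-controller API.

```python
# Sketch of a provenance gate: only workloads whose image digest was
# recorded by the CI/CD pipeline are allowed to run.
# Digests here are made-up placeholders.

ci_approved = {"sha256:aaa111", "sha256:bbb222"}  # written by the pipeline

def admit(image_digest, approved):
    """Admission check: reject any digest the pipeline never produced."""
    return image_digest in approved

requests = ["sha256:aaa111", "sha256:ccc333"]
decisions = {d: admit(d, ci_approved) for d in requests}
print(decisions)  # {'sha256:aaa111': True, 'sha256:ccc333': False}
```

In practice this logic lives in an admission controller or runtime policy engine, and the allow-list is backed by image signing rather than a plain set, but the principle is the same: if it didn’t come through the pipeline, it doesn’t run.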
Permit Unrestricted Network Access
Instead of spending hours on segmentation and separate access permissions, all too many DevOps teams fall back on a blanket set of network configurations that fall far short of the necessary access restrictions. They often place all their workloads into a single VPC, opening up access to third parties in the process.
Without restrictions on public network access, it takes far longer for security teams to identify and isolate negligent and malicious activities. Before long, DevSecOps teams find themselves overlooking serious gaps like unrestricted public root access, opening up enormous holes in their security profile.
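Catching the worst of these gaps can be automated. The sketch below scans security-group-style ingress rules for anything reachable from the entire internet; the rule shape is a simplified stand-in for what an AWS describe call returns, so the example runs on its own.

```python
# Scan ingress rules for unrestricted public access (source 0.0.0.0/0).
# The rule format is a simplified, illustrative stand-in for real
# security-group data.

def open_to_world(rules):
    """Return (port, cidr) pairs reachable from the entire internet."""
    return [(r["port"], r["cidr"]) for r in rules if r["cidr"] == "0.0.0.0/0"]

ingress = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: probably intended
    {"port": 22, "cidr": "0.0.0.0/0"},     # public SSH: almost never intended
    {"port": 5432, "cidr": "10.0.1.0/24"}, # database restricted to one subnet
]
print(open_to_world(ingress))  # [(443, '0.0.0.0/0'), (22, '0.0.0.0/0')]
```

A scan like this only surfaces candidates; the point is that a human then has to justify each world-open rule, instead of the single-VPC, everything-open default going unquestioned.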
Faulty Rule Configurations for Micro-Segmentation
When DevOps teams use micro-segmentation in containers, they face unique challenges. The more granular you get with your segmentation, the greater the risk you’ll miss faulty policy rules.
Even your most familiar rule sets can create massive vulnerabilities. For example, if you allow your developers to use a specific IP to connect to the production runtime environment through SSH, you could unwittingly end up permitting unrestricted public access to sensitive areas. Sometimes, these faulty rule configurations go overlooked for as long as several months. To avoid supporting faulty rules, it’s important to audit them regularly with a tool such as Amazon Inspector’s Agentless Network Assessment.
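The SSH example above can also be caught with a simple rule audit: check that every SSH allow-rule’s source range is as narrow as the policy claims. This sketch uses only the standard library; the /24 policy minimum and the rule format are illustrative assumptions.

```python
# Audit segmentation rules: flag any SSH rule whose source range is
# broader than a policy minimum (here, /24). Uses only the stdlib;
# the rule format and threshold are illustrative.
import ipaddress

def too_broad(rules, min_prefix=24):
    """Return source CIDRs of SSH rules wider than the policy allows."""
    flagged = []
    for r in rules:
        net = ipaddress.ip_network(r["source"])
        if r["port"] == 22 and net.prefixlen < min_prefix:
            flagged.append(r["source"])
    return flagged

rules = [
    {"port": 22, "source": "203.0.113.7/32"},  # single developer IP: fine
    {"port": 22, "source": "0.0.0.0/0"},       # the whole internet: faulty
]
print(too_broad(rules))  # ['0.0.0.0/0']
```

Run routinely, a check like this catches the "specific IP" rule that quietly widened to the whole internet months before anyone notices it by hand.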
Avoiding AWS cloud security mistakes requires a shift in mindset. The harsh truth is a significant number of cloud-native deployments commit not just one, but several of the mistakes listed above. Cloud environments are deliberately decoupled from the network infrastructure in order to enable agile business practices, but DevSecOps teams continue to use network security techniques that aren’t capable of meeting the unique challenges of the cloud.
The only way to fix these mistakes is to turn to new technologies designed for cloud security, such as using cloud workload identities to bridge the chasm between business logic and network infrastructure. Until ACLs and security groups are untethered from the network infrastructure and join cloud-agile business logic, they’ll fail to provide adequate cloud-native security.