With the advent of the cloud and platform as a service (PaaS) offerings, developers are innovating faster than ever. But with this rapid pace comes risk. Companies quickly implement new apps to meet business demands, but the growing complexity of the underlying infrastructure – whether in the cloud, on-premises, or a hybrid of the two – makes it difficult to ensure that vulnerabilities are not introduced and that vital corporate information is not exposed. In the quest to put policy and governance in place and mitigate security risks, IT often brings innovation to a screeching halt.
Is there a happy medium that lets developers keep accelerating innovation while giving IT confidence that security is adequate? In this article, we’ll look at the importance of governance and policy, and outline the five fundamental security requirements that will help enterprises bridge the chasm that currently separates development and IT.
Policy Debt – The Downside of Faster Innovation
Security flaws are not new; enterprises have faced serious security problems for decades. But as today’s computing landscape grows more diverse and complex with expanding use of the cloud, new vulnerabilities are introduced that exacerbate the weaknesses of existing security methods, such as simple usernames and passwords and an over-reliance on network perimeter security models.
The rapid introduction of new developer technologies far outpaces an enterprise’s ability to vet them. As a result, enterprise IT faces a tradeoff: rapidly adopt new technologies to satisfy business demands, thereby accepting more risk, or slow down new technology adoption in the hope of mitigating risk. Tension between enterprise IT and the rest of the business is the inevitable outcome. More importantly, because new apps have so many dependencies, as soon as one is put into production, an enterprise almost immediately incurs policy debt.
Policy should enable enterprises to track these dependencies, but in today’s IT environments each piece of infrastructure or cloud resource has its own unique policy model, operational mode and lifecycle. As a result, enterprises end up with a layer cake of dependencies that is difficult to track, making it hard to ensure no security holes are left unpatched.
Most of today’s security technologies – like gateways and firewalls – are point-based or palliative solutions. They treat symptoms rather than root causes. And they only work for traffic that is routed through them, which leaves out most of the server-to-server interactions that happen within the datacenter. Reliance on gateways is akin to locking the deadbolt on the front door of your house but leaving the back door open.
These simply are not enough to address the proliferation of attacks that have paralyzed companies and exposed customer and consumer data. Consider the three major Linux-related security vulnerabilities – Ghost, Shellshock and Heartbleed – that came to light in the past year. To patch their systems against these problems, companies had to find the vulnerable components and restart servers to activate the fix, resulting in cascading server failures, downtime and a host of other performance issues – simply because there was no centralized way to apply policy and determine whether a resource in use was vulnerable.
The Fundamental Five Security Requirements in the Brave New IT World
To fully protect their increasingly complex IT environments, there are five fundamental security requirements no enterprise should be without:
- Resource Naming and Identity – To secure access to a workload or process, it needs to be named so that it can be referenced as logic is applied. Today there is no notion of identity for running applications; they are simply workloads running on a CPU. If enterprises want to apply policy, they will need an overlay operating system that gives identity to each workload, so it can be organized into a manageable, policy-driven hierarchy. This type of naming scheme already exists for Internet domains, but there is nothing comparable for the data center that enables IT to find, maintain and operate the workloads residing there, regardless of where they physically run.
- Chain of Trust – In today’s data center topologies, no actual proof is required that the holder of a token is who it claims to be. To guarantee a high level of proof of identity and actual trust between components, enterprises require a central signing authority that issues tokens to all clients and internal components. This ensures that the management backplane is resilient to token and credential theft, and that the system offers a high degree of trust.
- Context-Aware Policy – Most systems rely solely on role membership to make authorization decisions, such as “who” can do “what” to “what.” But enterprises need to go further than that, using a dynamic scripting language for authorization decisions. Every authorization decision will require information over and above simple role membership (date and time, authentication strength, targeted resource and more). The result is a system that adapts more easily to enterprise authorization requirements.
- Control of the Network – The network is a vital part of any enterprise system, and controlling access to network nodes is key for enterprise IT governance. Today, network access control is a hodgepodge of network perimeters, firewalls, VLANs, VXLANs, proxies and security appliances. Workloads exist in network segments, and workload connectivity is a function of network segment configuration. Over time, these configurations become rigid, brittle and difficult to maintain. To prevent this, enterprises need to control the network within the cluster so that any running workload must be given explicit access to the network and to other workloads (via policy) in real time, regardless of the physical properties of the infrastructure.
- Ephemeral Service Credentials – Applications and services require connectivity, and sometimes credentials, to talk to one another. Once an application gets a credential for a service, the application becomes responsible for safeguarding it. However, credentials sometimes leak — they can appear in log files or configuration files, and can even be scraped from memory. What enterprises need is a new approach to service access that binds a credential to an individual network path. This ensures that even if a credential were to leak, it couldn’t be used from another location on the network. Moreover, with this method credentials live only as long as an individual workload in the system, rotating every time the application stops and restarts. This approach reduces the burden on applications for handling credentials, decreases the strain on NetOps and on network perimeter security models at scale, and removes weak links in the trust chain. The result is a more secure system that you can trust without changing your existing investments.
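To make the chain-of-trust requirement concrete, here is a minimal Python sketch of a central signing authority that issues signed tokens and rejects anything it did not sign. The class and field names are illustrative assumptions, not any particular vendor’s API, and a production system would use asymmetric keys and expiry rather than a single shared secret.

```python
import base64
import hashlib
import hmac
import json


class SigningAuthority:
    """A central authority that signs tokens for clients and components.

    Hypothetical sketch: a holder of a token is trusted only if the
    token's signature checks out against the authority's secret.
    """

    def __init__(self, secret: bytes):
        self._secret = secret

    def issue_token(self, claims: dict) -> str:
        # Encode the claims, then sign them so tampering is detectable.
        payload = base64.urlsafe_b64encode(
            json.dumps(claims, sort_keys=True).encode())
        sig = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return payload.decode() + "." + sig

    def verify_token(self, token: str):
        # The base64 payload contains no ".", so rpartition splits cleanly.
        payload_b64, _, sig = token.rpartition(".")
        expected = hmac.new(self._secret, payload_b64.encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None  # forged or tampered token
        return json.loads(base64.urlsafe_b64decode(payload_b64))
```

With this in place, the management backplane can trust a token's claims without a round trip to a database: any component holding the secret can verify, and a stolen-then-modified token verifies as nothing.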
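The context-aware policy requirement can be sketched as an authorization function that consults more than role membership — here, authentication strength and time of day as well. The specific field names (`roles`, `auth_strength`, `now`) and the “production change window” rule are hypothetical, chosen only to illustrate the idea.

```python
from datetime import time


def authorize(subject: dict, action: str, resource: str, context: dict) -> bool:
    """Decide 'who can do what to what', plus context beyond role membership."""
    # Role membership is the baseline check...
    if action == "deploy" and "deployer" not in subject.get("roles", []):
        return False
    # ...but production resources also demand strong authentication...
    if resource.startswith("prod/") and context.get("auth_strength") != "mfa":
        return False
    # ...and access only during the approved change window.
    if resource.startswith("prod/") and not (time(9, 0) <= context["now"] <= time(17, 0)):
        return False
    return True
```

The same subject with the same role is allowed or denied depending on how strongly it authenticated and when it asks — exactly the kind of decision a static role table cannot express.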
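The default-deny network model described above — where no workload can reach another without explicit, real-time policy — might be sketched as a connectivity table that stands in for segment-based configuration. The class and workload names are assumptions for illustration.

```python
class NetworkPolicy:
    """Default-deny connectivity between named workloads.

    A connection is permitted only if policy explicitly allows it,
    independent of which physical network segment either workload is on.
    """

    def __init__(self):
        self._allowed = set()  # set of (source, destination) pairs

    def allow(self, src: str, dst: str) -> None:
        self._allowed.add((src, dst))

    def revoke(self, src: str, dst: str) -> None:
        # Policy changes take effect in real time: revoked means unreachable.
        self._allowed.discard((src, dst))

    def may_connect(self, src: str, dst: str) -> bool:
        return (src, dst) in self._allowed
```

Because policy names workloads rather than network segments, moving a workload to different hardware changes nothing about who it may talk to.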
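Finally, the ephemeral-credential idea — a secret bound to one network path that dies with its workload — can be sketched as a small broker. The broker, its method names, and the string representation of a path are all assumptions made for this example.

```python
import secrets


class CredentialBroker:
    """Issues credentials bound to a (workload, network path) pair.

    A leaked credential is useless from any other path, and all of a
    workload's credentials are rotated away when the workload stops.
    """

    def __init__(self):
        self._live = {}  # (workload_id, path) -> credential

    def issue(self, workload_id: str, path: str) -> str:
        cred = secrets.token_hex(16)
        self._live[(workload_id, path)] = cred
        return cred

    def validate(self, cred: str, path: str) -> bool:
        # Valid only if presented over the exact path it was bound to.
        return any(c == cred and p == path
                   for (_, p), c in self._live.items())

    def revoke_workload(self, workload_id: str) -> None:
        # Called when the workload stops; its credentials die with it.
        self._live = {k: v for k, v in self._live.items()
                      if k[0] != workload_id}
```

When the application restarts it simply calls `issue` again, so rotation falls out of the workload lifecycle rather than being an extra operational chore.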
About the Author/Derek Collison
Derek Collison is founder and CEO of Apcera. An industry veteran and pioneer in large-scale distributed systems and enterprise computing, Derek has held executive positions at VMware, Google and TIBCO Software. As CTO and chief architect of cloud application platforms at VMware, he co-founded the cloud computing group and designed and architected Cloud Foundry, the industry’s first open PaaS. During his time at Google, he co-founded the AJAX APIs group.