There are clear benefits to serverless architecture, with many enterprises already making or planning to make this change. Like any paradigm shift, however, the move to serverless carries both rewards and risks. Here’s what you need to know.
AWS pioneered the serverless space with its release of AWS Lambda back in 2014, with the other major cloud providers following: Azure Functions, Google Cloud Functions and IBM Cloud Functions (based on Apache OpenWhisk). Later entrants in this space took the form of container-based, self-hosted serverless frameworks such as Kubeless.
The Good, the Bad and the Use Cases
Some of the most common serverless use cases today include building batch processing like image compression, cloud job automation, serving IoT devices from edge to the cloud and building single page applications (SPA).
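Batch processing is the archetypal fit. The sketch below is illustrative only (the `handler` name and event shape are assumptions, not any provider's contract): a Lambda-style entry point that compresses a payload delivered in the event. A real image pipeline would pull the object from S3 and re-encode it; `zlib` stands in here to keep the example self-contained.

```python
import base64
import zlib


def handler(event, context=None):
    """Lambda-style entry point: compress a payload carried in the event.

    `event["body"]` is assumed to hold base64-encoded bytes; a production
    image-compression job would fetch the object from S3 and re-encode it
    with an image library instead of zlib.
    """
    raw = base64.b64decode(event["body"])
    compressed = zlib.compress(raw, level=9)
    return {
        "statusCode": 200,
        "original_size": len(raw),
        "compressed_size": len(compressed),
        "body": base64.b64encode(compressed).decode("ascii"),
    }
```

The function receives everything it needs in the event and returns everything it produces, which is exactly the shape batch jobs take when they move to functions-as-a-service.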
On the other hand, delegating infrastructure control to the provider means that serverless introduces a new set of operational blind spots and observability challenges. It also complicates the placement and management of deep security controls compared to container, virtual machine (VM) or even bare-metal workloads.
Serverless is Not the Answer for Everything
Whether building new applications or migrating existing ones, serverless runtimes impose significant restrictions. Cloud-hosted functions-as-a-service limit how long a function may run, require a “warm-up” time for function invocation and don’t guarantee that local state is preserved across invocations. As a result, application logic that is strictly time-bounded, performs a short functional transformation of input into output and can tolerate a slow start fits the serverless paradigm perfectly.
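What such a well-fitting workload looks like can be sketched in a few lines (the function and field names are invented for illustration): a short, stateless transformation where all input arrives in the event and nothing is cached locally, so a cold start only costs latency, never correctness.

```python
def normalize(event, context=None):
    """A good serverless fit: a short, stateless transformation.

    Every input arrives in the event and every output is returned;
    no state is kept between invocations, so the runtime is free to
    spin instances up and down at will.
    """
    records = event.get("records", [])
    return [
        {"id": r["id"], "email": r["email"].strip().lower()}
        for r in records
    ]
```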
Migrating state-oriented applications or stateful microservices into serverless functions is a different story. These workloads typically rely on a database (self-managed or consumed as a service), an application cache such as Redis or an object store such as S3 to hold state across requests.
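The pattern can be sketched as follows. A dict-backed class stands in for the external store so the example runs anywhere; in production the store would be a Redis client or an S3 bucket, and the handler name is an assumption for illustration.

```python
class KeyValueStore:
    """Stand-in for an external store such as Redis or S3.

    A plain dict keeps the sketch self-contained; in production this
    would be e.g. a redis.Redis client or a boto3 S3 wrapper.
    """

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key, 0)

    def set(self, key, value):
        self._data[key] = value


store = KeyValueStore()  # in production: an external, shared service


def count_requests(event, context=None):
    """Stateful logic made serverless-safe: the counter lives in the
    external store, not in the function's memory, so any instance
    (fresh or warm) observes the same state."""
    user = event["user"]
    count = store.get(user) + 1
    store.set(user, count)
    return {"user": user, "count": count}
```

The key design point is that the function instance itself stays disposable; only the external service is durable.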
While cloud-hosted serverless removes the burden of patching and maintaining infrastructure, the ephemeral nature of functions makes it extremely challenging to establish perimeters and controls around sensitive data in flight and at rest.
Build, Ship and Run Serverless
Serverless, as young as it is, is extremely promising. Adoption choices should balance platform maturity for developers, testing and operations against what security and regulations require.
Consider what Docker did for container packaging with its container image format: this is exactly what functions and serverless need, a common language for packaging. A common tool for building, shipping, running, operating and securing function-based workloads would serve as common ground for the variety of platforms already out there.
From a security perspective, unification around secret consumption, service-to-service authentication and authorization between first and third parties, function workflows, access whitelisting, observability, network security monitoring, and access policies for both the network and data would all lower the friction of adoption.
The Cloud Native Computing Foundation recently released version 0.1 of CloudEvents, a common message format for describing event data and triggering functions. That the specification is only now at 0.1 underscores how far serverless still is from maturity and cohesive standards.
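To make the format concrete, here is a sketch of a CloudEvents 0.1 envelope built as a Python dict and serialized to JSON. The attribute names follow the 0.1 JSON format as I understand it (camelCase; later spec versions renamed them, e.g. `eventType` became `type`), and the event type, source and payload values are invented for illustration.

```python
import json

# A CloudEvents 0.1-style envelope; attribute names per the 0.1 JSON
# format (camelCase). Type, source and data values are illustrative.
event = {
    "cloudEventsVersion": "0.1",
    "eventType": "com.example.object.created",
    "source": "/mybucket",
    "eventID": "A234-1234-1234",
    "eventTime": "2018-04-05T17:31:00Z",
    "contentType": "application/json",
    "data": {"key": "uploads/cat.png", "size": 1048576},
}

envelope = json.dumps(event)
```

A router or trigger reads only the envelope attributes to decide which function to invoke; the `data` payload is opaque to it, which is what makes the format portable across providers.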
From a security perspective, serverless workload protection systems come in two types. Type A systems can inspect the serverless workload at runtime. Type B systems are limited to whatever the serverless platform provider’s APIs expose for out-of-band, data-oriented security analysis.
Cloud-hosted functions make Type A protection challenging to achieve, whereas self-hosted serverless frameworks (for example, Kubeless) let enterprises adopt serverless and open the door to a new developer experience. They also accelerate innovation velocity and, paired with security tools designed for microservices and modern virtualization technologies, can maintain full visibility and runtime control over network and data.
Securing Functions as a Service: A New Arsenal of Security Tools
Functions, just like any other web-facing microservice implemented with containers, VMs or bare-metal servers, are not immune to the well-known OWASP Top 10 web application vulnerabilities.
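Injection is the clearest carry-over: an event payload is untrusted input, exactly like an HTTP request body. The sketch below (function name and event shape are invented; sqlite3 stands in for a real database) shows the standard defense, parameterized queries, applied inside a function handler.

```python
import sqlite3


def lookup_user(event, context=None):
    """Injection applies to functions too: the event payload is
    attacker-controlled input. Placeholder binding keeps a hostile
    event["name"] from rewriting the SQL."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
    # Safe: the value is bound as a parameter, never string-formatted
    # into the query text.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (event["name"],)
    ).fetchall()
    conn.close()
    return {"roles": [r[0] for r in rows]}
```

A classic `' OR '1'='1` payload in `event["name"]` matches no rows here, whereas string interpolation would have returned every role in the table.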
Functions, however, rely heavily on non-web, event-based communications and event-based networking channels. On public cloud providers, these channels prevent inserting security controls that can detect exploitation attempts or data exfiltration and enforce well-defined workload policies. This calls for a new arsenal of security tools that understand microservices, scale horizontally and coexist with the existing security operations and security stack.
The ephemeral nature of function-based workloads makes them more difficult to exploit, and persisting a threat is harder still. Yet a vulnerability in third-party code or a compromised open source component can still send sensitive data into the wrong hands, landing it in a misconfigured object store.
Because serverless pushes persistent data into external services, a scenario in which an attacker circumvents a running function to leak its secrets, leaving that sensitive data just one hop away, is not unrealistic.
Controlling it all together means governing what private information can be processed, what sensitive data is in flight and through which event grid channels it is received and sent, how functions are invoked by API gateways, and which API calls each function makes to internal and external services. This is where risk, regulation and modernization collide.
Despite significant adoption and constantly growing interest, serverless is still in its infancy. When it comes to standards, operations, testing and orchestration tooling, workflows and security, much water has yet to pass under the cloud providers’ innovation bridge, with AWS Lambda leading the adoption curve, Azure Functions making giant innovative leaps forward and community-driven frameworks building on top of Kubernetes.
Innovation aside, serverless creates huge operational and security blind spots. Understanding those risks, and having proper introspection and inspection capabilities, should dictate both the decision to adopt and the choice of platform.