True serverless helps alleviate some of the administration and management issues related to FaaS
Creating distributed applications demands a thorough understanding of cloud infrastructures and architectures. Engineers must address complex operational issues, including provisioning, configuration, and how their code will behave across a distributed architecture.
The current standard is to architect programs as microservices before deploying them to cloud hardware. Docker and similar technologies ensure that the microservices are strongly isolated in terms of access, dependencies, and resource usage, guarding against the overcommitment of shared resources.
However, containerization is an imperfect solution that masks rather than addresses the core problems of running cloud applications. As an application scales, scripting the architecture and defining how different containers communicate with one another becomes exponentially more complex. The usual fix, orchestration technologies such as Kubernetes, adds yet another layer to an already bloated stack ill-designed for the needs of distributed computing.
These problems were, in theory, to be alleviated by the concept of serverless—not the removal of servers, but rather the removal of their administration. However, since its emergence in the early part of the last decade, the term has become conflated with function as a service (FaaS).
FaaS tools such as AWS Lambda break code into event-driven computational units, which run only when invoked. This event-driven logic scales quickly, efficiently and dynamically, driving down costs: the provider takes care of all server updates and administration, and developers pay only for what they use.
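A minimal sketch of what such an event-driven unit looks like, modeled on the handler signature AWS Lambda uses for Python functions. The event fields (`name` here) are hypothetical; a real invocation would pass whatever payload the triggering event source produces.

```python
import json

def lambda_handler(event, context):
    # Runs only when invoked with an event; no server state is
    # held between calls, and the provider manages the host.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same function can be exercised locally by calling it with a sample event dictionary, which is one reason FaaS units are easy to test in isolation even though the provider controls how and when they run.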
However, functions run on their own custom framework and are limited in memory and execution time, rendering FaaS an inappropriate approach for scaling an entire application. Regularly scheduled database tasks, CPU-bound processes and long-running persistent connections to a server are examples of common patterns that map poorly onto FaaS.
Serverless Is More Than Just FaaS
A platform that looks after the infrastructure of an entire application (as opposed to specific functions) is the logical evolution of FaaS, and represents a truly serverless paradigm. Serverless, in this sense, takes the dynamic scaling properties of FaaS, but without fragmenting the application.
Rather than simply linking together different components of a larger system, a serverless platform ensures long-running processes and fast discovery, enabling services to communicate and cooperate with one another quickly and reliably. Arbitrarily large memory capacity and unbounded execution time are paired with predictable execution and robustness guarantees well beyond what the layered stack approach of FaaS can offer. Developers can altogether ignore the management and configuration of servers, treating computation in purely algorithmic or data terms.
While containerization and FaaS each solve pieces of the distributed computing puzzle, they also introduce new issues, because the user must engineer their own infrastructure stack to carry out even basic tasks. By contrast, serverless could reduce an application engineered with Chef, Docker, Kubernetes, the JVM and Akka to a single abstraction. It allows programs to be easily scaled up and down, harnessing almost limitless computing power, without compromising runtime performance.