Datadog Sees Spike in AWS Lambda Serverless Adoption

Datadog published a report that shows nearly half of organizations using the company’s IT monitoring platform have embraced the AWS Lambda serverless computing framework.

Stephen Pinkerton, a product manager for Datadog, said that number shows serverless computing frameworks are being employed by mainstream IT organizations far more widely than might be expected, given how relatively new the technology is.

The report finds the median Lambda function invoked by Datadog customers runs for about 800 milliseconds. Nearly one-fifth of functions execute within 100 milliseconds, while about one-third execute within 400 milliseconds. One-quarter of Lambda functions have an average execution time of more than three seconds, while 12% require 10 seconds or more. The duration of Lambda functions is notable because serverless latency impacts not just application performance but also costs. Lambda pricing is based on “GB-seconds” of compute time, which is the memory allocated to your function multiplied by the duration of its invocations.
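As a rough illustration of that formula, the arithmetic below plugs in the report's roughly 800-millisecond median duration; the invocation volume and per-GB-second rate are assumptions chosen for the example, not figures from the report or a current AWS price list.

```python
# Illustrative GB-second math for a 128MB function running at the report's
# median duration; the invocation count and rate below are assumptions.
memory_gb = 128 / 1024              # 128MB expressed in GB
duration_s = 0.8                    # ~800 ms median duration from the report
invocations = 1_000_000             # assumed monthly invocation volume
rate_per_gb_second = 0.0000166667   # assumed on-demand rate; check current AWS pricing

gb_seconds = memory_gb * duration_s * invocations
compute_cost = gb_seconds * rate_per_gb_second

print(f"{gb_seconds:,.0f} GB-seconds, roughly ${compute_cost:.2f} before per-request charges")
```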

Not surprisingly, the report notes 47% of functions are configured to run with the minimum memory setting of 128MB. By contrast, only 14% of functions have a memory allocation greater than 512MB, even though AWS allows up to 3,008MB per function.
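For teams that do want to raise that allocation, the change is a one-line configuration update. The sketch below uses boto3 against a hypothetical function name; memory can be set anywhere from 128MB up to the per-function maximum.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the memory allocation for a (hypothetical) function. Lambda scales
# the CPU share allotted to the function along with its memory setting.
lambda_client.update_function_configuration(
    FunctionName="example-handler",  # hypothetical name, not from the report
    MemorySize=512,                  # in MB, between 128 and the per-function maximum
)
```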

As part of an effort to further limit costs, most organizations are not employing one function to call another and then wait for a response, which would incur billable invocation time while the caller sits idle. Rather, serverless functions are making asynchronous calls via a message queue. And because Lambda functions are stateless, they most often read from or write to a separate persistent data store.
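A minimal sketch of that hand-off pattern, assuming a hypothetical queue URL and payload, looks like this: the function enqueues the work and returns rather than waiting on a second function.

```python
import json
import boto3

sqs = boto3.client("sqs")
# Placeholder queue URL for illustration only.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

def handler(event, context):
    # Hand the work off asynchronously instead of invoking another function
    # synchronously and paying for the time spent waiting on its response.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"order_id": event.get("order_id")}),
    )
    return {"status": "queued"}
```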

Amazon DynamoDB, a key-value and document database, is the most widely used persistent data store accessed by those functions, followed by the SQL databases AWS provides as a service and then the Amazon S3 cloud storage service.

The Amazon Simple Queue Service (SQS) is the top choice for a message queue for Lambda requests, followed by Amazon Kinesis and Amazon Simple Notification Service (SNS).
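On the consuming side, and assuming SQS has been wired up as the function's event source, Lambda hands each invocation a batch of queue messages; a bare-bones handler might look like the following.

```python
import json

def handler(event, context):
    # With SQS configured as the event source, Lambda delivers a batch of
    # messages in event["Records"]; each record's body holds the payload.
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        print("processing", payload)  # stand-in for real processing logic
```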

The report also notes each Lambda function has a configurable timeout setting, ranging from 1 second to 15 minutes; two-thirds of configured timeouts are 60 seconds or less. By default, Lambda customers are also limited to 1,000 concurrent executions across all functions in a given region. Only 4.2% of all functions have a configured concurrency limit, yet 88.6% of companies running Lambda apply a concurrency limit to at least one function in their environment.
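Both settings can be adjusted per function. The sketch below, again using boto3 and a hypothetical function name, caps a single invocation at 60 seconds and reserves a slice of the regional concurrency pool.

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap how long a single invocation may run (in seconds, up to 900).
lambda_client.update_function_configuration(
    FunctionName="example-handler",  # hypothetical name
    Timeout=60,
)

# Reserve (and thereby cap) concurrency for this function so a traffic burst
# cannot exhaust the account's shared pool of concurrent executions.
lambda_client.put_function_concurrency(
    FunctionName="example-handler",
    ReservedConcurrentExecutions=50,
)
```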

In terms of programming languages employed, Python and JavaScript in the form of Node.js are the most widely used on Lambda, the report finds. Nearly half of Datadog customers running Lambda (47%) employ Python, while 39% run Node.js. Python 3 outweighs Python 2, which has reached end of life, by a factor of two to one.

Finally, the report notes a high correlation between organizations that have adopted containers and those employing serverless computing frameworks. Nearly 80% of organizations in AWS that are running containers have adopted Lambda. However, Pinkerton says that correlation at this point has more to do with the willingness of organizations to employ leading-edge technologies than with any effort to weave containers and serverless computing frameworks together.

Pinkerton also surmises the primary reason organizations are employing Lambda is to accelerate application performance. However, it’s also worth noting serverless computing frameworks tend to reduce the size of an application by moving infrequently invoked code out of the core application and into external functions.

Datadog plans to evaluate usage of serverless computing frameworks on other cloud platforms once they achieve enough critical mass. In the meantime, it’s clear serverless computing frameworks are rapidly becoming an extension of any DevOps pipeline.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
