At the AWS re:Invent 2023 conference, LaunchDarkly today previewed a capability that applies generative artificial intelligence (AI) to the creation of experiments for its feature management platform.
Built on foundation models, including large language models (LLMs), made available via the Amazon Bedrock service, this extension to the company’s Product Experimentation platform promises to make it simpler to use historical data to create a wider range of offerings that might be made available within an application.
LaunchDarkly is making a case for a SaaS platform that can be used to centrally manage features across multiple workflows and platforms.
Robert Neal, director of engineering for LaunchDarkly, said the company is now extending its testing capabilities using the managed Bedrock service provided by AWS, applying generative AI to make it easier to define a testing hypothesis. That should also make it possible to involve more stakeholders in an iterative testing process within a set of well-defined guardrails, he added.
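As a rough illustration of the idea, the sketch below asks a Bedrock-hosted model to draft an experiment hypothesis from a summary of historical metrics. This is not LaunchDarkly’s implementation: the model ID, prompt wording and metric values are illustrative assumptions, and any text model enabled in the AWS account could stand in.

```python
# A minimal sketch, assuming a Bedrock-enabled AWS account. The model ID,
# prompt and historical metrics below are illustrative, not LaunchDarkly's.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical summary of historical experiment data.
historical_summary = (
    "Checkout conversion is 3.1% on the current one-page flow; "
    "cart abandonment spikes on mobile at the payment step."
)

prompt = (
    "\n\nHuman: Given these historical metrics, propose one testable "
    f"hypothesis for an A/B experiment:\n{historical_summary}\n\nAssistant:"
)

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # any enabled Bedrock text model would do
    body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
)

# The Claude v2 response body carries the generated text in "completion".
print(json.loads(response["body"].read())["completion"])
```

A drafted hypothesis like this would still be reviewed and refined by the team before an experiment runs, which is where the guardrails Neal describes come in.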
As more organizations start to deliver different tiers of digital services at varying price points, there is a clear need to restrict access to those services. Application development teams have been employing forms of feature management since the 1970s, isolating the development of various components into a branch that can be worked on and tested without affecting the primary build of the application.
Feature management enables development teams to experiment with adding new capabilities in a way that doesn’t disrupt the application. When work on a feature is complete, that branch is usually merged into the primary build, or the feature can be deployed as a microservice that other modules within the application, or another external application, invoke via an application programming interface (API).
At the core of that capability are feature flags, also known as feature toggles or feature switches, which make it possible to dynamically turn services on or off based on who is accessing them. No longer limited to the application development process, feature flags are now employed in production environments to enable the continuous delivery of multiple types of digital services to various classes of users of an application. A feature flag management platform enables an organization to keep track of all the flags being employed so that the overall application environment stays consistent.
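A minimal, self-contained sketch of that mechanism, assuming a hypothetical flag with a plan-based targeting rule and a percentage rollout (platforms such as LaunchDarkly manage rules like these centrally rather than in application code):

```python
# A simplified feature flag with user targeting and a gradual rollout.
# Flag keys, plan names and percentages here are illustrative assumptions.
import hashlib
from dataclasses import dataclass, field


@dataclass
class FeatureFlag:
    key: str
    enabled: bool = False                              # global kill switch
    allowed_plans: set = field(default_factory=set)    # targeting rule
    rollout_percent: int = 0                           # gradual rollout, 0-100

    def is_on(self, user_id: str, plan: str) -> bool:
        if not self.enabled:
            return False
        if plan in self.allowed_plans:
            return True
        # Deterministic bucketing: hash the flag key plus the user ID so the
        # same user always lands in the same bucket across requests.
        digest = hashlib.sha256(f"{self.key}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < self.rollout_percent


new_checkout = FeatureFlag(
    "new-checkout", enabled=True, allowed_plans={"premium"}, rollout_percent=10
)

print(new_checkout.is_on("user-42", "free"))     # True for ~10% of free users
print(new_checkout.is_on("user-42", "premium"))  # always True for premium
```

Hashing the flag key together with the user ID keeps bucketing stable, so a given user doesn’t flip in and out of a rollout between requests.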
It’s still early days in terms of understanding the extent to which generative AI can be applied to DevOps workflows, but it’s clear that many of the manual processes that conspire to slow down the pace at which applications are built and deployed will be reduced or eliminated outright. Each DevOps team will need to determine for itself to what degree its workflows can be automated using AI, but the overall pace of application development and deployment is clearly going to accelerate. The challenge now will be finding ways to keep pace with a rate of application development that is about to increase exponentially.