At a CDEventscon event this week, the Continuous Delivery Foundation (CDF) announced it is hosting the CDEvents project, through which it hopes to create a vendor-neutral specification defining the format of event data across multiple services, platforms and systems.
Andrea Frittoli, open source developer advocate at IBM, co-creator of the CDEvents project and member of the CDF technical oversight committee, said the open source consortium is launching this initiative to foster interoperability across continuous delivery (CD) platforms. In the context of the CDF, that mission spans both continuous integration and continuous deployment platforms, which it treats as subsets of CD.
CDEvents itself is based on the existing CloudEvents specification created by the Serverless Working Group within the Cloud Native Computing Foundation (CNCF). As part of the effort, the CDF has already built a proof-of-concept implementation that enables Tekton, a CI/CD pipeline engine, and Keptn, an orchestration tool for managing application life cycles, to communicate over a common event protocol.
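To make the idea concrete, the sketch below builds a CloudEvents-style envelope for a hypothetical "pipeline run finished" CD event in Python. The envelope attributes (`specversion`, `id`, `source`, `type`, `time`) come from the CloudEvents specification; the event type string and the payload fields are illustrative assumptions, not taken from the actual CDEvents vocabulary.

```python
import json
import uuid
from datetime import datetime, timezone

def make_pipeline_event(pipeline_name: str, outcome: str) -> dict:
    """Build a CloudEvents-style envelope for a hypothetical
    'pipeline run finished' CD event. The 'type' string and the
    'data' fields are illustrative, not from the CDEvents spec."""
    return {
        # Required CloudEvents context attributes
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": "/tekton/pipelines",  # producer URI (illustrative)
        "type": "dev.example.pipelinerun.finished",  # hypothetical type
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        # CD-specific payload: hypothetical fields a consumer
        # (e.g. a metrics collector) could rely on
        "data": {
            "pipelineName": pipeline_name,
            "outcome": outcome,
        },
    }

event = make_pipeline_event("build-and-test", "success")
print(json.dumps(event, indent=2))
```

Because every producer would emit the same envelope shape, a consumer such as a metrics collector could dispatch on the `type` attribute alone, without a per-platform connector.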
Collecting metrics from multiple DevOps platforms is a challenge because there is no standard for sharing event data: analytics providers and value stream management platforms that collect metrics must build a connector for each platform they support. CDEvents would provide a standard interface for collecting metrics from any developer tool or platform. That shared event data would also make it easier to visualize workflows involving artifacts and their metadata, Frittoli added.
The lack of a common way to describe events means that developers must constantly re-learn how to consume events. It also limits the potential for libraries, tooling and infrastructure to facilitate the delivery of event data across environments ranging from routers and tracing systems to software development kits (SDKs).
Naturally, it may be a while before this continuous delivery specification results in increased interoperability across DevOps environments, but the need for one across what has become a patchwork of fragmented tools and platforms is self-evident. Many organizations are already investing in an assortment of analytics tools in the hope of improving the rate at which applications can be developed and deployed. In too many instances, DevOps teams have inadvertently created bottlenecks that slow down the rate of development. The challenge is that, as new tools and platforms are added to a DevOps workflow, they are unlikely to share data with the rest of the toolchain until a connector is created.
Not everyone, of course, is a fan of having more metrics. The goal should be to coach teams on how to become more efficient rather than to implement a set of draconian mandates. Arguably, such mandates will only drive away developers who are trying to strike a balance between coding as an art form and application development as a craft.
Nevertheless, the more organizations appreciate the critical role software plays in driving business processes, the greater their desire to measure the pace and efficacy of their application development efforts, if only to justify the levels of investment they are making in software development.