This article is a response to “The Future of Jenkins in 2024,” published late last year.
If you’re a software developer, it is practically impossible that you haven’t at least heard of Jenkins.
Developed initially by Sun Microsystems as Hudson, Jenkins is written in Java and has been the leader in software life cycle management for many years.
When Jenkins was born, the world of software development was less complex and didn't yet have concepts like CI/CD, DevOps and so on. Developers worked with simpler architectures: a monolithic application with a database, running on VM-based infrastructure.
But after the introduction of containerized environments (and containerized applications), the world changed! The deployable assets changed, the software life cycle changed and Jenkins changed with it.
The Legacy Approach
The success of Jenkins is largely due to its plugin approach. It allows you to install many useful plugins to manage every stage of the software life cycle, including cloning your repository, configuring unit tests, customizing behavior and more.
Jenkins has many competitors. These newer tools (and Jenkins itself), together with new technologies and the contributions of the open source community, have adopted the modern 'as-code' paradigm. Jenkins introduced as-code pipelines (scripted and declarative) using a DSL based on Groovy, a JVM language. Meanwhile, the wider ecosystem changed too:
- Git server providers implemented webhook behavior.
- The term CI/CD was introduced to refer to continuous integration and delivery.
- Infrastructure was "dematerialized" with Kubernetes.
- Infrastructure configuration and provisioning became scriptable (CasC and IaC).
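To make the as-code idea concrete, here is a minimal sketch of the two pipeline flavors (the Maven build step is purely illustrative):

// Declarative flavor: a structured model that Jenkins validates.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                sh 'mvn -B package'
            }
        }
    }
}

// Scripted flavor: plain Groovy with full programmatic control.
node {
    stage('build') {
        sh 'mvn -B package'
    }
}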
Today, you can install Jenkins wherever you want: on a Windows server, on a Linux server, as a containerized application or on Kubernetes. Jenkins installation is very flexible.
Moreover, you can use different Jenkins architectures: single node (basically, one master node), master-slave with persistent slaves, master-slave with runtime agents (using containers) and master-slave with runtime agents running as pods in a Kubernetes cluster. The last option creates a very powerful "cloud native tool" (we'll see this architecture in the next sections).
But what does Jenkins offer today? The Jenkins subsystem supports both models: the legacy approach and the as-code approach using Jenkins pipelines (declarative or scripted). Many might think, "But … we're DevOps and we don't care about the legacy model; we're interested in the as-code model."
With the as-code model working within the GitOps paradigm, we're safer in disaster recovery scenarios and we can implement a self-service approach that lets us:
- deliver where you want
- deliver what you want
- deliver when you want
With Jenkins you can always do this, and at a high level of abstraction, since Jenkins uses a DSL based on Groovy, a JVM language.
In my opinion, Jenkins has some advantages:
- No vendor lock-in (theoretically, I can provision a Groovy sandbox and run my pipeline inside it, and it works fine. This is not completely true, because plugin-provided steps such as withCredentials() are not available in a plain sandbox, but in principle it is possible.)
- Jenkins works with an object-oriented scripting language, Groovy: we can use high-level data structures and objects to implement pipelines.
- There's no tool-specific "core concept" to learn: high-level programming languages are familiar to software engineers. If you know Java, then you know Groovy; if you're experienced with Python, then you can work with Groovy.
- No declarative approach based on serialization languages such as YAML.
- If you don't know Bash programming, you can write a fully Groovy pipeline.
- It's open source.
- It's pluggable.
- The Jenkins infrastructure is completely under our control.
- Shared common libraries exist at both the system level and the project level (a typical layout is sketched after this list).
- You can completely separate the project source code from the pipeline code: segregation of duties (very important when you work in a big company where the development department is isolated from the "delivery" department).
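For reference, a Jenkins shared library repository follows a conventional layout (the directory names are the Jenkins convention; the file names are illustrative examples):

jenkins-shared-libraries/
    vars/        # global pipeline steps, e.g., gitlabUtils.groovy, utils.groovy
    src/         # Groovy classes, e.g., src/it/example/jenkins/FactoryClosureParams.groovy
    resources/   # static files loadable from pipelines with libraryResource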
Jenkins Common Architecture
Suppose a business project is versioned in a GitLab instance. A Jenkins pipeline can manage the life cycle of this project as follows:
@Library('jenkins-shared-libraries') _  // global shared libraries (see the assumptions below)

pipeline {
    agent {
        label 'maven'
    }
    stages {
        stage('unmarshall payload') {
            steps {
                script {
                    // to do this you can enable "allowEnv: true" in BuildConfig
                    payload = utils.unmarshall(env.GITLAB_WEBHOOK_PAYLOAD)
                }
            }
        }
        stage('check GitLab Event & Project Details') {
            when {
                expression {
                    gitlabUtils.isPushEvent(payload)
                }
            }
            steps {
                script {
                    gitlabUtils.projectRepositoryDetails(payload)
                }
            }
        }
        …
Assume that:
- The pipeline is triggered via a webhook mechanism.
- The GitLab payload is stored in the GITLAB_WEBHOOK_PAYLOAD environment variable.
- The pipeline uses the global Jenkins shared libraries (imported at the top with @Library('jenkins-shared-libraries') _).
- gitlabUtils is a Groovy file containing a bunch of "action functions," as you can see from the code snippet.
- Jenkins works with a plugin called Generic Webhook Trigger (a really powerful plugin for managing webhook requests); a possible trigger configuration is sketched right after this list.
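As a hedged sketch (the token value and the decision to capture the whole body are assumptions), the trigger block of such a pipeline might look like this with the Generic Webhook Trigger plugin:

triggers {
    GenericTrigger(
        // Capture the entire JSON body of the webhook into an environment variable.
        genericVariables: [
            [key: 'GITLAB_WEBHOOK_PAYLOAD', value: '$']
        ],
        token: 'my-project-token',  // hypothetical per-job token
        causeString: 'Triggered by GitLab webhook',
        printContributedVariables: false
    )
}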
Focus on the isPushEvent(payload) function:

def isPushEvent(payload) {
    /**
     * This function is used to check if the hook event is a push-type event.
     **/
    return payload != null && payload.get('object_kind') == 'push'
}
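The companion projectRepositoryDetails(payload) helper isn't shown in this article; here is a hypothetical sketch based on the standard fields of a GitLab push-event payload (the returned map shape is an assumption):

def projectRepositoryDetails(payload) {
    /**
     * Hypothetical helper: extracts repository coordinates from a
     * standard GitLab push-event payload.
     **/
    return [
        name  : payload.get('project')?.get('name'),
        url   : payload.get('project')?.get('git_http_url'),
        branch: payload.get('ref')?.replace('refs/heads/', '')
    ]
}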
As you can see, using this approach you can do anything, anywhere, whenever you want, using an OO language. For example:
stage("environment steps") {
    steps {
        script {
            …
            ClosureParams params = FactoryClosureParams.factory(currentEnvironment,
                    servicesWithLatestTag,
                    servicesWithoutDockerImages,
                    serviceWithPreviousSameTag,
                    deployedServicesBySector,
                    composeRepoBranch,
                    hostsGroupToDeploy,
                    latestCommitArray,
                    sectorToDeploy)
            …
        }
    }
}
The above snippet is a piece of a generic stage in our pipeline, where:
- ClosureParams is a class
- FactoryClosureParams is a class
These classes are stored in the Jenkins shared libraries repo, under the path src/it/example/jenkins/*.
Yes, with Jenkins we can also work with OO programming!
Focus on the FactoryClosureParams class:
package it.example.jenkins

import it.example.jenkins.ClosureParams
import it.example.jenkins.ProductionParams
import it.example.jenkins.DevelopParams

class FactoryClosureParams {

    private static final String developRegex = "(.*dev.*)|(.*DEV.*)"
    private static final String productionRegex = "(.*prod.*)|(.*PROD.*)"

    public static ClosureParams factory(String environment, List servicesWithLatestTag,
                                        List servicesWithoutDockerImages,
                                        Map serviceWithPreviousSameTag,
                                        Map deployedServicesBySector,
                                        String composeRepoBranch,
                                        List hostsGroupToDeploy,
                                        List latestCommitArray,
                                        List sectorToDeploy) {
        /**
         * Factory method.
         **/
        if (environment =~ developRegex) {
            List<Object> argsD = [environment, servicesWithLatestTag, servicesWithoutDockerImages,
                                  serviceWithPreviousSameTag, deployedServicesBySector,
                                  composeRepoBranch, hostsGroupToDeploy, latestCommitArray, sectorToDeploy]
            println("INFO. Building Develop parameters")
            return DevelopParams.newInstance(argsD)
        }
        if (environment =~ productionRegex) {
            println("INFO. Building Production parameters")
            List<Object> argsP = [hostsGroupToDeploy, latestCommitArray, sectorToDeploy]
            return ProductionParams.newInstance(argsP)
        }
        return null
    }
}
Assume that:
- FactoryClosureParams is a class under the Jenkins shared library's src root path (its companion parameter classes are sketched below).
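ClosureParams, DevelopParams and ProductionParams are not shown in this article; purely as a hypothetical sketch (the fields and hierarchy are assumptions derived from the factory call above), they might look like this:

package it.example.jenkins

// Hypothetical base type: a value object holding the parameters a stage closure needs.
abstract class ClosureParams implements Serializable {
    List hostsGroupToDeploy
    List latestCommitArray
    List sectorToDeploy
}

// Hypothetical develop-environment variant carrying the extra development fields.
class DevelopParams extends ClosureParams {
    String environment
    List servicesWithLatestTag
    List servicesWithoutDockerImages
    Map serviceWithPreviousSameTag
    Map deployedServicesBySector
    String composeRepoBranch
}

// Hypothetical production variant: only the deployment coordinates from the base type.
class ProductionParams extends ClosureParams {
}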
We have seen how Jenkins works with its pipelines in a classical Jenkins architecture.
But what if I need to build a project from different repos, with different technologies inside (for example, in Docker containers), in a single pipeline? How can I design this scenario?
Suppose you have an application composed of:
- A Go (Golang) agent with some routines that manipulate files
- A Python backend that calls external services, reads and writes files, and more
- A Java web application that exposes the results on localhost:8080
We have seen that the Jenkins pipeline uses a global agent defined at the top of the pipeline, but Jenkins also offers a mechanism for defining stage-side agents.
This mechanism becomes really powerful when Jenkins is "provisioned into Kubernetes," because we can use it with Kubernetes pods and, obviously, their containers.
Before seeing how we can use this Jenkins setup, I want to explain to you why I said: “provisioned into Kubernetes.”
You are not constrained to provision the Jenkins master on the Kubernetes cluster; you can manage this setup using a Jenkins plugin called the Kubernetes Jenkins Plugin.
Jenkins as a Native CI/CD Tool in a Kubernetes Cluster
With the Kubernetes Jenkins Plugin, you can use the Kubernetes cluster API to start up agents at runtime (configured in advance, via the UI or CasC) as Jenkins slaves.
Let me do a recap of some points:
- With Jenkins pipelines we can define, by design, a pipeline-side agent or stage-side agents
- Using the Kubernetes Jenkins Plugin, we can set up a master-slave Jenkins architecture at runtime
- Using the Kubernetes Jenkins Plugin, we can define more than one "agent type" and more than one container inside every agent
- The Kubernetes Jenkins Plugin provides a special step called container(), which runs part of your pipeline inside a specific container (in a specific pod)
And then your pipeline becomes something like this:
stage("Testing project") {
    steps {
        script {
            container('dotnet5') {
                …
            }
        }
    }
}
stage("Containerization. Auth, build and push") {
    steps {
        script {
            prjId = pipelineConfig.get('project').get('id')
            container('aws-cli') {
                sh 'aws …'
            }
            container('docker') {
                sh 'docker login …'
            }
        }
    }
}
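For this snippet to work, the containers referenced above (dotnet5, aws-cli, docker) must be declared in the pod agent at the top of the pipeline. A hedged sketch with the Kubernetes Jenkins Plugin (the images are assumptions):

agent {
    kubernetes {
        yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dotnet5
    image: mcr.microsoft.com/dotnet/sdk:5.0 # assumed image
    command: ['sleep']
    args: ['infinity']
  - name: aws-cli
    image: amazon/aws-cli # assumed image
    command: ['sleep']
    args: ['infinity']
  - name: docker
    image: docker:latest # assumed image
    command: ['sleep']
    args: ['infinity']
'''
    }
}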
As you can see, you can use the container() step wherever and whenever you want inside the pipeline.
This mechanism allows you:
- To use runtime agents (based on the K8s pod life cycle)
- To use any of the containers inside a pod agent, interchangeably
- To use a multi-agent strategy (with multiple containers inside each agent), stage-side
- To build a project (single repo) based on different technologies in "one shot" (that is, in a single pipeline run/job)
- To build multiple projects (multiple repos) based on different technologies, different versions of the same technology and so on in "one shot" (thus in a single job)
- To enforce running the pipeline inside a specific agent
- To start a pod with complex containers to run complex analyses inside the pipeline (for example, DataOps or MLOps)
- To use Kubernetes resources, because your pipeline agents are Kubernetes pods
- To use Jenkins as a CI/CD tool
JenkinsX
The JenkinsX project was born before some of today's modern, consolidated technologies and tools, and some of those technologies evolved faster than JenkinsX.
For this reason, the JenkinsX project has become an open source architecture project.
But… what do I mean when I say “open source architecture project”?
I mean that JenkinsX is a set of open source tools useful for creating a CI/CD stack in a cloud-native environment (Kubernetes).
And yes, of course, JenkinsX also has custom resources like Preview.
In fact, the image below represents the architecture of JenkinsX.
Image: From the JenkinsX official site.
It's a modular architecture that uses the extensibility of Kubernetes and powerful tools like Tekton (for pipelines), Lighthouse (for ChatOps; in our case, useful to trigger pipelines), Kubernetes secrets (stored in an external vault and linked via the External Secrets Operator), interfaces (JX CLI, dashboard, Octant), etc.
This architecture is really powerful because we have all we need:
- Pipeline engine (Tekton)
- Trigger mechanism (Lighthouse)
- Command CLI (JX CLI)
- UI (Octant)
- Kubernetes resources and JX custom resources
But we don't always need all of this.
That last observation is the springboard to Tekton.
Tekton
As I said, other technologies became stable and quickly became cloud-native standards (in Kubernetes). One of these is Tekton.
Tekton is a pipeline engine, and it offers all you need! It is a really big, Kubernetes-centric project that adds a lot of Kubernetes custom resources (CRs) to do its work.
In summary, Tekton is composed of:
- Tekton Pipelines
- Tekton Triggers
- Tekton CLI (tkn)
- Tekton Dashboard
- Tekton Catalog
- Tekton Hub
- Tekton Operator
- Tekton Chains
Tekton also exposes its APIs (available only for Pipeline and Task).
The main Tekton resources for implementing a pipeline are Pipeline and Task (with their executors, PipelineRun and TaskRun).
OK, this article isn't a documentation guide for Tekton or Jenkins; if you want, you can learn more about this pipeline engine on the official Tekton site.
The real point is: What is the relationship between Tekton and Jenkins?
As I said before, Tekton is the pipeline engine behind the JenkinsX project, but it also has another relationship with Jenkins (without the "X").
Before explaining this relationship, I want to tell you how Tekton works with the Task resource. Basically, a Task is a collection of Steps (a familiar concept in Jenkins), and it runs as a K8s pod.
Look at the following snippet:
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: example-task-name
spec:
  params:
    - name: pathToDockerFile
      type: string
      description: The path to the dockerfile to build
      default: /workspace/workspace/Dockerfile
    - name: builtImageUrl
      type: string
      description: location to push the built image to
  steps:
    - name: ubuntu-example
      image: ubuntu
      args: ["ubuntu-build-example", "SECRETS-example.md"]
    - image: gcr.io/example-builders/build-example
      command: ["echo"]
      args: ["$(params.pathToDockerFile)"]
    - name: dockerfile-pushexample
      image: gcr.io/example-builders/push-example
      args: ["$(params.builtImageUrl)"]
      volumeMounts:
        - name: docker-socket-example
          mountPath: /var/run/docker.sock
    - name: ignore-unit-test-failure
      image: docker.io/library/golang:latest
      onError: continue
      script: |
        go test .
  volumes:
    - name: docker-socket-example # referenced by the volumeMount above
      hostPath:
        path: /var/run/docker.sock
    - name: example-volume
      emptyDir: {}
As you can see, every step is a minimal container (inside a pod) with its own features and its own commands or script to run. This is the same scenario you can achieve with a classical Jenkins instance using the Kubernetes Jenkins Plugin (with the container() step).
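And the parallel goes further: just as Jenkins executes a pipeline as a job run, Tekton executes a Task through a TaskRun. A minimal sketch referencing the Task above (the image URL value is a placeholder):

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: example-task-run
spec:
  taskRef:
    name: example-task-name # the Task defined above
  params:
    - name: pathToDockerFile
      value: /workspace/workspace/Dockerfile
    - name: builtImageUrl
      value: registry.example.com/example/image:latest # placeholder value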
What do I want to prove with this observation?
Probably nothing… but we have seen the past, the present and the future of one of the most important CI/CD tools in history.
Or… I want to say that if you work in a cloud-native context with Kubernetes clusters and you have marked Jenkins as an "obsolete tool" (for your context), you probably don't know how deeply powerful Jenkins really is, and you probably don't know the Tekton project and its relationship with its "grandpa," Jenkins.
In other words, I think that Jenkins (in some ways) will remain a main character in the future of DevOps toolchains!