Scripts have become a key tool in the average developer’s bag of tricks, and they have altered the development pipeline in significant ways. Some organizations have even outsourced entire teams to manage the glut of scripts that has accumulated over time.
To help companies move beyond the “scriptocalypse,” we recently participated in a webinar with our friends at Tricentis. Sunil Mavadia, global head of customer journey at Electric Cloud, and Thomas Stocker, director of product management at Tricentis, walked through best practices to follow to avoid the scriptocalypse. Here are some takeaways from their discussion.
Improve Process Robustness and Scalability
Of course, everyone wants to improve their process robustness and scalability, but how? One of the most effective ways is to employ a tool that orchestrates your releases and provides reusable objects. This reduces risk because you no longer have to write a script for every single application, and the effect cascades: not only do you write fewer scripts, but you also have fewer scripts to maintain across the board. As a result, you assume much less risk across many levels of your technology and pipeline. You also gain system-level visibility across all of your processes, which benefits your entire development life cycle.
Don’t Just Automate, Orchestrate
Many developers make the mistake of thinking that because they have scripts, they’re automated. But to be truly automated, every action needs to be driven by your pipeline: your approvals, your handoffs, everything. Oftentimes during a release, team members find themselves stopping in the middle of a task to ask for approval, and each of those manual stops interrupts the entire pipeline. But what if you could automate all of that?
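As a toy sketch of what automating an approval might look like, the pipeline can check a policy itself instead of waiting on a person. The policy rule and field names here are invented for illustration, not any particular tool's API:

```python
# Hypothetical pipeline-driven approval gate: low-risk, fully tested
# changes proceed automatically; only the rest wait for a human.
def approval_gate(change):
    """Auto-approve low-risk changes; flag everything else for review."""
    if change["tests_passed"] and change["risk"] == "low":
        return "approved"
    return "needs-human-review"

print(approval_gate({"tests_passed": True, "risk": "low"}))   # approved
print(approval_gate({"tests_passed": True, "risk": "high"}))  # needs-human-review
```

Even a simple gate like this removes one manual hand-off from the release path.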
A great quote from the “2018 Accelerate State of DevOps Report” that speaks to this says, “High performers automate significantly more of their configuration management, testing, deployments and change processes than other teams.” If you look at it purely from a team perspective, the high performers are those that have fully automated all their processes. While this can certainly sound like a daunting task, it’s best to look at it with an agile mindset—this is an iterative process. You have to start somewhere, while keeping the goal of 100 percent automation in view.
Not only should you automate your pipeline, you should also orchestrate it. This is critical. By automating and creating reusable objects, you can get to the point of using your pipeline as a service. Imagine reusing that one pipeline for every application in your process: hundreds, even thousands, of applications all running through the same pipeline, all built from the same reusable objects underneath. It’s a pretty picture, isn’t it?
Create a Value Stream Map
When you create a value stream map for a pipeline, you want to be very inclusive about who you invite in the room to create the value stream. Include representatives from release management, development, deployment engineers, team leads, executives and infrastructure engineers. Basically, anyone who has “skin in the game” for your release should be present.
Once you feel you have solid representation of everyone impacted by your releases, sit down and draw out the entire process, including what each script does. That may be a big task depending on how your pipeline is currently structured, but there’s a lot of opportunity in it. For example, you might find an application that is deployed by one script, and that script is typically duplicated for the next application, and the one after that. Do that 100 times and you have 100 scripts, or variations of one script, for your 100 applications. That’s an obvious and beneficial place to deploy a bit of automation.
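A hedged sketch of collapsing those per-application copies into a single parameterized script, where only configuration varies per application (the hosts and ports below are made up for illustration):

```python
# Hypothetical per-application configuration; in practice this might
# live in a config file or a release-orchestration tool.
APP_CONFIG = {
    "billing":  {"host": "billing.internal",  "port": 8080},
    "checkout": {"host": "checkout.internal", "port": 8081},
}

def deploy(app):
    """One deployment script serves every application; only config varies."""
    cfg = APP_CONFIG[app]
    return f"deploying {app} to {cfg['host']}:{cfg['port']}"

# 100 applications mean 100 config entries, not 100 script variants.
print(deploy("billing"))  # deploying billing to billing.internal:8080
```

Maintenance then touches one script and a table of data instead of 100 drifting copies.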
This is also a great time to understand what tools you are using for each step. When you’re jumping from toolset to toolset, it can be difficult to get a holistic view of how this is impacting your overall performance without a complete picture of the value stream. When you have that picture, you can start to ask yourself questions such as, “What are all these tools that we interact with that we can integrate into the pipeline? How are we handling approvals?” By including detailed sequencing timelines and identifying bottlenecks, you can realize some impactful efficiencies.
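Those sequencing timelines can be put to work directly. A small illustration of using per-stage durations from a value stream map to find the bottleneck (stage names and minutes are hypothetical):

```python
# Hypothetical stage durations gathered while drawing the value stream map.
stage_minutes = {
    "code review": 30,
    "manual approval": 240,   # a human hand-off dominating lead time
    "build": 15,
    "deploy": 10,
}

total_lead_time = sum(stage_minutes.values())
bottleneck = max(stage_minutes, key=stage_minutes.get)

print(total_lead_time)  # 295
print(bottleneck)       # manual approval
```

Even back-of-the-envelope numbers like these make it obvious which step to automate or integrate first.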
Finally, creating a value stream map gives you an opportunity to organize your scripts. Laying out the entire value stream can give you a picture of the variety and quantity of scripts you’re running—whether they are CI scripts, deployment scripts or test automation scripts—and bucket them in a logical way.
Identify Redundancies, Dependencies and Bottlenecks
Of course, identifying (and hopefully rectifying) dependencies and bottlenecks is a good practice for any aspect of your development pipeline. But it becomes even more important in the context of using scripts. If one application depends on another, it’s almost impossible to script around. For example, imagine you’re trying to test two dependent apps. If you don’t have the right versions of both, you can’t test one because the other is not up to speed or hasn’t been updated in that environment. Something even that simple can be really disruptive and make scripting very difficult. By understanding where your dependencies and redundancies exist, you can take the first step toward solving the issues they present.
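A minimal sketch of the version check that situation calls for: before testing two dependent apps together, verify the environment holds compatible versions. The applications, versions and compatibility rule are all invented for illustration:

```python
# Hypothetical versions currently deployed in the test environment.
ENVIRONMENT = {"orders": "2.3.0", "inventory": "1.9.0"}

# Hypothetical rule: orders 2.x is only tested against inventory >= 2.0.0.
REQUIRED = {"orders": {"inventory": "2.0.0"}}

def version_tuple(v):
    """Turn '2.3.0' into (2, 3, 0) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def can_test(app):
    """Return True only if every dependency meets its minimum version."""
    for dep, minimum in REQUIRED.get(app, {}).items():
        if version_tuple(ENVIRONMENT[dep]) < version_tuple(minimum):
            return False
    return True

print(can_test("orders"))  # False: inventory is still on 1.9.0
```

A gate like this turns a confusing mid-test failure into an explicit, explainable stop before the test run begins.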
Scripts are one of the most effective ways of streamlining your pipeline, but they need to be used in a way that will allow your organization’s applications to be scalable and sustainable. By following some of the best practices outlined above, your organization can use scripts like pros—and avoid the dreaded “scriptocalypse.”
For more insights, watch the full webinar here.