It is a truth of both agile and DevOps that speed is of the essence. That is a relative statement (some apps, portfolios and orgs envision speed quite differently than others), but if you are releasing more frequently than you were in the past, thanks to DevOps, that's kind of the point.
Yet we see a whole selection of things that fall by the wayside or are underserved, and security and testing are the easy, obvious examples. Considering how frequently we see "unsecured Amazon storage bucket" headlines (one announcement of this type spurred this blog), shouldn't this check be folded into standard procedure or tools? Well, it is folded into standard cloud-environment scanning tools; I've used a couple of them and seen this exact warning come up. So shouldn't organizations be using these tools more?
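The check those scanners perform is conceptually simple: flag any bucket whose access policy grants rights to everyone. A minimal sketch of the idea, assuming an ACL grant structure shaped like what S3 reports (the `audit_buckets` helper, bucket names and sample data are all hypothetical):

```python
# Sketch of the core check a cloud security scanner runs: flag any
# bucket whose ACL grants access to the "AllUsers" (public) group.
# The grant shape mirrors S3 ACLs; the sample inventory is hypothetical.

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_public(acl_grants):
    """Return True if any grant opens the bucket to all users."""
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS_URI
        for grant in acl_grants
    )

def audit_buckets(buckets):
    """Return names of buckets whose ACLs are world-accessible."""
    return [name for name, grants in buckets.items() if is_public(grants)]

# Hypothetical inventory: one locked-down bucket, one left wide open.
inventory = {
    "internal-reports": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
    ],
    "customer-exports": [
        {"Grantee": {"Type": "Group", "URI": ALL_USERS_URI},
         "Permission": "READ"},
    ],
}

print(audit_buckets(inventory))  # → ['customer-exports']
```

The point is not the dozen lines of Python; it is that the check is cheap enough to run on every release, which is exactly why skipping it is a cut corner rather than a time-saver.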
One thing about speed: It tends to cut corners, and cut corners lead to lapses. You can run a blameless environment that doesn't punish people for mistakes, but a mistake such as exposing the data of hundreds of thousands of users will be punished from the outside, by regulatory agencies and the market in general. These are not mistakes we should be making, and yet they still happen.
Another underserved area is rollback versus roll forward. Most DevOps shops focus on the ability to fix an issue and plow forward. This works really well 90-plus percent of the time, and fails spectacularly the rest. While it is an added step that slows the process somewhat, being able to roll back changes is important. And it means more than, "Well, theoretically, Git will allow us to go back to the last marked release …" because there is far more to an application architecture than just the source stored in a repository.
In short, no matter how fast things are going, have a plan for when they go wrong, because they inevitably will. And take the little bit of extra time for preventive measures, such as alerting someone other than the creator when a bucket is left with no security.
“But there isn’t time …” is the common refrain to blogs such as this one. I get it. The increase in delivery only creates an increased appetite for delivery. But is there time to respond to disaster, should the worst happen? And I’d argue that it must be IT that plans for disasters. When someone gets fired because there was a security breach or a total systems failure, it is rarely an app owner. That puts the responsibility to make certain one doesn’t happen squarely on IT.
We're delivering faster, offering increased functionality to users and moving forward on every front. You all are out there rocking it daily. Don't risk all of that being overshadowed because you failed to take care of basic issues. Check out the tools that can help; feature flags and cloud security scanners both spring to mind for the examples above. The tools are out there; you just have to work them into your process.
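A feature flag, at its simplest, is a runtime switch that lets you turn off a misbehaving feature without a rollback deploy at all. A minimal sketch, assuming an in-memory store (the `FlagStore` class and flag names are hypothetical; real feature-flag systems add targeting rules and persistence on top of this idea):

```python
# Minimal feature-flag sketch: ship code dark, enable it at runtime,
# and kill it instantly if it misbehaves -- no rollback deploy needed.
# The FlagStore class and flag names are hypothetical.

class FlagStore:
    def __init__(self, defaults=None):
        self._flags = dict(defaults or {})

    def is_enabled(self, name):
        # Unknown flags default to off, so new code ships dark.
        return self._flags.get(name, False)

    def set(self, name, value):
        self._flags[name] = value

flags = FlagStore({"new-checkout": True})

def checkout(cart_total):
    if flags.is_enabled("new-checkout"):
        return f"new flow: {cart_total}"
    return f"old flow: {cart_total}"

print(checkout(100))               # → new flow: 100
flags.set("new-checkout", False)   # kill switch flipped at runtime
print(checkout(100))               # → old flow: 100
```

The "default to off" choice is deliberate: it means a typo in a flag name fails safe, and new code paths stay dormant until someone explicitly enables them.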