Any successful piece of software is the effort of five groups: developers, the operations/infrastructure team, QA engineers or testers, product managers and owners, and the support team. Other titles may be involved, but the majority of the work is done by these five groups.
… the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity, evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function. Quality assurance and security teams may also become more tightly integrated with development and operations and throughout the application lifecycle.
The majority of the DevOps leaders included in this 2015 DevOps.com survey agreed:
Looking at these two references, from a technical perspective we can summarize that DevOps brings development and operations together, and that speed of delivery is achieved through tools and automation.
This might all sound perfect for on-premises deployments, because your Dev, QA, Test, and Pre-production boxes are up and running 24/7, so those with a stake in the software's success, such as QA managers and product owners, can access them anytime they want.
In cloud DevOps, however, that may not be the case. A cost-conscious organization may instruct developers to take infrastructure offline when it isn't needed (most of us know how developers manage infrastructure, so I won't discuss that much here). But what about the project's other stakeholders? What if the QA team wants to manually test or validate something, or product owners want to review something?
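To make the "take it offline when it's not needed" idea concrete, here is a minimal sketch of the scheduling logic such automation typically relies on. The environment names and working hours are illustrative assumptions, not from the article; in a real setup this decision would drive a cloud provider API call to stop or start instances, which is omitted here.

```python
# Hypothetical sketch: decide whether a non-production environment
# should be running, based on a simple working-hours schedule.
from datetime import datetime

# env -> (start_hour, end_hour) on a 24-hour clock; values are assumptions
WORKING_HOURS = {
    "dev": (8, 20),
    "qa": (8, 18),
    "preprod": (9, 17),
}

def should_be_running(env: str, now: datetime) -> bool:
    """Return True if `now` falls inside the environment's scheduled window."""
    start, end = WORKING_HOURS.get(env, (0, 24))  # unknown envs stay up
    return start <= now.hour < end
```

A scheduler (a cron job, for example) would call `should_be_running` for each environment and stop or start the matching instances accordingly; the gap the article points out is that this schedule rarely anticipates ad hoc access by QA or product owners.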
Currently, teams have three options:
Each of these has big drawbacks. The first two options waste a lot of manpower. In an era when companies are trying to capitalize on human effort as much as possible by automating trivial tasks, it doesn't make sense to use yesterday's solutions for today's problems. And asking these teams to run scripts to bring up the servers introduces the possibility of errors, because these teams are not that hands-on with infrastructure or scripting.
The third option is a perfect example of applying old solutions to current problems. In the cloud world especially, I see at least two problems with it:
Cloud DevOps demands a model that can keep all stakeholders working at peak efficiency without running into resource scheduling issues. It’s a difficult proposition that needs an effective solution.
How is your organization handling resource availability in the cloud?