Close your eyes and take a deep breath. You are on an island, smelling the iodine coming off the sea, the sun gently caressing your cheeks, and you are sipping a cocktail in the middle of the afternoon. Your phone is off, somewhere you have forgotten about, and you have no direct access to the internet. You are a sysadmin and, for the first time since the beginning of your career, you are taking a break, an actual break to recharge your batteries, and you are not even thinking about work because you KNOW. You know it is all going to be alright, as it has been ever since you implemented service virtualization in a truly DevOps-oriented way.
What is service virtualization?
First of all, let me say that service virtualization is not something new, but many people are still unaware of what it actually is, and only a few good products are available.
You can find a really good explanation of what it is in this post from Chris Riley: https://devops.com/blogs/service-virtualization-window-advanced-devops/, but I will try to sum it up.
In order to avoid confusion, it is also necessary to briefly explain mocks and stubs. Mocks are usually programmer-written test doubles that check the behaviour of a specific piece of code: which methods are called, in which order, and whether sub-processes were created. Stubs, on the other hand, check that for any given input, the expected output is returned. I often refer to them as logic tests and I/O tests.
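To make the distinction concrete, here is a minimal sketch in Python using the standard unittest.mock module; the currency-conversion function and the rate service are hypothetical examples I made up, not anything from a real product.

```python
from unittest.mock import Mock

def convert(amount, rates):
    """Convert EUR to USD using whatever rate service we are given."""
    return amount * rates.get_rate("EUR", "USD")

# --- Stub: a canned answer, and we only check input -> output ---
rates_stub = Mock()
rates_stub.get_rate.return_value = 1.25  # fixed, canned response
assert convert(100, rates_stub) == 125.0  # I/O test: given input, expected output

# --- Mock: we check HOW the code under test interacted with it ---
rates_mock = Mock()
rates_mock.get_rate.return_value = 1.25
convert(100, rates_mock)
# Logic test: the right method was called, once, with the right arguments.
rates_mock.get_rate.assert_called_once_with("EUR", "USD")
```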
Service virtualization has nothing to do with mocks or stubs. It mimics the entire behaviour of an application as if it were a real one deployed on a server and connected to a database, a backend and so on. To enable this, data is recorded from QA (and anonymized) along with the whole behaviour of the application. Passing through security layers, calls to a database, updates to values, even timings can be recorded for performance testing. Every single operation is stored, and if the test data is well chosen at recording time, all cases can be covered.
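To give you an idea of the record-and-replay principle, here is a deliberately naive sketch in Python; real products do far more, and the QA base URL and the "customer_name" field are placeholders I invented for illustration.

```python
# A naive record-and-replay sketch, NOT a real service virtualization
# product. It captures live responses from a QA backend, anonymizes them,
# and later serves them back without touching the backend at all.
import json
import time
import urllib.request

QA_BASE = "http://qa.example.internal"  # hypothetical QA backend
recordings = {}  # path -> {"body": ..., "latency": ...}

def record(path):
    """Capture the real QA response, its timing, and store an anonymized copy."""
    start = time.time()
    with urllib.request.urlopen(QA_BASE + path) as resp:
        body = json.loads(resp.read())
    latency = time.time() - start
    body.pop("customer_name", None)  # crude anonymization of a sensitive field
    recordings[path] = {"body": body, "latency": latency}

def replay(path):
    """Serve the stored response, reproducing the original response time."""
    entry = recordings[path]
    time.sleep(entry["latency"])  # mimic the recorded timing for perf tests
    return entry["body"]
```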
For those of you who are into gaming, think of it as an emulator. You do not need a SNES, Genesis or Neo Geo to play your favourite oldies; all you need is emulator software and the ROMs. The emulator behaves like a gaming console on your computer, reproducing inputs and outputs with high fidelity and translating CPU instructions into ones your computer can understand.
If you are not into gaming, and that is a shame if you ask me, think about virtual machines. When you install Fedora in a confined environment on top of your favourite edition of Windows (or Mac OS), what you do is pretty similar to service virtualization as well. Your guest OS sends instructions to the different components of your computer, which the virtual machine intercepts, translates into commands your host OS can understand and forwards to the components concerned. It sounds complicated, but it works like a charm.
What it can do for you.
As it operates on local servers or, even better, on containers (check out Docker if you have not done so yet), it is virtually always on. Downtime is a thing of the past, and updates can be scheduled and performed when people are not at work. How much of your project you want to cover is up to you and your needs, but once you have started the process, why not automate all the way? If you do, your programmers will have access, 100 percent of the time, to environments that are up, running and faithful to the actual services.
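As a taste of how little it takes to keep such an environment always on, here is a hedged sketch using the Docker SDK for Python; the image name and port are hypothetical, standing in for however you package your recorded services.

```python
# Spin up a virtualized service as an always-on container.
# "virtual-billing-service" is a made-up image name for this example.
import docker

client = docker.from_env()

container = client.containers.run(
    "virtual-billing-service:latest",  # image holding the recorded behaviour
    detach=True,                       # run in the background, "always on"
    ports={"8080/tcp": 8080},          # expose the virtual service locally
    restart_policy={"Name": "always"}, # come back up after a host reboot
)
print(container.short_id, container.status)
```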
What is the point of having test environments then, you might ask. And here is the answer you can give stakeholders: none. You can achieve good environment resilience, tests against actual data and faster response times than ever through service virtualization. If well applied, and I really want to emphasize that, service virtualization makes good ol' test environments obsolete.
Once again, the benefits mentioned by Chris Riley will speak for themselves: https://devops.com/blogs/service-virtualization-window-advanced-devops/
Taking DevOps one step further.
Like all great things, service virtualization comes at a cost. Fine-tuning it to your requirements is going to take considerable effort. You have to decide which services (ideally all of them) you will provide to your programmers, which data sets are representative of a functional application and, of course, install and configure it all. The biggest benefit you can already identify here is that developers, testers and operations have to be involved and work in close collaboration. This is your opportunity to make different teams understand what the others do. Get them all together and make them work together in a form they are not necessarily familiar with: cross-team brainstorming.
Brainstorming, if well done, can be a very powerful tool to come up with creative solutions to problems that were not identified as such beforehand. It has to be directed, though. Brainstorming does not mean rambling about things unrelated to the topic. Having a neutral person in the room who focuses the discussion on a few pre-established points and keeps track of time is the way to go. Scrum masters are generally good at this, since they have an overview of the project and its problems and can quickly identify the right people for a task.
Getting developers to update their own service virtualization server with the latest data from QA means better communication with testers. Operations have a critical role here too, since they set up the environments and hand the how-to over to the different teams.
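In practice, that update can be a one-liner developers run themselves. Here is a hedged sketch of such a refresh script; both URLs are placeholders I invented, standing in for wherever your team publishes anonymized QA recordings and wherever your virtual service accepts them.

```python
# Pull the latest anonymized QA recordings and push them to the local
# virtual service. Both endpoints below are hypothetical.
import urllib.request

QA_EXPORT_URL = "http://qa.example.internal/recordings/latest.json"
VIRTUAL_SERVICE_ADMIN = "http://localhost:8080/admin/recordings"

def refresh_recordings():
    """Fetch the newest QA data set and reload the virtual service with it."""
    with urllib.request.urlopen(QA_EXPORT_URL) as resp:
        payload = resp.read()
    req = urllib.request.Request(
        VIRTUAL_SERVICE_ADMIN,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("virtual service reloaded:", resp.status)

refresh_recordings()
```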
Another glass, sir?
Once your teams work together on an almost 24/7 available server and understand how it works, all you have to do as a sysadmin is relax a bit for once and focus your brain and time on improving processes even further, enhancing the quality of deliverables and tightening the now-existing bonds with your colleagues. Oh yeah, and if you are on holiday, stop worrying about that potential catastrophic phone call; just worry about yourself and your cocktail.