As DevOps professionals, we are responsible for, among other things, making developers’ lives a little easier. How many times have you heard, “Do you have a working Vagrant box?” Or maybe you need to set aside a few days to onboard a new developer.
Many times, a development environment is created when a project is started, usually on a developer workstation. This typically means it works on that laptop and nowhere else. As the project evolves and more engineers get involved, the need to share this environment becomes apparent. When the project grows in complexity, requiring more components and third-party services, this organically grown development environment quickly becomes unmanageable.
Get Ready for Production
Another interesting aspect of the development environment is its level of similarity to the actual production environment. Even evaluating this aspect requires insight into what production looks like. If a project starts on a developer laptop, the desired production setup is seldom considered. More times than I care to remember, I was called into a project when the “thing” that worked well on the development team’s laptops had no sensible production deployment format.
This teaches us that even if we start out small, we must think about how the project will function in a production environment and how we are going to get it there. In practice, this means that any new project requires at least the following:
- Some unit tests; they may even start out as stubs.
- A minimal deployment target environment.
- Continuous integration (CI), running the (stubbed) unit tests.
- Automated delivery, preferably continuous deployment (CD).
- At least one simple integration test, possibly just a curl to the deployed app to verify it is alive.
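That last item really can be a thin wrapper around curl. A minimal sketch follows; the function name and example URL are my own, not from any particular project:

```shell
# Minimal smoke-test sketch (POSIX sh). Returns "alive" when the target
# answers with an HTTP success status, "dead" otherwise.
# usage: check_alive https://app.example.com
check_alive() {
  # -f: treat HTTP errors as failures, -sS: silent but show real errors
  curl -fsS --max-time 5 "$1" > /dev/null && echo "alive: $1" || echo "dead: $1"
}
```

In a CI/CD pipeline, this would run right after the deploy step and fail the build when the freshly deployed app does not respond.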
This does add some complexity to your project startup, possibly beyond an individual developer’s abilities. It does, however, lay the foundation to, at the very least, be able to deploy the application somewhere. For a seasoned DevOps professional, adding basic CI/CD takes only an hour or so, and there are plenty of free cloud resources for small projects.
When the project grows and the development environment gets more complex, things get trickier. The steps above will not really mitigate the frustration of keeping all the moving parts in line. A pattern I see often is that developers try to spin up the entire production stack on their laptop. For a containerized environment, this might mean a Docker Compose-based setup, or even Minikube to run a local Kubernetes cluster; Vagrant is a great way to quickly spin up a virtual machine-based environment. These tools often use shared volumes for quick iteration, which work well until a large number of files needs to be kept in sync. This does offer some level of production parity, but certain parts are missing: the CI/CD system watching commits to your source code repository is not deploying to your local setup, and traffic to and from it does not pass through additional infrastructure such as load balancers or firewalls, to name a few.
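A local stack of this kind is often described in a Docker Compose file along these lines; the service names, images, ports, and paths below are placeholders, not a recommendation:

```yaml
# docker-compose.yml sketch -- names, images and paths are hypothetical
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - ./src:/app/src   # shared volume: local edits show up in the container
  db:
    image: postgres:10
```

The shared volume is what makes quick iteration possible, and it is also exactly the part that starts to hurt once many files need to be kept in sync.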
When things break in production, “It works on my laptop!” is a phrase we have all heard at some time or other. It’s not the developers’ fault if things break in production but work well on the local workstation. What if we could bring the laptop environment a bit closer to production? Providing an option for developers to develop iteratively against remote infrastructure means we can get closer to production parity. It also means that onboarding new developers to the team can be simplified significantly, because all the complexity is in the remote environment. This remote environment is provisioned in the same automated way as production, so we have a great degree of confidence about its state. For Kubernetes-based environments, many great tools exist, including Skaffold and Draft. Both provide an easy way to build, push and deploy applications to a Kubernetes cluster, which makes it easy for the developer to take on the responsibility of ensuring the application under development runs in the target environment.
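To give a feel for Skaffold, a minimal configuration can be as small as the sketch below. The image name and manifest path are placeholders, and the exact apiVersion and field names depend on your Skaffold release, so treat this as illustrative only:

```yaml
# skaffold.yaml sketch -- image name and manifest paths are hypothetical
apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
    - image: example/myapp
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
```

Running skaffold dev then watches the source tree and, on every change, rebuilds the image, pushes it, and redeploys it to the cluster.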
If, for whatever reason, you are not containerizing your applications but running code in virtual machines instead, there are also options: most modern editors provide automatic syncing with a remote server, either on demand or whenever code changes. For containerized setups, Ksync is a tool that syncs your local code into a pod in a Kubernetes cluster on file change. Syncing code live removes the delay of rebuilding a container image and uploading it every time you want to see changes in the deployment environment. As such, code-sync tools are better suited for iterative development.
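A typical Ksync session might look like the sketch below. The label selector and container path are hypothetical, and the commands are only echoed rather than executed so the sketch can run without a cluster; drop the wrapper to run them for real:

```shell
# Ksync workflow sketch. Commands are echoed, not executed; the selector
# and paths are hypothetical.
sketch() { echo "+ $*"; }

sketch ksync init                                      # install the cluster-side agent
sketch ksync create --selector=app=myapp "$PWD" /code  # map a local dir to a pod dir
sketch ksync watch                                     # keep files in sync on change
```

From then on, saving a file locally updates the running pod without an image rebuild.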
Recently, I worked with a client setting up Telepresence for a development team. This is also a tool for Kubernetes environments, but it offers a twist: developers get to run their code locally, with all traffic in and out proxied through a remote Kubernetes cluster. As with sync tools, this means there is no long delay before viewing changes; IDE debugging works as normal, and all log output is local as well. When code is committed and pushed, the CI/CD system should update the cluster with the latest version, which in turn can be used to develop on iteratively with Telepresence. It’s a relatively young project, but it offers great potential. An added advantage is that only a small portion of the entire application stack or landscape needs to run on the developer’s laptop, which means it can do with less memory and a cheaper CPU.
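With the 1.x-style Telepresence CLI, a session might look like the following sketch. The deployment name, port, and application command are hypothetical, and the command is echoed rather than executed so the sketch runs without a cluster:

```shell
# Telepresence workflow sketch (1.x-style flags). The command is echoed, not
# executed; deployment name, port and app command are hypothetical.
sketch() { echo "+ $*"; }

# Swap the 'myapp' deployment in the cluster for a local process, proxying
# cluster traffic on port 8080 to the process started by --run:
sketch telepresence --swap-deployment myapp --expose 8080 --run python3 app.py
```

While the swap is active, the local process sees cluster DNS, services, and environment variables as if it were running inside the cluster, yet breakpoints and logs stay on the laptop.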
Containers and microservices are more widespread than ever, making it increasingly difficult for developers to have a production-like environment to develop in. The key to solving this problem is not to try to load everything onto an ever more powerful laptop, but to split the load and complexity between the developer workstation and a remote environment.