The role of software developers has never been more important. Developers touch almost every corner of our lives, from health care and retail to education. It’s an extremely rewarding career—so much so that software development is now ranked the Best Job in the United States.
The nature of software development demands that developers be fast-moving. New tools and technologies are giving developers the ability to adopt new skills, work faster and reshape how they work. In this Q&A, LunchBadger CEO Al Tsang discusses trends in software development.
Q: Data distribution is still a difficult and complex aspect of breaking down a monolithic architecture. How can enterprise architects avoid this, and what is a good strategy for dealing with this snag when they inevitably encounter it?
Al: The data’s location, its dependencies and varying levels of data normalization make breaking down monolithic architectures a challenge. Microservices initiatives can address these challenges by constructing a comprehensive data model. In the past, we had modeling tools such as UML and entity-relationship diagrams (ERDs). These tools are not as common anymore; however, the practice they represent still brings value in understanding what data your applications currently use, what’s obsolete, what’s redundant, and what’s related through dependencies.
A microservice’s tie-in to data should be atomic enough that the data it operates against is wholly owned and contained by that service. This is easier said than done. There are often dividing lines that can be carved out that mirror the business functionality itself—whether by domain, by entity or by business process. In some cases, it’s a combination of all the above.
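As a minimal sketch of "wholly owned and contained" data, consider a hypothetical orders service that is the only component allowed to touch its own store (the service name, in-memory store and fields are illustrative assumptions, not from the interview):

```javascript
// Hedged sketch: a microservice carved out along an entity boundary.
// The data store is private to the service; no other service reads or
// writes it directly. (All names here are illustrative.)
class OrdersService {
  constructor() {
    // Wholly owned, contained data: an in-memory stand-in for the
    // service's private database.
    this.orders = new Map();
  }

  create(id, items) {
    const order = { id, items, status: 'new' };
    this.orders.set(id, order);
    return order;
  }

  get(id) {
    // Other services go through this interface, never the store itself.
    return this.orders.get(id) || null;
  }
}
```

Carving by entity like this keeps the dividing line visible: anything that needs order data must call the service, not its database.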
Q: Why do you think serverless and FaaS (functions as a service) are growing at such a rapid rate, and what specific pain points do serverless platforms (such as LunchBadger) address?
Al: I believe serverless and FaaS are growing rapidly for a number of key reasons:
(1) Infrastructure and application platforms are becoming highly commoditized, so a developer can pass the buck and place operational concerns on a third party.
(2) The pressure and need to innovate require an intense focus on your core IP as a differentiator, while everything else in your code plays a supporting role and is oftentimes completely boilerplate (e.g., persistence to a database or NoSQL data source).
(3) The movement and drive behind attaining microservices means whittling your logic down to distinct, reusable, compartmentalized pieces of code—i.e., functions—and “serverless” fits this paradigm nicely by letting you concern yourself with each piece of code only on an event-driven basis.
(4) New and old companies alike have amassed a zoo of languages and platforms; skill sets and resources can be utilized more efficiently with a common runtime paradigm that supports writing business functionality in a polyglot fashion.
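Point (3) above can be sketched as a single event-driven function. This is a hedged illustration, loosely following the common Lambda-style handler signature; the event shape and names are assumptions, not tied to any particular platform:

```javascript
// Minimal FaaS-style sketch: the platform invokes the handler per event,
// so the developer writes only this compartmentalized piece of business
// logic and leaves operational concerns to the provider.
// (Event shape and handler name are illustrative.)
async function handler(event) {
  const name = event.name || 'world';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

module.exports = { handler };
```

The function owns nothing about servers, scaling or scheduling; it exists only for the duration of each event.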
Q: According to a 2017 survey, Node.js was primarily used as back-end infrastructure for APIs. One of Node.js’ main strengths is that you can use the same language across the entire stack. How do you think Express.js and other minimal frameworks like it contribute to its popularity?
Al: One reason Node.js has had rapid and wide adoption is because it is a massive equalizer. Node.js developers don’t need to have ninja-level domain expertise in high scalability, multi-concurrent connectivity, threads or similar complex topics just to be able to build a scalable app.
Node.js takes advantage of a language that had the principles of high scalability built in, namely evented asynchronous programming. Node.js also has the advantage of using the language any web developer has exposure to, thanks to the ubiquity of browsers. Moving the same language and its strengths to the server was a huge win for folks looking to truly become “full stack.”
Express.js has further contributed to Node.js’ success by adding the basics of web utility to Node.js. Every web application needs a web/app server that exposes URLs as routes. All servers have consumers making requests, and processing those requests can oftentimes be chained from one piece of logic to another. Express took the learnings from Ruby’s Sinatra framework and brought them to Node.js. You scaffold a template project and fill in the blanks, as needed and where needed. It’s highly productive because the starting point for many web applications follows this same model.
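The "chained from one piece of logic to another" idea is the middleware pattern Express popularized. Here is a toy sketch of that chaining in plain Node.js (this is not the real Express API, just an illustration of the pattern):

```javascript
// Toy sketch of an Express-style middleware chain: each handler does its
// piece of work on the request, then calls next() to pass control along.
function chain(...handlers) {
  return (req, res) => {
    let i = 0;
    const next = () => {
      const h = handlers[i++];
      if (h) h(req, res, next);
    };
    next();
  };
}

// Usage: a logging step chained into a route handler.
const handle = chain(
  (req, res, next) => { req.logged = true; next(); }, // middleware
  (req, res) => { res.body = `hello ${req.url}`; },   // route handler
);

const req = { url: '/users/42' };
const res = {};
handle(req, res);
// res.body now holds the route handler's response text
```

In real Express, `app.use()` and route methods register handlers into exactly this kind of chain, which is why filling in the blanks feels so productive.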
Express Gateway builds on the success of Express.js—its rich ecosystem of middleware modules and its ubiquitous understanding among developers—and makes it cloud-ready and cloud-native by separating out the configuration, metadata and conditionality into a declarative, dynamic layer. By doing this, Express Gateway can run in containers and orchestrators in a distributed way without being coupled to any given tier of the application.
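As an illustration of that declarative separation, Express Gateway keeps its routing and policy wiring in a config file rather than in code. The following is a simplified sketch in the shape of its `gateway.config.yml`; the endpoint names, port and backend URL are made up for illustration:

```yaml
# Simplified, illustrative Express Gateway configuration sketch.
http:
  port: 8080
apiEndpoints:
  api:
    host: '*'
    paths: '/api/*'
serviceEndpoints:
  backend:
    url: 'http://localhost:3000'   # hypothetical upstream microservice
policies:
  - proxy
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      - proxy:
          - action:
              serviceEndpoint: backend
```

Because the wiring is declarative, it can be changed dynamically without redeploying the gateway's code.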
Q: Node.js and Kubernetes are a popular combination, but many developers and companies are confused about where to start or how it will affect their tech stack. Can you explain the top roadblocks you’re seeing and how companies can overcome them?
Al: Roadblock #1: Where should you extend Kubernetes, and where should you extend your Node.js application?
I think this entangled mess is actually not as bad as it’s often perceived to be. If it’s a core part of your application and you’ve written it in Node.js, then it should be completely independent of where and how you run it. Before the wide adoption of containers, many Node.js applications were run in process managers. Many are still running in process managers within a container. In short, if it’s a core piece of your application logic, it belongs in Node.js. If it concerns how and where you run that logic, you’re looking at configuration or extensions within Kubernetes.
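A sketch of where that dividing line lands in practice: the application logic lives inside the Node.js image, while the "how and where" (replica count, runtime settings) lives in Kubernetes configuration. The image name, labels and environment variable below are illustrative assumptions:

```yaml
# Illustrative Kubernetes Deployment: app logic stays in the image,
# operational concerns stay in this manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app            # hypothetical name
spec:
  replicas: 2               # "where and how" decision, not app code
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: example/node-app:1.0   # hypothetical image
          env:
            - name: LOG_LEVEL           # runtime config, kept out of code
              value: info
```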
Roadblock #2: Should I use Node.js to orchestrate my application or leave that to Kubernetes and why?
Orchestration means wiring up your application at the infrastructure level. Leave this piece to Kubernetes, which can expose and manage your microservices truly as “services.” When you’re writing Node.js applications, you shouldn’t have to worry about the “I/O” of how different Node.js processes interoperate. The whole premise behind microservices is that distinct lines are drawn where processes must interoperate without assuming the other process is even up and running.
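That wiring-at-the-infrastructure-level can be sketched with a Kubernetes Service manifest: the Node.js process just listens on a port, and Kubernetes handles discovery and routing between processes. The service name, labels and ports below are illustrative:

```yaml
# Illustrative Kubernetes Service: orchestration-level wiring that the
# Node.js code never has to know about.
apiVersion: v1
kind: Service
metadata:
  name: users-service       # hypothetical name
spec:
  selector:
    app: users              # matches the pods running the Node.js process
  ports:
    - port: 80              # what other services call
      targetPort: 3000      # what the Node.js process listens on
```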
Roadblock #3: What if I have a component that has both infrastructure AND application concerns?
This is a tough one where a lot of companies, large and small, are feeling pain. Kubernetes has a notion of an ingress controller, which is really nothing more than routing and exposure of external requests to a resource designated internally to handle them somewhere within the Kubernetes cluster. This raises the question: Should you put application-level concerns on that ingress controller?
I believe in a clean separation of concerns, even if that means having some runtime overhead. If your business requirements don’t require super low latency or other cases that drive you to blur the lines between infrastructure and application, then it’s much easier to maintain if there is a clean separation. This is really no different than slicing and dicing your application into smaller microservices. It’s just executed in your stack at a horizontal rather than vertical level. Build an ingress controller that is only concerned about routing to the right resource. Then build a separate API gateway that sits in front of the microservice(s) to take care of application-level concerns.
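A sketch of that split as Kubernetes resources: an Ingress that only routes, forwarding everything to a separate API gateway Service that owns the application-level concerns. The names, path and port are illustrative assumptions:

```yaml
# Illustrative Ingress: routing only, no application logic.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge-routing            # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-gateway   # separate gateway handles auth,
                port:               # rate limiting, etc., in front of
                  number: 8080      # the microservices
```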