At its annual user conference in Portland, Oregon, in September, NGINX delivered releases focused on the application platform market and on supporting the delivery of microservices architectures:
- NGINX Plus, an application delivery controller that combines a load balancer, content cache and web server.
- NGINX Controller, a centralized management and monitoring platform for NGINX Plus, which orchestrates the delivery of applications across multiple environments, enabling companies to continuously deploy and update applications using tested and proven policies.
- NGINX Unit, a multi-language server for applications, designed to work in highly dynamic environments. It features a full REST API that can be fully automated and used to deploy new application versions with no service disruption. It currently supports PHP, Python and Go with more language support coming soon.
- The NGINX Web Application Firewall (WAF), powered by ModSecurity, which protects web applications against various Layer 7 attacks and provides DDoS mitigation, real-time blacklisting and audit logging.
In addition to launching the NGINX Application Platform, the company has added the ability to use NGINX Plus as a Kubernetes Ingress Controller. Based on the open-source NGINX Ingress Controller for Kubernetes, this new feature enables the deployment of applications within Kubernetes and Red Hat OpenShift anywhere across a cluster so they can be reached by outside traffic.
While these products combine to deliver a complete and manageable microservices solution, the most interesting of them is NGINX Unit. According to Owen Garrett, head of products, the complement of tools facilitates management of both north/south and east/west traffic for cloud-native applications. Unit itself is small, lightweight, fast, polyglot and programmable through an API, and it works with other Unit instances to deliver a service mesh.
The Unit features of particular interest to businesses deploying microservices are those that foster continuous delivery. For example, when the router accepts a new configuration from the Controller process, the worker threads begin handling new incoming connections under the new configuration, while existing connections continue to be processed under the previous one. In other words, router worker threads can work simultaneously with several generations of configuration.
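Reconfiguration of this kind is driven entirely through Unit's REST API: a new JSON document is pushed to the control socket and takes effect without dropping in-flight requests. A minimal illustrative configuration might look like this (the application name, port and paths here are hypothetical):

```json
{
    "listeners": {
        "*:8300": { "application": "blogs" }
    },
    "applications": {
        "blogs": {
            "type": "php",
            "processes": 20,
            "root": "/www/blogs/scripts"
        }
    }
}
```

Pushing an updated document to the control API (for example, `curl -X PUT --data-binary @conf.json --unix-socket /path/to/control.unit.sock http://localhost/config`, where the socket path depends on the installation) is what triggers the graceful handover described above: new connections see the new configuration while old connections finish under the old one.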
Additionally, Unit uses shared interprocess memory to communicate with the applications, allowing it to route HTTP requests with greater agility. Rather than having each application listen on the network directly, applications delegate network handling to Unit's router, which provides for increased scalability. The router's worker threads accept client requests, pass them to the application processes, collect the responses and send them back to the clients. Each worker thread polls epoll or kqueue and can asynchronously work with thousands of simultaneous connections.
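The pattern described above, one thread multiplexing thousands of connections through epoll or kqueue, can be sketched in a few lines of Python. The `selectors` module picks the best available polling mechanism (epoll on Linux, kqueue on BSD/macOS). This is an illustration of the event-loop technique only, not Unit's actual implementation; the uppercasing echo handler stands in for "pass the request to the application and return its response":

```python
import selectors
import socket

# DefaultSelector uses epoll on Linux and kqueue on BSD/macOS.
sel = selectors.DefaultSelector()

def accept(server_sock):
    """Register a newly accepted, non-blocking connection with the selector."""
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    """Read a request, return a response; unregister when the peer closes."""
    data = conn.recv(4096)
    if data:
        # Stand-in for handing the request to an application process
        # and relaying the application's response back to the client.
        conn.sendall(data.upper())
    else:
        sel.unregister(conn)
        conn.close()

def serve(port, rounds):
    """Run a bounded number of polling rounds on a single thread."""
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    for _ in range(rounds):
        # One poll wakes the thread only for sockets that are ready,
        # so a single thread can service many connections concurrently.
        for key, _mask in sel.select(timeout=1):
            key.data(key.fileobj)
```

Because the thread sleeps in `select()` until some socket is ready, it never blocks on any single slow client, which is what lets one worker thread service thousands of simultaneous connections.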
— Alan Shimel