Announcing NGINX Ingress Controller for Kubernetes Release 1.4.0

We are pleased to announce release 1.4.0 of NGINX Ingress Controller for Kubernetes. This represents a milestone in the development of our supported solution for Ingress load balancing on Kubernetes platforms, including Amazon Elastic Container Service for Kubernetes (EKS), Diamanti, Google Kubernetes Engine (GKE), IBM Cloud Private, Microsoft Azure Container Service (AKS), Red Hat OpenShift, and others.

Release 1.4.0 includes:

  • Support for TCP and UDP load balancing, configured through the new stream-snippets ConfigMap key
  • Extended Prometheus support, now covering NGINX Open Source as well as the new TCP/UDP metrics in NGINX Plus
  • Easy development of custom annotations, implemented in the NGINX configuration templates
  • Support for the new Random with Two Choices load‑balancing algorithm, which is now the default method
  • Additional features, including a debugging mode and access restrictions for the status endpoints

Each of these features is described in more detail below. The complete changelog for release 1.4.0, including bug fixes, improvements, and changes, is available on GitHub.

From this release onward, we will also make an “edge release” available as nginx/nginx-ingress:edge. Built from the latest commit on the main branch, the edge release is intended for users who wish to experiment with the latest features in a non‑production or non‑critical environment.

What Is NGINX Ingress Controller for Kubernetes?

NGINX Ingress controller for Kubernetes is a daemon that runs alongside NGINX Open Source or NGINX Plus instances in a Kubernetes environment. The daemon monitors Ingress resources – requests for external access to services deployed in Kubernetes. It then automatically configures NGINX or NGINX Plus to route and load balance traffic to these services.

Multiple NGINX Ingress controller implementations are available. The official NGINX implementation is high‑performance, production‑ready, and suitable for long‑term deployment. Compared to the community NGINX‑based offering, we focus more on maintaining stability across releases than on feature velocity. We provide full technical support to NGINX Plus subscribers at no additional cost, and NGINX Open Source users benefit from our focus on stability and supportability.

NGINX Ingress Controller 1.4.0 Features in Detail

Support for TCP and UDP Load Balancing

Kubernetes Ingress resources are, by design, HTTP‑centric. They do not provide a natural way for users to configure load balancing for non‑HTTP protocols: TCP‑based protocols such as database access or MQTT, or UDP‑based protocols such as DNS or media streaming.

On the other hand, NGINX is widely used to load balance TCP connections and UDP sessions, alongside HTTP and related protocols. TCP and UDP load balancing is configured in the stream{} block of the NGINX configuration. In release 1.4.0, you can now use the new stream-snippets ConfigMap key to insert configuration into this block.

The configuration differs slightly between NGINX Open Source and NGINX Plus:

  • NGINX Open Source does not support dynamic reconfiguration of upstream groups, so it is configured to proxy traffic to the single virtual IP address managed by kube-proxy for the service.
  • NGINX Plus supports dynamic reconfiguration. Configure the service to be headless; NGINX Plus is configured to resolve the service name to determine the addresses and ports of each of the pods backing the service.
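
Before looking at the NGINX Plus configuration, it helps to see the headless service itself. A minimal sketch might look like the following (the pod selector label app: syslog is an assumption for illustration):

```yaml
# Headless Service for the syslog example: clusterIP: None tells
# Kubernetes to publish the individual pod addresses in DNS instead
# of a single virtual IP, which is what NGINX Plus resolves.
apiVersion: v1
kind: Service
metadata:
  name: syslog-headless
  namespace: default
spec:
  clusterIP: None
  selector:
    app: syslog          # assumed pod label
  ports:
    - name: syslog       # gives the DNS SRV record its _syslog service name
      protocol: UDP
      port: 514
```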

The following example illustrates how to configure an NGINX Plus‑based Ingress controller to load balance a UDP‑based protocol. The service is implemented using the DNS name syslog-headless.default.svc.cluster.local.

A simple NGINX Plus configuration looks like the following:

resolver kube-dns.kube-system.svc.cluster.local valid=5s;

upstream syslog-udp {
    zone syslog-udp-zone 64k;
    server syslog-headless.default.svc.cluster.local service=_syslog._udp resolve;
}

server {
    listen 514 udp;
    proxy_pass syslog-udp;
    proxy_responses 0;
    status_zone syslog-udp;
}

You then embed this configuration in a ConfigMap as follows:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  stream-snippets: |
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    upstream syslog-udp {
        zone syslog-udp-zone 64k;
        server syslog-headless.default.svc.cluster.local service=_syslog._udp resolve;
    }

    server {
        listen 514 udp;
        proxy_pass syslog-udp;
        proxy_responses 0;
        status_zone syslog-udp;
    }

You can of course embed any NGINX configuration in the stream-snippets ConfigMap, so you have full access to the entire set of NGINX or NGINX Plus capabilities to manage TCP and UDP traffic, such as health checks, authentication, and custom access logging.

Extended Prometheus Support

NGINX Open Source

The Prometheus Exporter now supports NGINX Open Source as well as NGINX Plus. It can now query the stub_status API in NGINX Open Source and provide the API metrics to Prometheus.

The stub_status API is published locally on port 8080. If you wish to connect to the stub_status API remotely, you can use kubectl to port‑forward traffic to port 8080, as described in the updated installation documentation.
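
For example (a sketch; <nginx-ingress-pod> is a placeholder for the actual pod name, and the /stub_status path is assumed to be where the page is exposed):

```shell
# Forward local port 8080 to port 8080 on the Ingress controller pod
kubectl port-forward <nginx-ingress-pod> 8080:8080 --namespace=nginx-ingress

# In another terminal, query the stub_status page
curl http://localhost:8080/stub_status
```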


NGINX Plus

Accompanying the new support for TCP and UDP load balancing, the Prometheus Exporter has been updated to collect and export TCP/UDP metrics from NGINX Plus. These provide insight into TCP and UDP load‑balancing performance, health checks, server response times, and much more.

Note that to collect this data you must include the status_zone directive in the TCP/UDP configuration.

Easy Development of Custom Annotations

Annotations are used to add arbitrary additional data to a Kubernetes resource. NGINX Ingress Controller recognizes a number of these annotations, and uses them to define additional behavior such as JWT validation or performance‑tuning settings. These annotations are built into the Ingress controller application; adding a new built‑in annotation means modifying the code and rebuilding the application.

In release 1.4.0, you can also specify the implementation of annotations in the template that is used to generate the NGINX configuration. This makes it much easier to enrich the NGINX configuration without having to implement the annotation in Go and rebuild the Ingress controller application. You can further use the custom templates introduced in release 1.3.0 to apply these annotation implementations to a running NGINX Ingress Controller.

The capability makes it much simpler to develop custom NGINX configuration templates for features such as caching or authentication. Your application teams can then enable and tune these features easily, using the annotations you have defined.
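
As a sketch of how this fits together (the annotation name custom.nginx.org/cache-valid and its handling are purely illustrative, not built into the product), an application team could attach a custom annotation to an Ingress resource:

```yaml
# Illustrative Ingress resource carrying a hypothetical custom annotation.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    custom.nginx.org/cache-valid: "5m"   # hypothetical annotation
spec:
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: webapp-svc
              servicePort: 80
```

In a custom configuration template, the annotation value can then be referenced with Go templating, for example `proxy_cache_valid 200 {{ index $.Ingress.Annotations "custom.nginx.org/cache-valid" }};`, following the pattern shown in the Custom Annotations documentation.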

For more information, check out the Ingress Controller Custom Annotations documentation, and see it in action with the Custom Annotations rate limit example.

Support for the New Random with Two Choices Load‑Balancing Algorithm

In NGINX Plus R16 and open source NGINX 1.15.1 we added a new method that is particularly suitable for distributed load balancers. The algorithm is referred to in the literature as “power of two choices”, because it was first described in Michael Mitzenmacher’s 1996 dissertation, The Power of Two Choices in Randomized Load Balancing. “Power of two choices” avoids the undesirable “herd” behavior that traditional best‑choice algorithms such as Least Connections can exhibit when there are multiple load balancers, each with incomplete and inconsistent views of the cluster.

In NGINX and NGINX Plus, “power of two choices” is implemented as a variation of the Random load‑balancing algorithm, so we also refer to it as Random with Two Choices.

Random with Two Choices is the new default load‑balancing method for NGINX Ingress Controller. You can tune the load‑balancing method using the nginx.org/lb-method annotation or the lb-method ConfigMap key.
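
For example (a sketch reusing the ConfigMap names from the stream-snippets example above), you could switch back to Least Connections by setting the lb-method key:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  lb-method: "least_conn"   # overrides the Random with Two Choices default
```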

The blog post NGINX and the “Power of Two Choices” Load‑Balancing Algorithm describes the algorithm in more detail, comparing it to other load‑balancing algorithms.
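
The intuition can also be demonstrated outside NGINX with a short, self-contained Python sketch (illustrative only, not part of any NGINX product): pick two backends uniformly at random and route to the one with fewer active connections.

```python
import random

def two_choices(active_connections):
    """Return the index of the less-loaded of two randomly chosen backends."""
    a, b = random.sample(range(len(active_connections)), 2)
    return a if active_connections[a] <= active_connections[b] else b

# Simulate 10,000 assignments across 10 backends; each assignment
# permanently increments the chosen backend's connection count.
random.seed(42)
counts = [0] * 10
for _ in range(10_000):
    counts[two_choices(counts)] += 1

print(counts)  # the load stays nearly even, with no global scan of backends
```

Each decision inspects only two backends, so no load balancer needs a complete, consistent view of the whole cluster, which is exactly why the method suits distributed load balancers.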

Additional Features

  • Support for debugging – You can run NGINX Ingress Controller with the nginx-debug binary using the -nginx-debug CLI flag. This allows for troubleshooting problems and advanced debugging of both NGINX and NGINX Plus.
  • Restricting access to the dashboard/stub_status – A new command‑line parameter -nginx-status-allow-cidrs can be used to define which IP addresses and subnets can access the stub_status (NGINX open source) and api (NGINX Plus) URIs. By default, only connections from localhost are allowed.
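
These flags are passed as arguments to the Ingress controller container in its Deployment or DaemonSet manifest; a sketch (the CIDR value is an example):

```yaml
# Fragment of the Ingress controller container spec (illustrative values):
args:
  - -nginx-debug                             # run the nginx-debug binary
  - -nginx-status-allow-cidrs=10.0.0.0/8     # subnets allowed to reach stub_status/api
```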

Getting Started with the Ingress Controller

If you’d like to find out more about NGINX Ingress Controller, check out these resources:

NGINX Ingress Controller for Kubernetes supports both NGINX Open Source and NGINX Plus, and is a supported alternative to the community Ingress controller. A feature comparison of the two controllers is available in our documentation.

The main design goal of NGINX Ingress Controller is to maintain performance and compatibility across releases. We provide full technical support to NGINX Plus subscribers at no additional cost, and open source users also benefit from the focus on long‑term stability and supportability.

To try NGINX Ingress Controller with NGINX Plus and NGINX App Protect, start your free 30-day trial today or contact us to discuss your use cases.

To try NGINX Ingress Controller with NGINX Open Source, you can obtain the release source code, or download a prebuilt container from Docker Hub.

Owen Garrett


Owen is a senior member of the NGINX Product Management team, covering open source and commercial NGINX products. He holds a particular responsibility for microservices and Kubernetes‑centric solutions. He’s constantly amazed by the ingenuity of NGINX users and still learns of new ways to use NGINX with every discussion.


F5, Inc. is the company behind NGINX, the popular open source software. We offer a suite of technologies for developing and delivering modern applications. Our joint solutions bridge the gap between NetOps and DevOps, delivering multi‑cloud application services from code to customer. Visit our website to learn more.