Web Server Load Balancing with NGINX Plus
How long would it take your organization to deploy a change that involves just one single line of code?
– Lean software development guru Mary Poppendieck

In many organizations today, manual processes slow down the deployment and management of applications. They create extra work for developers and operations teams, cause unnecessary delays, and increase the time it takes to get new features and critical bug and security fixes into the hands of customers. Automating common tasks – using tools, scripts, and other techniques – is a great way to improve operational efficiency and accelerate the rollout of new features and apps.

The potential improvements to productivity with automation are impressive. With the proper components in place, some companies have been able to deploy new code to production more than 50 times per day, creating a more stable application and increasing customer satisfaction.

High‑performing DevOps teams turn to NGINX Open Source and NGINX Plus to build fully automated, self‑service pipelines that developers use to effortlessly push out new features, security patches, bug fixes, and whole new applications to production with almost no manual intervention.

This blog covers three methods for automating common workflows using NGINX and NGINX Plus.

[Editor – For more information on this topic, watch our on‑demand webinar, 3 Ways to Automate App Deployments with NGINX and read the summary, DevOps Automation with NGINX and NGINX Plus, on our blog.]

Method 1 – Pushing New App Versions to Production

[Editor – This section has been updated to use the NGINX Plus API, which replaces and deprecates the separate dynamic configuration module originally discussed here.]

Releasing a new version is one of the most common occurrences in the lifecycle of any software application. Common reasons for an update are introducing a new feature or fixing bugs in existing functionality. Updating becomes much simpler and less time‑consuming when it’s automated, as you can ensure that the same process is happening on all your servers simultaneously, and you can define rollback procedures or fallback code.

To facilitate automation, the NGINX Plus API enables dynamic configuration of upstream server groups. With the API, you can modify an upstream group of servers by adding or removing instances as part of a deployment script. The changes are reflected in NGINX Plus immediately, which means you can simply add a line that makes a curl request to your NGINX load balancer as a final step in your deployment script, and the servers are updated automatically.

To enable the NGINX Plus API, create a separate location block in the configuration block for a virtual server and include the api directive. We’re using the conventional name for this location, /api. We want to restrict use of the interface to administrators on the local LAN, so we also include allow and deny directives:

server {
    listen 8080;          # Listen on a local port, which can be protected by a firewall

    location /api {
        api write=on;
        allow 10.0.0.0/8; # Allow access only from the LAN (substitute your LAN's address range)
        deny all;         # Deny everyone else
    }
}

The API requires a shared memory zone to store information about an upstream group of servers, so we include the zone directive in the configuration, as in this example for an upstream group called backend:

upstream backend {
    zone backend 64k;
}

Using the NGINX Plus API

Let’s say we’ve created a new server to host the updated version of our application and want to add it to the backend upstream group. We can do this by running curl with the POST method, passing the new server’s address in the request body. (For version, substitute the current version of the NGINX Plus API, which is listed in the API module’s reference documentation.)

# curl -X POST -d '{"server":"new-server-address"}' http://localhost:8080/api/version/http/upstreams/backend/servers

Then we want to remove the server running the previous application version from service. Abruptly ending all of its connections would result in a bad user experience, so instead we first drain sessions on the server. To identify which server to drain, we need to know its internal ID. We pipe the curl output to the jq utility to filter it down to just the address and ID of each server:

# curl -s http://localhost:8080/api/version/http/upstreams/backend | jq -c '.peers[] | {server, id}'
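In a deployment script you would want to extract the ID for a specific server rather than read it off the screen. A minimal sketch using jq’s select() – with a hand-written stand-in for the live API response, and hypothetical addresses – might look like this:

```shell
# Stand-in for the output of:
#   curl -s http://localhost:8080/api/version/http/upstreams/backend
# In a real script, capture the live response instead.
response='{"peers":[{"id":0,"server":"10.0.0.1:80"},{"id":2,"server":"10.0.0.2:80"}]}'

# Look up the internal ID of the server we intend to drain
old_server="10.0.0.2:80"
id=$(echo "$response" | jq -r --arg s "$old_server" '.peers[] | select(.server == $s) | .id')
echo "$id"   # → 2
```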

We can match the address of the server running the old application version to its entry in the output, which shows us that its ID is 2. We identify the server by that ID in the command that puts it into the draining state:

# curl -X PATCH -d '{"drain":true}' localhost:8080/api/version/http/upstreams/backend/servers/2

We can track the number of active connections to the draining server in the JSON response object from the NGINX Plus API, where id is the same ID number for the server that we retrieved above. When there are no more connections to the server, we run this curl command to remove it completely from the upstream group:

# curl -X DELETE localhost:8080/api/version/http/upstreams/backend/servers/2
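The wait for connections to drain can itself be scripted by polling the API. Here is a minimal sketch, assuming jq is available: the helper reads an API response on stdin, and the sample JSON stands in for a live call to the upstreams endpoint:

```shell
# Print the active connection count for the peer with the given ID,
# reading an NGINX Plus API upstream response on stdin
active_for_id() {
    jq -r --argjson id "$1" '.peers[] | select(.id == $id) | .active'
}

# Demo with a stand-in response; in a real script you would pipe in
#   curl -s http://localhost:8080/api/version/http/upstreams/backend
# inside a loop, sleeping until the count reaches 0, then send the DELETE.
echo '{"peers":[{"id":2,"server":"10.0.0.2:80","active":0}]}' | active_for_id 2   # → 0
```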

This is just a sample of what you can do with the API, and you can script workflows that use the API to fully automate the release process. For more details, see our three‑part blog series, Using NGINX Plus for Backend Upgrades with Zero Downtime.

Method 2 – Automated Service Discovery

A modern microservices‑based application can have tens or even hundreds of services, each with multiple instances that are updated and deployed multiple times per day. With large numbers of services in deployment, it quickly becomes impossible to manually configure and reconfigure each service every time you deploy a new version or scale up and down to handle fluctuating traffic load.

With service discovery, you shift the burden of configuration to your application infrastructure, making the entire process a lot easier. NGINX and NGINX Plus support several service discovery methods for automatically updating a set of service instances.

NGINX Plus can automatically discover new service instances and help cycle out old ones using a familiar DNS interface. In this scenario, NGINX Plus pulls the definitive list of available services from your service registry by requesting DNS SRV records. DNS SRV records contain the port number of the service, which is typically dynamically assigned in microservice architectures. The service registry can be one of many service registration tools such as Consul, SkyDNS/etcd, or ZooKeeper.
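For illustration, the SRV records served by the registry might look like the following in BIND zone-file syntax; the hostnames, ports, and TTL here are hypothetical:

```
; Two instances of my_service, each with a dynamically assigned port
; Fields after SRV: priority, weight, port, target host
_http._tcp.my_service.example.com. 30 IN SRV 1 1 8080 app1.example.com.
_http._tcp.my_service.example.com. 30 IN SRV 1 1 8090 app2.example.com.
```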

In the following example, the service=http parameter to the server directive configures NGINX Plus to request DNS SRV records for the servers in the my_service upstream group. As a result, the application instances backing my_service are discovered automatically.

http {
    resolver dns-server-ip;

    upstream my_service {
        zone backend 64k;
        server hostname-for-my_service service=http resolve;
    }

    server {
        # ...
        location /my_service {
            # ...
            proxy_pass http://my_service;
        }
    }
}

With NGINX Open Source, you can use a generic configuration templating tool such as consul-template. When it detects new service instances, consul-template generates a new NGINX configuration that includes them and gracefully reloads NGINX. For a detailed example of this solution, see this blog by Shane Sveller at Belly.
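As a rough sketch of that approach, a consul-template template along these lines renders the instances of my_service registered in Consul into an upstream block; the file names and service name are assumptions:

```
# upstream.conf.ctmpl – rendered by consul-template whenever membership changes
upstream my_service {
    zone backend 64k;
    {{ range service "my_service" }}
    server {{ .Address }}:{{ .Port }};
    {{ end }}
}
```

You would then run something like consul-template -template "upstream.conf.ctmpl:/etc/nginx/conf.d/upstream.conf:nginx -s reload", so that each rendered change triggers a graceful reload.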

For more details on DNS‑based service discovery with NGINX Plus, see Using DNS for Service Discovery with NGINX and NGINX Plus on our blog.

Method 3 – Orchestration and Management

Large enterprises might deploy many instances of NGINX or NGINX Plus in production to load balance application traffic. It quickly becomes difficult to manage the configuration of each one individually, and this is where DevOps tools for configuration and management – such as Ansible, Chef, and Puppet – come into play. These tools allow you to manage and update configuration at a central location, from which changes are pushed out automatically to all managed nodes.
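As a sketch of what central management can look like, a minimal Ansible playbook along the following lines pushes a centrally maintained configuration to every NGINX node and reloads each one; the host group and file paths are assumptions:

```yaml
# deploy-nginx-conf.yml – push the central configuration, then reload
- hosts: nginx_servers
  become: true
  tasks:
    - name: Install the centrally managed NGINX configuration
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload NGINX
  handlers:
    - name: Reload NGINX
      ansible.builtin.service:
        name: nginx
        state: reloaded
```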

NGINX Plus has several points of integration – including Chef recipes and Puppet manifests – that help automate your processes and facilitate infrastructure and application management.

[Editor – The Chef recipes and Puppet manifests from the blog posts mentioned above might not work perfectly with the latest releases of the tools, but you can still use them for guidance.]

Bonus Method – Push‑Button Deployments with Jenkins

Jenkins is a popular open source CI/CD tool that becomes even more powerful when combined with NGINX and NGINX Plus. DevOps teams can simply check the desired NGINX configuration changes into GitHub, and Jenkins pushes them out to the production servers.
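A declarative Jenkins pipeline for such a workflow might be as simple as the following sketch; the validation command and deploy script are assumptions about the repository layout, not part of Jenkins itself:

```groovy
// Jenkinsfile – run by Jenkins when a configuration change lands in GitHub
pipeline {
    agent any
    stages {
        stage('Validate') {
            steps {
                // Test the checked-in configuration before touching production
                sh 'nginx -t -c "$WORKSPACE/nginx.conf"'
            }
        }
        stage('Deploy') {
            steps {
                // Push the validated configuration to the production servers
                sh './scripts/push-config.sh'
            }
        }
    }
}
```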

Our recent Bluestem Brands case study includes a great example of the combination in action. Using Jenkins, Bluestem Brands automates every aspect of deployment and keeps everything in sync: when developers update code, GitHub triggers a Jenkins build. After the code updates are deployed to the upstream application instances, a simple API call ensures that new instances with a clean cache and the latest code are handling requests.


Maintaining large‑scale deployments is always challenging, but automating the process can alleviate some of the difficulty. Automation allows you to create testable procedures and codify important processes, ensuring that everyone on the DevOps team is on the same page. Numerous NGINX users have told us how automation with service discovery has reduced complexity, eliminated bugs caused by manual processes, and made it easier to roll out new functionality into production.

Both NGINX Plus and NGINX Open Source provide multiple integrations for updating your DevOps workflow and automating your architecture, whether you’re launching your first instance or managing hundreds of them.

To learn more, watch the webinar, 3 Ways to Automate App Deployments with NGINX.

Faisal Memon



F5, Inc. is the commercial company behind the popular open source software NGINX. We offer a full suite of technologies for developing and delivering modern applications. Our joint solutions bridge the divide between NetOps and DevOps, providing multi-cloud application services that extend from code to customer. Visit our website to learn more.