Web Server Load Balancing with NGINX Plus

In the past, achieving a stable, high‑performance application required specialized, proprietary hardware load balancers. These appliances sat in front of applications and web servers, and used policies and algorithms to distribute traffic in an optimized way to backend servers.

However, as processor speeds continue to improve and as modern software architectures become the choice of many organizations undergoing digital transformation, software‑based load balancers can provide a good alternative to these specialized, on‑premises appliances.

When leveraged correctly, software‑based load balancers are key to enabling DevOps, microservice application architectures, and cloud‑native, hybrid, and multi‑cloud deployments.

Here are five reasons why we at Vizuri often implement software load balancers for our customers when modernizing their systems.

Reason 1: Enhance Agility

Hardware‑based load balancers are expensive, so most organizations use just one (or a few) of them to load balance traffic for all of the many applications hosted in a data center. This can make them a bottleneck for the delivery of new software releases – policy and routing changes needed for a new application version might adversely affect other applications, so the NetOps team needs time to test them carefully.

Software‑based load balancers, on the other hand, are both inexpensive and lightweight enough to be deployed on a per‑application basis. This gives control to the people (a DevOps team, say) closest to the requirements of the application. They can update their application as often as they want and choose the most appropriate automation tools for scripting and controlling those deployments, without worrying about the effect on other teams and their applications. This independence significantly speeds the pace of development.

Modern enterprise software‑based load balancers also provide services such as performance monitoring, active health checks, session persistence, and more. Application teams can custom configure these services to fit their particular application, further improving its stability and availability.
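As an illustration, a minimal NGINX Plus configuration fragment enabling active health checks and cookie-based session persistence for an upstream group might look like the sketch below (server addresses, ports, and the cookie name are hypothetical):

```nginx
upstream app_backend {
    zone app_backend 64k;       # shared-memory zone for runtime state
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;

    # NGINX Plus: pin each client to one backend via a tracking cookie
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;

        # NGINX Plus: actively probe backends and take failed ones
        # out of rotation until they pass again
        health_check interval=5 fails=3 passes=2;
    }
}
```

The `zone` directive is required for active health checks, since NGINX Plus worker processes share backend state through it.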

Reason 2: Facilitate Modernization

Hardware‑based load balancers were initially designed to handle traffic between clients and servers (“north‑south” traffic). Modern software architectures like microservices and containers generate large amounts of traffic among services (“east‑west” traffic) that hardware load balancers are not well suited to handle. This traffic can strain hardware load balancers and ultimately creates a bottleneck when the hardware’s throughput limit is reached.

Acting as reverse proxies, software‑based load balancers are ideal for east‑west traffic, where they handle interservice authentication and connections. Their low cost and light weight mean you can pair a separate reverse proxy with each service, for example as a sidecar proxy in a service mesh.
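As a sketch, a sidecar-style reverse proxy for east‑west traffic might place NGINX in front of a single service instance and use mutual TLS for interservice authentication (the certificate paths and service port here are hypothetical):

```nginx
# Sidecar proxy fronting one service instance; peer services call the
# proxy, which terminates TLS and forwards to the co-located service.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/orders.crt;
    ssl_certificate_key /etc/nginx/certs/orders.key;

    # Require client certificates for interservice authentication (mTLS)
    ssl_client_certificate /etc/nginx/certs/mesh-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the local service
    }
}
```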

The Case of Containers

Containers are another element of modernization, one particularly suited for microservices applications because they isolate the discrete units of functionality represented by the microservices from conflicts with other software in the same environment. Software‑based load balancers can be placed in front of the applications or services, which allows these containerized services to scale independently.
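For example, when containerized replicas register behind a DNS name (the service name and resolver address below are hypothetical), NGINX Plus can periodically re-resolve that name and pick up newly scaled instances without a restart:

```nginx
resolver 10.0.0.2 valid=10s;    # DNS server for the container platform

upstream orders_service {
    zone orders_service 64k;
    # NGINX Plus: 'resolve' re-queries DNS so scaled-out containers
    # join the pool automatically
    server orders.internal.example:8080 resolve;
}

server {
    listen 80;
    location / {
        proxy_pass http://orders_service;
    }
}
```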

One of our customers leveraged containerized services to provide discrete functionalities across its operation that could be individually managed. Implementing software load balancers enabled the company to efficiently distribute traffic among these services as it scaled to meet increased demand.

Using individual hardware load balancers for the many containerized microservices we helped them implement would have been impractical. Because microservices and containers form the foundation of one of our pillars of digital modernization, we naturally gravitate toward a software‑based approach to load balancing and recommend it for organizations seeking to modernize.

Reason 3: Increase Flexibility

Before advances in x86 architecture servers (memory, processing power, and network interfaces), it was necessary to run specialized load balancers on proprietary hardware appliances in order to deliver responsive, highly available, and scalable applications. Organizations had to buy enough load‑balancing hardware to support peak load, which was often very expensive.

Modern software‑based load balancers, or application delivery controllers (ADCs), can be deployed across heterogeneous infrastructures including virtual, container, bare metal, and cloud platforms. The ability to deploy a software‑based load balancer in virtually any environment provides greater flexibility, and makes them more easily scalable and configurable.

The gap between the processing power of commodity servers and specialized hardware‑based load balancers has now narrowed to the point that software‑only load balancing can deliver the stability, efficiency, security, and availability of enterprise applications and services at levels previously possible only with specialized hardware appliances.

Reason 4: Boost Scalability

Hardware‑based load balancers can accept only a finite amount of throughput; once that limit is reached, they stop accepting new connections. Software‑based load balancers run on commodity hardware, so they can be scaled up (by moving to servers with more cores) or scaled out (by adding more servers).
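Scaling out is largely a matter of adding entries to the upstream group. A minimal sketch (server addresses hypothetical), using the standard `least_conn` balancing method:

```nginx
upstream web_pool {
    least_conn;                # send each request to the least-busy server
    server 10.0.0.21;
    server 10.0.0.22;
    server 10.0.0.23;          # scale out by adding commodity servers
    server 10.0.0.24 backup;   # held in reserve until the others fail
}
```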

Scalability is one of the major benefits of applications architected for the cloud, and software‑based load balancers are well suited to applications and services deployed there.

Without scalability, deployment to the cloud provides significantly fewer gains in agility and convenience.

Reason 5: Lower Costs

As mentioned in our customer use case, software‑based load balancers can reduce costs, especially when used in tandem with microservices. Instead of centralizing traffic on a single hardware‑based load balancer to save costs, or purchasing additional specialized appliances sized for the highest anticipated throughput (which often results in significant underutilization), organizations can run software‑based load balancers on commodity hardware, so that costs scale with utilization.

For example, if a company hits peak traffic during the holidays, its hardware‑based load balancer must be sized for that load year‑round, even if traffic runs at only 30% of the peak the rest of the time. With a software‑based load balancer, the company pays only for capacity as the traffic occurs.

To optimize costs, especially if carrying out a digital modernization strategy, organizations should consider the potential of software‑based load balancers.

Let us know if you have questions about how software load balancers can fit into your modernization strategy and help you make the most of modern software architectures.

About Vizuri

Vizuri is the commercial consulting division of AEM Corporation. It transforms business through creative software solutions and stands out for work in four core areas: business rules and process management, cloud enablement, enterprise integration and messaging, and microservices and containers.

Vizuri aligns and integrates customers’ business processes and strategies with technologies to develop solutions that show a clear return on investment. With a relentless drive to innovate, Vizuri team members receive frequent awards, speak at industry conferences, publish solutions and strategies, and share code with the open source community.

Want to learn about CI/CD and DevOps automation? Check out the Vizuri DevOps Automation Service Catalogue.




Zack Belcher

Sr. Business Development Manager


F5, Inc. is the company behind NGINX, the popular open source software. We offer a suite of technologies for developing and delivering modern applications. Our combined solutions bridge the gap between NetOps and DevOps, providing multi‑cloud application services that span from code to customer. Visit to learn more.