Web Server Load Balancing with NGINX Plus

There’s no disputing that technology and infrastructure have changed dramatically over the years. Web sites have long since evolved from “just-a-bunch-of-files-and-scripts” to an intricate mix of modular applications made up of reusable code components – all using HTTP, which has become the dominant protocol for Internet communication. And these web applications now run people’s lives. Users expect immediate response, flawless behavior, and apps that work smoothly on every device.

While the engineer in me finds this all fascinating, the technical manager in me is keenly aware of the challenges brought about by such rapid advancements and their corresponding expectations. Chief among these challenges is the issue of control. We’ve made great progress in transforming our legacy infrastructures to more fluid web architectures, but we’re still using legacy rules to delegate who controls what. That’s why, in many companies today, network teams still control application delivery – not the DevOps engineers who are building these applications and are most responsible for accelerating their performance. Needless to say, this can be problematic.

Competing Priorities and Complexity Hamper Team Performance and Efficiency

Do the following struggles sound familiar? Perhaps you’ve witnessed these situations within your own organization. If so, it’s time to consider how to remedy the tension and conflict caused by this imbalance of control and accountability.

Perhaps you’re a developer and all you want to do is add or change an application‑level access‑control list (ACL) for added protection. Or maybe you’re redirecting traffic temporarily to a standby application server. But instead of just taking care of these tasks yourself, you are required to carefully construct a request to your network operations team and (as they are likely overwhelmed with requests from all over your organization) wait for them to find a free moment to process your request.
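With a software proxy under the application team's own control, both of those changes are a few lines of configuration rather than a ticket. As a rough sketch (the addresses, network ranges, and upstream name here are invented for illustration), an NGINX-style configuration might look like:

```nginx
# Hypothetical example: an application-level ACL plus failover to a standby server.
# All addresses and names below are placeholders, not a prescribed setup.
upstream app_backend {
    server 10.0.0.11:8080;           # primary application server
    server 10.0.0.12:8080 backup;    # standby, used only when the primary is unavailable
}

server {
    listen 80;

    location /admin/ {
        allow 192.168.10.0/24;       # permit the office network
        deny  all;                   # refuse everyone else
        proxy_pass http://app_backend;
    }

    location / {
        proxy_pass http://app_backend;
    }
}
```

Because the configuration lives alongside the application, the developer who understands the traffic can make and roll back changes like these directly.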

Eventually, if you bug them enough, they might be willing to delegate some of these simple controls to you while they deal with their other (often higher‑priority) assignments. If things go wrong, however, then you’re the one to blame when outages result. All because you forgot to explicitly allow certain traffic where the vendor’s ACL has a default of “implicit deny,” or maybe you messed up the “reverse wildcard masks,” or you just applied the ACL to the wrong interface. You can’t win, and either way you look at it, it’s eating up time on both sides.

The network team retains control over application delivery in many instances because companies use hardware‑based application delivery controllers. These centralized application delivery solutions are already vulnerable as single points of failure. When you combine that risk with management by an overburdened network infrastructure team in an organization trying to move quickly in building and deploying applications, you’re taking a serious gamble.

Application developers often outnumber network infrastructure personnel by a wide margin. It’s not uncommon to hear of companies with thousands of virtualized instances, dozens of applications, a hundred people in DevOps – and only two network engineers managing a couple of big application delivery controller boxes in front of a private cloud. Obviously, the network team is uneasy about making many concurrent changes to the web acceleration layer, but they’re equally disconcerted about the application logic moving into the network.

Remember, however, that a fast network doesn’t always mean fast applications. Neither a potentially conflicting configuration line nor a kludge script on the networking box will change that. In fact, all you achieve by sending a never‑ending flow of configuration changes to the networking team is mounting resentment.

Another source of discord centers on the modern, agile development processes used by today’s application system engineers. They can no longer just build a product and then hand it off to someone else to deploy and operate; rather, they must build an iteration, deploy, and repeat the cycle. And this cycle requires full, end‑to‑end control over performance and scalability issues associated with the “heavy lifting” of HTTP. Having to manage this while negotiating who controls what slows the engineers down and can make it difficult for them to make changes that bolster performance.

Software‑Based Solutions: A New Hope

So what’s a DevOps engineer to do? You need to find a viable way to gain control over the complex functions that are critical to your work in web acceleration and application performance, while still respecting the performance and security requirements of your organization.

Well, the bad news is you can’t turn to any tools that were introduced to the market 10 or 15 years ago. They were built for big old monolithic applications (both consumer and enterprise), so they don’t lend themselves to today’s modern, distributed, loosely coupled, and composable applications. And network engineers can’t do much these days to help application engineers overcome the complexities of HTTP. They’re still primarily in charge of hardware networking appliances that are focused on packets, not applications.

Wait, you might say. What about sending Network Functions Virtualization (NFV) to the rescue? That’s a possible – but only partial – solution. To begin with, it offers predominantly Layer 2 and sometimes Layer 3/4 functions, while functions related to Layer 7 aren’t really well developed yet. The other problem with NFV and Software Defined Networking (SDN) is that, so far, they really only address the problems of commoditized network infrastructure, not the challenges of managing integrated Layer 7 functions in a DevOps environment. You need a solution that puts control of the appropriate layers in the right hands. The network team needs Layer 3/4 control, but DevOps engineers need Layer 7 control.

Last but not least, most application frameworks do not provide a good way to deal quickly and effortlessly with the complexities inherent to HTTP – concurrency handling, security, access control, and so on – or with Layer 7 traffic management in general.

What’s the answer then? If neither the network nor the application frameworks are much help with end‑to‑end web acceleration, are there software tools that can resolve the tension and fully enable that much‑needed empathy in DevOps?

Fortunately, the answer is yes. There are a few very useful tools around – HAProxy, NGINX Plus, Varnish, and Apache Traffic Server, among others – all of which have been successful in many DevOps environments. These tools combine the elegance and convenience of a compact, software‑only package, suitable for any generic operating system, with the capabilities of a web acceleration tool. And they can be deployed in a matter of seconds on any physical or virtual server instance.

Some of these tools specialize in just a subset of web acceleration techniques – such as server load balancing or SSL termination – while others (like our own NGINX Plus) cover all the main functions: Layer 7 load balancing, request routing, application and content delivery, web caching, SSL termination, on‑the‑fly compression, protocol optimization, and more.
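To make that list concrete, several of these Layer 7 functions can live in one short configuration file. The sketch below is illustrative only – the certificate paths, cache zone name, and upstream addresses are placeholders:

```nginx
# Illustrative sketch: several Layer 7 functions combined in one NGINX-style config.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

upstream app_servers {
    least_conn;                      # Layer 7 load-balancing method
    server 10.0.0.21:8080;
    server 10.0.0.22:8080;
}

server {
    listen 443 ssl;                  # SSL/TLS termination at the proxy
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    gzip on;                         # on-the-fly compression of responses

    location / {
        proxy_cache app_cache;       # web caching of upstream responses
        proxy_pass  http://app_servers;
    }
}
```

The point is less the specific directives than the fact that one compact text file, owned by the application team, covers what used to require a dedicated appliance.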

These DevOps tools follow an “application‑style” configuration model and lend themselves well to automation with tools like Puppet or Docker. They run efficiently in virtualized environments, deploy easily across cloud environments, co‑locate naturally with applications – and are, in fact, applications themselves. At the same time, these products now equal or surpass the performance of hardware‑based networking appliances, with no artificial throughput limits and at a fraction of the cost.
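As one illustration of that “application‑style” deployment model, a load‑balancer configuration can be baked into a container image just like any other application artifact. This minimal Dockerfile is a sketch – the image tag and configuration file name are examples, not prescriptions:

```dockerfile
# Example only: package a load-balancer config as a versioned container image,
# so it can be built, tested, and shipped through the same pipeline as the app.
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
```

Treating the proxy as just another image means the same CI pipeline, rollback mechanism, and configuration management apply to the web acceleration layer as to the application itself.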

Finally, and perhaps most important, they also give DevOps teams long‑awaited end‑to‑end control over HTTP performance and scalability.

DevOps Tools Put Control in the Right Hands

Adopting DevOps tools, then, is how you can realize peaceful empathy. DevOps and network infrastructure teams can co‑exist happily and cooperate more effectively when Layer 7 functions are implemented in a software‑based, scaled‑out layer – built and managed entirely by DevOps teams – while the task of moving packets, controlling QoS, and other traditional network‑bound functions remains with the network infrastructure team.

Who benefits? Everyone. The engineering organization benefits from smoother operations, faster deployment and configuration, and streamlined application development. The network team has less of a burden placed on them and retains control where necessary, while being freed to focus on their own work. Users benefit from a better application experience. And, ultimately, the company benefits from the ability to efficiently deliver a high‑performance application that addresses market expectations.

We encourage you to try this software‑based approach for yourself, and see just how much faster, more efficient, and happier your teams can be. Start your free 30-day trial of NGINX Plus today as an alternative to your hardware‑based application delivery controller, or contact us to learn more about how NGINX can help you deliver your applications with performance, security, reliability, and scale in a DevOps‑friendly way.



Andrew Alexeev

Co-founder, Product Owner for NGINX Amplify

Andrew Alexeev is a co-founder of NGINX, Inc., and currently owns the Amplify project.
Prior to joining NGINX, Andrew worked for service providers in the telecom and enterprise IT sectors.


F5, Inc. is the company behind the popular open source software NGINX. We offer a suite of technologies for developing and delivering modern applications. Our combined solutions bridge the gap between NetOps and DevOps, providing multi‑cloud application services that span from code to customer. Visit to learn more.