Web Server Load Balancing with NGINX Plus

This blog post is one of six keynotes by NGINX team members from nginx.conf 2017. Together, these blog posts introduce the NGINX Application Platform, new products, and product and strategy updates.

You may also wish to view the conference video playlist on the NGINX YouTube Channel.

Chris Stetson: Good morning everyone, and thanks for joining us here for the first demonstration of our newest product, the NGINX Controller. I’m Chris Stetson, and I’m Head of Engineering for the Controller team. Joining me is Rachael Passov, who’s one of our engineers. She’ll be doing the hard work of actually demoing the product.

We’re very excited to show you the Controller product. It’s a critical part of the application platform that Owen Garrett was just talking about.

Where NGINX was focused on a single instance, running on a single host, and managing traffic on that host, Controller focuses on managing your Layer 7 networks, integrating all of your systems holistically.

We have a lot to show you today, and it’s just a subset of what will be in the beta that we’re delivering soon. Come by the booth afterwards so we can show you the product, and get some feedback from you about what you think is important.

Now, I know you guys saw all of the smoke and ash out there in the sky, and you probably thought that was from “forest fires”. In fact, it was from the bonfire that I lit in the back parking lot as a sacrifice to the demo gods, because we’re showing an alpha product, and you never know what happens. Hopefully, they’ll be appeased, and it will all go smoothly. If it doesn’t, you know why: it’s because I didn’t give them the shrubbery.

What is the Controller? When we started, we had a number of goals we were trying to achieve: the first and most important was making NGINX easy.

Many of you probably remember the first time you tried to edit an NGINX configuration file. You were struggling with all the different contexts: the main context, the http context, the server context, and how all the directives kind of work together.
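For readers who haven't wrestled with those contexts yet, a minimal configuration shows how they nest (this is a generic sketch, not a config from the demo):

```nginx
# main context: process-level settings
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    # http context: settings shared by all virtual servers
    include       mime.types;
    default_type  application/octet-stream;

    server {
        # server context: one virtual server
        listen      80;
        server_name example.com;

        location / {
            # location context: how matching requests are handled
            root /usr/share/nginx/html;
        }
    }
}
```

Directives inherit downward, so a directive set in the http context applies to every server and location beneath it unless overridden.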

We wanted to make that super simple, but not lose all of the power that NGINX provides. Most importantly, we wanted to implement best practices by default. No matter what you deployed to your NGINX instances, it would be a high‑quality configuration.

We also wanted to abstract NGINX objects into things like a virtual load balancer so that you would be able to understand the relationship between all the components.

We wanted to show traffic flowing in and among all of your instances. And we wanted to make hard things easy, like cluster management, blue‑green deployment, and high‑speed encryption networks.

We set the bar high, and we hope you agree that we’ve achieved it. We’re going to show you what we’ve done so far. Rachael, can you walk us through what we’re looking at?

Rachael Passov: Sure. Let me log in as the admin.

When you come in in the morning, this is the screen you’d log into. I’ll do a quick overview of the layout first:

On the left, we’ve got all of the NGINX instances that are being managed by the Controller.

In the center, we’ve got virtual load balancers, routes, and backend pools.

On the right, this sidebar is the config viewer where the NGINX config will be constructed as you work.

Let’s take a closer look at each one of these things:

On the sidebar on the left, you have your instances. Each one of these grey boxes is a virtual machine running NGINX. Some are already assigned to load balancers, and some are still waiting for assignment.

In each of these boxes you've got the hostname, a cloud badge – AWS, for example, so you can see where it lives – and the IP address of the instance.

In the center, you’ve got your service groups, virtual load balancers, routes, and backend pools of microservices. This is the heart of your network; it’s where all of the data flows. Each one of these boxes is a collapsed version of the load balancers that you’re managing.

On the left is the domain node. That’s where traffic would enter into the system. It would then move to this box, and these are your service groups or your collections of load balancers.

Let’s take a closer look at one of these service groups, and expand this one.

We’ve got that it’s HTTP at the top. We’ve got our protocols here at the bottom: SSL, HTTP/2. The green box on the left – these are the ports that are accepting your inbound Layer 7 traffic. These green boxes on the right are the routes to your server pools on the backend. You can see the server pools on the far right. This right sidebar is the config viewer: as you assemble all of these elements together, this config will build out live, dynamically, as you work, to show you the configuration that you’re constructing.

Chris: You can see that we’ve built the NGINX Microservices Reference Architecture as the application that we’re load balancing to. You can see that we’re accepting traffic on port 80 and 443, so HTTP and HTTPS. We’re redirecting HTTP traffic back to HTTPS.

At the bottom, you can see where we’re accepting HTTP/2 traffic.
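The listener setup Chris describes – accept on 80 and 443, redirect plain HTTP to HTTPS, enable HTTP/2 – is conventionally expressed in NGINX along these lines (a sketch with placeholder hostnames and paths, not the Controller's generated output):

```nginx
# Redirect all plain-HTTP traffic to HTTPS
server {
    listen      80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Accept HTTPS traffic with HTTP/2 enabled
server {
    listen      443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```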

And on the right, we have all of the routes going up to the various microservices: pages for the web pages, user-manager for account information plus login and logout. We even have information about the load‑balancing algorithms that we’re using: Least Time and Least Connections.
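In NGINX configuration terms, routes map to location blocks and backend pools to upstream blocks, with the algorithm set per pool. A hedged sketch, borrowing the service names mentioned above (addresses are placeholders; Least Time is an NGINX Plus feature):

```nginx
# Backend pool using the Least Time algorithm (NGINX Plus)
upstream pages {
    least_time header;
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

# Backend pool using the Least Connections algorithm
upstream user-manager {
    least_conn;
    server 10.0.0.21:80;
    server 10.0.0.22:80;
}

server {
    listen 443 ssl http2;
    # (ssl_certificate directives omitted for brevity)

    location /pages/ {
        proxy_pass http://pages;
    }

    location /user-manager/ {
        proxy_pass http://user-manager;
    }
}
```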

Rachael’s now going to check to make sure that this system is up and running.

Rachael: I’ll go to – that’s the URL. And there it is: it’s up.

Chris: Our QA team has been hammering away on this application. In fact, I just got a Slack from them saying that it’s ready to go.

Rachael: Then we’re ready to move into production?

Chris: I believe so. Let’s do it.

In the move to production, the first thing we need to do is create a new service group that we’re going to call “Ingenious Production”, which is what Rachael is doing right now.

Rachael: Check the Advanced settings.

Chris: This is where your advanced configurations can go.

Rachael: Everything’s just like I left it.

Chris: Excellent. Success.

Now, we’re going to be using the handy‑dandy clone feature in the Ingenious Stage service group to copy all of the configuration elements over to our Ingenious Production service group. Now she’s specifying Ingenious Production as the place where the clone will go. She’s making some changes to the Domains and Domain Alias fields, so that they match the production system. We have HTTP/2, HTTPS. We’re redirecting HTTP traffic. Again, our advanced configurations are preserved.

Now, let’s take a look at the SSL certs. As you can see, we have Let’s Encrypt built in, so you can always deploy SSL with your application. We also want to show you how easy it is to upload your own certs, in case you have wildcard certs or additional authentication requirements for your system.

Right now, Rachael is uploading our production certs for NGINX Lab. She put in the cert, the private key, and the root cert.

Again, taking a look at the advanced configuration, we’re running on TLS 1.2 and a high cipher.
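“TLS 1.2 and a high cipher” corresponds to the ssl_protocols and ssl_ciphers directives. Roughly (a sketch with placeholder cert paths, not the demo’s actual config):

```nginx
server {
    listen      443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Restrict to TLS 1.2 and OpenSSL's HIGH cipher suites
    ssl_protocols TLSv1.2;
    ssl_ciphers   HIGH:!aNULL:!MD5;
}
```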

You can see our notifications are telling us that was saved. Obviously, that was quite quick and easy. It preserved all of the routing and server pools that we had built into the staging system into our new production system.

Thank you, Rachael.

Now, having a configuration does not a web server make. We need to actually assign it to some instances.

Let’s take a look at the unassigned instances we have on the left. Because this is a production system, we want some really massive, beefy servers.

Rachael: It looks like this one has an eight‑core processor at 2.4 GHz.

Chris: That’s sweet. Let’s assign three of them.

Rachael: How many of you guys have struggled to keep your NGINX configs in sync before? Now, it’s just drag‑and‑drop. There we go. Is this ready to deploy?

Chris: Let’s do it. Rachael’s going to push the config to those instances.

The config pushed. Excellent.

Rachael: We’ll smoke test the system anyway before it goes public, so we can always roll back if something goes wrong.

Chris: Rachael’s going to go to an SSH window and make sure that the system is up and running. She’s going to log in and make sure the configuration that we were looking at before is actually loaded on the server.

The more important thing is actually being able to look at it in the browser. We’ll switch back to the browser and take a look. And in fact, it did deploy.

Rachael: That was all super easy. I was able to do everything, get everything that I needed to do done, because I’m a superuser in the system.

Sometimes people like Gus Robertson, our CEO, want to be able to come into the system. They want to see what I’ve been doing. They want to be able to poke around. That scares me. I don’t want Gus to be able to go in there and muck up all the configs that I’ve worked so hard to create. So I make Gus a special account, just so that he can come in and see what’s going on.

Chris, why don’t you play the role of Gus? I’ll sign out, and you can log in as him.

Chris: Gus is a very metrics‑driven guy, and he loves to see the analytics that we have available. Let me log in as Gus. As you can see, I can see all of the configurations, all of the service groups.

Rachael: Maybe you logged in with the wrong account.

Chris: I did. I always like to log in as admin because that’s the kind of user I am. The monitoring user is actually quite limited and doesn’t allow you to access all the features and functionality. Of all of the CRUD (Create, Read, Update, Delete) operations, you get the R for read.

Most importantly, he can look at the analytics and see the system running. As you can see, our team has not started smoke testing the production system yet, but we already have CPU usage, and some NGINX current connections and requests, so he can actually see how the system is running in production.

Let’s log you back in, Rachael, and show them the rest of the functionality that we want to demo.

Okay, we have a very nice GUI. That’s a first for NGINX, and something that many of our customers have asked for for a long time. It’s particularly useful for newcomers to NGINX, who would otherwise take time to discover all the power of NGINX. And it’s a really good way to consume the analytics that our systems generate.

But most of you are power users, and you don’t want to have to use a GUI all the time. You want to be able to use a command‑line tool, and actually be able to use an API to do app integration. We have a command‑line tool that we’ll be showing at the booth a little bit later.

Right now, we want to show you the API. Rachael is bringing up our GraphQL API interface which, much like Swagger UI, gives you an easy way to interact with the API.

We chose GraphQL because it’s much more efficient at doing submissions and queries for complex object graphs, which is essentially what an NGINX network looks like. It will allow you to really supercharge your app integration with the Controller.

Rachael, let’s show them the power of GraphQL and our API.

Rachael: I’ll run this query to fetch all of the service groups. They’re there.

Chris: As you can see, it brought back all of the things: the service groups, the virtual load balancers, the routes, pools, and instances, all in one fell swoop. We do it using the ability of GraphQL to connect all of the objects and query them together.
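A query of the kind Rachael ran might look roughly like this in GraphQL. The field names here are illustrative – the Controller’s actual schema wasn’t shown in the demo – but they convey how one query walks the whole object graph:

```graphql
query {
  serviceGroups {
    name
    virtualLoadBalancers {
      routes {
        path
      }
      pools {
        instances {
          hostname
          ipAddress
        }
      }
    }
  }
}
```

Because the client names exactly the fields it wants, the nested service groups, load balancers, routes, pools, and instances come back in a single round trip instead of one REST call per object.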

What else can we do?

Rachael: I’ve got a query here to create an entire new application in one click. There we go. Looks like it created successfully. Let’s go back to the GUI and see what happened. There it is, created with one query.

Chris: Excellent. Clearly, we’ve shown you a lot today, and there’s a ton more in development. One of the coolest is the blue‑green deployment that we have in development right now. We’re going to show you an illustration of that – or maybe we’re not. It’s a very cool feature, and we will show you at the booth if you come on by.

(Applause.) Oh, the demo gods!

In any case, among the other things that we’re building into the Controller are object‑level versioning and rollback. You’ll be able to roll back individual virtual load balancers, for example. We’ll be able to import your existing NGINX configurations. We have team‑oriented, role‑based access control, so you can configure the system to limit access to just certain parts of the application.

We’re going to integrate analytics throughout the tool, so you can see upstreams, and virtual load balancers, and all the components working together, either in aggregated mode or individually. We’re building canary testing and A/B testing, global search, event logging for auditing and compliance, and a whole host of other features.

Please come by the booth after the keynotes to see our application and give us any feedback about what you’d like to see in a system like this. Thank you.

Chris Stetson

Chief Architect & Senior Director of Microservices Engineering

Chris Stetson oversees the Controller engineering team, the NGINX Microservices Reference Architecture (MRA), and the NGINX professional services team. Previously, Chris architected and developed applications, including the first versions of Sirius Satellite Radio.


F5, Inc. is the commercial company behind the popular open source software NGINX. We offer a complete suite of technologies for developing and delivering modern applications. Our combined solutions bridge the gap between NetOps and DevOps, providing multi‑cloud application services from code to customer. Visit our website to learn more.