
This blog post is one of six keynotes by NGINX team members from nginx.conf 2017. Together, these blog posts introduce the NGINX Application Platform, new products, and product and strategy updates.

Nick Shadrin: Modern applications have changed. They’ve grown from being very small websites, from simple applications designed to do one thing, to very large and complex web properties. Today, they consist of multiple parts: you create small services and then you connect them together. With the new changes in infrastructure, you’re also able to scale them up and down easily. You create new containers – new machines in the cloud – and connect them to the application. And connectivity between the application’s parts is extremely important.


With NGINX, you already know how to connect to your application. You know how the application works and how the connectivity to it works. But with the Unit project, we went further and deeper, down to the application code. Unit provides you with a platform for running the application and the application code itself.

We looked at existing solutions and found that they lacked some fundamental technologies. Many of them are big, slow, and not designed for cloud‑native applications.

NGINX Unit is built from scratch. It’s built on the core principles of NGINX engineering, by the core engineering team. Unit is an essential part of the NGINX Application Platform. It works equally well for monolithic applications and for microservices apps. It gives you a way to migrate and separate services out of old‑school applications, and a uniform way to connect not only to the applications you’re building today, but to the applications that will be built tomorrow.

Let’s talk a bit about the functionality NGINX Unit gives you. First of all, it’s a fully dynamic application server designed for cloud-native apps. What does “dynamic” mean? With NGINX, you’re familiar with the well‑known command for reload. You probably already reload frequently.

And when the reload is done right, you’re not losing connections. You’re fine – the application is working. You can continue making changes by reloading the whole server. However, reloads are sometimes taxing on the server resources, and many of our big users and customers can’t really reload as frequently as they’d like.

With Unit, the system doesn’t reload the processes, it only changes that part of its memory and the parts of the processes that are required for a particular change. What it gives you is the ability to make changes as frequently as you like.

The next thing is how it’s configured. It’s configured through a simple API. Today, everybody likes to do the API calls for configuration of servers. Every management system understands that, and we built a very easy‑to‑understand API that’s based on industry‑standard JSON.

What’s very important is that this API is available remotely. What did you do before, when you couldn’t configure a server remotely? You built a small agent – a sidecar of sorts – to perform those configuration steps.

With Unit, you can expose the API to your private networks and to your remote agents, to enable doing configuration in a very easy, native, and remote fashion.

Next, Unit is polyglot. It understands multiple languages. We support PHP, Python, and Go, and other languages are coming soon. What that gives you is the ability to run any code written in any of those languages at the same time, on the same server. But what’s even more interesting is that you can run multiple versions of the same language on the same server as well.
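For PHP and Python applications, polyglot support means Unit runs your existing code unchanged. As a minimal sketch, Unit’s Python module invokes a standard WSGI callable named `application`, so a plain WSGI app like the following (the file name and greeting text are just examples) needs no Unit‑specific imports:

```python
# wsgi.py - a plain WSGI application. Unit's Python module calls the
# standard WSGI callable named "application", so nothing Unit-specific
# is needed in the code itself.

def application(environ, start_response):
    body = b"Hello from a WSGI app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

The same file runs under any WSGI server, which is what makes moving an app onto Unit a configuration change rather than a code change.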

Have you ever migrated an old PHP application from PHP 5 to PHP 7? Now, it’s as easy as one API call. Have you ever tried running the same applications in Python 2.7 and Python 3.3? I see some people laughing in the audience. Yes, sometimes that doesn’t even work. Now, we’re giving you the same platform for running the application in the language and in the version of the language that this application understands. What’s interesting is how that’s made possible.

I’ll ask Igor Sysoev, the original author of NGINX, to the stage to talk about the architecture of NGINX Unit. Igor has an amazing quality: Igor builds applications in a fundamental way. He looks at a problem at a deeper level. He doesn’t have any preconceptions or make compromises when he’s looking at how the application can be built.

Igor, please come up on stage. Let’s talk a bit about the architecture of NGINX Unit.

Igor Sysoev: Good morning. My name is Igor Sysoev. I’m the original author of NGINX, co‑founder of the NGINX company, and architect of our new project, Unit.

Here’s the architectural scheme of Unit: all the parts are separate processes in one system. The processes are isolated for security reasons; only the main process runs as root. Other processes can be run as non‑privileged users.

The architecture is quite complex, so I’ll elaborate on the most important parts.

The key feature of Unit is dynamic configuration, and its performance is comparable to that of existing application servers.

What does “dynamic configuration” mean? It means that there is no actual configuration file at all. You interact with the Controller process with a RESTful JSON API on a UNIX domain socket or a TCP socket. You can upload the whole configuration at once, or just a part.

You can change or delete any part of the configuration, and Unit will not reload the entire configuration – only the relevant part is reloaded. This means that you can change your configuration as frequently as you want. When the Controller process accepts a request to change the configuration, it updates the full stored configuration and sends the appropriate parts to the router and main processes.

The router process has several worker threads that interact with clients. They accept clients’ requests, pass the requests to the application processes, get responses back from the applications, and send the responses back to the clients. Each worker thread runs an event loop based on epoll or kqueue and can asynchronously handle thousands of simultaneous connections.

When the Controller sends a new configuration to the router, the router’s worker threads start to handle new incoming connections with the new configuration, while existing (old) connections continue to be processed by the worker threads according to the previous configuration.

So the router worker threads can work simultaneously with several generations of configuration without reloading.

When the router receives requests for an application that has not been started yet, it asks the main process to start the application. Currently, application processes are started only on demand. Later, we will add prefork capabilities.

So the main process forks a new process, dynamically loads the required application module in the new process, sets the appropriate credentials, and then runs the application code itself in the new process.

The module system allows you to run different types of applications in one server and even different versions of PHP or Python in one server.

Go applications are different animals. A typical Go application listens on an HTTP port by itself. You have to build everything into the application, including all the networking and management features. With Unit, you can control your applications without this additional code.

In the case of a PHP or a Python application, you don’t need to change your code at all to run it with Unit. For Go applications, you have to make some tiny changes: import the Go package for Unit, change just two lines in the application’s source code (calling the Unit package instead of the standard HTTP package), and build the application with that package.

When the main process needs to run a Go application, it forks a new process and executes the statically built Go application in the new process.

The Unit package is compatible with the standard Go HTTP package, so your application can still run as a standalone HTTP server, or as part of the Unit server. When it runs standalone, it listens on an HTTP port and handles HTTP requests as usual.

When a Go application is run by Unit, it does not listen on an HTTP port. Instead, it communicates with the Unit router process, which handles all HTTP requests and communicates internally with the Go application.

The router and application processes communicate via Unix socket pairs and several shared memory segments. The socket pair and shared memory segments are private for each application process, so if an application process exits abnormally, the router process will handle this failure gracefully, and no other processes and connections will be affected.
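The isolation model can be sketched with the underlying primitive itself: a private socket pair between a parent (standing in for the router) and a forked child (standing in for an application process). This is a generic illustration of the mechanism, not Unit’s actual code:

```python
# Generic sketch of per-application isolation: each forked child gets a
# private socket pair to the parent, so a failure in one child cannot
# affect any other channel. Illustrative only - not Unit source code.
import os
import socket

router_end, app_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child ("application" process): use only its own end of the pair.
    router_end.close()
    app_end.sendall(b"response")
    app_end.close()
    os._exit(0)

# Parent ("router" process): read the child's reply, then reap it.
app_end.close()
data = router_end.recv(64)
router_end.close()
os.waitpid(pid, 0)
```

Because each pair is private to one child, closing or losing one end tells the parent exactly which application failed, without disturbing any other connection.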

So when a Go application is run by Unit, the router process handles all HTTP requests and communicates with the application internally over those socket pairs and shared memory segments.

And now, Nick Shadrin will tell you more about Unit API configurations and our future roadmap.

Nick Shadrin: Thank you, Igor.

Did I tell you that using the NGINX Unit API is easy? Well, yes, it is.

Right here, you can see a simple example of the Unit API. I want to talk a little bit about how it’s configured, and how to make changes to the environment using this API.

The first object you can define is the application object [in the applications section of the configuration]. You can give it a nice, user‑friendly name, and define the type of the application as the language and the language version. Then you can define other parameters for the application that vary based on the application type. PHP applications have some specific parameters. Go applications will have some other parameters.

You can also run each application under its own UNIX user name and group name, so applications are separated from one another for security reasons in your environment.

In addition to defining the applications, you’ll define the listeners, and listeners will be the IP addresses and/or ports for the application.

Then you specify how a particular listener binds to the application that you define. You can create many listeners and many applications, and bind them together the way you like.
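As a sketch of how listeners and applications fit together (the names, paths, and port are illustrative, and the field names follow the initial beta, so they may differ in later releases), a configuration binding one listener to one PHP application might look like this; you could upload it in a single PUT request to the control socket:

```json
{
    "listeners": {
        "*:8300": {
            "application": "blogs"
        }
    },

    "applications": {
        "blogs": {
            "type": "php",
            "workers": 20,
            "user": "www",
            "group": "www",
            "root": "/www/blogs/scripts",
            "index": "index.php"
        }
    }
}
```

The listener’s `application` field is what binds the two objects together, so many listeners can point at the same application, or different listeners at different applications.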

Now, how do you make changes? The first and easiest way is to reload the server again. You probably don’t want to do that. You can send the whole JSON payload as a PUT request to the NGINX Unit control socket, or you can make the changes one by one, accessing each object and each value separately at its own URL. We’re giving you flexibility in how you make changes.
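For instance, to change only a hypothetical PHP application named `blogs` (field names are illustrative and follow the initial beta), you could PUT just this object to its own URL, such as `/applications/blogs`, leaving the listeners and all other applications untouched:

```json
{
    "type": "php",
    "workers": 30,
    "root": "/www/blogs/scripts",
    "index": "index.php"
}
```

Smaller fragments work the same way: each nested value has its own URL under the object that contains it.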

That’s what you have now. Let’s talk a bit about the plans for this project.

Yesterday, we released NGINX Unit in open source. It’s available in public beta. We encourage everybody to try it and use it.

Our first priority right now is to give you a track record of stable releases and stable code, and we want you to be as confident in NGINX Unit as you already are with NGINX.

There’s a long list of new languages we’ll be adding to Unit, but the first languages we’re going to be working on are Java and Node.js. Once we get more languages and more contributions of different languages from the community, you’ll see that it’s really easy to extend NGINX Unit to support the application language you prefer.

Next, we’ll be adding functionality around HTTP/2 and more HTTP features. For service‑mesh and service‑to‑service communication, we’ll add proxy and networking features directly into NGINX Unit.

Yesterday, we uploaded the code to GitHub and released it publicly for everybody. We already see hundreds of comments in social media. We’re the top story on Hacker News with this project.

We have hundreds of stars. We already have pull requests and issues created in the GitHub repo. The response from the community is overwhelming, and it’s only been 24 hours since the project release.

We encourage you to go to GitHub to start going through the code, to read it, and to contribute to it. We’ll be making this software together with you, and the NGINX Unit core engineering team will work with you on the pull requests and GitHub issues.

Now, let’s see what other resources we’ve prepared so you can start working on NGINX Unit.

We uploaded the documentation at unit.nginx.org, and the code is also available in our standard Mercurial repository and a GitHub repository. You can contribute to the code either by using the well‑known process for contributing to the NGINX project, or by using the GitHub process.

Today, just after the break, we’ll have a deep‑dive session on NGINX Unit in this room. In the deep‑dive session, we won’t have any slides. We worked on a live demo of the project, and we found that showing you all of the functionality, and how to work with it, will actually take the whole session.

Be prepared to see a lot of command‑line output and a lot of new ways of running multiple PHP, Python, and Go applications in the same server. If you want to work with us in a mailing list, it is unit@nginx.org. It’s already available, and you can subscribe to it either on the Web or by sending an email to unit-subscribe@nginx.org.

What’s even more amazing for our NGINX Plus commercial users is that they already have a great channel of communication to the technical team at NGINX: NGINX Support. If you have questions about NGINX Unit, you can ask them through the same support channel you already know.

That’s what I have for you about NGINX Unit today. Let’s build the software together, let’s see how it works out, and let’s see how we can run new applications using Unit. Thank you.


About the Author

Igor Sysoev

Senior Architect

Igor Sysoev’s passion for solving the C10K problem led him to become the author of NGINX, which was initially released in 2004. NGINX’s popularity and use became so widespread that Igor co‑founded NGINX, Inc. in 2011 and served as its Chief Technology Officer. He has 15 years of experience in deeply technical roles and is a Senior Architect at F5.

About the Author

Nick Shadrin

Software Architect

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source software. We offer a suite of technologies for developing and delivering modern applications. Our joint solutions bridge the gap between NetOps and DevOps, providing multi‑cloud application services from code to end user. Visit nginx-cn.net to learn more.