Web Server Load Balancing with NGINX Plus

[Editor – This is a guest post by Enrique Garcia of 3scale. At the time of publication, 3scale used NGINX as an API proxy in their APItools offering.]

Two Simultaneous Uses

NGINX is commonly used as a reverse proxy as well as a web server, among other things.

It is less common, however, to use it for both tasks at the same time.

Each APItools monitor is an intelligent proxy, controllable via a web interface which has its own JSON API. All of it is managed with an NGINX build that includes Lua.

In our example we use the root location for both the API and static content. We implement this division of labor by using different ports for the web app (port 7071) and the proxy (port 10002).
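The split can be pictured as two server blocks inside a single NGINX configuration. This is only a simplified outline of the shape of the setup; the real location blocks appear in the sections below:

```nginx
# One NGINX instance playing two roles at once
http {
    server {
        listen 7071;     # the web app and its JSON API
        # ... app locations ...
    }

    server {
        listen 10002;    # the intelligent proxy
        # ... proxy location ...
    }
}
```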

The Web App (Port 7071)

The web app is a regular HTTP app. We rely heavily on AngularJS to handle the interactivity in the browser window, so the application is mostly an initial dump of HTML, which triggers the loading of some CSS and JavaScript. The rest is communication with a JSON API.

Our (heavily redacted) app configuration looks like this:

server {
    listen 7071;

    location /app {
        try_files $uri /index.html;
    }

    location / {
        try_files /../public$uri $uri @app;
        header_filter_by_lua_file 'lua/apps/csrf.lua';
    }

    location @app {
        content_by_lua_file "lua/apps/api.lua";
    }
}

The first location block is in charge of sending the initial HTML. The second block serves static files (like the CSS and JavaScript we talked about before). It also ensures we have CSRF protection, using a config file similar to Lapis’.
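The actual csrf.lua is not shown in the post. Purely as an illustration, a header filter that plants a CSRF token cookie could look roughly like this; the file name matches the config above, but the token scheme and cookie name are our assumptions:

```lua
-- Hypothetical sketch only -- not the real APItools csrf.lua.
-- header_filter_by_lua_file runs after the response headers are produced,
-- so we can still add a Set-Cookie header before anything reaches the client.
if not ngx.var.cookie_csrf_token then        -- nginx's $cookie_NAME variable
  local token = ngx.md5(ngx.now() .. ngx.var.remote_addr)
  ngx.header['Set-Cookie'] = 'csrf_token=' .. token .. '; Path=/'
end
```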

The last location block is where the API requests are handled. Most of the heavy work is done by a Lua file, api.lua.

Most of the work in api.lua consists of configuring our router to parse each request's URL and params, and of invoking the appropriate controller. Here's a simplified view of api.lua:

local router        = require 'router'
local error_handler = require 'error_handler'
local services      = require 'controllers.services_controller'

-- [1] Configure the routes
local r = router.new()
r:get( '/api/services'     , services.index)
r:get( '/api/services/:id' , services.show)
r:post('/api/services'     , services.create)

-- [2] Invoke the appropriate controller function
local method = ngx.req.get_method():lower()
local ok, route_found = error_handler.execute(function()
  return r:execute(method, ngx.var.uri, ngx.req.get_uri_args())
end)

if ok and not route_found then
  ngx.status = ngx.HTTP_NOT_FOUND
end

The two main parts of the file configure the router with all the possible API routes (marked [1]) and call a controller function according to the URL and the routes ([2]). There is also some error handling – if a service doesn’t exist, for example, the services controller raises an error, which is captured by error_handler and transformed into a JSON response with a 400 status and an error message. The final conditional ensures that requests that don’t match any route are also dealt with correctly, because r:execute(...) does not raise an error when a match is not found – it just returns false.
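The error_handler module itself is not shown in the post. Based on the behavior just described, a minimal sketch might wrap the handler in pcall and render the JSON error response; everything beyond the error_handler.execute name is an assumption on our part:

```lua
-- Hypothetical sketch of error_handler; the real module is not shown here.
local cjson = require 'cjson'

local error_handler = {}

-- Runs fn; on a raised Lua error, renders a JSON body with a 400 status.
-- Returns ok (did fn run without errors?) and fn's own return value.
function error_handler.execute(fn)
  local ok, result = pcall(fn)
  if not ok then
    ngx.status = ngx.HTTP_BAD_REQUEST
    ngx.header['Content-Type'] = 'application/json'
    ngx.say(cjson.encode({ error = tostring(result) }))
  end
  return ok, result
end

return error_handler
```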

The Proxy (Port 10002)

The proxy part of the APItools monitor is the part that acts as an “intelligent middleman”, storing and sometimes modifying the requests and responses as they arrive.

Our proxy configuration (lots of details removed for brevity):

server {
    listen 10002;

    location / {
        content_by_lua_file 'lua/apps/proxy.lua';
    }
}

The lua/apps/proxy.lua file looks like this (again, this is an extremely simplified version):

local host_parser   = require 'host_parser'
local error_handler = require 'error_handler'
local Service       = require 'service'

error_handler.execute(function()
  -- [1] Deduce the service, user and url from the host
  local service_name, user = host_parser.get_service_and_user_from_host(ngx.var.host)
  local service, url       = Service:find_by_endpoint_code(service_name)
  assert(service, "no service for " .. service_name)

  -- [2] Execute the middleware pipeline
end)

Hopefully this example is clear enough: the proxy consists mainly of a parsing step, which deduces the target service, user, and URL, and an execution phase, which runs the appropriate middleware. There is also some error handling, which catches Lua errors and transforms them into an NGINX response with a 4xx HTTP status and an error message.
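The post does not show host_parser either. Assuming hosts shaped like service-user.monitor.example (the host layout is an assumption on our part, not taken from the post), the parsing step could be sketched as:

```lua
-- Hypothetical sketch of host_parser; the real host layout may differ.
local host_parser = {}

function host_parser.get_service_and_user_from_host(host)
  local subdomain = host:match('^([^.]+)')              -- text before the first dot
  if not subdomain then return nil, nil end
  local service, user = subdomain:match('^(.+)%-(.+)$') -- split on the last hyphen
  return service or subdomain, user
end

return host_parser
```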


We found that using a single NGINX server for both the proxy and the web app part was simple to implement and performed well enough for our needs.

Each NGINX instance takes around 6 MB of server memory to run. That's important for us, since we run one NGINX machine per monitor – but we'll talk more about that in our Docker article. Later on we'll also write about how we handle Redis, queues, and multithreading.

We’re constantly amazed at what this modern tool set allows us to do.

This article was originally published on the APItools blog.
