We are pleased to announce that NGINX Plus Release 18 (R18) is now available. NGINX Plus is the only all-in-one load balancer, content cache, web server, and API gateway. Based on NGINX Open Source, NGINX Plus includes exclusive enhanced features and award‑winning support. R18 simplifies configuration workflows for DevOps and enhances the security and reliability of your applications at scale.
More than 87% of websites now use SSL/TLS to encrypt communications over the Internet, up from 66% just three years ago. End-to-end encryption is now the default deployment pattern for websites and applications, and the explosion in SSL/TLS certificates means some companies are managing many thousands of certificates in production environments. This calls for a more flexible approach to deploying and configuring certificates.
New in this release is support for dynamic certificate loading. With thousands of certificates, it’s not scalable to define each one manually in the configuration for loading from disk – not only is that process tedious, but the configuration becomes unmanageably large and NGINX Plus startup unacceptably slow. With NGINX Plus R18, SSL/TLS certificates can now be loaded on demand without being listed individually in the configuration. To simplify automated deployments even further, certificates can be provisioned with the NGINX Plus API and they don’t even have to sit on disk.
Additional new features in NGINX Plus R18 include:

Enhancements to OpenID Connect integration – The OpenID Connect reference implementation now supports opaque session tokens and refresh tokens, and provides a logout URL.

New health check features – The require directive adds flexible, variable-based tests to health checks, and existing TCP/UDP sessions to a failed server can now be terminated immediately.

Port ranges for virtual servers – Listening ports can now be specified as a range, for example 80‑90. This enables NGINX Plus to support a broader range of applications, such as passive FTP, that require port ranges to be reserved.

Rounding out this release are simplified configuration for clustered environments, and new and updated dynamic modules (including Brotli). Updates to the NGINX Plus ecosystem include modular code organization with the NGINX JavaScript module and direct Helm installation of the official NGINX Ingress Controller for Kubernetes.
Obsolete APIs – NGINX Plus R13 (August 2017) introduced the all‑new NGINX Plus API for metrics collection and dynamic reconfiguration of upstream groups, replacing the Status and Upstream Conf APIs that previously implemented those functions. As announced at the time, the deprecated APIs continued to be available and supported for a significant period of time, which ended with NGINX Plus R16. If your configuration includes the status and/or upstream_conf directives, you must replace them with the api directive as part of the upgrade to R18.

For advice and assistance in migrating to the new NGINX Plus API, please see the transition guide on our blog, or contact our support team.
Updated listen directive – Previously, when the listen directive specified a hostname that resolved to multiple IP addresses, only the first IP address was used. Now a listen socket is created for every IP address returned.
NGINX JavaScript Module (njs) changes – The deprecated req.response object has been removed from the NGINX JavaScript module. Functions declared using the function(req, res) syntax that also reference properties of the res object generate runtime errors, returning HTTP status code 500 and a corresponding entry in the error log:
YYYY/MM/DD hh:mm:ss [error] 34#34: js exception: TypeError: cannot get property "return" of undefined
Because JavaScript code is interpreted at runtime, the nginx -t syntactic validation command does not detect the presence of invalid objects and properties. You must carefully check your JavaScript code and remove such objects before upgrading to NGINX Plus R18.
Further, JavaScript objects that represent NGINX state (for example, r.headersIn) now return undefined instead of the empty string when there is no value for a given property. This change means that NGINX‑specific JavaScript objects now behave the same as built‑in JavaScript objects.
Older operating systems removed or to be removed:
With previous releases of NGINX Plus, the typical approach to managing SSL/TLS certificates for secure sites and applications was to create a separate server block for each hostname, statically specifying the certificate and associated private key as files on disk. (For ease of reading, we’ll use certificate to refer to the paired certificate and key from now on.) The certificates were then loaded as NGINX Plus started up. With NGINX Plus R18, certificates can be dynamically loaded, and optionally stored in the in‑memory NGINX Plus key‑value store rather than on disk.
There are two primary use cases for dynamic certificate loading:

Lazy loading – Certificates are read from disk on demand, as requests arrive for the corresponding hostname.

Loading from the key‑value store – Certificate data is held in the in‑memory NGINX Plus key‑value store and provisioned with the NGINX Plus API, rather than stored on disk.
In both cases, NGINX Plus can perform dynamic certificate loading based on the hostname provided by Server Name Indication (SNI) as part of the TLS handshake. This enables NGINX Plus to host multiple secure websites under a single server configuration and select the appropriate certificate on demand for each incoming request.
With “lazy loading”, SSL/TLS certificates are loaded into memory only as requests arrive and specify the corresponding hostname. This both simplifies configuration (by eliminating the list of per‑hostname certificates) and reduces resource utilization on the host. With a large number (many thousands) of certificates, it can take several seconds to read all of them from disk and load them into memory. Furthermore, a large amount of memory is used when the NGINX configuration is reloaded, because the new set of worker processes loads a new copy of the certificates into memory, alongside the certificates loaded by the previous set of workers. The previous certificates remain in memory until the final connection established under the old configuration is complete and the previous workers are terminated. If configuration is updated frequently and client connections are long lived, there can be multiple copies of the certificates in memory, potentially leading to memory exhaustion.
Lazy loading of certificates from disk is ideal for deployments with large numbers of certificates and/or when configuration reloads are frequent. For example, SaaS companies commonly assign a separate subdomain to each customer. Onboarding new customers is difficult because you have to create a new virtual server for each one, and then copy the new configuration and the customer’s certificate to each NGINX Plus instance. Lazy loading removes the need for the configuration changes – just deploy the certificates on each instance and you’re done.
To support lazy loading, the ssl_certificate and ssl_certificate_key directives now accept variable parameters. The variable must be available during SNI processing, which happens before the request line and headers are read. The most commonly used variable is $ssl_server_name, which holds the hostname extracted by NGINX Plus during SNI processing. The certificate and key are read from disk during the TLS handshake at the beginning of each client session and cached in the filesystem cache, further reducing memory utilization.
A secure site configuration becomes as simple as the following sketch, which assumes each hostname’s certificate and key are stored in /etc/ssl/ in files named after the hostname:
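server {
    listen 443 ssl;

    # $ssl_server_name is set from SNI before the certificate is needed,
    # so the right files are read from disk during the TLS handshake
    ssl_certificate     /etc/ssl/$ssl_server_name.crt;
    ssl_certificate_key /etc/ssl/$ssl_server_name.key;

    root /usr/share/nginx/html;
}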
This same server configuration can be used for an unlimited number of secure sites. This has two benefits:

New secure sites can be brought online without modifying the configuration – just deploy the certificate and key to disk.

There’s no need to create a separate server block for each hostname, making the configuration much smaller and thus easier to read and manage.

Note that lazy loading makes the TLS handshake take 20–30% longer, depending on the environment, because of the filesystem calls required to retrieve the certificate from disk. However, the additional latency affects only the handshake – once the TLS session is established, request processing takes the usual amount of time.
You can now store SSL/TLS certificate data in memory, in the NGINX Plus key‑value store, as well as in files on disk. When the parameter to the ssl_certificate or ssl_certificate_key directive begins with the data: prefix, NGINX Plus interprets the parameter as raw PEM data (provided in the form of a variable that identifies the entry in the key‑value store where the data actually resides).
An additional benefit of storage in the key‑value store rather than on disk is that deployment images and backups no longer include copies of the private key, which an attacker can use to decrypt all of the traffic sent to and from the server. Companies with highly automated deployment pipelines benefit from the flexibility of being able to use the NGINX Plus API to programmatically insert certificates into the key‑value store. Additionally, companies migrating applications to a public cloud environment where there is no real hardware security module (HSM) for private key protection benefit from the added security of not storing private keys on disk.
Here’s a sample configuration for loading certificates from the key‑value store (a sketch; the zone names and sizes are illustrative):
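keyval_zone zone=ssl_cert:1m;
keyval_zone zone=ssl_key:1m;

# Look up the PEM data for the hostname requested via SNI
keyval $ssl_server_name $cert_pem zone=ssl_cert;
keyval $ssl_server_name $key_pem  zone=ssl_key;

server {
    listen 443 ssl;

    # The data: prefix tells NGINX Plus the parameter is PEM data, not a filename
    ssl_certificate     data:$cert_pem;
    ssl_certificate_key data:$key_pem;

    root /usr/share/nginx/html;
}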
One way to upload certificates and private keys to the key‑value store with the NGINX Plus API is to run the following curl command (only the very beginning of the key data is shown). If using the curl command, remember to make a copy of the PEM data and replace every line break with \n; otherwise the line breaks are stripped out of the JSON payload.
$ curl -d '{"www.example.com":"-----BEGIN RSA PRIVATE KEY-----\n..."}' http://localhost:8080/api/4/http/keyvals/ssl_key
Using the key‑value store for certificates is ideal for clustered deployments of NGINX Plus, because you upload the certificate only once and it is automatically propagated across the cluster. To protect the certificate data itself, use the zone_sync_ssl directive to TLS‑encrypt the connections between cluster members. Using the key‑value store is also ideal for short‑lived certificates, or for automating integrations with certificate issuers such as Let’s Encrypt and HashiCorp Vault.
As with lazy loading from disk, loading certificates from the key‑value store happens during each TLS handshake, which incurs a performance penalty. For the fastest TLS handshakes, use the ssl_certificate and ssl_certificate_key directives with a hardcoded parameter pointing to a file on disk. In addition, ECC certificates are faster than RSA certificates.
Note that while the key‑value store makes it more difficult for an attacker to obtain private key files than from disk storage, an attacker with shell access to the NGINX Plus host might still be able to access keys loaded in memory. The key‑value store does not protect private keys to the same extent as a hardware security module (HSM); to have NGINX Plus fetch keys from an HSM, use the engine:engine-name:key-id parameter to the ssl_certificate_key directive.
NGINX Plus supports OpenID Connect authentication and single sign‑on for backend applications through our reference implementation. This has been both simplified and enhanced now that the key‑value store can be modified directly from the JavaScript module using variables (see below).
The OpenID Connect reference implementation now issues opaque session tokens to clients, in the form of a browser cookie. Opaque tokens contain no personally identifiable information about the user, so no sensitive information is stored on the client. NGINX Plus stores the actual ID token in the key‑value store, and substitutes it for the opaque token that the client presents. JWT validation is performed for every request so that expired or invalid tokens are rejected.
The OpenID Connect reference implementation now also supports refresh tokens so that expired ID tokens are seamlessly refreshed without requiring user interaction. NGINX Plus stores the refresh token sent by an authorization server in the key‑value store and associates it with the opaque session token. When the ID token expires, NGINX Plus sends the refresh token back to the authorization server. If the session is still valid, the authorization server issues a new ID token, which is seamlessly updated in the key‑value store. Refresh tokens make it possible to use short‑lived ID tokens, which provides better security without inconveniencing users.
The OpenID Connect reference implementation now provides a logout URL. When logged‑in users visit the /logout URI, their ID and refresh tokens are deleted from the key‑value store, and they must reauthenticate when making a future request.
A server block typically has one listen directive specifying the single port on which NGINX Plus listens; if multiple ports need to be configured, there’s an additional listen directive for each of them. With NGINX Plus R18, you can now also specify port ranges, for example 80‑90, when it is inconvenient to specify a large number of individual listen directives.
Port ranges can be specified for both the HTTP listen directive and the TCP/UDP (Stream) listen directive. The following configuration (a sketch, with an illustrative hostname and port range) enables NGINX Plus to act as a proxy for an FTP server in passive mode, where the data port is chosen from a large range of TCP ports.
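stream {
    server {
        # Accept connections on the whole passive-mode data port range
        listen 2000-3000;

        # $server_port is the port the client connected on, so each
        # connection is forwarded to the same port on the FTP server
        proxy_pass ftp.example.com:$server_port;
    }
}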
This configuration sets up a virtual server to proxy connections to the FTP server on the same port the connection came in on.
When the key‑value store is enabled, NGINX Plus provides a variable for the values stored there based on an input key (typically a part of the request metadata). Previously, the only way to create, modify, or delete values in the key‑value store was with the NGINX Plus API. With NGINX Plus R18, you can change the value for a key directly in the configuration, by setting the variable that holds the value.
The following example (a sketch; the zone name and timeout are illustrative) uses the key‑value store to maintain a list of client IP addresses that recently accessed the site, along with the last URI each client requested.
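# Entries expire after 10 minutes without being updated
keyval_zone zone=recents:1m timeout=600s;

# $last_uri is backed by the key-value store, keyed on the client address
keyval $remote_addr $last_uri zone=recents;

server {
    listen 80;

    location / {
        set $last_uri $uri;
        proxy_pass http://my_backend;
    }
}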
The set directive assigns a value ($last_uri) for each client IP address ($remote_addr), creating a new entry if one is absent, or modifying the value to reflect the $uri of the current request. Thus the current list of recent clients and their requested URIs is available with a call to the NGINX Plus API:
$ curl http://localhost:8080/api/4/http/keyvals/recents
{
"10.19.245.68": "/blog/nginx-plus-r18-released/",
"172.16.80.227": "/products/nginx/",
"10.219.110.168": "/blog/nginx-unit-1-8-0-now-available"
}
More powerful use cases can be achieved with scripting extensions such as the NGINX JavaScript module (njs) and the Lua module. Any configuration that utilizes njs has access to all variables, including those backed by the key‑value store, for instance r.variables.last_uri.
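For example, a minimal njs function along these lines (the function name is illustrative) can read and update the keyval‑backed variable directly:

function trackRequest(r) {
    // Assigning to a keyval-backed variable writes through to the key-value store
    r.variables.last_uri = r.uri;
    return r.variables.last_uri;
}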
NGINX Plus’ active health checks routinely test backend systems, so that traffic is not directed to systems that are known to be unhealthy. NGINX Plus R18 extends this important feature with two additional capabilities.
When defining a health check for a backend application, you can use a match block to specify the expected value for multiple aspects of the response, including the HTTP status code and character strings in the response headers and/or body. When the response includes all the expected values, the backend is considered healthy.
For even more complex checks, NGINX Plus R18 now provides the require directive for testing the value of any variable – both standard NGINX variables and variables you declare. This gives you more flexibility when defining health checks, because variables can be evaluated with map blocks, regular expressions, and even scripting extensions.
The require directive inside a match block specifies one or more variables, all of which must have a non‑zero value for the test to pass. The following sample configuration defines a healthy upstream server as one that returns headers indicating the response is cacheable – either an Expires header with a non‑zero value, or a Cache-Control header. The sketch below shows one way to express this; the map and zone names are illustrative.
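# Expires header qualifies unless it is absent or zero
map $upstream_http_expires $valid_expires {
    ""      0;
    0       0;
    default 1;
}

# Any Cache-Control header qualifies
map $upstream_http_cache_control $valid_cache_control {
    ""      0;
    default 1;
}

# OR logic: cacheable if either header qualifies
map $valid_expires$valid_cache_control $is_cacheable {
    00      0;
    default 1;
}

upstream my_backend {
    zone my_backend 64k;
    server backend1.example.com;
}

match is_cacheable {
    status 200;
    require $is_cacheable;
}

server {
    location / {
        proxy_pass http://my_backend;
        health_check match=is_cacheable;
    }
}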
Using map blocks like this is a common technique for incorporating OR logic into NGINX Plus configuration. The require directive enables you to take advantage of this technique in health checks, as well as to perform advanced health checks. Advanced health checks can also be defined by using the JavaScript module (njs) to analyze additional attributes of the responses from each upstream server, such as response time.
When NGINX Plus acts as a Layer 4 (L4) load balancer for TCP/UDP applications, it proxies data in both directions on the connection established between the client and the backend server. Active health checks are an important part of such a configuration, but by default a backend server’s health status is considered only when a new client tries to establish a connection. If a backend server goes offline, established clients might experience a timeout when they send data to the server.
With the proxy_session_drop directive, new in NGINX Plus R18, you can immediately close the connection when the next packet is received from, or sent to, the offline server. The client is forced to reconnect, at which point NGINX Plus proxies its requests to a healthy backend server.
When this directive is enabled, two other conditions also trigger termination of existing connections: failure of an active health check, and removal of the server from an upstream group for any reason. This includes removal through DNS lookup, where a backend server is defined by a hostname with multiple IP addresses, such as those provided by a service registry.
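A minimal Stream configuration along these lines (hostnames and ports are illustrative) shows the directive in context:

stream {
    upstream mqtt_backend {
        zone mqtt_backend 64k;
        server backend1.example.com:1883;
        server backend2.example.com:1883;
    }

    server {
        listen 1883;
        proxy_pass mqtt_backend;

        # Close established client connections as soon as the
        # selected backend server is marked failed or removed
        proxy_session_drop on;

        health_check interval=5s;
    }
}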
NGINX Plus has supported cluster‑wide synchronization of runtime state since NGINX Plus R15. The Zone Synchronization module currently supports the sharing of state data about sticky sessions, rate limiting, and the key‑value store across a clustered deployment of NGINX Plus instances.
A single zone_sync configuration can now be used for all instances in a cluster. Previously, you had to configure the IP address or hostname of each member explicitly, meaning that each instance had a slightly different configuration. You can now have the zone_sync server listen on all local interfaces by specifying a wildcard value for the address:port parameter to the listen directive. This is particularly valuable when deploying NGINX Plus into a dynamic cluster where the instance’s IP address is not known until time of deployment.
Using the same configuration on every instance greatly simplifies deployment in dynamic environments (for example, with auto‑scaling groups or containerized clusters).
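For example, every instance in the cluster could share a configuration along these lines (the resolver address and cluster hostname are illustrative):

stream {
    # Re-resolve the cluster hostname as members come and go
    resolver 10.0.0.53 valid=20s;

    server {
        # Wildcard listen: no per-instance IP address required
        listen 9000;

        zone_sync;

        # All cluster members are discovered via a single DNS name
        zone_sync_server nginx-cluster.example.com:9000 resolve;
    }
}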
The following dynamic modules are added or updated in this release:
The NGINX JavaScript module (njs) has been updated to version 0.3.0. The most notable enhancement is support for JavaScript modules, using the import and export statements, which enables you to organize your JavaScript code into multiple function‑specific files. Previously, all JavaScript code had to reside in a single file.
The following example shows how JavaScript modules can be used to organize and simplify the code required for a relatively simple use case. Here we employ JavaScript to perform data masking for user privacy so that a hashed (masked) version of the client IP address is logged instead of the real address. A given masked IP address in the log always represents the same client, but cannot be converted back to the real IP address.
[Editor – This is just one of many use cases for the NGINX JavaScript module. For a complete list, see Use Cases for the NGINX JavaScript Module.]
We put the functions required for IP address masking into a JavaScript module that exports a single function, maskIp(). The exported function depends on private functions that are only available within the module and cannot be called by other JavaScript code.
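A sketch of such a module (the file name and hashing details are illustrative; njs provides a built‑in crypto module):

// File: ipmask.js
var cr = require('crypto');

// Private helper – not exported, so it cannot be called by other JavaScript code
function hash(value) {
    return cr.createHash('sha256').update(value).digest('base64');
}

// One-way masking: the same client address always produces the same string,
// but the original address cannot be recovered from it
function maskIp(r) {
    return hash(r.remoteAddress).substr(0, 12);
}

export default { maskIp };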
This module can now be imported into the main JavaScript file (main.js), and the exported functions referenced.
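Again as a sketch, main.js might look like this:

// File: main.js
import ipmask from 'ipmask.js';

// Referenced from the NGINX configuration via js_set
function maskedIp(r) {
    return ipmask.maskIp(r);
}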
As a result, main.js is very simple, containing only the functions that are referenced by the NGINX configuration. The import statement specifies either a relative or absolute path to the module file. When a relative path is provided, you can use the new js_path directive to specify additional paths to be searched.
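The corresponding NGINX configuration might look like this sketch (the paths and log format are illustrative):

js_path "/etc/nginx/njs/";
js_include main.js;
js_set $masked_ip maskedIp;

log_format masked '$masked_ip - [$time_local] "$request" $status';
access_log /var/log/nginx/access.log masked;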
These new features greatly improve readability and maintainability, especially when a large number of njs directives and/or a large amount of JavaScript code is in use. Separate teams can now maintain their own JavaScript code without needing to perform a complex merge into the main JavaScript file.
You can now install NGINX Ingress Controller for Kubernetes directly from our new Helm repository, without having to download Helm chart source files (though that is also still supported). For more information, see the GitHub repo.
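With Helm installed, installation is along these lines (the release name is illustrative; see the GitHub repo for the current instructions):

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install nginx-stable/nginx-ingress --name my-ingress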
The proxy_upload_rate and proxy_download_rate directives now work correctly for UDP datagrams.
Previously, NGINX Plus might crash when the health_check directive was included in a location that made reference to the $proxy_protocol_tlv_0xEA variable, for example within an AWS PrivateLink environment.
Previously, if an upstream server responded slowly for a long enough period, it might never be selected again, because its value for the specified time metric was so high compared to the other upstream servers. Now a previously slow upstream server is eventually reintroduced into the load‑balancer selection process, as new measurements allow the moving average to decrease.
This applies to load‑balancing algorithms that use upstream response time as a selection metric, specifically least_time and random with the least_time parameter.
If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R18 as soon as possible. You’ll also pick up a number of additional fixes and improvements, and it will help NGINX, Inc. to help you when you need to raise a support ticket.
Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.
If you haven’t tried NGINX Plus or the NGINX ModSecurity WAF, we encourage you to try them out – for security, load balancing, and API gateway use cases, or as a fully supported web server with enhanced monitoring and management APIs. You can get started today with a free 30‑day evaluation. See for yourself how NGINX Plus can help you deliver and scale your applications.
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."