The Payment Card Industry (PCI) Data Security Standard (DSS), or PCI DSS, is a certification standard for protecting consumers’ credit card numbers and other personal data. It’s easy to implement PCI DSS best practices with NGINX Plus. This blog post tells you how.
Moving from SSL to the Latest Version of TLS
Secure Sockets Layer (SSL) is dead, and has been for more than a decade. In 2015 the PCI Security Standards Council officially declared it so and published an advisory on moving to the latest versions of Transport Layer Security (TLS), the protocol that supersedes SSL. Here’s the executive summary:
For over 20 years Secure Sockets Layer (SSL) has been in the market as one of the most widely used encryption protocols ever released, and remains in widespread use today despite various security vulnerabilities exposed in the protocol.
Fifteen years ago, SSL v3.0 was superseded by TLS v1.0, which has since been superseded by TLS v1.1 and v1.2.
To date, SSL and early TLS no longer meet minimum security standards due to security vulnerabilities in the protocol for which there are no fixes. It is critically important that entities upgrade to a secure alternative as soon as possible, and disable any fallback to both SSL and early TLS.
SSL has been removed as an example of strong cryptography in the PCI DSS, and can no longer be used as a security control after June 30, 2016.
For the past two years applications were compliant with PCI DSS if they used any version of TLS (1.0 through 1.2). Starting July 1, 2018, however, users of TLS 1.0 will not be PCI DSS compliant unless they can document that they use other approaches to mitigate attacks and minimize risks. Again according to the PCI Security Standards Council:
After 30 June 2018, if the entity still has SSL or early TLS in their environment, they will need to document that it has been verified the systems are not susceptible to the vulnerability and complete the Addressing Vulnerabilities with Compensating Controls process for their particular environment.
If you work for an entity that is audited for PCI DSS compliance, you probably already have a plan for migrating from TLS 1.0 to a later version before June 30 so as to remain compliant with PCI DSS 3.2.1. Here we explain how to use NGINX Plus to incorporate five PCI DSS best practices into your plan.
1: Turn On TLS and Test
The first step is to modify your NGINX Plus configuration to use the latest protocols and ciphers:

ssl_protocols TLSv1.1 TLSv1.2;
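In context, a server block along these lines enables only the newer protocols (the certificate paths and cipher list here are illustrative, not a recommendation):

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Allow only TLS 1.1 and 1.2; SSL 3.0 and TLS 1.0 are disabled
    ssl_protocols TLSv1.1 TLSv1.2;

    # Prefer strong ciphers; tune this list for your client base
    ssl_ciphers HIGH:!aNULL:!MD5;
}
```

After reloading the configuration, verify the deployed protocols and ciphers with an external scanner such as the Qualys SSL Labs server test.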
2: Encrypt Communications to Upstreams
Using the various proxy_ssl_* directives, you can ensure that communication with your upstream servers is also protected by strong encryption. For example, the following configuration specifies TLS 1.2 for communication between NGINX Plus and upstream servers:
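A minimal sketch of such a configuration (the upstream group name, addresses, and certificate paths are illustrative):

```nginx
upstream backend {
    server 192.168.10.10:443;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # The https:// scheme enables TLS to the upstreams
        proxy_pass https://backend;

        # Allow only TLS 1.2 on upstream connections
        proxy_ssl_protocols TLSv1.2;

        # Optionally verify the upstream server's certificate
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.crt;
        proxy_ssl_name                backend.example.com;
    }
}
```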
Another option is to use client certificates, as described in the NGINX Plus Admin Guide.
The whole PCI DSS compliance process is continuous – there will always be new attack vectors and new protocols such as TLS 1.3. Be prepared for the next audit!
3: Turn Off TLS 1.1
If all your clients can communicate over TLS 1.2, is there still a reason to use TLS 1.1? One is that not all browsers are compatible with every version of TLS. Wikipedia provides an overview of the protocols supported by current web browsers.
If you possibly can, disable TLS 1.1 completely; TLS 1.2 is preferred.
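If all your clients support it, restricting the configuration to TLS 1.2 is a one-line change:

```nginx
ssl_protocols TLSv1.2;
```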
When securing a site, you can additionally enable HTTP/2 at the edge. HTTP/2 enables more efficient communication between browsers and servers that support it. (HTTP/2 does not give much advantage for communication between servers at a site, as such connections usually occur over an internal network rather than the Internet.) All recent versions of major browsers support HTTP/2, including Chrome, Edge, Firefox, Internet Explorer, Opera, and Safari.
NGINX Plus supports HTTP/2 on operating systems that ship with OpenSSL 1.0.2 or later. NGINX Plus R15 and later also supports two much-requested auxiliary features, HTTP/2 server push and gRPC. With these features in use, in some cases encrypted traffic is faster than unencrypted traffic over HTTP/1.1!
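Enabling HTTP/2 at the edge is a small change: add the http2 parameter to the listen directive (certificate paths are illustrative):

```nginx
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```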
4: Add a WAF
In a PCI DSS environment, a web application firewall (WAF) is mandatory. With NGINX Plus, the most popular option is the NGINX ModSecurity WAF, based on the open source ModSecurity project. You can use either or both of the open source OWASP Core Rule Set (CRS) and the commercial rules from Trustwave® SpiderLabs®.
With any set of rules, it’s important to monitor your log files – not only for rules that were triggered, but also for the detection of possible false positives. To help with this, NGINX Plus can be configured to log to a remote syslog server.
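As a sketch, the access and error logs can be sent to a remote syslog server like this (the server address and tags are illustrative):

```nginx
error_log  syslog:server=192.168.1.100:514,tag=nginx_error notice;
access_log syslog:server=192.168.1.100:514,tag=nginx_access,severity=info combined;
```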
To help with analysis, you can feed the logs into tools such as the Elastic Stack, the latest version of the Elasticsearch/Logstash/Kibana (ELK) stack. Additionally, you can back them up to write once‑read many (WORM) storage on a remote server. Data in WORM storage cannot be altered, so the logs are stored securely with no possibility of tampering.
Another advantage of this method is that you avoid the performance hit incurred from disk I/O when you write to log files on the server where NGINX Plus is running.
5: Encrypt All Communication
Much effort goes into securing the edge and ensuring all standards are met, while the internal network is often treated as trusted. However, if the auditor – or an intruder – has network sniffers on your internal networks, they can read all unencrypted traffic, which may include database connections, application data, and authentication data.
As an example: PCI DSS requires that your users authenticate with your site over a TLS connection that protects traffic as it travels across the Internet. But internally the authentication process can be performed against an unencrypted LDAP server.
One solution is to use LDAPS for the internal traffic, not relying on the fact that your internal network is secure.
If you want to see what’s being sent over the wire within your system, Wireshark is a valuable tool. In the following figure, Wireshark captures server-to-server transmissions, and we can easily see a password in plaintext in an LDAP query.
Another example: developers might access an unencrypted development application server directly, bypassing the proxy server. In this case, their credentials – which no doubt give them broad (if not total) access – can easily be sniffed.
A related recommendation: make your development environment as similar to production as possible, including its use of a TLS proxy.
Notes and Options
The five best practices described above are required for PCI DSS compliance. In addition, there are several options and some additional information that may help you further improve the security of your site.
Avoid Working as Root on Linux
By default, NGINX is started as root: it sets up the environment – binding privileged ports, reading certificates – and then drops privileges by switching the effective user of its worker processes to a non-root account.
The default setup lets you secure your SSL certificates, since they are only accessible by the root user and not by the unprivileged user assigned to NGINX.
But there are a number of other options. For example, to avoid starting NGINX Plus as root, you can run:

setcap cap_net_bind_service=+ep /usr/sbin/nginx

which allows NGINX to bind ports below 1024 without running as a privileged user.
If you require IP transparency, you can also grant the CAP_NET_RAW capability to NGINX, e.g.:

setcap cap_net_raw=+ep /usr/sbin/nginx

(Note that setcap replaces the file’s existing capability set, so to grant both capabilities, set them in a single call: setcap 'cap_net_bind_service,cap_net_raw=+ep' /usr/sbin/nginx.)
However, upstream servers do not always require low-numbered ports; you can run NGINX Plus on unprivileged ports as well.
Apart from the filesystem permissions, there are several notes:
- All ports for the unprivileged user must be ≥ 1024 (unless you assign the CAP_NET_BIND_SERVICE capability as described above)
- If any sockets are used, ensure the NGINX processes can access these sockets
- The user directive in the NGINX Plus configuration should be commented out, since the setuid() call does not work when NGINX Plus runs as an unprivileged user
In order to avoid unnecessary logins to the NGINX Plus servers for configuration changes to upstream server groups, make the changes dynamically with the NGINX Plus API or use DNS for dynamic service discovery. You can then change upstream servers without manually modifying NGINX Plus configuration files.
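For example, with DNS-based service discovery the upstream group can follow DNS changes automatically (the resolver address and hostname are illustrative):

```nginx
resolver 10.0.0.2 valid=10s;

upstream backend {
    # Shared memory zone required for runtime state
    zone backend 64k;
    # Re-resolve the hostname and update the group as DNS records change
    server backend.example.com resolve;
}
```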
Third‑party authentication methods can be integrated into NGINX Plus using subrequests. That makes it easy to integrate two‑factor authentication (2FA) methods, such as hardware tokens, for authentication.
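With the auth_request module, for instance, every request can be validated by a subrequest to an external authentication service before it is proxied (the /_auth location, backend names, and certificate paths are illustrative):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location /protected/ {
        # Each request is first sent to /_auth; a 2xx response allows it,
        # 401 or 403 rejects it
        auth_request /_auth;
        proxy_pass http://app_backend;
    }

    location = /_auth {
        internal;
        proxy_pass http://auth-service.internal/verify;
        # Forward only the headers, not the request body
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```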
General Best Practices
While this blog post focuses on changes you make to NGINX Plus, and not related efforts such as OS hardening, we also recommend further best practices for improving security:
- Don’t expose version numbers (such as the NGINX Plus or PHP version) unnecessarily. You can use the server_tokens directive to omit the NGINX Plus version from error pages and the Server response header; in NGINX Plus R9 and later, you can also set the value explicitly.
- If your servers cannot access the NGINX Plus repo – either directly, or using a proxy server – consider creating a local repo mirror.
- Encrypt traffic everywhere.
- Even if you do not require Federal Information Processing Standard (FIPS) 140-2 compliance, it is reasonably easy to set up. On a FIPS 140-2 compliant system, NGINX Plus can be used to decrypt and encrypt TLS‑encrypted network traffic using only the cipher suites that FIPS 140-2 permits.
- Limit access to any involved system as much as possible.
- Documentation is key – write down what you were trying to achieve, and how you sought to achieve it, so you (or a successor) can explain your setup to the auditor.