Microservices is the most popular development topic among readers of the NGINX blog. NGINX has been publishing regularly on microservices design and development for the past 2 years. To help you on your journey to adopting microservices, this post identifies some of the foundational posts we’ve published on the subject.

Many of our leading blog posts come from three series on microservices development. Our first series describes how Netflix adopted microservices enthusiastically, using NGINX as a core component of their architecture.

Netflix then shared their work with the open source community. The lessons they learned are described in three blog posts covering:

A cross-functional, DevOps-based approach is needed for microservices development. Source: Adrian Cockcroft

Our second series is Microservices: From Design to Deployment, which is a conceptual introduction to microservices. The series addresses practical concerns, such as:

These blog posts help you build microservices applications and optimize their performance.

To start migrating from a monolith to a microservices architecture, implement new functionality as microservices; continue routing requests for legacy functionality to the monolith until there is a replacement microservice.
Pulling a single microservice out of a monolithic application
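
The routing approach in the diagram can be expressed directly in NGINX configuration: send requests for new functionality to a microservice and let everything else continue to reach the monolith. The following is a minimal sketch rather than configuration from the ebook itself; the upstream names, hosts, and URI paths are hypothetical.

# Sketch only: upstream names, hosts, and paths are hypothetical.

upstream monolith {
    server monolith.example.com:8080;   # legacy application
}

upstream orders_service {
    server orders.example.com:8080;     # new functionality, built as a microservice
}

server {
    listen 80;

    # Requests for the new feature are routed to the microservice...
    location /api/orders/ {
        proxy_pass http://orders_service;
    }

    # ...while all other requests continue to reach the monolith.
    location / {
        proxy_pass http://monolith;
    }
}

As each piece of legacy functionality gains a replacement microservice, another location block takes over that part of the URI space, until the monolith can eventually be retired.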

We’ve pulled this seven-part series into an eBook with examples for NGINX Open Source and NGINX Plus. The principles described form the foundation for our webinar, Connecting and Deploying Microservices at Scale.

Our third series began as a microservices example, but grew into our NGINX Plus‑powered Microservices Reference Architecture (MRA). The MRA is a microservices platform, a set of pre-developed models for microservices applications:

  • The Proxy Model puts a single NGINX Plus server in the reverse proxy position. From there, it can manage client traffic and control microservices.
  • The Router Mesh Model adds a second NGINX Plus server. The first server proxies traffic and the second server controls microservices functionality.
  • The Fabric Model is the most innovative. There’s still one NGINX Plus server in front, proxying traffic, but instead of a second server that controls the services, there’s one NGINX Plus instance per service instance. With its own instance of NGINX Plus, each service instance hosts its own service discovery, load balancing, security configuration, and other features. The Fabric Model makes high-performance SSL/TLS practical for secure communication between microservices, because each NGINX Plus instance maintains robust, persistent connections to its peers; a configuration sketch follows the figure below.
In the Fabric Model of the Microservices Reference Architecture from NGINX, NGINX Plus is deployed within each container as the forward and reverse proxy for all HTTP traffic entering and leaving the container, pairing an NGINX Plus instance with every microservice instance.
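
To make the Fabric Model idea concrete, here is a minimal sketch of the kind of configuration a per-service NGINX Plus instance might carry: it accepts requests from its local service over loopback and forwards them to a peer service over persistent, mutually authenticated TLS connections. The service names, ports, and certificate paths are hypothetical, and this is not the MRA’s actual configuration.

# Sketch only: names, ports, and certificate paths are hypothetical.

upstream service_b {
    zone service_b 64k;
    server service-b.internal:443;
    keepalive 32;                         # reuse TLS connections between requests
}

server {
    listen 127.0.0.1:8080;                # the local service talks to its own NGINX Plus instance

    location / {
        proxy_pass https://service_b;     # forwarded to the peer service over TLS
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # needed for upstream keepalive to work

        # Mutual TLS between service instances
        proxy_ssl_certificate         /etc/nginx/ssl/service-a.crt;
        proxy_ssl_certificate_key     /etc/nginx/ssl/service-a.key;
        proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.crt;
        proxy_ssl_verify              on;
    }
}

Because the TLS handshake cost is paid once per persistent connection rather than once per request, interservice traffic can stay encrypted without the usual performance penalty, which is the point the Fabric Model makes.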

Our strong focus on microservices will continue in 2017, beginning with an eBook detailing our Microservices Reference Architecture. To start experimenting with microservices yourself, start a free 30-day trial of NGINX Plus today. To stay up to date with our microservices content and more, sign up for the NGINX newsletter.

About the Author

Floyd Smith

Director of Content Marketing

Floyd Earl Smith has been involved in application development since the launch of the Macintosh and has written more than 20 books on hardware and software topics. He is a contributor to the NGINX blog, including multiple articles and webinars about the NGINX Microservices Reference Architecture, a breakthrough microservices framework.

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source software. We offer a suite of technologies for developing and delivering modern applications. Our combined solutions bridge the gap between NetOps and DevOps, providing multi-cloud application services that reach from code to customer. Visit nginx-cn.net to learn more.