What is NGINX?

NGINX is open source software that powers web servers and enables reverse proxying, caching, load balancing, and media streaming. It was originally designed as a web server with high performance and reliability. Besides functioning as an HTTP server, NGINX acts as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.
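As a quick sketch of the reverse proxy role, the following minimal configuration forwards all incoming HTTP requests to an application server on a local port (the backend address is a placeholder):

events {}

http {
    server {
        listen 80;
        location / {
            # Relay every request to a backend application server;
            # 127.0.0.1:8080 is a placeholder address.
            proxy_pass http://127.0.0.1:8080;
        }
    }
}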

NGINX server architecture: How does NGINX work?

NGINX uses a predictable process model that is tuned to the available hardware resources:

  • The master process performs privileged tasks such as reading configuration and binding ports, and spawns a small number of child processes.
  • The cache loader process runs at startup to load the disk-based cache into memory, and then exits. Because it is scheduled sparingly, it has low resource requirements.
  • The cache manager process runs periodically to remove entries from the disk cache, keeping it within the configured size.
  • Worker processes do the day-to-day work of the NGINX web server. They handle network connections, read and write disk content, and communicate with upstream servers.

In most cases, the recommended configuration of one worker process per CPU core makes the most efficient use of hardware resources. You can customize this by setting the worker_processes directive in the NGINX configuration, as shown below.
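For example, in the main (top-level) context of nginx.conf:

# Spawn one worker per CPU core; "auto" makes NGINX detect the core count.
worker_processes auto;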

When the NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a non-blocking manner, reducing the number of context switches.

Each worker process is single-threaded and runs independently, accepting and processing new connections. The workers communicate through shared memory to access shared cache data, session persistence data, and other shared resources.
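This shared memory is visible in the configuration wherever a shared zone is declared. For instance, the proxy_cache_path directive names a shared memory zone that holds cache keys and metadata for all workers (the path, zone name, and sizes here are illustrative):

# Inside the http {} context: cached responses are stored on disk, while
# their keys and metadata live in the 10 MB shared zone "app_cache",
# which every worker process can read and write.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;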

Image: the NGINX master and worker process architecture (Source: NGINX)

Each NGINX worker process is initialized with an NGINX configuration and comes with a set of listening sockets, provided by the master process.

The NGINX worker process first waits for events on the listen sockets, which are triggered by new incoming connections. Each connection is assigned to a state machine – the HTTP state machine is the most commonly used, but NGINX also provides state machines for stream (raw TCP) traffic and for the mail protocols (SMTP, IMAP, and POP3).

Image: NGINX state machines for HTTP, stream (TCP), and mail traffic (Source: NGINX)

A state machine is essentially the set of instructions that tells NGINX how to process a request. Most web servers that perform the same functions use a similar state machine – the difference lies in how it is implemented.
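For non-HTTP traffic, the TCP state machine is configured in a stream block. A minimal sketch, assuming NGINX is built with the stream module and using a placeholder backend address:

events {}

stream {
    server {
        listen 12345;                          # accept raw TCP connections
        proxy_pass backend.example.com:12345;  # relay them to a backend
    }
}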

NGINX products and solutions

NGINX Plus

NGINX Plus is a cloud native API gateway that also includes a content cache, load balancer, reverse proxy, and web server. It provides features such as proactive health checks, high availability, Domain Name System (DNS) service discovery, RESTful API management, and session persistence.

The load balancing built into NGINX Plus enables integration with advanced monitoring tools, tuning of Kubernetes containers, enhanced security controls, and debugging and diagnostics of complex application architectures.
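As one example of these features, proactive health checks are enabled in NGINX Plus with the health_check directive. A sketch with placeholder backend addresses (this directive is available in NGINX Plus only):

events {}

http {
    upstream backend {
        zone backend 64k;       # shared memory zone, required for health checks
        server 10.0.0.1:8080;   # placeholder application servers
        server 10.0.0.2:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            health_check;       # probe backends periodically, remove failed ones
        }
    }
}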

NGINX Unit

NGINX Unit is a general-purpose web application server. It is designed to be a building block of web architectures and can be used at any scale, in any type of organization. It is suitable both for modern microservice environments and for legacy and monolithic applications.

NGINX Unit simplifies the application stack for web applications and APIs by combining multiple layers into a single component. With Unit, a single instance can:

  • Serve static media assets
  • Run application code natively in multiple languages
  • Reverse proxy to backend servers

NGINX Ingress Controller

NGINX Ingress Controller is a traffic management solution for cloud native applications in containerized Kubernetes environments. This tool is designed for high performance, security, and reliability. Ingress Controller provides performance monitoring and visibility so you can quickly identify and fix performance bottlenecks and anomalous behavior.

NGINX Service Mesh

The NGINX Service Mesh lets you control Kubernetes deployments through a unified data plane. It is designed for high performance and scalability. NGINX Service Mesh provides traffic management, load balancing, encryption, and identity management.

NGINX Service Mesh is not widely used by companies for production applications. 

NGINX Management Suite

Management Suite provides visibility and control over NGINX instances, application delivery services, API management workflows, and security solutions.

The core functionality of Management Suite is implemented within the NGINX Instance Manager, which is part of the control plane. Its key features include:

  • Discovering configuration issues and suggesting fixes.
  • Finding and renewing expired certificates and detecting NGINX instances exposed to CVEs and other security issues.
  • Controlling access to NGINX configurations using role-based access control (RBAC).
  • Detecting if NGINX App Protect WAF is installed and checking applied version and signature packages.

NGINX App Protect

NGINX App Protect is a modern application security solution that protects against advanced threats and subtle attacks. It provides a web application firewall (WAF) and application-level denial-of-service (DoS) defenses, giving web servers built-in protection.
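A sketch of what enabling the WAF looks like in NGINX configuration, assuming the App Protect module is installed (the policy file path is illustrative):

load_module modules/ngx_http_app_protect_module.so;

events {}

http {
    server {
        listen 80;
        # Turn on the App Protect WAF and point it at a JSON policy file.
        app_protect_enable on;
        app_protect_policy_file /etc/app_protect/conf/NginxDefaultPolicy.json;
    }
}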

NGINX Amplify

NGINX Amplify is a free SaaS-based monitoring tool for NGINX open source and NGINX Plus. It is easy to set up and lets you monitor performance, track infrastructure assets, and improve configurations through static analysis. NGINX Amplify also monitors the underlying operating system, application server, database, and other components.

NGINX and Kubernetes 

Kubernetes is an open source container orchestration platform. It provides a complete platform for scaling and managing applications deployed in containers.

Several NGINX products can run in a Kubernetes environment:

  • NGINX Plus – a reverse proxy and load balancer that can take on multiple roles:
    • Sidecar for NGINX Service Mesh
    • Ingress controller in Kubernetes cluster to manage ingress and egress traffic
    • Per-service and per-pod application firewall proxies when deployed with NGINX App Protect
    • A service-to-service API gateway between containers and pods
  • NGINX Service Mesh – a lightweight, full-featured service mesh based on NGINX Plus, which provides data plane security, scalability, and cluster traffic management.
  • NGINX Ingress Controller – an enterprise ingress and egress controller for Kubernetes cluster traffic management and API gateway use cases.

The basics of NGINX configuration

NGINX configuration is typically done using a configuration file, which is usually named nginx.conf and is located in the NGINX installation directory. The configuration file is written in a specific format, with a set of directives that control how NGINX behaves and what it does when it receives requests.

Configuration concepts

Here are some basic concepts to understand when working with NGINX configuration:

  • Directives: Instructions that tell NGINX what to do. They consist of a name and one or more parameters. Directives can be placed at different levels in the configuration file, and the level at which a directive is placed determines its scope and how it is interpreted.
  • Blocks: Collections of directives that are grouped together. They are surrounded by curly braces { and } and can contain other blocks. Blocks can be nested to create a hierarchy of configuration settings.
  • Context: The context in which a directive is placed determines its meaning and how it is applied. NGINX has several different contexts, including the main context, which applies to the entire NGINX server, and the server context, which applies to a specific server block.
  • Server blocks: A server block is a block of directives that define the configuration for a specific virtual server. Virtual servers allow you to host multiple websites on a single NGINX instance by specifying different configurations for each server block.

Here is an example of a simple NGINX configuration file:

events {
    worker_connections 2048;  # Default: 1024
}

http {
    server {
        listen 80;                # accept HTTP traffic on port 80
        server_name example.com;  # respond to requests for this host name
        root /var/www/html;       # serve files from this directory
        location / {
            # Serve the requested file or directory if it exists,
            # otherwise return a 404 error.
            try_files $uri $uri/ =404;
        }
    }
}

This configuration file contains a single server block that listens on port 80 and serves content from the /var/www/html directory. Its location block tells NGINX to serve the requested file or directory if it exists and to return a 404 error otherwise.
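To see how server blocks enable virtual hosting, here is a sketch with two virtual servers on the same port, selected by the Host header of each request (the domain names and paths are placeholders):

events {}

http {
    server {
        listen 80;
        server_name example.com;       # requests for example.com land here
        root /var/www/example;
    }
    server {
        listen 80;
        server_name blog.example.com;  # same port, different Host header
        root /var/www/blog;
    }
}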

Rate limiting with NGINX 

Rate limiting is a technique that is used to control the rate at which requests are processed by a server. It can be used to protect a server from being overwhelmed by excessive traffic, or to prevent malicious actors from launching denial of service (DoS) attacks or other types of abuse.

NGINX provides several directives for implementing rate limiting. The most commonly used are limit_req_zone, which defines a shared memory zone and a rate, and limit_req, which applies that limit, letting you specify a maximum rate at which requests are accepted from a particular client or group of clients.

For example:

http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
        location / {
            limit_req zone=one burst=5;
            try_files $uri $uri/ =404;
        }
    }
}

In this example, we define a request-limiting zone called one, keyed on the client's IP address ($binary_remote_addr) and stored in 10 MB of shared memory (zone=one:10m). The zone allows a maximum of 10 requests per second (rate=10r/s), with a burst of up to 5 requests (burst=5). This means a client can queue up to 5 requests above the specified rate; queued requests are delayed so they are processed at the configured rate, and requests beyond the burst are rejected with HTTP status 503 (the default, configurable with limit_req_status).
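A related option is the nodelay parameter, which serves burst requests immediately instead of queuing them, while still rejecting anything beyond the burst:

location / {
    # Serve up to 5 requests above the rate immediately rather than
    # delaying them; requests beyond the burst are still rejected.
    limit_req zone=one burst=5 nodelay;
    try_files $uri $uri/ =404;
}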

NGINX alternatives

Apache

Apache is open source web server software developed and maintained by an open developer community, and it runs on a variety of operating systems. The Apache architecture consists of the Apache core and modules:

  • Core components provide the basic server functionality: accepting connections and managing concurrency.
  • Modules correspond to the different functions applied to each request. A given Apache distribution can be configured to include modules for security, dynamic content management, and basic HTTP request handling.

Apache web server features include:

  • Handling static files
  • Automatic indexing
  • .htaccess and URL rewrite
  • Compatibility with IPv6 addresses
  • Bandwidth limiting
  • HTTP/2 support
  • FTP connections
  • Gzip compression and decompression
  • Perl, PHP, and Lua scripts
  • Load balancing
  • Session tracking
  • Geolocation based on IP address

HAProxy

HAProxy is a fast, reliable load balancer solution. It is an open source product with an active community. It supports modern architectures, including microservices, cloud native, and virtualized environments.

HAProxy leverages cloud native technologies to provide a complete solution for environments such as Red Hat OpenShift, OVH, Rackspace, DigitalOcean, and Amazon Web Services (AWS). It is also supported by OpenStack as its reference load balancer.

HAProxy products include:

  • HAProxy One – a next-generation, end-to-end application delivery platform designed to secure and simplify modern application architectures; offers a complete suite of solutions including application delivery software and turnkey and appliance services monitored and managed by a central control plane.
  • HAProxy Fusion Control Plane – enables organizations to streamline workflows, orchestrate traffic routing and security protocols, increase transfer rates, and scale applications.
  • HAProxy Edge – an application delivery network (ADN) that provides a wide range of turnkey application services with incredible scale and complete visibility.
  • HAProxy ALOHA Hardware or Virtual Load Balancer – a virtual load balancer or plug-and-play hardware built on HAProxy Enterprise, designed to support layer 4 and layer 7 proxies.
  • HAProxy Enterprise Kubernetes Ingress Controller – designed to manage traffic flow into Kubernetes clusters. It can automatically detect anomalies and changes in Kubernetes infrastructure and distribute traffic to healthy pods, avoiding downtime due to pod health degradation or scaling changes.

LiteSpeed

LiteSpeed Web Server (LSWS) is a proprietary, lightweight web server that provides high performance and resource savings without compromising security. It also provides built-in DDoS protection and allows per-IP connection and bandwidth throttling.

LSWS features include:

  • Apache drop-in replacement – reads Apache configuration files and is compatible with Apache features, so it can serve as a drop-in replacement with no changes to the operating system or to the existing Apache configuration.
  • Server management with zero downtime – prevents server stability issues. While many web servers block connections during software updates, LSWS supports graceful restarts, applying updates without blocking or dropping connections.
  • Concurrent connection handling – handles concurrent connections faster than Apache because it relies on an event-based architecture. Rather than spawning a new process for each connection, it serves many connections from a small number of processes, so it can handle more connections while consuming fewer resources.
  • Edge Side Includes (ESI) – a markup language that lets users split pages into smaller pieces and process them separately from the rest of the page.
  • LiteSpeed Cache (LSCache) – high-performance acceleration of dynamic content. It provides automatic page caching and can purge specific URLs, and it maintains separate caches for desktop and mobile views. Plugins are available for popular CMS platforms.

Using Solo Gloo Gateway to replace NGINX

The Solo Gloo Gateway and Gloo Mesh are often used to replace NGINX for API gateway or service mesh functionality, because they are built on more widely adopted open source projects (Envoy Proxy and Istio Service Mesh). In addition, Solo Gloo Platform leverages an integrated control plane across both the API gateway and the service mesh, along with a consistent Envoy-based data plane, which simplifies overall operations, security, and observability.

Get started with Solo Gloo Mesh, Gloo Gateway, or Gloo Platform today!
