Nginx Learning (1)

Preface:

A Russian invention, Nginx is widely used for server-side network processing thanks to its stability, efficiency, concise configuration, and excellent performance.

Docs written in English are the best material for mastering a specific technology.

(ref: https://docs.nginx.com/nginx/)

Controlling NGINX Processes at Runtime

Master and Worker Processes

NGINX has one master process and one or more worker processes. If caching is enabled, the cache loader and cache manager processes also run at startup.

The main purpose of the master process is to read and evaluate configuration files, as well as maintain the worker processes.

The worker processes do the actual processing of requests. NGINX relies on OS-dependent mechanisms to efficiently distribute requests among worker processes. The number of worker processes is defined by the worker_processes directive in the nginx.conf configuration file and can either be set to a fixed number or configured to adjust automatically to the number of available CPU cores.
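
For example, a common baseline in modern versions is to size the worker pool to the machine automatically; a minimal sketch of the relevant line in nginx.conf:

worker_processes auto;    # spawn one worker process per available CPU core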

Controlling NGINX

To reload your configuration, you can stop or restart NGINX, or send signals to the master process. A signal can be sent by running the nginx command (invoking the NGINX executable) with the -s argument.

The kill utility can also be used to send a signal directly to the master process. The process ID of the master process is written, by default, to the nginx.pid file, which is located in the /usr/local/nginx/logs or /var/run directory.
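
As an illustration (assuming a default build with the binary on the PATH and the PID file in /var/run), a reload can be triggered either way:

nginx -s reload                           # ask the master process to reload the configuration
nginx -s quit                             # shut down gracefully
kill -s HUP $(cat /var/run/nginx.pid)     # equivalent reload, signaled directly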

Creating NGINX Plus and NGINX Configuration Files

NGINX and NGINX Plus are similar to other services in that they use a text-based configuration file written in a particular format.

Feature-Specific Configuration Files

To make the configuration easier to maintain, we recommend that you split it into a set of feature-specific files stored in the /etc/nginx/conf.d directory and use the include directive in the main nginx.conf file to reference the contents of the feature-specific files.
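
A minimal sketch of that layout (the wildcard pattern and file names are illustrative):

# in the main nginx.conf
http {
    include /etc/nginx/conf.d/*.conf;    # e.g. gzip.conf, proxy.conf - one file per feature
}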

Contexts

A few top-level directives, referred to as contexts, group together the directives that apply to different traffic types:

events – General connection processing
http – HTTP traffic
mail – Mail traffic
stream – TCP and UDP traffic

Virtual Servers

In each of the traffic-handling contexts, you include one or more server blocks to define virtual servers that control the processing of requests. The directives you can include within a server context vary depending on the traffic type.
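
For instance (names are illustrative), an HTTP virtual server is typically selected by listen port plus the Host header, while a stream server is selected by port alone:

server {
    listen      80;
    server_name www.example.com;    # HTTP: matched against the Host header
}

server {
    listen 12345;                   # TCP (stream context): matched by port only
}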

Sample Configuration File with Multiple Contexts

The following configuration illustrates the use of contexts.

user nobody; # a directive in the 'main' context

events {
    # configuration of connection processing
}

http {
    # Configuration specific to HTTP and affecting all virtual servers  

    server {
        # configuration of HTTP virtual server 1
        location /one {
            # configuration for processing URIs starting with '/one'
        }
        location /two {
            # configuration for processing URIs starting with '/two'
        }
    } 

    server {
        # configuration of HTTP virtual server 2
    }
}

stream {
    # Configuration specific to TCP/UDP and affecting all virtual servers
    server {
        # configuration of TCP virtual server 1
    }
}

Inheritance

In general, a child context – one contained within another context (its parent) – inherits the settings of directives included at the parent level.
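
A small sketch of how this plays out, using sendfile as a representative directive that is valid at all three levels:

http {
    sendfile on;             # set once at the http (parent) level
    server {
        location /downloads/ {
            # sendfile is inherited here as 'on' ...
        }
        location /streams/ {
            sendfile off;    # ... unless a child context overrides it
        }
    }
}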

Load Balancer

Overview

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault?tolerant configurations.

Proxying HTTP Traffic to a Group of Servers

To start using NGINX Plus or NGINX to load balance HTTP traffic to a group of servers, first you need to define the group with the upstream directive. The directive is placed in the http context.

Servers in the group are configured using the server directive (not to be confused with the server block that defines a virtual server running on NGINX). For example, the following configuration defines a group named backend that consists of three server configurations (which may resolve to more than three actual servers):

http {
    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com;
        server 192.0.0.1 backup;
    }
}

To pass requests to a server group, the name of the group is specified in the proxy_pass directive (or the fastcgi_pass, memcached_pass, scgi_pass, or uwsgi_pass directives for those protocols). In the next example, a virtual server running on NGINX passes all requests to the backend upstream group defined in the previous example:

server {
    location / {
        proxy_pass http://backend;
    }
}

The following example combines the two snippets above and shows how to proxy HTTP requests to the backend server group. The group consists of three servers, two of them running instances of the same application while the third is a backup server. Because no load-balancing algorithm is specified in the upstream block, NGINX uses the default algorithm, Round Robin:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server 192.0.0.1 backup;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

Choosing a Load-Balancing Method

Round Robin – Requests are distributed evenly across the servers, with server weights taken into consideration. This method is used by default (there is no directive for enabling it):

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
}

Least Connections – A request is sent to the server with the least number of active connections, again with server weights taken into consideration:

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}

The other available methods are IP Hash, Generic Hash, and Least Time (the last is NGINX Plus only).
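
As one example, IP Hash (available in open source NGINX) keys on the client's IP address so that each client is pinned to the same server:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
}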

Server Weights

By default, NGINX distributes requests among the servers in the group according to their weights using the Round Robin method. The weight parameter to the server directive sets the weight of a server; the default is 1:

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com;
    server 192.0.0.1 backup;
}

In the example, backend1.example.com has weight 5; the other two servers have the default weight (1), but the one with IP address 192.0.0.1 is marked as a backup server and does not receive requests unless both of the other servers are unavailable. With this configuration of weights, out of every six requests, five are sent to backend1.example.com and one to backend2.example.com.

Server Slow-Start

The server slow-start feature prevents a recently recovered server from being overwhelmed by connections, which may time out and cause the server to be marked as failed again.

In NGINX Plus, slow-start allows an upstream server to gradually recover its weight from zero to its nominal value after it has recovered or become available. This can be done with the slow_start parameter to the server directive:

upstream backend {
    server backend1.example.com slow_start=30s;
    server backend2.example.com;
    server 192.0.0.1 backup;
}

The time value (here, 30 seconds) sets the time during which NGINX Plus ramps up the number of connections to the server to the full value.

Note that if there is only a single server in a group, the max_fails, fail_timeout, and slow_start parameters to the server directive are ignored and the server is never considered unavailable.
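
For context, max_fails and fail_timeout are the parameters referred to above; a typical multi-server sketch (the values are illustrative) looks like this:

upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;    # marked failed after 3 errors within 30s
    server backend2.example.com;
}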

Enabling Session Persistence

Session persistence means that NGINX Plus identifies user sessions and routes all requests in a given session to the same upstream server.

Sticky cookie – NGINX Plus adds a session cookie to the first response from the upstream group and identifies the server that sent the response. The client’s next request contains the cookie value and NGINX Plus routes the request to the upstream server that responded to the first request:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    sticky cookie srv_id expires=1h domain=.example.com path=/;
}

In the example, the srv_id parameter sets the name of the cookie. The optional expires parameter sets the time for the browser to keep the cookie (here, 1 hour). The optional domain parameter defines the domain for which the cookie is set, and the optional path parameter defines the path for which the cookie is set. This is the simplest session persistence method.

The other session persistence methods are Sticky Route and Sticky Learn.
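
For reference, a Sticky Learn sketch modeled on the NGINX Plus documentation (the cookie name EXAMPLECOOKIE and the zone size are illustrative); here NGINX Plus learns a session cookie that the upstream application itself issues:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    sticky learn
        create=$upstream_cookie_examplecookie    # capture the cookie set by the server
        lookup=$cookie_examplecookie             # match it on subsequent client requests
        zone=client_sessions:1m;                 # shared memory holding the session table
}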

Limiting the Number of Connections

With NGINX Plus, it is possible to limit the number of connections to an upstream server by specifying the maximum number with the max_conns parameter.

If the max_conns limit has been reached, the request is placed in a queue for further processing, provided that the queue directive is also included to set the maximum number of requests that can be held in the queue at the same time:

upstream backend {
    server backend1.example.com max_conns=3;
    server backend2.example.com;
    queue 100 timeout=70;
}

If the queue is filled up with requests or the upstream server cannot be selected during the timeout specified by the optional timeout parameter, the client receives an error.

Note that the max_conns limit is ignored if there are idle keepalive connections opened in other worker processes. As a result, the total number of connections to the server might exceed the max_conns value in a configuration where the memory is shared with multiple worker processes.
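
Whether the limit is enforced per worker or across all workers depends on whether the group has a shared memory zone; a sketch with one (the zone name and size are illustrative):

upstream backend {
    zone backend 64k;                             # state shared across worker processes
    server backend1.example.com max_conns=100;
    server backend2.example.com;
}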

Original article: https://www.cnblogs.com/geeklove01/p/9191126.html
