Distributed Storage: Load Balancing MogileFS with Nginx (Nginx + MogileFS)

Features of the MogileFS distributed file system:

  1. RAID-like performance

  2. No single point of failure

  3. A simple, flat namespace: every file maps to a key, and domains define namespaces

  4. Shared-nothing design

  5. Transport-neutral, no special protocol: communication can go over NFS or HTTP

  6. Automatic file replication: replication is configured per class, not per individual file

  7. Runs in the application layer: a userspace file system that needs no special kernel components

Benefits of Nginx + MogileFS:

  1. Nginx proxies requests to the back-end MogileFS cluster, providing load balancing.

  2. Nginx can health-check the back-end tracker nodes.

  3. With the third-party module nginx_mogilefs_module compiled into Nginx, a file can be fetched directly by its key, for example:

        Before proxying through nginx: http://192.168.80.137:7500/dev2/0/000/000/0000000007.fid

        After proxying through nginx: http://192.168.80.132/image/1.jpg

MogileFS consists of three components:

  1. tracker: the core of MogileFS, a scheduler. Its service process is mogilefsd; it is responsible for deleting data, replicating data, monitoring, queries, and so on.

  2. database: stores metadata for the tracker.

  3. storage: where the data actually lives, usually an HTTP (WebDAV) server used to create (PUT), delete (DELETE), and fetch (GET) data. It listens on port 7500; storage nodes transfer data over HTTP, depend on Perlbal, and run as the mogstored process.

Ideal model: (diagram omitted)

Lab architecture: (diagram omitted)

MariaDB node configuration:

Add hostname entries for the MogileFS nodes:

[root@mariadb ~]# vim /etc/hosts
     192.168.80.136 mog1.daixiang.com
     192.168.80.137 mog2.daixiang.com
     192.168.80.138 mog3.daixiang.com

Install MariaDB from the binary tarball:

[root@mariadb ~]# useradd -r -s /sbin/nologin mysql
[root@mariadb ~]# tar xf mariadb-10.1.14-linux-x86_64.tar.gz -C /usr/local/
[root@mariadb ~]# ln -sv /usr/local/mariadb-10.1.14-linux-x86_64 /usr/local/mysql
[root@mariadb ~]# cd /usr/local/mysql
[root@mariadb mysql]# chown -R mysql.mysql /usr/local/mysql/
[root@mariadb mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/data
[root@mariadb mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@mariadb mysql]# chmod +x /etc/rc.d/init.d/mysqld
[root@mariadb mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@mariadb mysql]# vim /etc/my.cnf
         datadir = /data
[root@mariadb mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql
[root@mariadb mysql]# vim /etc/ld.so.conf.d/mysql.conf
         /usr/local/mysql/lib
[root@mariadb mysql]# vim /etc/profile.d/mysql.sh
         export PATH=/usr/local/mysql/bin:$PATH
[root@mariadb mysql]# ldconfig
[root@mariadb mysql]# ldconfig -p | grep mysql
	libmysqld.so.18 (libc6,x86-64) => /usr/local/mysql/lib/libmysqld.so.18
	libmysqld.so (libc6,x86-64) => /usr/local/mysql/lib/libmysqld.so
	libmysqlclient_r.so.16 (libc6,x86-64) => /usr/lib64/mysql/libmysqlclient_r.so.16
	libmysqlclient.so.18 (libc6,x86-64) => /usr/local/mysql/lib/libmysqlclient.so.18
	libmysqlclient.so.16 (libc6,x86-64) => /usr/lib64/mysql/libmysqlclient.so.16
	libmysqlclient.so (libc6,x86-64) => /usr/local/mysql/lib/libmysqlclient.so
	libgalera_smm.so (libc6,x86-64) => /usr/local/mysql/lib/libgalera_smm.so
[root@mariadb mysql]# service mysqld start

  

Grant privileges to the users:

MariaDB [(none)]> grant all on *.* to 'root'@'192.168.80.%' identified by 'rootpass';
Query OK, 0 rows affected (0.06 sec)

MariaDB [(none)]> grant all on mogilefs.* to 'moguser'@'192.168.80.%' identified by 'mogpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

  

MogileFS configuration:

[root@mog1 ~]# yum install *.rpm -y      # the downloaded MogileFS rpm packages
[root@mog1 ~]# yum install perl-IO-AIO -y

[root@mog2 ~]# yum install *.rpm -y
[root@mog2 ~]# yum install perl-IO-AIO -y

[root@mog3 ~]# yum install *.rpm -y
[root@mog3 ~]# yum install perl-IO-AIO -y

Edit the configuration file for the mogilefsd process:

[root@mog1 ~]# vim /etc/mogilefs/mogilefsd.conf

# Enable daemon mode to work in background and use syslog
daemonize = 1                                        # run as a daemon
# Where to store the pid of the daemon (must be the same in the init script)
pidfile = /var/run/mogilefsd/mogilefsd.pid
# Database connection information
db_dsn = DBI:mysql:mogilefs:host=192.168.80.135      # database name (mogilefs) and database server address
db_user = moguser                                    # user that manages this database
db_pass = mogpass                                    # that user's password
# IP:PORT to listen on for mogilefs client requests
listen = 0.0.0.0:7001                                # address and port to listen on
# Optional, if you don't define the port above.
conf_port = 7001
# Number of query workers to start by default.
query_jobs = 10                                      # number of query workers to start
# Number of delete workers to start by default.
delete_jobs = 1                                      # number of delete workers to start
# Number of replicate workers to start by default.
replicate_jobs = 5                                   # number of replicate workers to start
# Number of reaper workers to start by default.
# (you don't usually need to increase this)
reaper_jobs = 1                                      # re-queues requests after a disk failure
# Number of fsck workers to start by default.
# (these can cause a lot of load when fsck'ing)
#fsck_jobs = 1                                       # checks the disks; disabled by default
# Minimum amount of space to reserve in megabytes
# default: 100                                       # 100 MB reserved by default
# Consider setting this to be larger than the largest file you
# would normally be uploading.
#min_free_space = 200                                # reserve at least 200 MB of free space
# Number of seconds to wait for a storage node to respond.
# default: 2
# Keep this low, so busy storage nodes are quickly ignored.
#node_timeout = 2                                    # timeout waiting for a storage node
# Number of seconds to wait to connect to a storage node.
# default: 2
# Keep this low so overloaded nodes get skipped.
#conn_timeout = 2                                    # timeout connecting to a storage node
# Allow replication to use the secondary node get port,
# if you have apache or similar configured for GET's
#repl_use_get_port = 1
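The db_dsn value above is a Perl DBI data source name: driver, database name, and server address packed into one colon-separated string. A quick shell sketch pulling the pieces apart (values taken from the file above):

```shell
# DBI DSN format: DBI:<driver>:<database>:host=<address>
dsn="DBI:mysql:mogilefs:host=192.168.80.135"

db=$(echo "$dsn" | cut -d: -f3)    # 3rd colon-separated field: database name
host="${dsn##*host=}"              # everything after "host=": server address
echo "$db $host"    # → mogilefs 192.168.80.135
```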

Initialize the tracker's database, creating the mogilefs schema:

[root@mog1 ~]# mogdbsetup --dbhost=192.168.80.135 --dbport=3306 --dbname=mogilefs --dbrootuser=root --dbrootpass=rootpass --dbuser=moguser --dbpass=mogpass --yes

# All trackers share the same database, so this step is only needed once;
# the other tracker nodes just need the same /etc/mogilefs/mogilefsd.conf.

    Note: there is a quirk here that I have not been able to explain: the first run of mogdbsetup fails with "Failed to grant privileges: Access denied for user 'root'@'192.168.80.%' to database 'mogilefs'", yet running the exact same command a second time succeeds, provided the MariaDB grants above are correct. If anyone knows the cause, please let me know.

Check on the MariaDB node that the mogilefs database was created:

[root@mariadb ~]# mysql

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mogilefs           |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)

MariaDB [(none)]> use mogilefs;
Database changed
MariaDB [mogilefs]> show tables;
+----------------------+
| Tables_in_mogilefs   |
+----------------------+
| checksum             |
| class                |
| device               |
| domain               |
| file                 |
| file_on              |
| file_on_corrupt      |
| file_to_delete       |
| file_to_delete2      |
| file_to_delete_later |
| file_to_queue        |
| file_to_replicate    |
| fsck_log             |
| host                 |
| server_settings      |
| tempfile             |
| unreachable_fids     |
+----------------------+
17 rows in set (0.00 sec)

Start the mogilefsd process:

[root@mog1 ~]# service mogilefsd start

[root@mog1 ~]# ss -tnl | grep '7001'
LISTEN     0      128                       *:7001                     *:*

Configure the mogstored process:

[root@mog1 ~]# vim /etc/mogilefs/mogstored.conf

maxconns = 10000
httplisten = 0.0.0.0:7500
mgmtlisten = 0.0.0.0:7501
docroot = /dfs/mogdata

Copy the mogilefsd and mogstored configuration files to the other mog nodes:

[root@mog1 ~]# scp /etc/mogilefs/mogilefsd.conf 192.168.80.137:/etc/mogilefs/
[root@mog1 ~]# scp /etc/mogilefs/mogilefsd.conf 192.168.80.138:/etc/mogilefs/

[root@mog1 ~]# scp /etc/mogilefs/mogstored.conf 192.168.80.137:/etc/mogilefs/
[root@mog1 ~]# scp /etc/mogilefs/mogstored.conf 192.168.80.138:/etc/mogilefs/

Create the mount points (device directories) on the storage nodes:

[root@mog1 ~]# mkdir -pv /dfs/mogdata/dev1
[root@mog1 ~]# chown -R mogilefs.mogilefs /dfs/mogdata/

[root@mog2 ~]# mkdir -pv /dfs/mogdata/dev2
[root@mog2 ~]# chown -R mogilefs.mogilefs /dfs/mogdata/

[root@mog3 ~]# mkdir -pv /dfs/mogdata/dev3
[root@mog3 ~]# chown -R mogilefs.mogilefs /dfs/mogdata/

    Note: in production, a dedicated storage disk should be mounted at /dfs/mogdata, and the device directory created on it after the disk is mounted. To keep things simple I skip mounting a disk here and create the device directory directly. Mounting a disk would look like this:
                          [root@mog1 ~]# mkdir /dfs/mogdata/
                          [root@mog1 ~]# mount -t ext4 /dev/sdb1 /dfs/mogdata/
                          [root@mog1 ~]# mkdir /dfs/mogdata/dev1
                          [root@mog1 ~]# chown -R mogilefs.mogilefs /dfs/mogdata/
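If a real disk is mounted this way, the mount can be made persistent across reboots with an /etc/fstab entry along these lines (a sketch; /dev/sdb1 is the example device from the note above):

```
/dev/sdb1  /dfs/mogdata  ext4  defaults  0 0
```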

Start the mogstored process on all three nodes:

[root@mog1 ~]# service mogstored start
[root@mog2 ~]# service mogstored start
[root@mog3 ~]# service mogstored start

[root@mog1 ~]# ss -tnlp | grep 'mogstored'
LISTEN     0      128                       *:7500                     *:*      users:(("mogstored",2288,4))
LISTEN     0      128                       *:7501                     *:*      users:(("mogstored",2288,9))

[root@mog2 ~]# ss -tnlp | grep 'mogstored'
LISTEN     0      128                       *:7500                     *:*      users:(("mogstored",2288,4))
LISTEN     0      128                       *:7501                     *:*      users:(("mogstored",2288,9))

[root@mog3 ~]# ss -tnlp | grep 'mogstored'
LISTEN     0      128                       *:7500                     *:*      users:(("mogstored",2288,4))
LISTEN     0      128                       *:7501                     *:*      users:(("mogstored",2288,9))

  

Register and manage the nodes:

[root@mog1 ~]# mogadm --trackers=192.168.80.136:7001 host add mog1 --ip=192.168.80.136 --status=alive
[root@mog1 ~]# mogadm --trackers=192.168.80.136:7001 device add mog1 001 --status=alive

[root@mog1 ~]# mogadm --trackers=192.168.80.136:7001 host add mog2 --ip=192.168.80.137 --status=alive
[root@mog1 ~]# mogadm --trackers=192.168.80.136:7001 device add mog2 002 --status=alive

[root@mog1 ~]# mogadm --trackers=192.168.80.136:7001 host add mog3 --ip=192.168.80.138 --status=alive
[root@mog1 ~]# mogadm --trackers=192.168.80.136:7001 device add mog3 003 --status=alive

[root@mog1 ~]# mogadm domain add linux1
[root@mog1 ~]# mogadm domain add python1

[root@mog1 ~]# mogadm class add linux1 class1 --mindevcount=3
[root@mog1 ~]# mogadm class add linux1 class2 --mindevcount=2
[root@mog1 ~]# mogadm class add python1 dx1 --mindevcount=2

[root@mog1 ~]# mogadm check

Checking trackers...
  127.0.0.1:7001 ... OK

Checking hosts...
  [ 1] mog1 ... OK
  [ 2] mog2 ... OK
  [ 3] mog3 ... OK

Checking devices...
  host device         size(G)    used(G)    free(G)   use%   ob state   I/O%
  ---- ------------ ---------- ---------- ---------- ------ ---------- -----
  [ 1] dev1            16.509      3.901     12.608  23.63%  writeable   4.4
  [ 2] dev2            16.509      3.901     12.608  23.63%  writeable   0.3
  [ 3] dev3            16.509      3.897     12.612  23.60%  writeable   0.0
  ---- ------------ ---------- ---------- ---------- ------
             total:    49.527     11.699     37.828  23.62%
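The use% column in the table above is simply used(G) divided by size(G). A quick check for dev1, with the numbers taken from the output:

```shell
# verify the use% column for dev1: used(G) / size(G), as a percentage
pct=$(awk 'BEGIN { printf "%.2f%%", 3.901/16.509*100 }')
echo "$pct"    # → 23.63%
```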

[root@mog1 ~]# mogadm domain list
 domain               class                mindevcount   replpolicy   hashtype
-------------------- -------------------- ------------- ------------ -------
 linux1               class1                    3        MultipleHosts() NONE
 linux1               class2                    2        MultipleHosts() NONE
 linux1               default                   2        MultipleHosts() NONE   

 python1              default                   2        MultipleHosts() NONE
 python1              dx1                       2        MultipleHosts() NONE   

  For a more detailed walkthrough, see the previous post: "Distributed Storage: Simple Applications of the MogileFS Distributed File System".

Upload files to test:

[root@mog1 ~]# mogupload --trackers=192.168.80.136:7001 --domain=linux1 --key='1.jpg' --file='/root/centos.jpg'
[root@mog1 ~]# mogfileinfo --trackers=192.168.80.137:7001 --domain=linux1 --key='1.jpg'

[root@mog1 ~]# mogupload --trackers=192.168.80.136:7001 --domain=python1 --key='fstab.html' --file='/etc/fstab'
[root@mog1 ~]# mogfileinfo --trackers=192.168.80.137:7001 --domain=python1 --key='fstab.html'

[root@mog1 ~]# mogfileinfo --trackers=192.168.80.137:7001 --domain=linux1 --key='1.jpg'
- file: 1.jpg
     class:              default
  devcount:                    2
    domain:               linux1
       fid:                    5
       key:                1.jpg
    length:               134783
 - http://192.168.80.137:7500/dev2/0/000/000/0000000005.fid
 - http://192.168.80.136:7500/dev1/0/000/000/0000000005.fid
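The two URLs above expose MogileFS's on-disk layout: the file id (fid) is zero-padded to ten digits and split into hashed subdirectories under the device directory. A shell sketch of that mapping, using the fid (5) and device (dev2) from this output:

```shell
fid=5; devid=2
nfid=$(printf '%010d' "$fid")    # zero-pad the fid to 10 digits
# layout: /dev<devid>/<digit 1>/<digits 2-4>/<digits 5-7>/<padded fid>.fid
path="/dev${devid}/${nfid:0:1}/${nfid:1:3}/${nfid:4:3}/${nfid}.fid"
echo "$path"    # → /dev2/0/000/000/0000000005.fid
```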

Configure Nginx:

[root@nginx ~]# yum groupinstall "Development Tools" "Server Platform Development" -y
[root@nginx ~]# yum install openssl-devel pcre-devel -y
[root@nginx ~]# useradd -r nginx
[root@nginx ~]# ./configure \
    --prefix=/usr/local/nginx \
    --sbin-path=/usr/sbin/nginx \
    --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --pid-path=/var/run/nginx/nginx.pid \
    --lock-path=/var/lock/nginx.lock \
    --user=nginx \
    --group=nginx \
    --with-http_ssl_module \
    --with-http_flv_module \
    --with-http_stub_status_module \
    --with-http_gzip_static_module \
    --http-client-body-temp-path=/var/tmp/nginx/client/ \
    --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
    --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
    --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
    --http-scgi-temp-path=/var/tmp/nginx/scgi \
    --with-pcre \
    --with-debug \
    --add-module=/root/nginx_mogilefs_module-1.0.4    # third-party module: serve files directly by key

[root@nginx ~]# make && make install

Provide an nginx init script:

[root@nginx ~]# vim /etc/rc.d/init.d/nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
   # make required directories
   user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   options=`$nginx -V 2>&1 | grep 'configure arguments:'`
   for opt in $options; do
       if [ `echo $opt | grep '.*-temp-path'` ]; then
           value=`echo $opt | cut -d "=" -f 2`
           if [ ! -d "$value" ]; then
               # echo "creating" $value
               mkdir -p $value && chown -R $user $value
           fi
       fi
   done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

[root@nginx ~]# chmod +x /etc/rc.d/init.d/nginx

Edit the nginx configuration as follows:

http {

    upstream images {
        server 192.168.80.136:7001;
        server 192.168.80.137:7001;
        server 192.168.80.138:7001;
    }

    server {
        listen       80;
        server_name  localhost;
        location /image {
            mogilefs_tracker images;
            mogilefs_domain linux1;
            mogilefs_methods GET;
            mogilefs_noverify on;
            mogilefs_pass {
                proxy_pass $mogilefs_path;
                proxy_hide_header Content-Type;
                proxy_buffering off;
            }
        }

        location /files {
            mogilefs_tracker images;
            mogilefs_domain python1;
            mogilefs_methods GET;
            mogilefs_noverify on;
            mogilefs_pass {
                proxy_pass $mogilefs_path;
                proxy_hide_header Content-Type;
                proxy_buffering off;
            }
        }
    }
}   
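With this configuration, a request like GET /image/1.jpg makes nginx ask a tracker from the images upstream for the file's paths in domain linux1, then proxy the file from a storage node. As I understand the module's behavior (an assumption, not verified against its source), the MogileFS key is the part of the URI after the location prefix:

```shell
# sketch of the assumed URI-to-key mapping for "location /image"
uri="/image/1.jpg"
key="${uri#/image/}"    # strip the location prefix
echo "$key"    # → 1.jpg
```

which matches the key 1.jpg uploaded during the test earlier.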

Start nginx:

[root@nginx ~]# service nginx start

Access test: (browser screenshots omitted)

Posted: 2025-01-07 06:17:28
