1. Microservices: each module can run independently and be called by other programs through a standard interface. Docker containers are used to run each single, simple program; a container orchestration system then schedules the containers running these programs across the machines it manages. The orchestration system monitors the state of the program inside each container and automatically restarts it when it stops, which simplifies operations: maintenance shifts from individual hosts to the orchestration system itself.
2. The three core tasks of operations: releases (deploying new code), changes (adding or removing machines, modifying configuration files, etc.), and fault handling.
3. Configuration file management evolved from manual management --> a version control system --> a configuration center, each step improving efficiency.
4. Data storage: unstructured data, semi-structured data and structured data.
    Unstructured data: can only be stored on a file system or in object storage.
        File system: inode table, inodes, blocks; metadata and content data are stored separately.
        Drawbacks of a file system:
            the metadata is managed centrally, so if it is lost everything is lost;
            efficiency drops sharply when the amount of data is very large.
        Object storage system: metadata and content data are stored together; every file is an object.
        Drawback: it is not a file-system structure and has no centralized metadata, so it cannot be mounted; it can only be accessed by application clients through an API.
        Images are usually stored in a distributed object storage system.
    Semi-structured data: stored in key-value (NoSQL) systems, where key and value are stored together, whereas MySQL stores them separately. Forum posts are a typical example. NoSQL systems are designed around the CAP theorem (consistency, availability, partition tolerance); most NoSQL systems are distributed systems.
    Structured data: stored in relational databases with strict transaction support, e.g. trading data.
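As a minimal sketch of the key-value pattern described above (Redis is used here only as one example of a NoSQL key-value store; the key name post:1001 is made up):
    redis-cli SET post:1001 '{"title":"hello","body":"..."}'    # key and value are stored together
    redis-cli GET post:1001                                     # fetched back by key, no table schema involved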
5. SRE: Site Reliability Engineer, a role coined at Google. SREs steer the direction of operations and build good tooling so that human intervention is avoided as much as possible.
6. The four layers from the transport layer down are implemented in kernel space, while the application layer lives in user space. The httpd service, for example, is a user-space daemon that asks the kernel to register a socket listening on port 80 and then waits for client requests. When a request arrives at the kernel it is de-encapsulated layer by layer until it reaches the socket on port 80 and, if it matches, a response is generated. Only processes in user space listen on ports; code running in kernel space does not listen this way.
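A quick way to observe this on a host running httpd (a hedged example; the PID and process name will differ on your machine):
    ss -tnlp | grep ':80 '    # shows the LISTEN socket on port 80 and the user-space process that owns it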
7. HAProxy runs in user space; unlike LVS it cannot forward requests inside the kernel. Even its "layer 4" mode is a pseudo four-layer proxy, and it has to register its own listening sockets.
8. HTTPS: expensive and slow. With plain layer-4 load balancing (e.g. LVS), the TLS session is carried through the balancer and terminated on the back-end servers, so caching in between is of little use and every back-end server also has to be configured with a certificate.
9. Session (SSL) offloading: traffic can be encrypted between the client and the load balancer while the traffic from the balancer to the back end stays unencrypted; this requires the load-balancing machine to support layer-7 scheduling.
Lab: HAProxy scheduling (health checks are enabled by default)
(1) Machine A, the scheduler, IP 172.18.62.61
yum install haproxy
vim /etc/haproxy/haproxy.cfg
frontend websrvs *:80
default_backend mywebsrvs
backend mywebsrvs
balance roundrobin
server srv1 172.18.62.60:80 check
server srv2 172.18.62.63:80 check
log 127.0.0.1 local5
vim /etc/rsyslog.conf        # set up logging
$ModLoad imudp               # uncomment
$UDPServerRun 514            # uncomment
local5.* /var/log/haproxy.log
systemctl restart haproxy
systemctl restart rsyslog
tail /var/log/haproxy.log -f
(2) Machine B, IP 172.18.62.60
echo RS1 > /var/www/html/index.html
systemctl start httpd
(3) Machine C, IP 172.18.62.63
systemctl start httpd
echo RS2 > /var/www/html/index.html
(4) Machine D (client), IP 172.18.62.50
for i in {1..1000};do sleep 0.5;curl 172.18.62.61;done
When one of the real servers is shut down, requests are switched to the other one almost immediately.
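Before restarting haproxy, the configuration can also be syntax-checked first; a small hedged addition assuming the default config path:
    haproxy -c -f /etc/haproxy/haproxy.cfg    # -c only checks the configuration and exits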
10. A program run in a Docker container must stay in the foreground; the container exits when its main process exits.
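A minimal hedged illustration of this, using nginx as the containerized program:
    docker run --name web -d nginx nginx -g 'daemon off;'    # 'daemon off;' keeps nginx in the foreground so the container keeps running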
HAProxy:
LB Cluster:
Layer 4:
    lvs, nginx (stream), haproxy (mode tcp)
Layer 7:
    http: nginx (http, ngx_http_upstream_module), haproxy (mode http), httpd, ats, perlbal, pound...
HAProxy:
http://www.haproxy.org
http://www.haproxy.com
Documentation:
http://cbonte.github.io/haproxy-dconv/
HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments. Indeed, it can:
    - route HTTP requests depending on statically assigned cookies
    - spread load among several servers while assuring server persistence through the use of HTTP cookies
    - switch to backup servers in the event a main server fails
    - accept connections to special ports dedicated to service monitoring
    - stop accepting connections without breaking existing ones
    - add, modify, and delete HTTP headers in both directions
    - block requests matching particular patterns
    - report detailed status to authenticated users from a URI intercepted by the application
Versions: 1.4, 1.5, 1.6, 1.7
Program environment:
    Main program: /usr/sbin/haproxy
    Main configuration file: /etc/haproxy/haproxy.cfg
    Unit file: /usr/lib/systemd/system/haproxy.service
Configuration sections:
    global: the global configuration section
        process and security related parameters
        performance tuning parameters
        debugging parameters
        user lists
        peers
    proxies: the proxy configuration sections
        defaults: provides default settings for frontend, listen and backend sections;
        frontend: the front end, comparable to server {} in nginx
        backend: the back end, comparable to upstream {} in nginx
        listen: combines a frontend and a backend in one section
A simple configuration example:
frontend web
bind *:80
default_backend websrvs
backend websrvs
balance roundrobin
server srv1 172.16.100.6:80 check
server srv2 172.16.100.7:80 check
global configuration parameters:
    Process and security management: chroot, daemon, user, group, uid, gid
    log: defines the global syslog server(s); at most two targets can be defined;
        log <address> [len <length>] <facility> [<level> [<minlevel>]]
    nbproc <number>: number of haproxy processes to start;
    ulimit-n <number>: maximum number of files each haproxy process may open; every socket needs a file descriptor, i.e. an open file used to track its state; haproxy can adjust this value automatically;
        (ulimit in general limits the use of core system resources)
    Performance tuning:
        maxconn <number>: Sets the maximum per-process number of concurrent connections to <number>; the overall concurrency is therefore nbproc * maxconn;
        maxconnrate <number>: Sets the maximum per-process number of connections per second to <number>; this guards against a sudden burst of new connections whose file and memory allocation the server cannot keep up with, which would put it under heavy pressure;
        maxsessrate <number>: sets the maximum per-process number of sessions per second;
        maxsslconn <number>: Sets the maximum per-process number of concurrent SSL connections to <number>;
        spread-checks <0..50, in percent>: spreads health checks out by starting them up to the given percentage earlier or later, between 0 and 50%.
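Putting the tuning parameters above together, a hedged sketch of a global section (the values are illustrative, not recommendations):
    global
        log           127.0.0.1 local2
        chroot        /var/lib/haproxy
        user          haproxy
        group         haproxy
        daemon
        nbproc        2          # two worker processes -> overall concurrency is 2 * maxconn
        maxconn       20000      # per-process concurrent connection ceiling
        spread-checks 5          # spread health checks out by up to 5%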
Proxy configuration sections:
    - defaults <name>
    - frontend <name>
    - backend <name>
    - listen <name>
    A "frontend" section describes a set of listening sockets accepting client connections. # carries the connections coming from clients
    A "backend" section describes a set of servers to which the proxy will connect to forward incoming connections. # carries the connections going to the back-end servers
    A "listen" section defines a complete proxy with its frontend and backend parts combined in one section. It is generally useful for TCP-only traffic. <name> is the identifier of the section.
    All proxy names must be formed from upper and lower case letters, digits, '-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Proxy names are case-sensitive.
Configuration parameters:
bind:Define one or several listening addresses and/or ports in a frontend.
bind [<address>]:<port_range> [, ...] [param*]
listen http_proxy
bind :80,:443
bind 10.0.0.1:10080,10.0.0.1:10443
bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy
Sockets: both network (IP) sockets and Unix domain sockets can be bound; a Unix socket is based on IPC and can only be used within the same machine.
bind is only used in frontend and listen sections, as the documentation shows:
http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4.1
balance: the scheduling algorithm used among the servers of a backend group
balance <algorithm> [ <arguments> ]
balance url_param <param> [check_post]
According to the documentation it can be set in defaults, backend and listen sections.
Algorithms:
roundrobin:Each server is used in turns, according to their
weights.
server options: weight #
A dynamic algorithm: weights can be adjusted at run time without disturbing the overall weight distribution, and slow start is supported; each backend supports at most 4095 servers. That is, a newly added machine can be given connections gradually until it reaches its configured weight, without restarting the service;
static-rr:
A static algorithm: run-time weight adjustment and slow start are not supported; there is no upper limit on the number of back-end servers;
leastconn:
Recommended for scenarios with long-lived sessions, such as MySQL and LDAP;
(for the hash-based algorithms below, whether they behave dynamically or statically depends on the hash-type setting)
first:
Servers are used according to their position in the list, top to bottom; a new request is only dispatched to the next server once the connections of the servers before it have reached their limit;
source: hash of the source address;
    map-based (modulo of the total weight):
    consistent hashing:
    Cookie-based scheduling gives better stickiness but requires layer-7 processing; binding a session to a single server is fragile in any case, because if that back-end server fails the session is lost;
    Ways to handle the session problem instead of relying on address binding:
        session stickiness, e.g. via cookies
        a session replication cluster: sessions are replicated among the back-end servers
        a session server: a central server dedicated to storing sessions
uri:
The left part of the URI is hashed and the result, modulo the total server weight, selects the server to dispatch to;
    <scheme>://<user>:<password>@<host>:<port>/<path>;<params>?<query>#<frag>
    left part:  /<path>;<params>
    whole URI:  /<path>;<params>?<query>#<frag>
url_param: the value of the given parameter in the <params> part of the requested URI is hashed and, modulo the total server weight, a server is selected; typically used to track users, so that requests from the same user always go to the same backend server;
hdr(<name>): for every HTTP request, the header named by <name> is extracted and hashed; modulo the total server weight, a server is selected; requests without a valid value for that header are scheduled round-robin;
    hdr(Cookie) is commonly used and allows flexible, fine-grained dispatching; every browser has its own cookies, and even two processes of the same browser may carry different cookies;
rdp-cookie
rdp-cookie(<name>)
hash-type: the hashing method
hash-type <method> <function> <modifier>
map-based: modulo of the total weight; the hash structure is a static array;
consistent: consistent hashing; the hash structure is a tree;
<function> is the hash function to be used:
sdbm
djb2
wt6
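A hedged sketch combining a hash-based algorithm with run-time weight adjustment (the backend name, addresses and socket path are assumptions; the run-time command requires "stats socket /var/run/haproxy.sock level admin" in the global section):
    backend cachesrvs
        balance uri                  # hash the URI so the same object keeps hitting the same cache node
        hash-type consistent         # consistent hashing: adding or removing a node remaps few keys
        server cache1 172.16.100.11:80 check weight 2
        server cache2 172.16.100.12:80 check weight 1

    # adjust a weight at run time, without restarting the service:
    echo "set weight cachesrvs/cache1 3" | socat stdio /var/run/haproxy.sock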
default_backend <backend>
    Sets the default backend; used inside a frontend;
default-server [param*]
    Sets default options for the servers of a backend;
server <name> <address>[:[port]] [param*]
    Defines a back-end server and its options;
    server <name> <address>[:port] [settings ...]
    default-server [settings ...]
    <name>: the server's internal name inside haproxy; it appears in logs and warning messages;
    <address>: the server's address; hostnames are supported;
    [:[port]]: port mapping; when omitted, the port bound in bind is used;
    [param*]: parameters
        maxconn <maxconn>: maximum number of concurrent connections for this server;
        backlog <backlog>: length of the backlog queue used once this server's connection limit is reached; size it based on load testing;
        backup: marks this server as a backup server;
        check: performs health checks on this server;
            addr: the IP address used for the checks; another IP of the machine can be checked;
            port: the port the check is performed against;
            inter <delay>: interval between two consecutive checks, 2000 ms by default;
            rise <count>: number of consecutive successful checks before the server is marked available; default 2;
            fall <count>: number of consecutive failed checks before the server is marked unavailable; default 3;
            There are three kinds of health checks: network-layer, transport-layer and application-layer. A network-layer check is just a ping and is not very accurate; a transport-layer check probes the port and confirms the service is still running; an application-layer check is the most accurate, since it actually fetches a page or other resource;
            Note: "httpchk", "smtpchk", "mysql-check", "pgsql-check" and "ssl-hello-chk" define application-layer check methods;
        cookie <value>: assigns this server its cookie value, used for cookie-based session stickiness;
        disabled: marks the server as unavailable;
        on-error <mode>: action to take after a failed health check on the server;
            - fastinter: force fastinter (re-check quickly)
            - fail-check: simulate a failed check, also forces fastinter (default)
            - sudden-death: simulate a pre-fatal failed health check, one more failed check will mark a server down, forces fastinter
            - mark-down: mark the server immediately down and force fastinter
        redir <prefix>: redirects all GET and HEAD requests destined to this server to the given URL;
        weight <weight>: the server's weight, 1 by default;
        (State transitions driven by the checks: OK --> PROBLEM only after `fall` consecutive failures, e.g. OK --> PROBLEM --> PROBLEM --> PROBLEM; PROBLEM --> OK only after `rise` consecutive successes.)
Parameters related to the statistics interface:
    stats enable
        Enables the statistics page with its default parameters:
- stats uri : /haproxy?stats
- stats realm : "HAProxy Statistics"
- stats auth : no authentication
- stats scope : no restriction
stats auth <user>:<passwd>
    Account and password for authentication; may be used multiple times;
stats realm <realm>
    The realm presented during authentication;
stats uri <prefix>
    Customizes the stats page URI;
stats refresh <delay>
    Sets the automatic refresh interval;
stats admin { if | unless } <cond>
    Enables the management functions on the stats page.
Configuration example:
    listen stats
        bind :9099
        stats enable
        stats realm HAProxy\ Stats\ Page
        stats auth admin:admin
        stats admin if TRUE
maxconn <conns>
    Maximum number of concurrent connections for the given frontend; 2000 by default. Fix the maximum number of concurrent connections on a frontend.
mode { tcp|http|health }
    Defines haproxy's working mode;
    tcp: layer-4 proxying; can proxy mysql, pgsql, ssh, ssl and other protocols;
    http: used only when the proxied protocol is http;
    health: health-check response mode; incoming connections are answered with "OK" and then closed;
    Example:
        listen ssh
            bind :22022
            balance leastconn
            mode tcp
            server sshsrv1 172.16.100.6:22 check
            server sshsrv2 172.16.100.7:22 check
cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ] [ postonly ] [ preserve ] [ httponly ] [ secure ] [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ]
    <name>: is the name of the cookie which will be monitored, modified or inserted in order to bring persistence.
    rewrite: rewrite an existing cookie;
    insert: insert a new cookie;
    prefix: prefix an existing cookie;
    Cookie-based session stickiness:
        backend websrvs
            cookie WEBSRV insert nocache indirect
            server srv1 172.16.100.6:80 weight 2 check rise 1 fall 2 maxconn 3000 cookie srv1
            server srv2 172.16.100.7:80 weight 1 check rise 1 fall 2 maxconn 3000 cookie srv2
    Note: the cookie implements session binding, but curl does not use cookies by default;
option forwardfor [ except <network> ] [ header <name> ] [ if-none ]
    Enable insertion of the X-Forwarded-For header to requests sent to servers.
    Adds an "X-Forwarded-For" header, whose value is the front-end client's address, to the requests haproxy sends to the back-end hosts, so that the real client IP reaches the back end;
    [ except <network> ]: the header is not added when the request comes from the given network;
    [ header <name> ]: use a custom header name instead of "X-Forwarded-For";
Example:
vim /etc/haproxy/haproxy.cfg        # on the haproxy (front-end) machine
    defaults
        option forwardfor except 127.0.0.0/8
vim /etc/httpd/conf/httpd.conf      # on the real servers
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b
\"%{Referer}i\" \"%{User-Agent}i\"" combined
systemctl reload httpd
tail /var/log/httpd/access_log      # the real client IP now appears in the access log
errorfile <code> <file>
    Return a file contents instead of errors generated by HAProxy.
    <code>: is the HTTP status code. Currently, HAProxy is capable of generating codes 200, 400, 403, 408, 500, 502, 503, and 504.
    404 is not in the list because it is generated by the real servers, so haproxy cannot replace it;
    <file>: designates a file containing the full HTTP response.
    Examples:
        errorfile 400 /etc/haproxy/errorfiles/400badreq.http
        errorfile 408 /dev/null          # workaround Chrome pre-connect bug
        errorfile 403 /etc/haproxy/errorfiles/403forbid.http
        errorfile 503 /etc/haproxy/errorfiles/503sorry.http
errorloc <code> <url>
errorloc302 <code> <url>
        errorloc 403 http://www.magedu.com/error_pages/403.html
reqadd <string> [{if | unless} <cond>]
    Add a header at the end of the HTTP request.
rspadd <string> [{if | unless} <cond>]
    Add a header at the end of the HTTP response.
        rspadd X-Via:\ HAProxy
    Adds a header field to response messages forwarded by haproxy; it cannot be written into the response body itself, since the response is generated by the real server;
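The added response header can be verified from a client; a hedged check against the scheduler address used in the earlier lab:
    curl -I http://172.18.62.61/    # the response headers should now include "X-Via: HAProxy"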
reqdel <search> [{if | unless} <cond>]
reqidel <search> [{if | unless} <cond>] (ignore case)
Delete all headers matching a regular expression in an HTTP request
(reqidel deletes the header while matching its name case-insensitively)
rspdel <search> [{if | unless} <cond>]
rspidel <search> [{if | unless} <cond>] (ignore case)
Delete all headers matching a regular expression in an HTTP response.
        rspidel Server.*
Logging:
    log:
        log global
        log <address> [len <length>] <facility> [<level> [<minlevel>]]
        no log
    Note: by default, logs are sent to the local log server;
        (1) local2.* /var/log/local2.log
        (2) $ModLoad imudp
            $UDPServerRun 514
    log-format <string>:
    Extra practice: use the documentation to produce log records in the combined format.
capture cookie <name> len <length>
    Capture and log a cookie in the request and in the response.
capture request header <name> len <length>
    Capture and log the last occurrence of the specified request header.
        capture request header X-Forwarded-For len 15
capture response header <name> len <length>
    Capture and log the last occurrence of the specified response header.
        capture response header Content-length len 9
        capture response header Location len 15
Enable compressed transfer for the specified MIME types:
    compression algo <algorithm> ...: enables HTTP compression and names the algorithm, gzip or deflate;
    compression type <mime type> ...: specifies the MIME types to compress; text types are the ones usually worth compressing;
Layer-7 (HTTP) health checks against the back-end servers:
    option httpchk
    option httpchk <uri>
    option httpchk <method> <uri>
    option httpchk <method> <uri> <version>
        Defines an HTTP-based layer-7 health check mechanism;
    http-check expect [!] <match> <pattern>
        Make HTTP health checks consider response contents or specific status codes.
Connection timeouts:
    timeout client <timeout>
        Set the maximum inactivity time on the client side. The default unit is milliseconds;
    timeout server <timeout>
        Set the maximum inactivity time on the server side.
    timeout http-keep-alive <timeout>
        How long a persistent connection is kept open;
    timeout http-request <timeout>
        Set the maximum allowed time to wait for a complete HTTP request.
    timeout connect <timeout>
        Set the maximum time to wait for a connection attempt to a server to succeed.
    timeout client-fin <timeout>
        Set the inactivity timeout on the client side for half-closed connections.
    timeout server-fin <timeout>
        Set the inactivity timeout on the server side for half-closed connections.
use_backend <backend> [{if | unless} <condition>]
    Switch to a specific backend if/unless an ACL-based condition is matched. Uses the given backend when the condition is met;
block { if | unless } <condition>
    Block a layer 7 request if/unless a condition is matched.
        acl invalid_src src 172.16.200.2
        block if invalid_src
        errorfile 403 /etc/fstab
http-request { allow | deny } [ { if | unless } <condition> ]
    Access control for Layer 7 requests.
tcp-request connection {accept|reject} [{if | unless} <condition>]
    Perform an action on an incoming connection depending on a layer 4 condition.
    Example:
        listen ssh
            bind :22022
            balance leastconn
            acl invalid_src src 172.16.200.2
            tcp-request connection reject if invalid_src
            mode tcp
            server sshsrv1 172.16.100.6:22 check
            server sshsrv2 172.16.100.7:22 check backup
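A hedged sketch tying the layer-7 health-check directives above to the lab backend (the /index.html URI and the expected status are assumptions; any URI the real servers actually serve will do):
    backend mywebsrvs
        balance roundrobin
        option httpchk GET /index.html HTTP/1.0
        http-check expect status 200          # mark a server up only when it answers 200
        server srv1 172.18.62.60:80 check inter 2000 rise 2 fall 3
        server srv2 172.18.62.63:80 check inter 2000 rise 2 fall 3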
acl:
The use of Access Control Lists (ACL) provides a flexible solution to perform content switching and generally to take decisions based on content extracted from the request, the response or any environmental status.
acl <aclname> <criterion> [flags] [operator] [<value>] ...
    <aclname>: ACL names must be formed from upper and lower case letters, digits, '-' (dash), '_' (underscore), '.' (dot) and ':' (colon). ACL names are case-sensitive.
    <value> types:
        - boolean
        - integer or integer range
        - IP address / network
        - string (exact, substring, suffix, prefix, subdir, domain)
        - regular expression
        - hex block
    <flags>:
        -i : ignore case during matching of all subsequent patterns.
        -m : use a specific pattern matching method
        -n : forbid the DNS resolutions
        -u : force the unique id of the ACL
        -- : force end of flags. Useful when a string looks like one of the flags.
    [operator]:
        integer matching: eq, ge, gt, le, lt
        string matching:
            - exact match (-m str) : the extracted string must exactly match the patterns;
            - substring match (-m sub) : the patterns are looked up inside the extracted string, and the ACL matches if any of them is found inside;
            - prefix match (-m beg) : the patterns are compared with the beginning of the extracted string, and the ACL matches if any of them matches.
            - suffix match (-m end) : the patterns are compared with the end of the extracted string, and the ACL matches if any of them matches.
            - subdir match (-m dir) : the patterns are looked up inside the extracted string, delimited with slashes ("/"), and the ACL matches if any of them matches.
            - domain match (-m dom) : the patterns are looked up inside the extracted string, delimited with dots ("."), and the ACL matches if any of them matches.
    Logical relations when ACLs are used as conditions:
        - AND (implicit)
        - OR (explicit with the "or" keyword or the "||" operator)
        - Negation with the exclamation mark ("!")
        if invalid_src invalid_port
        if invalid_src || invalid_port
        if ! invalid_src invalid_port      # the negation only applies to the first condition
    <criterion>:
        dst : ip
        dst_port : integer
        src : ip
        src_port : integer
            acl invalid_src src 172.16.200.2
        path : string
            This extracts the request's URL path, which starts at the first slash and ends before the question mark (without the host part). /<path>;<params>
            path     : exact string match
            path_beg : prefix match
            path_dir : subdir match
            path_dom : domain match
            path_end : suffix match
            path_len : length match
            path_reg : regex match
            path_sub : substring match
            Examples:
                path_beg /images/
                path_end .jpg .jpeg .png .gif
                path_reg ^/images.*\.jpeg$
                path_sub image
                path_dir jpegs        # exact match between two slashes
                path_dom ilinux       # exact match between two dots
                /images/jpegs/20180312/logo.jpg
        url : string
            This extracts the request's URL as presented in the request. A typical use is with prefetch-capable caches, and with portals which need to aggregate multiple information from databases and keep them in caches.
            url     : exact string match
            url_beg : prefix match
            url_dir : subdir match
            url_dom : domain match
            url_end : suffix match
            url_len : length match
            url_reg : regex match
            url_sub : substring match
        req.hdr([<name>[,<occ>]]) : string
            This extracts the last occurrence of header <name> in an HTTP request.
            hdr([<name>[,<occ>]])     : exact string match
            hdr_beg([<name>[,<occ>]]) : prefix match
            hdr_dir([<name>[,<occ>]]) : subdir match
            hdr_dom([<name>[,<occ>]]) : domain match
            hdr_end([<name>[,<occ>]]) : suffix match
            hdr_len([<name>[,<occ>]]) : length match
            hdr_reg([<name>[,<occ>]]) : regex match
            hdr_sub([<name>[,<occ>]]) : substring match
            Example:
                acl bad_curl hdr_sub(User-Agent) -i curl
                block if bad_curl
        status : integer
            Returns an integer containing the HTTP status code in the HTTP response.
    Pre-defined (built-in) ACLs:
        ACL name          Equivalent to                     Usage
        FALSE             always_false                      never match
        HTTP              req_proto_http                    match if protocol is valid HTTP
        HTTP_1.0          req_ver 1.0                       match HTTP version 1.0
        HTTP_1.1          req_ver 1.1                       match HTTP version 1.1
        HTTP_CONTENT      hdr_val(content-length) gt 0      match an existing content-length
        HTTP_URL_ABS      url_reg ^[^/:]*://                match absolute URL with scheme
        HTTP_URL_SLASH    url_beg /                         match URL beginning with "/"
        HTTP_URL_STAR     url *                             match URL equal to "*"
        LOCALHOST         src 127.0.0.1/8                   match connection from local host
        METH_CONNECT      method CONNECT                    match HTTP CONNECT method
        METH_GET          method GET HEAD                   match HTTP GET or HEAD method
        METH_HEAD         method HEAD                       match HTTP HEAD method
        METH_OPTIONS      method OPTIONS                    match HTTP OPTIONS method
        METH_POST         method POST                       match HTTP POST method
        METH_TRACE        method TRACE                      match HTTP TRACE method
        RDP_COOKIE        req_rdp_cookie_cnt gt 0           match presence of an RDP cookie
        REQ_CONTENT       req_len gt 0                      match data in the request buffer
        TRUE              always_true                       always match
        WAIT_END          wait_end                          wait for end of content analysis
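A hedged example that combines header and path ACLs for host-based plus path-based routing (the host name, backend names and addresses are made up for illustration):
    frontend web
        bind *:80
        acl host_blog   hdr_dom(host) -i blog.example.com     # requests whose Host header is blog.example.com
        acl static_path path_beg -i /images /css /js          # requests for static prefixes
        use_backend blogsrvs   if host_blog
        use_backend staticsrvs if static_path
        default_backend appsrvs
    backend blogsrvs
        server blog1 172.16.100.21:80 check
    backend staticsrvs
        server stc1 172.16.100.22:80 check
    backend appsrvs
        server app1 172.16.100.23:80 check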
HAProxy: global, proxies (frontend, backend, listen, defaults)
balance:
roundrobin, static-rr        # when the back-end web servers all serve the same static content
leastconn                    # used for long sessions such as MySQL
first
source
hdr(<name>)
uri (with hash-type)         # used when scheduling caches
url_param
Nginx scheduling algorithms: ip_hash, hash, least_conn; LVS scheduling algorithms: rr/wrr/sh/dh, lc/wlc/sed/nq/lblc/lblcr
Example of dynamic/static separation based on ACLs:
    frontend web *:80
        acl url_static path_beg -i /static /images /javascript /stylesheets
        acl url_static path_end -i .jpg .gif .png .css .js .html .txt .htm
        use_backend staticsrvs if url_static
        default_backend appsrvs
    backend staticsrvs
        balance roundrobin
        server stcsrv1 172.16.100.6:80 check
    backend appsrvs
        balance roundrobin
        server app1 172.16.100.7:80 check
        server app2 172.16.100.7:8080 check
    listen stats
        bind :9091
        stats enable
        stats auth admin:admin
        stats admin if TRUE
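Once a stats listener like the one above is running, it can also be queried from the command line; a hedged check assuming the haproxy host from the earlier lab and the admin:admin credentials:
    curl -u admin:admin 'http://172.18.62.61:9091/haproxy?stats;csv' | head    # appending ;csv returns the statistics in CSV form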
Configuring HAProxy to support HTTPS:
1. Enable SSL sessions;
    bind *:443 ssl crt /PATH/TO/SOME_PEM_FILE
    The certificate file given after crt must be in PEM format and contain both the certificate and all the private keys that match it:
        cat demo.crt demo.key > demo.pem
2. Redirect requests on port 80 to 443;
    bind *:80
    redirect scheme https if !{ ssl_fc }
    An alternative: redirect any non-SSL access, whatever the URL, to the home page of the HTTPS host;
        redirect location https://172.16.0.67/ if !{ ssl_fc }
3. How to pass the protocol and port of the original request to the back end:
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
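A hedged end-to-end sketch of an SSL-terminating frontend built from the directives above (the PEM path and backend name are assumptions):
    frontend https-in
        bind *:443 ssl crt /etc/haproxy/certs/demo.pem
        bind *:80
        redirect scheme https if !{ ssl_fc }                   # push plain-HTTP clients over to HTTPS
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        default_backend websrvs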
Features commonly used when configuring HAProxy:
    http --> https redirection; mode http; compression; conditional forwarding; scheduling algorithms; stats page; custom error pages; access control; logging;
    maximum concurrent connections (global, defaults, frontend, listen, server);
    cookie-based session stickiness;
    health checks for the back-end hosts;
    manipulation of request and response headers.
Practice (blog) assignment:
    http:
    (1) Deploy WordPress with dynamic/static separation; both the dynamic and the static groups must be load-balanced; pay attention to session handling;
    (2) Add varnish between haproxy and the back-end hosts for caching;
    (3) Provide the design topology and write it up as a blog post;
    (4) Requirements for the haproxy setup:
        (a) stats page, with the management interface usable only via local access;
        (b) dynamic/static separation;
        (c) consider a suitable scheduling algorithm for each server group;
        (d) compress appropriate content types;
Original source: http://blog.51cto.com/angwoyufengtian/2126391