Load-test performance comparison between Traefik and Rancher's built-in HAProxy

### ab performance metrics

Several metrics matter when running a performance test:

1. Throughput (Requests per second)

A quantitative measure of the server's ability to handle requests concurrently, in reqs/s: the number of requests processed per unit of time at a given number of concurrent users. The maximum number of requests that can be handled per unit of time at a given concurrency is called the maximum throughput.

Remember: throughput is defined relative to the number of concurrent users. That statement carries two implications:

a. Throughput depends on the number of concurrent users

b. At different numbers of concurrent users, throughput is generally different

Formula: total requests divided by the time taken to complete those requests, i.e.

Requests per second = Complete requests / Time taken for tests

Note that this value reflects the overall performance of the machine under test; the higher, the better.

2. Concurrent connections (the number of concurrent connections)

The number of concurrent connections is the number of requests the server is holding at a given moment; put simply, each connection is one session.

3. Concurrent users (Concurrency Level)

Be careful to distinguish this from the number of concurrent connections: one user may open several sessions, i.e. connections, at the same time. Under HTTP/1.1, IE7 keeps 2 concurrent connections, IE8 keeps 6, and Firefox 3 keeps 4, so the number of concurrent users is the connection count divided by that per-browser base (for example, 100 connections from Firefox 3 correspond to roughly 25 users).

4. Average wait time per user request (Time per request)

Formula: time taken to complete all requests divided by (total requests / number of concurrent users), i.e.:

Time per request = Time taken for tests / (Complete requests / Concurrency Level)

5. Average wait time per request on the server (Time per request: across all concurrent requests)

Formula: time taken to complete all requests divided by the total number of requests, i.e.:

Time per request (across all concurrent requests) = Time taken for tests / Complete requests

As you can see, this is the reciprocal of the throughput.

It also equals the average wait time per user request divided by the number of concurrent users, i.e.

Time per request (across all concurrent requests) = Time per request / Concurrency Level
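
As a worked example, plugging the numbers from the sample output in the next section (50000 completed requests, concurrency level 1000, 48.650 seconds total) into the formulas above:

```
Requests per second                               = 50000 / 48.650 s          ≈ 1027.75 [#/sec]
Time per request                                  = 48.650 s / (50000 / 1000) = 0.973 s ≈ 973.00 [ms]
Time per request (across all concurrent requests) = 48.650 s / 50000          ≈ 0.97 [ms]   (= 973.00 ms / 1000)
```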

### What the test output means

```

Document Path: /a.php // the requested resource

Document Length: 0 bytes // length of the returned document, excluding response headers

Concurrency Level: 1000 // number of concurrent requests

Time taken for tests: 48.650 seconds // total time of the test

Complete requests: 50000 // total number of requests

Failed requests: 0 // number of failed requests

Broken pipe errors: 0

Total transferred: 9750000 bytes

HTML transferred: 0 bytes

Requests per second: 1027.75 [#/sec] (mean) // average requests per second

Time per request: 973.00 [ms] (mean) // average time spent per request

Time per request: 0.97 [ms] (mean, across all concurrent requests) // the value above divided by the concurrency level

Transfer rate: 200.41 [Kbytes/sec] received // data transfer rate

```

ab -n 50000 -c 10 http://nginx.test.local:8080/

-c sets the concurrency, e.g. -c 100 issues 100 requests at a time

-n sets the total number of requests, e.g. -n 10000 sends 10000 requests in total
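
The runs below repeat this at several concurrency levels against each proxy. A minimal sketch of how such a sweep can be scripted (assuming ab is installed and nginx.test.local resolves to the proxy under test, as in this article):

```bash
#!/bin/bash
# Sweep a few concurrency levels against the same endpoint and keep only
# the throughput and latency lines of the ab report for comparison.
URL="http://nginx.test.local:8080/"

for c in 10 50 100; do
    echo "== concurrency: $c =="
    ab -n 50000 -c "$c" "$URL" | grep -E "Requests per second|Time per request"
done
```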

++++++++++++++++++++++++++++++ 100 concurrency +++++++++ backend: 9 containers +++++++++++++++++++++++

### Traefik forwarding

ab -n 50000 -c 100 http://nginx.test.local:8080/

Backend: 6 containers

```

Concurrency Level:      100

Time taken for tests:   5.351 seconds

Complete requests:      50000

Failed requests:        0

Write errors:           0

Total transferred:      11600000 bytes

HTML transferred:       1000000 bytes

Requests per second:    9344.50 [#/sec] (mean)

Time per request:       10.701 [ms] (mean)

Time per request:       0.107 [ms] (mean, across all concurrent requests)

Transfer rate:          2117.11 [Kbytes/sec] received

```

### Rancher cluster HAProxy

ab -n 50000 -c 100  http://nginx.test.local:8080/

```

Concurrency Level:      100

Time taken for tests:   10.822 seconds

Complete requests:      50000

Failed requests:        0

Write errors:           0

Total transferred:      12550000 bytes

HTML transferred:       1000000 bytes

Requests per second:    4620.27 [#/sec] (mean)

Time per request:       21.644 [ms] (mean)

Time per request:       0.216 [ms] (mean, across all concurrent requests)

Transfer rate:          1132.51 [Kbytes/sec] received

```

++++++++++++++++++++++++++++++ 100 concurrency +++++++++ backend: 9 containers +++++++++++++++++++++++

### Traefik forwarding

ab -n 5000000 -c 100 -t 5 http://nginx.test.local:8080/

Backend: 9 containers

```

Concurrency Level:      100

Time taken for tests:   4.728 seconds

Complete requests:      50000

Failed requests:        0

Write errors:           0

Total transferred:      11600000 bytes

HTML transferred:       1000000 bytes

Requests per second:    10575.30 [#/sec] (mean)

Time per request:       9.456 [ms] (mean)

Time per request:       0.095 [ms] (mean, across all concurrent requests)

Transfer rate:          2395.97 [Kbytes/sec] received

```

### Rancher cluster HAProxy forwarding

ab -n 5000000 -c 100 http://nginx.test.local:8080/

```

Concurrency Level:      100

Time taken for tests:   953.029 seconds

Complete requests:      5000000

Failed requests:        0

Write errors:           0

Total transferred:      1255000000 bytes

HTML transferred:       100000000 bytes

Requests per second:    5246.43 [#/sec] (mean)

Time per request:       19.061 [ms] (mean)

Time per request:       0.191 [ms] (mean, across all concurrent requests)

Transfer rate:          1285.99 [Kbytes/sec] received

```

++++++++++++++++++++++++++++++ 50 concurrency +++++++++ backend: 9 containers +++++++++++++++++++++++

### Traefik forwarding

ab -n 5000000 -c 50 http://nginx.test.local:8080/

```

Concurrency Level:      50

Time taken for tests:   545.354 seconds

Complete requests:      5000000

Failed requests:        0

Write errors:           0

Total transferred:      1160000000 bytes

HTML transferred:       100000000 bytes

Requests per second:    9168.36 [#/sec] (mean)

Time per request:       5.454 [ms] (mean)

Time per request:       0.109 [ms] (mean, across all concurrent requests)

Transfer rate:          2077.21 [Kbytes/sec] received

```

### Rancher cluster HAProxy forwarding

ab -n 5000000 -c 50 http://nginx.test.local:8080/

```

Concurrency Level:      50

Time taken for tests:   1082.314 seconds

Complete requests:      5000000

Failed requests:        0

Write errors:           0

Total transferred:      1255000000 bytes

HTML transferred:       100000000 bytes

Requests per second:    4619.73 [#/sec] (mean)

Time per request:       10.823 [ms] (mean)

Time per request:       0.216 [ms] (mean, across all concurrent requests)

Transfer rate:          1132.38 [Kbytes/sec] received

```

++++++++++++++++++++++++++++++ 10 concurrency +++++++++ backend: 9 containers +++++++++++++++++++++++

### Traefik forwarding

ab -n 5000000 -c 10 http://nginx.test.local:8080/

Backend: 9 containers

```

Concurrency Level:      10

Time taken for tests:   582.869 seconds

Complete requests:      5000000

Failed requests:        0

Write errors:           0

Total transferred:      1160000000 bytes

HTML transferred:       100000000 bytes

Requests per second:    8578.26 [#/sec] (mean)

Time per request:       1.166 [ms] (mean)

Time per request:       0.117 [ms] (mean, across all concurrent requests)

Transfer rate:          1943.51 [Kbytes/sec] received

```

### Rancher cluster HAProxy forwarding

ab -n 5000000 -c 10 http://nginx.test.local:8080/

```

Concurrency Level:      10

Time taken for tests:   1340.674 seconds

Complete requests:      5000000

Failed requests:        0

Write errors:           0

Total transferred:      1255000000 bytes

HTML transferred:       100000000 bytes

Requests per second:    3729.47 [#/sec] (mean)

Time per request:       2.681 [ms] (mean)

Time per request:       0.268 [ms] (mean, across all concurrent requests)

Transfer rate:          914.16 [Kbytes/sec] received

```

### Traefik vs nginx performance comparison

https://docs.traefik.io/benchmarks/
