Highly Available FastDFS: Multiple Groups, Multiple Storages, Multiple Trackers in Active/Standby, Combined with Spring Boot

FastDFS Preliminaries

Before we start: this is not an introductory article. Two earlier articles already covered the FastDFS and Spring Boot fundamentals; please read those first:

If you need HTTPS support in FastDFS, see:

In the architecture diagram below, every node can be scaled out horizontally without limit. When a new Storage server is added to a Group, it automatically synchronizes all of that Group's data. When a new Group is added, only the front-end Nginx servers need to be reconfigured.

  • FastDFS used to serve files over HTTP itself, but its HTTP performance was poor, so the built-in HTTP server was removed in versions after V4.05.
  • Storage servers within a Group replicate data to one another. If a requested file is still being replicated and the server handling the request is not the source, fastdfs-nginx-module redirects the request to the source server.
  • fastdfs-nginx-module supports multiple groups: a single server can host Storages belonging to different Groups. Storages in the same Group must use the same port.
  • Nginx provides load balancing and caching for FastDFS.
  • The total capacity of a FastDFS cluster is the sum of the capacities of all Groups; a Group's capacity equals that of its smallest Storage server.
  • Groups are typically used to isolate different kinds of data.
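The capacity rule in the last two points can be checked with a quick calculation (the per-server capacities below are made-up numbers, not from this cluster):

```shell
# A group's capacity equals that of its smallest Storage server;
# the cluster's capacity is the sum over all groups.
min3() {
  m=$1
  if [ "$2" -lt "$m" ]; then m=$2; fi
  if [ "$3" -lt "$m" ]; then m=$3; fi
  echo "$m"
}

g1=$(min3 500 1000 750)    # hypothetical group1 Storage sizes in GB
g2=$(min3 2000 2000 2000)  # hypothetical group2 Storage sizes in GB
total=$((g1 + g2))
echo "group1=${g1}GB group2=${g2}GB total=${total}GB"
```

Adding a larger disk to a group therefore does not raise that group's capacity; only upgrading its smallest server does.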

FastDFS Distributed File Cluster HA Architecture Diagram

Machine configuration in the cluster

Software installed           Hostname                      Service IP     Management IP
fastdfs+fastdfs-nginx-module fastdfs-storage1-group1 192.168.80.11 192.168.10.11
fastdfs+fastdfs-nginx-module fastdfs-storage2-group1 192.168.80.12 192.168.10.12
fastdfs+fastdfs-nginx-module fastdfs-storage3-group1 192.168.80.13 192.168.10.13
fastdfs+fastdfs-nginx-module fastdfs-storage4-group2 192.168.80.14 192.168.10.14
fastdfs+fastdfs-nginx-module fastdfs-storage5-group2 192.168.80.15 192.168.10.15
fastdfs+fastdfs-nginx-module fastdfs-storage6-group2 192.168.80.16 192.168.10.16
fastdfs+fastdfs-nginx-module fastdfs-storage7-group3 192.168.80.17 192.168.10.17
fastdfs+fastdfs-nginx-module fastdfs-storage8-group3 192.168.80.18 192.168.10.18
fastdfs+fastdfs-nginx-module fastdfs-storage9-group3 192.168.80.19 192.168.10.19
fastdfs+fastdfs-nginx-module fastdfs-storage10-group1-new 192.168.80.20 192.168.10.20
fastdfs fastdfs-tracker1 192.168.80.21 192.168.10.21
fastdfs fastdfs-tracker2 192.168.80.22 192.168.10.22
fastdfs fastdfs-tracker3 192.168.80.23 192.168.10.23
nginx fastdfs-nginx0 192.168.80.50 192.168.10.50
nginx fastdfs-nginx1 192.168.80.51 192.168.10.51

(1) Deploy FastDFS on all servers

# Update the system and install the build dependencies
yum update
mkdir /source
cd /source
yum install -y gcc gcc-c++ make cmake wget libevent

# Download and build libfastcommon first, then FastDFS itself
wget https://github.com/happyfish100/libfastcommon/archive/V1.0.35.tar.gz
wget https://github.com/happyfish100/fastdfs/archive/V5.10.tar.gz
tar -zxvf V1.0.35.tar.gz
tar -zxvf V5.10.tar.gz
cd libfastcommon-1.0.35
./make.sh
./make.sh install
ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
cd ../
cd fastdfs-5.10/
./make.sh
./make.sh install
cd ../

# Create working configuration files from the samples
cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf

# Data directories for the tracker, the storage daemon, the client,
# and the six store paths (one per disk/volume)
mkdir -p /data/fdfs/tracker
mkdir -p /data/fdfs/storage
mkdir -p /data/fdfs/client
mkdir -p /data/fdfs/disks/volume0
mkdir -p /data/fdfs/disks/volume1
mkdir -p /data/fdfs/disks/volume2
mkdir -p /data/fdfs/disks/volume3
mkdir -p /data/fdfs/disks/volume4
mkdir -p /data/fdfs/disks/volume5

(2) Deploy fastdfs-nginx-module on the Storage servers

Deployment steps

# Add the nginx user
useradd nginx -s /sbin/nologin -M

# Install the pcre (Perl Compatible Regular Expressions) library so that
# Nginx can use the rewrite module for URL rewriting
yum install pcre pcre-devel perl-ExtUtils-Embed -y

# Install openssl-devel so that Nginx can serve HTTPS
yum install openssl-devel -y

# Download the packages
cd /source
wget http://nginx.org/download/nginx-1.16.1.tar.gz
wget http://nchc.dl.sourceforge.net/project/fastdfs/FastDFS%20Nginx%20Module%20Source%20Code/fastdfs-nginx-module_v1.16.tar.gz

# Extract the packages
tar xvf fastdfs-nginx-module_v1.16.tar.gz
tar -xvf nginx-1.16.1.tar.gz

# Create the required symlinks and copy the configuration files
ln -s /usr/include/fastdfs/ /usr/local/include/fastdfs
ln -s /usr/include/fastcommon/ /usr/local/include/fastcommon
cp fastdfs-5.10/conf/http.conf /etc/fdfs/
cp fastdfs-5.10/conf/mime.types /etc/fdfs/
cp fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
mkdir -p /data/fdfs/fastdfs-nginx-module

# Configure, build, and install
cd nginx-1.16.1
./configure --user=nginx --group=nginx --prefix=/application/nginx-1.16.1 --with-http_ssl_module --with-http_gzip_static_module --with-poll_module --with-file-aio --with-http_realip_module --with-http_addition_module --with-http_random_index_module --with-pcre --with-http_stub_status_module --with-stream --add-module=/source/fastdfs-nginx-module/src/
make
make install
ln -s /application/nginx-1.16.1/ /application/nginx

# Fix ownership and create the systemd unit file
chown -R nginx:nginx /application/nginx*
touch /usr/lib/systemd/system/nginx.service

Nginx configuration file

user  nginx;
worker_processes  8;

error_log  logs/error.log;

pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile        on;

    keepalive_timeout  65;

    gzip  on;

    server {
        listen       80;

        # Set to this host's domain name
        server_name  fastdfs-storage1-group1;

        access_log  logs/host.access.log  main;

        location ~ /group[0-9] {
            ngx_fastdfs_module;
        }
    }
}

Configure the mod_fastdfs.conf file

base_path=/data/fdfs/fastdfs-nginx-module
tracker_server=192.168.80.21:22122
tracker_server=192.168.80.22:22122
tracker_server=192.168.80.23:22122

# Name of the group this Storage belongs to
group_name=group1

url_have_group_name = true
store_path_count=6
store_path0=/data/fdfs/disks/volume0
store_path1=/data/fdfs/disks/volume1
store_path2=/data/fdfs/disks/volume2
store_path3=/data/fdfs/disks/volume3
store_path4=/data/fdfs/disks/volume4
store_path5=/data/fdfs/disks/volume5
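The Mxx segment of a returned file ID selects one of these store paths (M00 → store_path0, M01 → store_path1, and so on), and the file itself sits under that path's data/ directory. A small sketch of the mapping, using a made-up file ID:

```shell
# Hypothetical file ID as returned by fdfs_upload_file
file_id="group1/M02/00/00/wKhQDV2y9cCAUPEnACdpr-L7emo555.png"

rest=${file_id#*/}        # drop the group name: M02/00/00/...
mpart=${rest%%/*}         # "M02" -> store_path2
idx=${mpart#M}            # "02"
idx=${idx#0}              # "2"  (strip a leading zero)
disk_path="/data/fdfs/disks/volume${idx}/data/${rest#*/}"
echo "$disk_path"
```

This is also why store_path_count and the store_pathN entries here must match the storage.conf of every Storage server in the group.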

Nginx systemd unit file

[Unit]
Description=nginx
Documentation=http://nginx.org/en/docs/
After=network.target

[Service]
Type=forking
PIDFile=/application/nginx/logs/nginx.pid
ExecStartPre=/application/nginx/sbin/nginx -t -c /application/nginx/conf/nginx.conf
ExecStart=/application/nginx/sbin/nginx -c /application/nginx/conf/nginx.conf
ExecReload=/application/nginx/sbin/nginx -s reload
ExecStop=/application/nginx/sbin/nginx -s stop
Restart=on-abort
PrivateTmp=true

[Install]
WantedBy=multi-user.target

(3) Configure the Tracker server cluster

# The tracker's bind IP address or hostname; a hostname requires either
# static entries in /etc/hosts or DNS
bind_addr=192.168.80.21

# Directory for the tracker's data and logs
base_path=/data/fdfs/tracker

(4) Configure the Storage server cluster

# The Group this Storage server belongs to
group_name=group1

# IP address or hostname to bind to
bind_addr=192.168.80.12

# Log storage directory
base_path=/data/fdfs/storage

# Number of data storage directories
store_path_count=6

# Data storage directories
store_path0=/data/fdfs/disks/volume0
store_path1=/data/fdfs/disks/volume1
store_path2=/data/fdfs/disks/volume2
store_path3=/data/fdfs/disks/volume3
store_path4=/data/fdfs/disks/volume4
store_path5=/data/fdfs/disks/volume5

# Tracker servers' IP addresses or hostnames
tracker_server=192.168.80.21:22122
tracker_server=192.168.80.22:22122
tracker_server=192.168.80.23:22122

Start the services on every host and enable them at boot.

(5) Configure load balancing with Nginx on the NGINX nodes

NGINX configuration file

user  nginx;
worker_processes  8;

error_log  logs/error.log;

pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile        on;

    keepalive_timeout  65;

    gzip  on;

    # Group1 Storage nodes
    upstream g1_pool {
        server 192.168.80.11 weight=1;
        server 192.168.80.12 weight=1;
        server 192.168.80.13 weight=1;
    }

    # Group2 Storage nodes
    upstream g2_pool {
        server 192.168.80.14 weight=1;
        server 192.168.80.15 weight=1;
        server 192.168.80.16 weight=1;
    }

    # Group3 Storage nodes
    upstream g3_pool {
        server 192.168.80.17 weight=1;
        server 192.168.80.18 weight=1;
        server 192.168.80.19 weight=1;
    }

    server {
        listen       80;
        server_name  localhost;
        access_log  logs/host.access.log  main;

        location ~ /group1 {
            proxy_pass http://g1_pool;
        }

        location ~ /group2 {
            proxy_pass http://g2_pool;
        }

        location ~ /group3 {
            proxy_pass http://g3_pool;
        }

    }
}
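The three location blocks route purely on the group prefix of the request path. The decision logic amounts to the following sketch (no real HTTP involved; the demo paths are made up):

```shell
# Mirror of the location blocks above: pick an upstream from the path prefix
route() {
  case "$1" in
    /group1/*) echo g1_pool ;;
    /group2/*) echo g2_pool ;;
    /group3/*) echo g3_pool ;;
    *)         echo no_match ;;
  esac
}

route "/group2/M00/00/00/demo.png"   # served by the Group2 pool
route "/favicon.ico"                 # no group prefix, no upstream
```

Adding a new Group therefore only requires a new upstream block and one more location on these two Nginx servers, which is exactly the scaling claim made at the start of the article.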

Start all services

systemctl daemon-reload
systemctl start nginx
systemctl enable nginx

# Run these two lines on the tracker servers
systemctl restart fdfs_trackerd
systemctl enable fdfs_trackerd

# Run these two lines on the storage servers
systemctl restart fdfs_storaged
systemctl enable fdfs_storaged

(6) Test uploads with the bundled FastDFS client

Configure /etc/fdfs/client.conf on any one of the servers

base_path=/data/fdfs/client
tracker_server=192.168.80.21:22122
tracker_server=192.168.80.22:22122
tracker_server=192.168.80.23:22122

Run upload tests from that server

Repeated uploads show that the multi-Group + multi-Tracker + multi-Storage setup is working: files are distributed across different groups and store paths.

# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group2/M00/00/00/wKhQDl2y9aOAcngTACdpr-L7emo254.png
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group2/M00/00/00/wKhQD12y9aiARzwQACdpr-L7emo948.png
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group3/M00/00/00/wKhQEV2y9a6Af_mrACdpr-L7emo390.png
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group3/M01/00/00/wKhQEV2y9bKADzEIACdpr-L7emo251.png
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group3/M05/00/00/wKhQE12y9buALTHRACdpr-L7emo483.png
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group1/M00/00/00/wKhQC12y9b6AZ866ACdpr-L7emo376.png
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /source/girl.png
group1/M03/00/00/wKhQDF2y9cGAHNLqACdpr-L7emo735.png
(remaining uploads omitted; the results span group1 through group3 and store paths M00 through M05)
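A returned file ID becomes a download URL by prefixing it with one of the front-end Nginx servers (192.168.80.50/51 in the table above), since url_have_group_name is true and the location blocks route on the group name:

```shell
# One of the front-end Nginx servers from the machine table
nginx_server="192.168.80.50"

# A file ID taken from the upload output above
file_id="group3/M00/00/00/wKhQEV2y9a6Af_mrACdpr-L7emo390.png"

url="http://${nginx_server}/${file_id}"
echo "$url"
```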

(7) Integrate with a Spring Boot project and test

Using FastDFS from Spring Boot relies on the fastdfs-spring-boot-starter

Clone the project locally

git clone https://github.com/bluemiaomiao/fastdfs-spring-boot-starter.git
cd fastdfs-spring-boot-starter

Build and package with Maven, then install into the local repository

mvn clean install
mvn source:jar install
mvn javadoc:jar install

Add the dependency to the POM file

<dependency>
    <groupId>com.bluemiaomiao</groupId>
    <artifactId>fastdfs-spring-boot-starter</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Add the @EnableFastdfsClient annotation to the main configuration class

@EnableFastdfsClient
@SpringBootApplication
public class DemoApplication {

    @Autowired
    private FastdfsClientService fastdfsClientService;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

Add the configuration entries (application.properties)

fastdfs.nginx-servers=192.168.80.50:80,192.168.80.51:80
fastdfs.tracker-servers=192.168.80.21:22122,192.168.80.22:22122,192.168.80.23:22122
fastdfs.http-secret-key=2scPwMPctXhbLVOYB0jyuyQzytOofmFCBIYe65n56PPYVWrn

# The anti-steal token feature is not used in this project
fastdfs.http-anti-steal-token=false

fastdfs.http-tracker-http-port=8080
fastdfs.network-timeout=30
fastdfs.connect-timeout=5
fastdfs.connection-pool-max-idle=18
fastdfs.connection-pool-min-idle=2
fastdfs.connection-pool-max-total=18
fastdfs.charset=UTF-8

Or add the configuration entries as application.yml

fastdfs:
  charset: UTF-8
  connect-timeout: 5
  http-secret-key: 2scPwMPctXhbLVOYB0jyuyQzytOofmFCBIYe65n56PPYVWrn
  network-timeout: 30
  http-anti-steal-token: false
  http-tracker-http-port: 8080
  connection-pool-max-idle: 20
  connection-pool-max-total: 20
  connection-pool-min-idle: 2
  nginx-servers: 192.168.80.50:80,192.168.80.51:80
  tracker-servers: 192.168.80.21:22122,192.168.80.22:22122,192.168.80.23:22122

Main test code

@RestController
@RequestMapping("/file")
public class FileController {

    @Autowired
    private FastdfsClientService fastdfsClientService;

    @PostMapping("/upload")
    public String[] upload(@RequestParam("file") MultipartFile file) {
        String[] remoteInfo = null;
        try {
            remoteInfo = fastdfsClientService.upload("group1", file.getBytes(), "png", null);
        } catch (Exception e) {
            e.printStackTrace();
        }

        return remoteInfo;
    }

    @GetMapping("/download")
    public String download(@RequestParam("group") String group, @RequestParam("file_id") String fileId) {
        String url = "";
        try {
            url = fastdfsClientService.autoDownloadWithoutToken(group, fileId, UUID.randomUUID().toString());
        } catch (Exception e) {
            e.printStackTrace();
        }

        return url;
    }
}
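Once the application is running, the two endpoints can be exercised with curl. The host, port, and file names below are assumptions for illustration; the snippet only prints the commands rather than issuing them:

```shell
app="http://localhost:8080"   # assumed address of the demo application

# Upload: multipart POST with the file under the "file" parameter
upload="curl -F file=@girl.png ${app}/file/upload"

# Download: pass the group and file_id returned by the upload
download="curl '${app}/file/download?group=group1&file_id=M00/00/00/demo.png'"

printf '%s\n%s\n' "$upload" "$download"
```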

Original article: https://blog.51cto.com/xvjunjie/2445770

