Deploying a highly available PostgreSQL cluster on CentOS 7 with Patroni + etcd: the Patroni part

Test environment: CentOS 7.4 (clean install)

PostgreSQL version: 9.6.15

etcd version: 3.3.11

Patroni version:

IP plan

192.168.216.130 node1 master

192.168.216.132 node2 slave

192.168.216.134 node3 slave

For the etcd cluster deployment, see the previous article: https://www.cnblogs.com/caidingyu/p/11408389.html

For PostgreSQL deployment, see: https://www.cnblogs.com/virtulreal/p/9921978.html

Modify postgresql.conf on node1 as follows

max_connections = '100'
max_wal_senders = '10'
port = '5432'
listen_addresses = '0.0.0.0'
synchronous_commit = on
full_page_writes = on
wal_log_hints = on
synchronous_standby_names = '*'
max_replication_slots = 10
wal_level = replica
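
Several of these settings (wal_level, wal_log_hints, max_wal_senders) only take effect after a restart. Once PostgreSQL is back up, a quick check that they landed:

postgres=# select name, setting from pg_settings where name in ('wal_level','wal_log_hints','max_wal_senders','max_replication_slots');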

Modify pg_hba.conf on node1 as follows

[root@node1 data]# more pg_hba.conf|grep -v ^#|grep -v ^$
local   all             all                                     peer
host    all             all             127.0.0.1/32            md5
host    all             postgres        127.0.0.1/32            md5
host    all             all             192.168.216.0/24        md5
host    all             all             ::1/128                 md5
local   replication     replicator                              peer
host    replication     replicator      127.0.0.1/32            md5
host    replication     replicator      ::1/128                 md5
host    replication     replicator      192.168.216.130/32      md5
host    replication     replicator      192.168.216.132/32      md5
host    replication     replicator      192.168.216.134/32      md5
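
pg_hba.conf changes only need a reload, not a restart; either of the following applies them:

systemctl reload postgresql-9.6
postgres=# select pg_reload_conf();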

On node1, create the replication user and the physical replication slots. This step is critical: Patroni relies on these slots.

postgres=# create user replicator replication login encrypted password '1qaz2wsx';
postgres=# alter user postgres with password '1qaz2wsx';
postgres=# select * from pg_create_physical_replication_slot('pgsql96_node1');
postgres=# select * from pg_create_physical_replication_slot('pgsql96_node2');
postgres=# select * from pg_create_physical_replication_slot('pgsql96_node3');
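
The slots can be verified right away; they will show active = f until the standbys attach:

postgres=# select slot_name, slot_type, active from pg_replication_slots;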

Configure streaming replication on node2

systemctl stop postgresql-9.6
su - postgres
cd /var/lib/pgsql/9.6/data
rm -rf ./*
/usr/pgsql-9.6/bin/pg_basebackup -h 192.168.216.130 -D /var/lib/pgsql/9.6/data -U replicator -v -P -R

vi recovery.conf
recovery_target_timeline = 'latest'
standby_mode = 'on'
primary_conninfo = 'host=192.168.216.130 port=5432 user=replicator password=1qaz2wsx'
primary_slot_name = 'pgsql96_node2'
trigger_file = '/tmp/postgresql.trigger.5432'

Run exit to return to the root user, then start the service:
systemctl start postgresql-9.6


Configure streaming replication on node3

systemctl stop postgresql-9.6
su - postgres
cd /var/lib/pgsql/9.6/data
rm -rf ./*
/usr/pgsql-9.6/bin/pg_basebackup -h 192.168.216.130 -D /var/lib/pgsql/9.6/data -U replicator -v -P -R

vi recovery.conf
recovery_target_timeline = 'latest'
standby_mode = 'on'
primary_conninfo = 'host=192.168.216.130 port=5432 user=replicator password=1qaz2wsx'
primary_slot_name = 'pgsql96_node3'
trigger_file = '/tmp/postgresql.trigger.5432'

Run exit to return to the root user, then start the service:
systemctl start postgresql-9.6
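
On node2 and node3, confirm each instance came up as a standby — pg_is_in_recovery() should return t on both:

su - postgres -c "psql -c 'select pg_is_in_recovery();'"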

Connect to the database on node1 and check the replication status

select client_addr,
       pg_xlog_location_diff(sent_location, write_location) as write_delay,
       pg_xlog_location_diff(sent_location, flush_location) as flush_delay,
       pg_xlog_location_diff(sent_location, replay_location) as replay_delay
from pg_stat_replication;
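
The standbys offer the mirror-image view; pg_stat_wal_receiver is available from 9.6 onward, so on node2 or node3:

select status, received_lsn, latest_end_lsn from pg_stat_wal_receiver;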

Download and install Patroni. If you run into network problems, rerun pip install a few times or switch to another pip mirror.

yum install -y gcc
yum install -y python-devel.x86_64
cd /tmp
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install psycopg2-binary
pip install 'patroni[etcd,consul]'

Verify the installation succeeded

which patroni
patroni --help
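
To check exactly which Patroni version pip pulled in (it can fill in the version line at the top):

patroni --version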


The Patroni configuration file on node1 is shown below; it must be created by hand.

mkdir -p /usr/patroni/conf
cd /usr/patroni/conf/
cat /usr/patroni/conf/patroni_postgresql.yml
scope: pgsql96
namespace: /pgsql/
name: pgsql96_node1

restapi:
  listen: 192.168.216.130:8008
  connect_address: 192.168.216.130:8008

etcd:
  host: 192.168.216.130:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        listen_addresses: "0.0.0.0"
        port: 5432
        wal_level: logical
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: gzip < %p > /data/backup/pgwalarchive/%f.gz
#      recovery_conf:
#        restore_command: gunzip < /data/backup/pgwalarchive/%f.gz > %p

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.216.130:5432
  data_dir: /var/lib/pgsql/9.6/data
  bin_dir: /usr/pgsql-9.6/bin
#  config_dir: /etc/postgresql/9.6/main
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: 1qaz2wsx

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false
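
Before the first start it is worth confirming the file parses as YAML, since a mis-indented line is a common cause of a failed start. A small sketch using PyYAML (installed by pip as a Patroni dependency); repeat on each node after creating its file:

python -c "import yaml; yaml.safe_load(open('/usr/patroni/conf/patroni_postgresql.yml')); print('yaml ok')"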

The Patroni configuration file on node2 (only name, the restapi addresses, the etcd host, and the postgresql connect_address differ from node1):

[root@node2 etcd]# cat /usr/patroni/conf/patroni_postgresql.yml
scope: pgsql96
namespace: /pgsql/
name: pgsql96_node2

restapi:
  listen: 192.168.216.132:8008
  connect_address: 192.168.216.132:8008

etcd:
  host: 192.168.216.132:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        listen_addresses: "0.0.0.0"
        port: 5432
        wal_level: logical
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: gzip < %p > /data/backup/pgwalarchive/%f.gz
#      recovery_conf:
#        restore_command: gunzip < /data/backup/pgwalarchive/%f.gz > %p

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.216.132:5432
  data_dir: /var/lib/pgsql/9.6/data
  bin_dir: /usr/pgsql-9.6/bin
#  config_dir: /etc/postgresql/9.6/main
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: 1qaz2wsx

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

The Patroni configuration file on node3:

scope: pgsql96
namespace: /pgsql/
name: pgsql96_node3

restapi:
  listen: 192.168.216.134:8008
  connect_address: 192.168.216.134:8008

etcd:
  host: 192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        listen_addresses: "0.0.0.0"
        port: 5432
        wal_level: logical
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: gzip < %p > /data/backup/pgwalarchive/%f.gz
#      recovery_conf:
#        restore_command: gunzip < /data/backup/pgwalarchive/%f.gz > %p

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.216.134:5432
  data_dir: /var/lib/pgsql/9.6/data
  bin_dir: /usr/pgsql-9.6/bin
#  config_dir: /etc/postgresql/9.6/main
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: 1qaz2wsx

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

Start Patroni manually

Start it on node1, node2, and node3 in turn:

patroni /usr/patroni/conf/patroni_postgresql.yml

Check the Patroni cluster status

Open a second terminal window and run patronictl -c /usr/patroni/conf/patroni_postgresql.yml list
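
Member state is also exposed over Patroni's REST API on the restapi port configured above: /patroni returns the member's full status, and /master answers 200 only on the current leader (useful for load-balancer health checks):

curl -s http://192.168.216.130:8008/patroni
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.216.130:8008/master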

Inspect the cluster information stored in etcd

etcdctl ls /pgsql/pgsql96

etcdctl get /pgsql/pgsql96/members/pgsql96_node1

To have Patroni start automatically at boot, set it up as patroni.service; this must be done on all three nodes.

[root@node1 data]# vi /etc/systemd/system/patroni.service
[root@node1 data]# cat /etc/systemd/system/patroni.service
[Unit]
Description=patroni - a high-availability PostgreSQL
Documentation=https://patroni.readthedocs.io/en/latest/index.html
After=syslog.target network.target etcd.service
Wants=network-online.target

[Service]
Type=simple
User=postgres
Group=postgres
PermissionsStartOnly=true
ExecStart=/usr/bin/patroni /usr/patroni/conf/patroni_postgresql.yml
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536
KillMode=process
KillSignal=SIGINT
Restart=on-abnormal
RestartSec=30s
TimeoutSec=0

[Install]
WantedBy=multi-user.target
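
systemd will not see the new unit until its configuration is reloaded, so run this once on each node before the commands below:

systemctl daemon-reload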

Disable PostgreSQL's own autostart so that Patroni alone manages PostgreSQL:

systemctl status patroni
systemctl start patroni
systemctl enable patroni

systemctl status postgresql-9.6
systemctl disable postgresql-9.6

systemctl status etcd
systemctl enable etcd

How to switch the Leader

Run patronictl -c /usr/patroni/conf/patroni_postgresql.yml switchover

Run patronictl -c /usr/patroni/conf/patroni_postgresql.yml list a few more times to refresh; you will see the Leader move from node2 to node1.
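
The same switchover can also be requested through the REST API instead of patronictl — a sketch, where the leader and candidate values must match the member names in the current cluster state (here, moving the leader from node2 to node1):

curl -s -XPOST http://192.168.216.130:8008/switchover -d '{"leader":"pgsql96_node2","candidate":"pgsql96_node1"}'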


Original article: https://www.cnblogs.com/caidingyu/p/11408502.html
