Postgres-XL Cluster Deployment and Management Guide

Postgres-XL is an open-source, horizontally scalable SQL database cluster based on PostgreSQL. It is flexible enough to handle a wide variety of database workloads (the original post shows the overall architecture as a figure), for example:

  • Web 2.0 applications
  • Operational data stores
  • GIS geospatial workloads
  • Mixed business workloads
  • Write-heavy OLTP applications
  • Multi-tenant hosted environments for service providers
  • Fully ACID, maintaining transactional consistency
  • Key-value storage, including JSON
  • Business intelligence / big-data analytics requiring MPP parallelism

    The components of the cluster are described below:
  • Global Transaction Manager (GTM)
    The GTM ensures cluster-wide transaction consistency. It issues transaction IDs and snapshots as part of multi-version concurrency control (MVCC). A cluster can also be configured with one or more standby GTMs for better availability, and GTM proxies can be placed between the GTM and the coordinators to improve scalability and reduce GTM traffic.
  • GTM Standby
    The standby node for the GTM. In Postgres-XC and Postgres-XL the GTM controls all global transaction assignment, so a GTM failure would make the whole cluster unavailable. The standby is added to improve availability: if the GTM fails, the GTM Standby can be promoted to GTM and the cluster keeps working.
  • GTM Proxy
    The GTM has to communicate with every coordinator. To reduce that load, a GTM Proxy can be deployed on each coordinator machine.
  • Coordinator
    A coordinator manages user sessions and interacts with the GTM and the data nodes. It parses and plans queries, and sends a serialized global plan down to each component involved in the statement. To save machines, coordinators are usually deployed on the same hosts as the data nodes.
  • Data Node
    Data nodes are where the data is actually stored; how the data is distributed across them is configured by the DBA. For high availability, each data node can be given a hot standby for failover.
    In short, the GTM is responsible for ACID properties, guaranteeing global transaction consistency across the distributed database. Thanks to it, even though the data lives on distributed data nodes, running transactional DML through a coordinator feels like working with a single database. The coordinators do the dispatching, sending statements to the individual data nodes, and the data nodes store the data in distributed fashion.

    1. Installing Postgres-XL

    1.1 Cluster Planning

    Four machines are used. The original post shows the layout as a figure; as can also be read off the pgxc_ctl configuration in section 2.1, pg01 and pg02 host the GTM master and standby, the GTM proxies, and the coordinator masters, while pg03 and pg04 host the coordinator standbys and the datanode masters.

    1.2 Operating System Configuration

    Disable the firewall and SELinux on every node. A minimal sketch for RHEL/CentOS 7 (setenforce only lasts until reboot; the sed edit makes the change permanent):
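
    [root@pg01 ~]# systemctl stop firewalld; systemctl disable firewalld
    [root@pg01 ~]# setenforce 0
    [root@pg01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

    Then install the following packages with yum on each node: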

    [root@pg01 ~]# yum -y install bzip2 readline-devel flex make gcc rsync

    Also create the postgres user on every node:

    [root@pg01 ~]# groupadd postgres; useradd -g postgres postgres; echo redhat | passwd --stdin postgres

    After the user is created, passwordless SSH login for the postgres user also needs to be configured between all nodes. The original post omits this step; a minimal sketch (run as postgres on each node, assuming the hosts pg01 through pg04) is:
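
    [postgres@pg01 ~]$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    [postgres@pg01 ~]$ for h in pg01 pg02 pg03 pg04; do ssh-copy-id postgres@$h; done

    Then create the installation directory, switch to the postgres user, and add the following environment variables to its .bashrc: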

    [root@pg01 ~]# mkdir /u01; chown postgres:postgres /u01
    [root@pg01 ~]# su - postgres
    [postgres@pg01 ~]$ vi .bashrc
    export PGUSER=postgres
    export PGHOME=/usr/local/pgsql
    export PGXC_CTL_HOME=/u01/pgxl/pgxc_ctl
    export LD_LIBRARY_PATH=$PGHOME/lib
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
    export PATH=$PGHOME/bin:$PATH:$HOME/.local/bin:$HOME/bin
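
    Reload the profile so the new variables take effect in the current session:

    [postgres@pg01 ~]$ source ~/.bashrc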

    1.3 Installing Postgres-XL

    Install as the root user on every node; the default installation directory is /usr/local/pgsql.

    [root@pg01 ~]# wget https://www.postgres-xl.org/downloads/postgres-xl-10r1.tar.gz; tar -xzf postgres-xl-10r1.tar.gz; cd postgres-xl-10r1; ./configure && make && make install
    [root@pg01 postgres-xl-10r1]# cd contrib; make && make install
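
    As a quick sanity check, the installed binaries should report the Postgres-XL version (matching the psql banner shown later in section 3.1):

    [root@pg01 postgres-xl-10r1]# /usr/local/pgsql/bin/psql --version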

    2. Configuring the Postgres-XL Cluster

    2.1 Generating the pgxc_ctl Configuration File

    This only needs to be done on one node.

    [postgres@pg01 ~]$ pgxc_ctl
    PGXC prepare
    PGXC exit

    The prepare command writes a configuration template to $PGXC_CTL_HOME/pgxc_ctl.conf. Edit the generated file so that it reads as follows:

    pgxcInstallDir=/u01
    pgxcOwner=$USER
    pgxcUser=$pgxcOwner
    tmpDir=/tmp
    localTmpDir=$tmpDir
    #--------GTM Master--------
    gtmName=gtm_master
    gtmMasterServer=pg01
    gtmMasterPort=20001
    gtmMasterDir=$pgxcInstallDir/pgxl/nodes/gtm
    gtmExtraConfig=none
    gtmMasterSpecificExtraConfig=none
    #-------GTM Slave---------
    gtmSlave=y
    gtmSlaveName=gtm_slave
    gtmSlaveServer=pg02
    gtmSlavePort=20001
    gtmSlaveDir=$pgxcInstallDir/pgxl/nodes/gtm_slave
    gtmSlaveSpecificExtraConfig=none
    #-------GTM Proxy--------
    gtmProxyDir=$pgxcInstallDir/pgxl/nodes/gtm_pxy
    gtmProxy=y
    gtmProxyNames=(gtm_pxy1 gtm_pxy2)
    gtmProxyServers=(pg01 pg02)
    gtmProxyPorts=(20002 20002)
    gtmProxyDirs=($gtmProxyDir $gtmProxyDir)
    gtmPxyExtraConfig=none
    gtmPxySpecificExtraConfig=(none none)
    #-----Coordinators Master----------
    coordMasterDir=$pgxcInstallDir/pgxl/nodes/coord
    coordSlaveDir=$pgxcInstallDir/pgxl/nodes/coord_slave
    coordArchLogDir=$pgxcInstallDir/pgxl/nodes/coord_archlog
    coordNames=(coord1 coord2)
    coordPorts=(5433 5433)
    poolerPorts=(5434 5434)
    coordPgHbaEntries=(0.0.0.0/0)
    coordMasterServers=(pg01 pg02)
    coordMasterDirs=($coordMasterDir $coordMasterDir)
    coordMaxWALsernder=5
    coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder)
    #-----Coordinators Slave----------
    coordSlave=y
    coordSlaveSync=y
    coordSlaveServers=(pg03 pg04)
    coordSlavePorts=(5433 5433)
    coordSlavePoolerPorts=(5434 5434)
    coordSlaveDirs=($coordSlaveDir $coordSlaveDir)
    coordArchLogDirs=($coordArchLogDir $coordArchLogDir)
    coordExtraConfig=coordExtraConfig
    cat > $coordExtraConfig <<EOF
    log_destination = 'stderr'
    logging_collector = on
    log_directory = 'pg_log'
    listen_addresses = '*'
    max_connections = 1000
    EOF
    coordSpecificExtraConfig=(none none)
    coordExtraPgHba=none
    coordSpecificExtraPgHba=(none none)
    #------Datanodes Master----------
    datanodeMasterDir=$pgxcInstallDir/pgxl/nodes/dnmaster
    datanodeSlaveDir=$pgxcInstallDir/pgxl/nodes/dnslave
    datanodeArchLogDir=$pgxcInstallDir/pgxl/nodes/dn_archlog
    primaryDatanode=pg03
    datanodeNames=(datanode1 datanode2)
    datanodePorts=(5436 5436)
    datanodePoolerPorts=(5437 5437)
    datanodePgHbaEntries=(0.0.0.0/0)
    datanodeMasterServers=(pg03 pg04)
    datanodeMasterDirs=($datanodeMasterDir $datanodeMasterDir)
    datanodeMaxWalSender=0
    datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender)
    datanodeSlave=n

    2.2 Initializing the Postgres-XL Cluster

    Once initialization completes, the cluster is started automatically.

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf init all
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
******** PGXC_CTL START ***************

Current directory: /u01/pgxl
Initialize GTM master
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.

fixing permissions on existing directory /u01/pgxl/nodes/gtm ... ok
creating configuration files ... ok
creating control file ... ok

Success.
Done.
Start GTM master
server starting
Initialize GTM slave
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.

fixing permissions on existing directory /u01/pgxl/nodes/gtm_slave ... ok
creating configuration files ... ok
creating control file ... ok

Success.
Done.
Start GTM slave
server starting
Done.
Initialize all the gtm proxies.
Initializing gtm proxy gtm_pxy1.
Initializing gtm proxy gtm_pxy2.
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.

fixing permissions on existing directory /u01/pgxl/nodes/gtm_pxy ... ok
creating configuration files ... ok

Success.
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.

fixing permissions on existing directory /u01/pgxl/nodes/gtm_pxy ... ok
creating configuration files ... ok

Success.
Done.
Starting all the gtm proxies.
Starting gtm proxy gtm_pxy1.
Starting gtm proxy gtm_pxy2.
server starting
server starting
Done.
Initialize all the coordinator masters.
Initialize coordinator master coord1.
Initialize coordinator master coord2.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u01/pgxl/nodes/coord ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... creating cluster information ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u01/pgxl/nodes/coord ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... creating cluster information ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success.
Done.
Starting coordinator master.
Starting coordinator master coord1
Starting coordinator master coord2
2018-11-22 09:24:59.252 CST [13950] LOG:  listening on IPv4 address "0.0.0.0", port 5433
2018-11-22 09:24:59.252 CST [13950] LOG:  listening on IPv6 address "::", port 5433
2018-11-22 09:24:59.254 CST [13950] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5433"
2018-11-22 09:24:59.302 CST [13950] LOG:  redirecting log output to logging collector process
2018-11-22 09:24:59.302 CST [13950] HINT:  Future log output will appear in directory "pg_log".
2018-11-22 09:24:59.060 CST [13633] LOG:  listening on IPv4 address "0.0.0.0", port 5433
2018-11-22 09:24:59.061 CST [13633] LOG:  listening on IPv6 address "::", port 5433
2018-11-22 09:24:59.062 CST [13633] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5433"
2018-11-22 09:24:59.109 CST [13633] LOG:  redirecting log output to logging collector process
2018-11-22 09:24:59.109 CST [13633] HINT:  Future log output will appear in directory "pg_log".
Done.
Initialize all the coordinator slaves.
Initialize the coordinator slave coord1.
Initialize the coordinator slave coord2.
Done.
Starting all the coordinator slaves.
Starting coordinator slave coord1.
Starting coordinator slave coord2.
2018-11-22 09:25:05.987 CST [13330] LOG:  listening on IPv4 address "0.0.0.0", port 5433
2018-11-22 09:25:05.987 CST [13330] LOG:  listening on IPv6 address "::", port 5433
2018-11-22 09:25:05.989 CST [13330] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5433"
2018-11-22 09:25:06.039 CST [13330] LOG:  redirecting log output to logging collector process
2018-11-22 09:25:06.039 CST [13330] HINT:  Future log output will appear in directory "pg_log".
2018-11-22 09:25:05.517 CST [13266] LOG:  listening on IPv4 address "0.0.0.0", port 5433
2018-11-22 09:25:05.517 CST [13266] LOG:  listening on IPv6 address "::", port 5433
2018-11-22 09:25:05.518 CST [13266] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5433"
2018-11-22 09:25:05.567 CST [13266] LOG:  redirecting log output to logging collector process
2018-11-22 09:25:05.567 CST [13266] HINT:  Future log output will appear in directory "pg_log".
Done.
Initialize all the datanode masters.
Initialize the datanode master datanode1.
Initialize the datanode master datanode2.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u01/pgxl/nodes/dnmaster ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... creating cluster information ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u01/pgxl/nodes/dnmaster ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... creating cluster information ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success.
Done.
Starting all the datanode masters.
Starting datanode master datanode1.
Starting datanode master datanode2.
2018-11-22 09:25:15.129 CST [13604] LOG:  listening on IPv4 address "0.0.0.0", port 5436
2018-11-22 09:25:15.130 CST [13604] LOG:  listening on IPv6 address "::", port 5436
2018-11-22 09:25:15.131 CST [13604] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5436"
2018-11-22 09:25:15.148 CST [13604] LOG:  redirecting log output to logging collector process
2018-11-22 09:25:15.148 CST [13604] HINT:  Future log output will appear in directory "pg_log".
2018-11-22 09:25:14.657 CST [13539] LOG:  listening on IPv4 address "0.0.0.0", port 5436
2018-11-22 09:25:14.657 CST [13539] LOG:  listening on IPv6 address "::", port 5436
2018-11-22 09:25:14.659 CST [13539] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5436"
2018-11-22 09:25:14.676 CST [13539] LOG:  redirecting log output to logging collector process
2018-11-22 09:25:14.676 CST [13539] HINT:  Future log output will appear in directory "pg_log".
Done.
ALTER NODE coord1 WITH (HOST='pg01', PORT=5433);
ALTER NODE
CREATE NODE coord2 WITH (TYPE='coordinator', HOST='pg02', PORT=5433);
CREATE NODE
CREATE NODE datanode1 WITH (TYPE='datanode', HOST='pg03', PORT=5436);
CREATE NODE
CREATE NODE datanode2 WITH (TYPE='datanode', HOST='pg04', PORT=5436);
CREATE NODE
SELECT pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

CREATE NODE coord1 WITH (TYPE='coordinator', HOST='pg01', PORT=5433);
CREATE NODE
ALTER NODE coord2 WITH (HOST='pg02', PORT=5433);
ALTER NODE
CREATE NODE datanode1 WITH (TYPE='datanode', HOST='pg03', PORT=5436);
CREATE NODE
CREATE NODE datanode2 WITH (TYPE='datanode', HOST='pg04', PORT=5436);
CREATE NODE
SELECT pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

Done.
EXECUTE DIRECT ON (datanode1) 'CREATE NODE coord1 WITH (TYPE=''coordinator'', HOST=''pg01'', PORT=5433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'CREATE NODE coord2 WITH (TYPE=''coordinator'', HOST=''pg02'', PORT=5433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'ALTER NODE datanode1 WITH (TYPE=''datanode'', HOST=''pg03'', PORT=5436)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'CREATE NODE datanode2 WITH (TYPE=''datanode'', HOST=''pg04'', PORT=5436)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'SELECT pgxc_pool_reload()';
 pgxc_pool_reload
------------------
 t
(1 row)

EXECUTE DIRECT ON (datanode2) 'CREATE NODE coord1 WITH (TYPE=''coordinator'', HOST=''pg01'', PORT=5433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'CREATE NODE coord2 WITH (TYPE=''coordinator'', HOST=''pg02'', PORT=5433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'CREATE NODE datanode1 WITH (TYPE=''datanode'', HOST=''pg03'', PORT=5436)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'ALTER NODE datanode2 WITH (TYPE=''datanode'', HOST=''pg04'', PORT=5436)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'SELECT pgxc_pool_reload()';
 pgxc_pool_reload
------------------
 t
(1 row)

Done.

The cluster's node information can be obtained by querying the pgxc_node catalog.
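
For example, from a psql session on a coordinator (node_type is C for coordinators and D for datanodes):

postgres=# SELECT * FROM pgxc_node;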

2.3 Starting and Stopping the Postgres-XL Cluster

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf stop all
[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf start all
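
Individual components can be managed with the same syntax; for example, a sketch that stops and restarts a single datanode master, referring to it by its name from pgxc_ctl.conf:

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf stop datanode master datanode1
[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf start datanode master datanode1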

2.4 Monitoring Postgres-XL Cluster Services

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf monitor all
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /u01/pgxl
Running: gtm master
Running: gtm slave
Running: gtm proxy gtm_pxy1
Running: gtm proxy gtm_pxy2
Running: coordinator master coord1
Running: coordinator slave coord1
Running: coordinator master coord2
Running: coordinator slave coord2
Running: datanode master datanode1
Running: datanode slave datanode1
Running: datanode master datanode2
Running: datanode slave datanode2

3. Testing the Postgres-XL Cluster

3.1 Creating a Test Table and Inserting Data

Create a log table and load roughly 500 rows (499 in this example), as follows:

[postgres@pg01 ~]$ psql -p5433
psql (PGXL 10r1, based on PG 10.5 (Postgres-XL 10r1))
Type "help" for help.
postgres=# CREATE TABLE log (id numeric NOT NULL,stamp timestamp with time zone,user_id numeric);
postgres=# copy log from '/u02/tmp/log.csv' with csv;
COPY 499
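
The log.csv file itself is not included with the post. If you just need test rows, roughly equivalent data can be generated on the fly instead (a sketch that inserts 499 rows):

postgres=# INSERT INTO log SELECT i, now() - i * interval '1 minute', i % 100 FROM generate_series(1, 499) AS i;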

3.2 Viewing the Table's Data Distribution

The distribution can be inspected from a coordinator or on each individual data node; here, the following statement directly counts how many rows of the table each data node stores:

postgres=# SELECT xc_node_id, count(*) FROM log GROUP BY xc_node_id;
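
xc_node_id is a hidden column that records the numeric ID of the datanode storing each row. Assuming it matches pgxc_node.node_id, as in stock Postgres-XL, the counts can also be mapped back to node names:

postgres=# SELECT n.node_name, count(*) FROM log l JOIN pgxc_node n ON n.node_id = l.xc_node_id GROUP BY n.node_name;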

3.3 Notes on Table Creation

  • distribute tables
    By default, the system spreads inserted rows across the different datanodes according to the distribution rule, i.e. sharding. Each datanode stores only part of the data, while a complete view of the data is available through the coordinators. The log table created above is a distribute table. The explicit syntax is:

    postgres=# CREATE TABLE log (id numeric NOT NULL,stamp timestamp with time zone,user_id numeric) distribute by hash(id);

    The original post shows the distribute table's per-node row distribution as a figure.

  • replication tables
    Every datanode holds an identical, complete copy of the table's data: on insert, the system writes the same rows to each datanode, and a read only needs to touch any single datanode. The syntax is:
    postgres=# CREATE TABLE log2 (id numeric NOT NULL,stamp timestamp with time zone,user_id numeric) distribute by replication;
    postgres=# copy log2 from '/u02/tmp/log.csv' with csv;

    The original post likewise shows the replication table's data distribution as a figure.

    No matter which data node you query, the results are identical, as the check below illustrates.
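
    A sketch of that check from a coordinator, using EXECUTE DIRECT (which runs a statement on the named node); each datanode should report all 499 rows:

    postgres=# EXECUTE DIRECT ON (datanode1) 'SELECT count(*) FROM log2';
    postgres=# EXECUTE DIRECT ON (datanode2) 'SELECT count(*) FROM log2';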

    4. Managing Postgres-XL Slave Nodes

    To improve cluster availability, the following sections configure hot standbys for the data nodes so that failover is possible.

    4.1 Adding Slave Data Nodes

    Run the following on the GTM node (pg01) to add two slave data nodes:

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /u01/pgxl
PGXC add datanode slave datanode1 pg05 5436 5437 /u01/pgxl/nodes/dnslave /u01/pgxl/nodes/dn_slave_war /u01/pgxl/nodes/dn_archlog
PGXC add datanode slave datanode2 pg06 5436 5437 /u01/pgxl/nodes/dnslave /u01/pgxl/nodes/dn_slave_war /u01/pgxl/nodes/dn_archlog

4.2 Master/Slave Failover

First check that all services are running normally:

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf monitor all
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /u01/pgxl
Running: gtm master
Running: gtm slave
Running: gtm proxy gtm_pxy1
Running: gtm proxy gtm_pxy2
Running: coordinator master coord1
Running: coordinator slave coord1
Running: coordinator master coord2
Running: coordinator slave coord2
Running: datanode master datanode1
Running: datanode slave datanode1
Running: datanode master datanode2
Running: datanode slave datanode2

The current datanode masters are on pg03 and pg04 (shown as a figure in the original post).

Now fail the masters over to their slaves:

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf monitor all
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /u01/pgxl
Running: gtm master
Running: gtm slave
Running: gtm proxy gtm_pxy1
Running: gtm proxy gtm_pxy2
Running: coordinator master coord1
Running: coordinator slave coord1
Running: coordinator master coord2
Running: coordinator slave coord2
Running: datanode master datanode1
Running: datanode slave datanode1
Running: datanode master datanode2
Running: datanode slave datanode2
[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf failover datanode datanode1 datanode2
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /u01/pgxl
Failover specified datanodes.
Failover the datanode datanode1.
Failover datanode datanode1 using GTM itself
Actual Command: ssh postgres@pg05 "( pg_ctl promote -Z datanode -D /u01/pgxl/nodes/dnslave ) > /tmp/pg01_STDOUT_4264_0 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp postgres@pg05:/tmp/pg01_STDOUT_4264_0 /tmp/STDOUT_4264_1 > /dev/null 2>&1
Actual Command: ssh postgres@pg05 "( pg_ctl restart -w -Z datanode -D /u01/pgxl/nodes/dnslave -o -i; sleep 1 ) > /tmp/pg01_STDOUT_4264_2 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp postgres@pg05:/tmp/pg01_STDOUT_4264_2 /tmp/STDOUT_4264_3 > /dev/null 2>&1
2018-11-23 14:52:34.655 CST [26323] LOG:  listening on IPv4 address "0.0.0.0", port 5436
2018-11-23 14:52:34.655 CST [26323] LOG:  listening on IPv6 address "::", port 5436
2018-11-23 14:52:34.657 CST [26323] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5436"
2018-11-23 14:52:34.675 CST [26323] LOG:  redirecting log output to logging collector process
2018-11-23 14:52:34.675 CST [26323] HINT:  Future log output will appear in directory "pg_log".
ALTER NODE
 pgxc_pool_reload
------------------
 t
(1 row)

EXECUTE DIRECT
 pgxc_pool_reload
------------------
 t
(1 row)

EXECUTE DIRECT
 pgxc_pool_reload
------------------
 t
(1 row)

ALTER NODE
 pgxc_pool_reload
------------------
 t
(1 row)

Failover the datanode datanode2.
Failover datanode datanode2 using GTM itself
Actual Command: ssh postgres@pg06 "( pg_ctl promote -Z datanode -D /u01/pgxl/nodes/dnslave ) > /tmp/pg01_STDOUT_4264_4 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp postgres@pg06:/tmp/pg01_STDOUT_4264_4 /tmp/STDOUT_4264_5 > /dev/null 2>&1
Actual Command: ssh postgres@pg06 "( pg_ctl restart -w -Z datanode -D /u01/pgxl/nodes/dnslave -o -i; sleep 1 ) > /tmp/pg01_STDOUT_4264_6 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp postgres@pg06:/tmp/pg01_STDOUT_4264_6 /tmp/STDOUT_4264_7 > /dev/null 2>&1
2018-11-23 14:52:38.607 CST [26317] LOG:  listening on IPv4 address "0.0.0.0", port 5436
2018-11-23 14:52:38.607 CST [26317] LOG:  listening on IPv6 address "::", port 5436
2018-11-23 14:52:38.609 CST [26317] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5436"
2018-11-23 14:52:38.628 CST [26317] LOG:  redirecting log output to logging collector process
2018-11-23 14:52:38.628 CST [26317] HINT:  Future log output will appear in directory "pg_log".
ALTER NODE
 pgxc_pool_reload
------------------
 t
(1 row)

EXECUTE DIRECT
 pgxc_pool_reload
------------------
 t
(1 row)

EXECUTE DIRECT
 pgxc_pool_reload
------------------
 t
(1 row)

ALTER NODE
 pgxc_pool_reload
------------------
 t
(1 row)

Done.
[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf monitor all
/bin/bash
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /u01/pgxl/pgxc_ctl_bash.
Reading configuration using /u01/pgxl/pgxc_ctl_bash --home /u01/pgxl --configuration /u01/pgxl/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /u01/pgxl
Running: gtm master
Running: gtm slave
Running: gtm proxy gtm_pxy1
Running: gtm proxy gtm_pxy2
Running: coordinator master coord1
Running: coordinator slave coord1
Running: coordinator master coord2
Running: coordinator slave coord2
Running: datanode master datanode1
Running: datanode master datanode2

After the failover completes, the old master nodes are discarded, and the two newly added slave nodes take over the master role (note that the monitor output above no longer lists any datanode slaves).

4.3 Removing Slave Nodes

[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf remove datanode slave datanode1 clean
[postgres@pg01 ~]$ pgxc_ctl -c /u01/pgxl/pgxc_ctl/pgxc_ctl.conf remove datanode slave datanode2 clean

Original post: http://blog.51cto.com/candon123/2322012
