Greenplum Cluster Installation and Node Expansion: A Production Walkthrough

1. Prepare the Environment

1.1 Cluster Overview

OS: CentOS 6.5

Database version: greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.zip

The Greenplum cluster initially consists of two machines:

[root@BI-greenplum-01 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.201        BI-greenplum-01

192.168.10.202        BI-greenplum-02

1.2 Create the user and group (on every host)

[root@BI-greenplum-01 ~]# groupadd -g 530 gpadmin

[root@BI-greenplum-01 ~]# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin

[root@BI-greenplum-01 ~]# passwd gpadmin

Changing password for user gpadmin.

New password:

BAD PASSWORD: it is too simplistic/systematic

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

1.3 Configure kernel parameters (on every host); add the following to /etc/sysctl.conf:

vi /etc/sysctl.conf

#By greenplum

net.ipv4.ip_forward = 0

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 1

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.sem = 250 64000 100 512

kernel.shmmax = 500000000

kernel.shmmni = 4096

kernel.shmall = 4000000000

net.ipv4.tcp_tw_recycle=1

net.ipv4.tcp_max_syn_backlog=4096

net.core.netdev_max_backlog=10000

vm.overcommit_memory=2

net.ipv4.conf.all.arp_filter = 1

Adjust these values as appropriate for your own hardware and workload.
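A rough sizing sketch (an illustrative calculation, not from the original post): kernel.shmmax is commonly set to about half of physical RAM, with kernel.shmall expressing the same amount in pages:

# Sketch only: print suggested shared-memory settings for this host
MEM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)   # RAM in bytes
PAGE_SIZE=$(getconf PAGE_SIZE)
echo "kernel.shmmax = $((MEM_BYTES / 2))"               # half of RAM, in bytes
echo "kernel.shmall = $((MEM_BYTES / 2 / PAGE_SIZE))"   # same amount, in pages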

Apply the settings manually:

[root@BI-greenplum-01 ~]# sysctl -p

Add the following at the end of /etc/security/limits.conf:

[root@BI-greenplum-01 ~]# vi /etc/security/limits.conf

# End of file

* soft nofile 65536

* hard nofile 65536

* soft nproc 131072

* hard nproc 131072
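To confirm the limits took effect, log in again as gpadmin (limits.conf is read at login) and check:

su - gpadmin -c 'ulimit -n'   # expect 65536
su - gpadmin -c 'ulimit -u'   # expect 131072

On CentOS 6, /etc/security/limits.d/90-nproc.conf can override the nproc setting, so check that file as well.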

2. Install Greenplum

2.1 Install dependency packages, including those needed later when adding nodes

yum -y install ed openssh-clients gcc gcc-c++  make automake autoconf libtool perl rsync coreutils glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel unzip

Note: Greenplum depends on ed; if it is missing, gpinitsystem cannot initialize the cluster.
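A quick defensive check (not in the original post) before moving on:

command -v ed >/dev/null || echo "ed is missing; gpinitsystem will fail"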

2.2 Prepare the installation files (on the master, 192.168.10.201)

[root@BI-greenplum-01 ~]# unzip greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.zip

[root@BI-greenplum-01 ~]# ./greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.bin

2.3 Change ownership of the installation directory

[root@BI-greenplum-01 ~]# cd /usr/local/

[root@BI-greenplum-01 local]# chown -R gpadmin:gpadmin /usr/local/greenplum-db*

2.4 Tar the installation and copy it to the other hosts

[root@BI-greenplum-01 local]# tar zcvf gp.tar.gz greenplum-db*

[root@BI-greenplum-01 local]# scp gp.tar.gz BI-greenplum-02:/usr/local/

2.5 Extract the archive on the other hosts

[root@BI-greenplum-02 ~]# cd /usr/local/

[root@BI-greenplum-02 local]# ls

bin  etc  games  gp.tar.gz  include  lib  lib64  libexec  sbin  share  src

[root@BI-greenplum-02 local]# tar zxvf gp.tar.gz

2.6 Configure environment variables (on every host)

[root@BI-greenplum-01 local]# su - gpadmin

[gpadmin@BI-greenplum-01 ~]$ vi .bash_profile

source /usr/local/greenplum-db/greenplum_path.sh

export MASTER_DATA_DIRECTORY=/app/master/gpseg-1

export PGPORT=5432

export PGDATABASE=trjdb

Load the new environment:

[gpadmin@BI-greenplum-01 ~]$ source .bash_profile
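A quick sanity check that the Greenplum environment is loaded (greenplum_path.sh sets GPHOME and extends PATH):

echo $GPHOME   # should point at the install under /usr/local
which gpssh    # should resolve inside the Greenplum bin directory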

2.7 Set up passwordless SSH

Create a file listing every host in the cluster, then exchange keys:

[gpadmin@BI-greenplum-01 ~]$ cat all_hosts_file

BI-greenplum-01

BI-greenplum-02

[gpadmin@BI-greenplum-01 ~]$ gpssh-exkeys -f all_hosts_file

[STEP 1 of 5] create local ID and authorize on local host

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts

... send to BI-greenplum-02

***

*** Enter password for BI-greenplum-02:

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts

... finished key exchange with BI-greenplum-02

[INFO] completed successfully
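Passwordless SSH can now be spot-checked from the master:

[gpadmin@BI-greenplum-01 ~]$ ssh BI-greenplum-02 date   # should return immediately, with no password prompt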

2.8 Create data directories (on every host)

[root@BI-greenplum-01 ~]# mkdir /app

[root@BI-greenplum-01 ~]# chown -R gpadmin:gpadmin /app

Then, on the master (192.168.10.201), create the segment directories on all hosts at once:

[gpadmin@BI-greenplum-01 ~]$ gpssh -f all_hosts_file

Note: command history unsupported on this machine ...

=> mkdir /app/master

[BI-greenplum-02]

[BI-greenplum-01]

=> mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4

[BI-greenplum-02]

[BI-greenplum-01]

=> mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4

[BI-greenplum-02]

[BI-greenplum-01]
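Before writing the initialization config, it is worth confirming the directories exist on every host (a spot check, using gpssh non-interactively):

[gpadmin@BI-greenplum-01 ~]$ gpssh -f all_hosts_file -e 'ls -d /app/master /app/data/gp* /app/data/gpm*'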

[gpadmin@BI-greenplum-01 ~]$ vi gpinitsystem_config

# FILE NAME: gpinitsystem_config

# Configuration file needed by the gpinitsystem

################################################

#### REQUIRED PARAMETERS

################################################

#### Name of this Greenplum system enclosed in quotes.

ARRAY_NAME="EMC Greenplum DW"

#### Naming convention for utility-generated data directories.

SEG_PREFIX=gpseg

#### Base number by which primary segment port numbers

#### are calculated.

PORT_BASE=40000

#### File system location(s) where primary segment data directories

#### will be created. The number of locations in the list dictate

#### the number of primary segments that will get created per

#### physical host (if multiple addresses for a host are listed in

#### the hostfile, the number of segments will be spread evenly across

#### the specified interface addresses).

declare -a DATA_DIRECTORY=(/app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4)

#### OS-configured hostname or IP address of the master host.

MASTER_HOSTNAME=BI-greenplum-01

#### File system location where the master data directory

#### will be created.

MASTER_DIRECTORY=/app/master

#### Port number for the master instance.

MASTER_PORT=5432

#### Shell utility used to connect to remote hosts.

TRUSTED_SHELL=ssh

#### Maximum log file segments between automatic WAL checkpoints.

CHECK_POINT_SEGMENTS=8

#### Default server-side character set encoding.

ENCODING=UNICODE

################################################

#### OPTIONAL MIRROR PARAMETERS

################################################

#### Base number by which mirror segment port numbers

#### are calculated.

MIRROR_PORT_BASE=50000

#### Base number by which primary file replication port

#### numbers are calculated.

REPLICATION_PORT_BASE=41000

#### Base number by which mirror file replication port

#### numbers are calculated.

MIRROR_REPLICATION_PORT_BASE=51000

#### File system location(s) where mirror segment data directories

#### will be created. The number of mirror locations must equal the

#### number of primary locations as specified in the

#### DATA_DIRECTORY parameter.

declare -a MIRROR_DATA_DIRECTORY=(/app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4)

################################################

#### OTHER OPTIONAL PARAMETERS

################################################

#### Create a database of this name after initialization.

DATABASE_NAME=trjdb

#### Specify the location of the host address file here instead of

#### with the -h option of gpinitsystem.

MACHINE_LIST_FILE=/home/gpadmin/seg_hosts_file

Create the segment host file referenced by MACHINE_LIST_FILE above:

[gpadmin@BI-greenplum-01 ~]$ vi seg_hosts_file

BI-greenplum-01

BI-greenplum-02

3. Initialize the cluster

[gpadmin@BI-greenplum-01 ~]$ gpinitsystem -c gpinitsystem_config -s BI-greenplum-02

When gpinitsystem completes without errors, the installation is done. Connect to verify:

[gpadmin@BI-greenplum-01 ~]$ psql -d trjdb

psql (8.2.15)

Type "help" for help.

trjdb=#

Check the cluster layout:

select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;
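The same information is available from the command line with gpstate:

gpstate -s   # detailed status of the master, standby, and every segment
gpstate -m   # mirror segment status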

Adding Hosts and Segment Nodes to a Greenplum Cluster

1. Add two hosts (192.168.10.203 and 192.168.10.204)

Update /etc/hosts (identical on every host):

[root@BI-greenplum-01 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.201        BI-greenplum-01

192.168.10.202        BI-greenplum-02

192.168.10.203        BI-greenplum-03

192.168.10.204        BI-greenplum-04

2. Create the user and group (on the new hosts)

[root@BI-greenplum-03 ~]# groupadd -g 530 gpadmin

[root@BI-greenplum-03 ~]# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin

[root@BI-greenplum-03 ~]# passwd gpadmin

Changing password for user gpadmin.

New password:

BAD PASSWORD: it is too simplistic/systematic

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

3. Configure kernel parameters (on the new hosts)

[root@BI-greenplum-03 ~]# vi /etc/sysctl.conf

#By greenplum

net.ipv4.ip_forward = 0

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 1

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.sem = 250 64000 100 512

kernel.shmmax = 500000000

kernel.shmmni = 4096

kernel.shmall = 4000000000

net.ipv4.tcp_tw_recycle=1

net.ipv4.tcp_max_syn_backlog=4096

net.core.netdev_max_backlog=10000

vm.overcommit_memory=2

net.ipv4.conf.all.arp_filter = 1

Apply the kernel parameters:

[root@BI-greenplum-03 ~]# sysctl -p

4. Raise the open-file and process limits

[root@BI-greenplum-03 ~]# vi /etc/security/limits.conf

* soft nofile 65536

* hard nofile 65536

* soft nproc 131072

* hard nproc 131072

5. Install dependency packages

yum -y install ed openssh-clients gcc gcc-c++  make automake autoconf libtool perl rsync coreutils glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel unzip

6. Copy the gp.tar.gz archive built earlier to the new hosts

[root@BI-greenplum-01 local]# scp gp.tar.gz BI-greenplum-03:/usr/local/

[root@BI-greenplum-01 local]# scp gp.tar.gz BI-greenplum-04:/usr/local/

Extract it on each new host:

[root@BI-greenplum-03 local]# tar zxvf gp.tar.gz

[root@BI-greenplum-04 local]# tar zxvf gp.tar.gz

7. Create directories (on each new host)

[root@BI-greenplum-03 local]# mkdir -p /app/master

[root@BI-greenplum-04 local]# mkdir -p /app/master

[root@BI-greenplum-03 local]# mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4

[root@BI-greenplum-04 local]# mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4

[root@BI-greenplum-03 local]# mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4

[root@BI-greenplum-04 local]# mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4

[root@BI-greenplum-03 local]# chown -R gpadmin:gpadmin /app

[root@BI-greenplum-04 local]# chown -R gpadmin:gpadmin /app

[root@BI-greenplum-03 local]# chmod -R 700 /app

[root@BI-greenplum-04 local]# chmod -R 700 /app

8. Configure environment variables (on the new hosts)

[root@BI-greenplum-03 local]# su - gpadmin

[gpadmin@BI-greenplum-03 ~]$ vi .bash_profile

source /usr/local/greenplum-db/greenplum_path.sh

export MASTER_DATA_DIRECTORY=/app/master/gpseg-1

export PGPORT=5432

export PGDATABASE=trjdb

Load the new environment:

[gpadmin@BI-greenplum-03 ~]$ source .bash_profile

9. Exchange SSH keys (run on BI-greenplum-01)

[root@BI-greenplum-01 local]# su - gpadmin

[gpadmin@BI-greenplum-01 ~]$ vi all_hosts_file

BI-greenplum-01

BI-greenplum-02

BI-greenplum-03

BI-greenplum-04

[gpadmin@BI-greenplum-01 ~]$ gpssh-exkeys -f all_hosts_file

[STEP 1 of 5] create local ID and authorize on local host

... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts

... send to BI-greenplum-02

... send to BI-greenplum-03

***

*** Enter password for BI-greenplum-03:

... send to BI-greenplum-04

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts

... finished key exchange with BI-greenplum-02

... finished key exchange with BI-greenplum-03

... finished key exchange with BI-greenplum-04

[INFO] completed successfully

10. Initialize the expansion (on the master)

[gpadmin@BI-greenplum-01 ~]$ vi hosts_expand

BI-greenplum-03

BI-greenplum-04

Adjust the host list to your environment, then start the interactive gpexpand interview:

[gpadmin@BI-greenplum-01 ~]$ gpexpand -f hosts_expand

20171208:00:55:14:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'

20171208:00:55:14:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:57'

20171208:00:55:14:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

System Expansion is used to add segments to an existing GPDB array.

gpexpand did not detect a System Expansion that is in progress.

Before initiating a System Expansion, you need to provision and burn-in

the new hardware.  Please be sure to run gpcheckperf/gpcheckos to make

sure the new hardware is working properly.

Please refer to the Admin Guide for more information.

Would you like to initiate a new System Expansion Yy|Nn (default=N):

> y

You must now specify a mirroring strategy for the new hosts.  Spread mirroring places

a given hosts mirrored segments each on a separate host.  You must be

adding more hosts than the number of segments per host to use this.

Grouped mirroring places all of a given hosts segments on a single

mirrored host.  You must be adding at least 2 hosts in order to use this.

What type of mirroring strategy would you like?

spread|grouped (default=grouped):

>

By default, new hosts are configured with the same number of primary

segments as existing hosts.  Optionally, you can increase the number

of segments per host.

For example, if existing hosts have two primary segments, entering a value

of 2 will initialize two additional segments on existing hosts, and four

segments on new hosts.  In addition, mirror segments will be added for

these new primary segments if mirroring is enabled.

How many new primary segments per host do you want to add? (default=0):

> 4

Enter new primary data directory 1:

> /app/data/gp1

Enter new primary data directory 2:

> /app/data/gp2

Enter new primary data directory 3:

> /app/data/gp3

Enter new primary data directory 4:

> /app/data/gp4

Enter new mirror data directory 1:

> /app/data/gpm1

Enter new mirror data directory 2:

> /app/data/gpm2

Enter new mirror data directory 3:

> /app/data/gpm3

Enter new mirror data directory 4:

> /app/data/gpm4

Generating configuration file...

20171208:00:57:18:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Generating input file...

Input configuration files were written to 'gpexpand_inputfile_20171208_005718' and 'None'.

Please review the file and make sure that it is correct then re-run

with: gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb

20171208:00:57:18:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...

This generates an input file named gpexpand_inputfile_20171208_005718, which must be edited before it can drive the expansion.

The generated file is shown below. Only the first 16 lines need to be kept: the base set of four primaries and four mirrors on each new host (gpseg8 through gpseg15). The remaining lines exist only because 4 extra segments per host were requested in the interview above; delete them unless you actually want additional segments on every host (answering 0, the default, to that question avoids this edit entirely).

[gpadmin@BI-greenplum-01 ~]$ cat gpexpand_inputfile_20171208_005718

BI-greenplum-03:BI-greenplum-03:40000:/app/data/gp1/gpseg8:19:8:p:41000

BI-greenplum-04:BI-greenplum-04:50000:/app/data/gpm1/gpseg8:31:8:m:51000

BI-greenplum-03:BI-greenplum-03:40001:/app/data/gp2/gpseg9:20:9:p:41001

BI-greenplum-04:BI-greenplum-04:50001:/app/data/gpm2/gpseg9:32:9:m:51001

BI-greenplum-03:BI-greenplum-03:40002:/app/data/gp3/gpseg10:21:10:p:41002

BI-greenplum-04:BI-greenplum-04:50002:/app/data/gpm3/gpseg10:33:10:m:51002

BI-greenplum-03:BI-greenplum-03:40003:/app/data/gp4/gpseg11:22:11:p:41003

BI-greenplum-04:BI-greenplum-04:50003:/app/data/gpm4/gpseg11:34:11:m:51003

BI-greenplum-04:BI-greenplum-04:40000:/app/data/gp1/gpseg12:23:12:p:41000

BI-greenplum-03:BI-greenplum-03:50000:/app/data/gpm1/gpseg12:27:12:m:51000

BI-greenplum-04:BI-greenplum-04:40001:/app/data/gp2/gpseg13:24:13:p:41001

BI-greenplum-03:BI-greenplum-03:50001:/app/data/gpm2/gpseg13:28:13:m:51001

BI-greenplum-04:BI-greenplum-04:40002:/app/data/gp3/gpseg14:25:14:p:41002

BI-greenplum-03:BI-greenplum-03:50002:/app/data/gpm3/gpseg14:29:14:m:51002

BI-greenplum-04:BI-greenplum-04:40003:/app/data/gp4/gpseg15:26:15:p:41003

BI-greenplum-03:BI-greenplum-03:50003:/app/data/gpm4/gpseg15:30:15:m:51003

BI-greenplum-01:BI-greenplum-01:40004:/app/data/gp1/gpseg16:35:16:p:41004

BI-greenplum-02:BI-greenplum-02:50004:/app/data/gpm1/gpseg16:55:16:m:51004

BI-greenplum-01:BI-greenplum-01:40005:/app/data/gp2/gpseg17:36:17:p:41005

BI-greenplum-02:BI-greenplum-02:50005:/app/data/gpm2/gpseg17:56:17:m:51005

BI-greenplum-01:BI-greenplum-01:40006:/app/data/gp3/gpseg18:37:18:p:41006

BI-greenplum-02:BI-greenplum-02:50006:/app/data/gpm3/gpseg18:57:18:m:51006

BI-greenplum-01:BI-greenplum-01:40007:/app/data/gp4/gpseg19:38:19:p:41007

BI-greenplum-02:BI-greenplum-02:50007:/app/data/gpm4/gpseg19:58:19:m:51007

BI-greenplum-02:BI-greenplum-02:40004:/app/data/gp1/gpseg20:39:20:p:41004

BI-greenplum-03:BI-greenplum-03:50004:/app/data/gpm1/gpseg20:59:20:m:51004

BI-greenplum-02:BI-greenplum-02:40005:/app/data/gp2/gpseg21:40:21:p:41005

BI-greenplum-03:BI-greenplum-03:50005:/app/data/gpm2/gpseg21:60:21:m:51005

BI-greenplum-02:BI-greenplum-02:40006:/app/data/gp3/gpseg22:41:22:p:41006

BI-greenplum-03:BI-greenplum-03:50006:/app/data/gpm3/gpseg22:61:22:m:51006

BI-greenplum-02:BI-greenplum-02:40007:/app/data/gp4/gpseg23:42:23:p:41007

BI-greenplum-03:BI-greenplum-03:50007:/app/data/gpm4/gpseg23:62:23:m:51007

BI-greenplum-03:BI-greenplum-03:40004:/app/data/gp1/gpseg24:43:24:p:41004

BI-greenplum-04:BI-greenplum-04:50004:/app/data/gpm1/gpseg24:63:24:m:51004

BI-greenplum-03:BI-greenplum-03:40005:/app/data/gp2/gpseg25:44:25:p:41005

BI-greenplum-04:BI-greenplum-04:50005:/app/data/gpm2/gpseg25:64:25:m:51005

BI-greenplum-03:BI-greenplum-03:40006:/app/data/gp3/gpseg26:45:26:p:41006

BI-greenplum-04:BI-greenplum-04:50006:/app/data/gpm3/gpseg26:65:26:m:51006

BI-greenplum-03:BI-greenplum-03:40007:/app/data/gp4/gpseg27:46:27:p:41007

BI-greenplum-04:BI-greenplum-04:50007:/app/data/gpm4/gpseg27:66:27:m:51007

BI-greenplum-04:BI-greenplum-04:40004:/app/data/gp1/gpseg28:47:28:p:41004

BI-greenplum-01:BI-greenplum-01:50004:/app/data/gpm1/gpseg28:51:28:m:51004

BI-greenplum-04:BI-greenplum-04:40005:/app/data/gp2/gpseg29:48:29:p:41005

BI-greenplum-01:BI-greenplum-01:50005:/app/data/gpm2/gpseg29:52:29:m:51005

BI-greenplum-04:BI-greenplum-04:40006:/app/data/gp3/gpseg30:49:30:p:41006

BI-greenplum-01:BI-greenplum-01:50006:/app/data/gpm3/gpseg30:53:30:m:51006

BI-greenplum-04:BI-greenplum-04:40007:/app/data/gp4/gpseg31:50:31:p:41007

BI-greenplum-01:BI-greenplum-01:50007:/app/data/gpm4/gpseg31:54:31:m:51007
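For reference, each line follows the gpexpand input format (field meanings inferred from the entries above; see the Admin Guide for the authoritative description):

# hostname:address:port:datadir:dbid:content:preferred_role:replication_port
# e.g. BI-greenplum-03:BI-greenplum-03:40000:/app/data/gp1/gpseg8:19:8:p:41000
#      = primary (p) for content 8, dbid 19, on BI-greenplum-03, port 40000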

After editing, only the 16 entries for gpseg8 through gpseg15 on BI-greenplum-03 and BI-greenplum-04 remain. Then run gpexpand with the edited input file:

[gpadmin@BI-greenplum-01 ~]$ gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb

20171208:01:03:10:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'

20171208:01:03:10:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:57'

20171208:01:03:11:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

20171208:01:03:11:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Readying Greenplum Database for a new expansion

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database trjdb for unalterable tables...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database postgres for unalterable tables...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database template1 for unalterable tables...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database trjdb for tables with unique indexes...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database postgres for tables with unique indexes...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database template1 for tables with unique indexes...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Syncing Greenplum Database extensions

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-The packages on BI-greenplum-03 are consistent.

20171208:01:03:26:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-The packages on BI-greenplum-04 are consistent.

20171208:01:03:27:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating segment template

20171208:01:03:27:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-VACUUM FULL on the catalog tables

20171208:01:03:28:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting copy of segment dbid 1 to location /app/master/gpexpand_12082017_23572

20171208:01:03:28:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying postgresql.conf from existing segment into template

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying pg_hba.conf from existing segment into template

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Adding new segments into template pg_hba.conf

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating schema tar file

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Distributing template tar file to new hosts

20171208:01:03:31:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segments (primary)

20171208:01:03:32:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segments (mirror)

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Backing up pg_hba.conf file on original segments

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying new pg_hba.conf file to original segments

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring original segments

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Cleaning up temporary template files

20171208:01:03:34:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database in restricted mode

20171208:01:03:42:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping database

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking if Transaction filespace was moved

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking if Temporary filespace was moved

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segment filespaces

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Cleaning up databases in new segments.

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting master in utility mode

20171208:01:03:56:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping master in utility mode

20171208:01:04:03:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database in restricted mode

20171208:01:04:11:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating expansion schema

20171208:01:04:12:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database trjdb

20171208:01:04:12:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database postgres

20171208:01:04:13:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database template1

20171208:01:04:14:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping Greenplum Database

20171208:01:04:27:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database

20171208:01:04:34:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting new mirror segment synchronization

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-************************************************

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Initialization of the system expansion complete.

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-To begin table expansion onto the new segments

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-rerun gpexpand

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-************************************************

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...

The output above confirms that the new segment nodes were added successfully.

What if this step fails? Start the database in restricted mode, roll the expansion back, then restart:

gpstart -R
gpexpand --rollback -D trjdb
gpstart -a

Then find and fix the problem and repeat the previous step until it succeeds.
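After a rollback, a quick check (illustrative) that the cluster is back to its pre-expansion shape before retrying:

psql -d trjdb -c "select count(*) from gp_segment_configuration;"   # expect 18 here: 16 original segments plus master and standby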

Table redistribution onto the new segments is then driven by gpexpand; -d sets the maximum run duration (here 60 hours):

[gpadmin@BI-greenplum-01 ~]$ gpexpand -d 60:00:00

20171208:01:09:08:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'

20171208:01:09:08:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:57'

20171208:01:09:09:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

20171208:01:09:14:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-EXPANSION COMPLETED SUCCESSFULLY

20171208:01:09:14:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...
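While redistribution runs, progress can be followed from another session, and once every table reports COMPLETED the expansion schema can be removed (a sketch, assuming the gpexpand schema is still present):

psql -d trjdb -c "select status, count(*) from gpexpand.status_detail group by status;"
gpexpand -c -D trjdb   # drops the gpexpand schema once redistribution is finished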

Check the node layout. The rows on BI-greenplum-03 and BI-greenplum-04 (dbid 19-34, content 8-15) are the newly added segments:

[gpadmin@BI-greenplum-01 ~]$ psql -d trjdb

psql (8.2.15)

Type "help" for help.

trjdb=# select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;

dbid | content | role | port  |    hostname     |  fsname   |      fselocation

------+---------+------+-------+-----------------+-----------+------------------------

1 |      -1 | p    |  5432 | BI-greenplum-01 | pg_system | /app/master/gpseg-1

18 |      -1 | m    |  5432 | BI-greenplum-02 | pg_system | /app/master/gpseg-1

10 |       0 | m    | 50000 | BI-greenplum-02 | pg_system | /app/data/gpm1/gpseg0

2 |       0 | p    | 40000 | BI-greenplum-01 | pg_system | /app/data/gp1/gpseg0

3 |       1 | p    | 40001 | BI-greenplum-01 | pg_system | /app/data/gp2/gpseg1

11 |       1 | m    | 50001 | BI-greenplum-02 | pg_system | /app/data/gpm2/gpseg1

4 |       2 | p    | 40002 | BI-greenplum-01 | pg_system | /app/data/gp3/gpseg2

12 |       2 | m    | 50002 | BI-greenplum-02 | pg_system | /app/data/gpm3/gpseg2

5 |       3 | p    | 40003 | BI-greenplum-01 | pg_system | /app/data/gp4/gpseg3

13 |       3 | m    | 50003 | BI-greenplum-02 | pg_system | /app/data/gpm4/gpseg3

6 |       4 | p    | 40000 | BI-greenplum-02 | pg_system | /app/data/gp1/gpseg4

14 |       4 | m    | 50000 | BI-greenplum-01 | pg_system | /app/data/gpm1/gpseg4

15 |       5 | m    | 50001 | BI-greenplum-01 | pg_system | /app/data/gpm2/gpseg5

7 |       5 | p    | 40001 | BI-greenplum-02 | pg_system | /app/data/gp2/gpseg5

16 |       6 | m    | 50002 | BI-greenplum-01 | pg_system | /app/data/gpm3/gpseg6

8 |       6 | p    | 40002 | BI-greenplum-02 | pg_system | /app/data/gp3/gpseg6

17 |       7 | m    | 50003 | BI-greenplum-01 | pg_system | /app/data/gpm4/gpseg7

9 |       7 | p    | 40003 | BI-greenplum-02 | pg_system | /app/data/gp4/gpseg7

31 |       8 | m    | 50000 | BI-greenplum-04 | pg_system | /app/data/gpm1/gpseg8

19 |       8 | p    | 40000 | BI-greenplum-03 | pg_system | /app/data/gp1/gpseg8

32 |       9 | m    | 50001 | BI-greenplum-04 | pg_system | /app/data/gpm2/gpseg9

20 |       9 | p    | 40001 | BI-greenplum-03 | pg_system | /app/data/gp2/gpseg9

33 |      10 | m    | 50002 | BI-greenplum-04 | pg_system | /app/data/gpm3/gpseg10

21 |      10 | p    | 40002 | BI-greenplum-03 | pg_system | /app/data/gp3/gpseg10

22 |      11 | p    | 40003 | BI-greenplum-03 | pg_system | /app/data/gp4/gpseg11

34 |      11 | m    | 50003 | BI-greenplum-04 | pg_system | /app/data/gpm4/gpseg11

27 |      12 | m    | 50000 | BI-greenplum-03 | pg_system | /app/data/gpm1/gpseg12

23 |      12 | p    | 40000 | BI-greenplum-04 | pg_system | /app/data/gp1/gpseg12

28 |      13 | m    | 50001 | BI-greenplum-03 | pg_system | /app/data/gpm2/gpseg13

24 |      13 | p    | 40001 | BI-greenplum-04 | pg_system | /app/data/gp2/gpseg13

29 |      14 | m    | 50002 | BI-greenplum-03 | pg_system | /app/data/gpm3/gpseg14

25 |      14 | p    | 40002 | BI-greenplum-04 | pg_system | /app/data/gp3/gpseg14

26 |      15 | p    | 40003 | BI-greenplum-04 | pg_system | /app/data/gp4/gpseg15

30 |      15 | m    | 50003 | BI-greenplum-03 | pg_system | /app/data/gpm4/gpseg15

(34 rows)
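As a final smoke test (illustrative; the table name t_expand_check is made up), data written now should spread across all 16 primaries, including contents 8-15 on the new hosts:

psql -d trjdb -c "create table t_expand_check (id int) distributed by (id);"
psql -d trjdb -c "insert into t_expand_check select generate_series(1, 10000);"
psql -d trjdb -c "select gp_segment_id, count(*) from t_expand_check group by 1 order by 1;"
psql -d trjdb -c "drop table t_expand_check;"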

Original article: http://blog.51cto.com/jxzhfei/2056120
