Oracle 11gR2 RAC: Adding a Node

1. Overview

This document describes adding a node to the production and test RAC databases.

2. Pre-installation Preparation

1. First, prepare the physical environment: map the shared storage to db3, cable the interconnect (heartbeat) network, and so on.

2. Install and configure the operating system on db3 to match db1 and db2. There is quite a lot to configure here, roughly: the OS packages required by RAC, kernel parameters, ASMLIB, /etc/hosts, and so on. See the official installation guide for details.

3. Create the same OS groups and users on db3 as exist on db1 and db2, and create the corresponding directories. Note: the group and user IDs must be identical to those on db1 and db2!

4. Make sure SSH user equivalence is configured between all nodes.

5. Use the CVU (cluvfy) to verify connectivity and compatibility between db3 and db1/db2. Note: these commands are run on db1 or db2; an example is shown immediately below.
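For reference, a minimal sketch of such a CVU run (the full commands and their output appear in section 2.11), executed as the grid user and assuming the new node resolves as db3:

cluvfy stage -post hwos -n db3 -verbose

cluvfy comp peer -refnode db1 -n db3 -orainv oinstall -osdba asmdba -verbose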

2.1 Create Users and Groups

#/usr/sbin/groupadd -g 501 oinstall

#/usr/sbin/groupadd -g 502 dba

#/usr/sbin/groupadd -g 504 asmadmin

#/usr/sbin/groupadd -g 506 asmdba

#/usr/sbin/groupadd -g 507 asmoper

#/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid

#/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle

passwd oracle

Changing password for user oracle.

New UNIX password: password

Retype new UNIX password: password

passwd: all authentication tokens updated successfully.

passwd grid

Changing password for user grid.

New UNIX password: password

Retype new UNIX password: password

passwd: all authentication tokens updated successfully.

Note: after the oracle and grid users have been created, compare them against the other nodes to make sure they are identical.
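A quick way to do this comparison is to run id for both users on every node and confirm that the UIDs and GIDs line up with the groupadd/useradd commands above, for example:

id grid

id oracle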

2.2 Configure the Time Server

On node 1, configure and start the NTP-related services so that it acts as the time server for the cluster:

service xinetd start

service ntpd start

chkconfig time on

chkconfig ntpd on

chkconfig xinetd on
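Note that when ntpd is used (rather than Oracle's own Cluster Time Synchronization Service), Oracle expects it to run with the slewing option. A minimal sketch for /etc/sysconfig/ntpd on Red Hat style systems, assuming the default pid file location:

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

service ntpd restart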

On the other nodes, create a cron job that synchronizes the clock every minute (either of the two entries below, rdate or ntpdate, is sufficient):

crontab -e

0-59/1 * * * * /usr/bin/rdate -s 192.168.8.177 >/dev/null 2>&1

0-59/1 * * * * /usr/sbin/ntpdate 192.168.8.177 >/dev/null 2>&1

2.3 Network Configuration

vi /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost

--host ip

192.168.150.33 node1-11gr2

192.168.150.34 node2-11gr2

192.168.150.35 node3-11gr2

192.168.150.36 node4-11gr2

--host vip

192.168.150.133 node1-11gr2-vip

192.168.150.134 node2-11gr2-vip

192.168.150.136 node3-11gr2-vip

192.168.150.137 node4-11gr2-vip

--host priv ip

10.1.1.10 node1-11gr2-priv

10.1.1.11 node2-11gr2-priv

10.1.1.12 node3-11gr2-priv

10.1.1.13 node4-11gr2-priv

--scan ip

192.168.150.135 scan-cluster

Note: configure these entries to match the other nodes.
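As a quick sanity check of the public entries, each host name can be pinged once from the new node (the new node's own VIP will not respond until Grid Infrastructure is running on it), for example:

for h in node1-11gr2 node2-11gr2 node3-11gr2 node4-11gr2; do ping -c 1 $h; done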

2.4 Modify Kernel Parameters

vi /etc/sysctl.conf

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744    # at least 512 * processes (6815744 covers 13312 processes)

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

#/sbin/sysctl -p

Note: adjust these values to match the other nodes.
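A rough way to compare the values with an existing node (cluvfy comp peer in section 2.11 performs the same comparison more thoroughly), assuming passwordless ssh to node1-11gr2 is already in place:

sysctl -a 2>/dev/null | egrep 'kernel.sem|kernel.shm|file-max|ip_local_port_range|rmem_|wmem_' > /tmp/sysctl_local.txt

ssh node1-11gr2 "sysctl -a 2>/dev/null | egrep 'kernel.sem|kernel.shm|file-max|ip_local_port_range|rmem_|wmem_'" > /tmp/sysctl_node1.txt

diff /tmp/sysctl_node1.txt /tmp/sysctl_local.txt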

2.5 Oracle User Configuration

vi /etc/security/limits.conf

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

vi /etc/pam.d/login

session required pam_limits.so

vi /etc/profile

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi
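Once the limits and profile changes are in place, they can be verified from a fresh login shell, for example:

su - grid -c "ulimit -a"

su - oracle -c "ulimit -a"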

2.6 Create Directories

--Create the Oracle Inventory Directory

To create the Oracle Inventory directory, enter the following commands as the root user:

# mkdir -p /u01/app/oraInventory

# chown -R grid:oinstall /u01/app/oraInventory

# chmod -R 775 /u01/app/oraInventory

--Creating the Oracle Grid Infrastructure Home Directory

To create the Grid Infrastructure home directory, enter the following commands as the root user:

# mkdir -p /u01/11.2.0/grid

# chown -R grid:oinstall /u01/11.2.0/grid

# chmod -R 775 /u01/11.2.0/grid

--Creating the Oracle Base Directory

To create the Oracle Base directory, enter the following commands as the root user:

# mkdir -p /u01/app/oracle

# mkdir /u01/app/oracle/cfgtoollogs

# chown -R oracle:oinstall /u01/app/oracle

# chmod -R 775 /u01/app/oracle

--Creating the Oracle RDBMS Home Directory

To create the Oracle RDBMS Home directory, enter the following commands as the root user:

# mkdir -p /u01/app/oracle/product/11.2.0/db_1

# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1

# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
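The resulting ownership and permissions should match the existing nodes, which can be confirmed with:

# ls -ld /u01/app/oraInventory /u01/11.2.0/grid /u01/app/oracle /u01/app/oracle/product/11.2.0/db_1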

2.7 User Environment Configuration

--grid user (set ORACLE_SID to this node's ASM instance, e.g. +ASM1 on node 1)

export EDITOR=vi

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

export NLS_LANG="AMERICAN_AMERICA.ZHS16GBK"

umask 022

stty erase ^h

ulimit -s 32768

ulimit -n 65536

--oracle user (set ORACLE_SID to this node's database instance, e.g. prod1 on node 1)

export EDITOR=vi

export ORACLE_SID=prod1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/u01/11.2.0/grid/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin:/u01/11.2.0/grid/bin

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

umask 022

stty erase ^h

ulimit -s 32768

ulimit -n 65536

2.8 SSH User Equivalence Configuration

--On the newly added node (node2 in this example), generate keys for the grid and oracle users

su - grid

$ mkdir ~/.ssh

$ chmod 700 ~/.ssh

$ /usr/bin/ssh-keygen -t rsa

$ /usr/bin/ssh-keygen -t dsa

$ touch ~/.ssh/authorized_keys

$ cd ~/.ssh

$ ls

su - oracle

$ mkdir ~/.ssh

$ chmod 700 ~/.ssh

$ /usr/bin/ssh-keygen -t rsa

$ /usr/bin/ssh-keygen -t dsa

$ touch ~/.ssh/authorized_keys

$ cd ~/.ssh

$ ls

--On node2, as the oracle user (run on one node only)

[oracle@node2 .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[oracle@node2 .ssh]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[oracle@node2 .ssh]$ ssh node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

oracle@node1's password:

[oracle@node2 .ssh]$ ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

oracle@node1's password:

--On node2, as the oracle user, copy the merged authorized_keys file back to node1

[oracle@node2 .ssh]$ scp ~/.ssh/authorized_keys node1:~/.ssh/authorized_keys

--node1,node2

$ chmod 600 ~/.ssh/authorized_keys

ssh node1-11gr2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh node3-11gr2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh node4-11gr2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh node1-11gr2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh node3-11gr2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh node4-11gr2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys node1-11gr2:~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys node3-11gr2:~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys node4-11gr2:~/.ssh/authorized_keys

--Test

ssh node1-11gr2 date

ssh node2-11gr2 date

ssh node1-11gr2-priv date

ssh node2-11gr2-priv date

ssh scan-cluster date

--Run all the tests as one-liners

ssh node1-11gr2-priv date;ssh node2-11gr2-priv date;ssh node3-11gr2-priv date;ssh node4-11gr2-priv date;ssh scan-cluster date

ssh node1-11gr2 date;ssh node2-11gr2 date;ssh node3-11gr2 date;ssh node4-11gr2 date;
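Every combination should print the date without asking for a password. The very first connection to each name may prompt to accept the host key, so it is worth looping over all public and private names once as both grid and oracle before running cluvfy, so that known_hosts is fully populated. A small sketch:

for h in node1-11gr2 node2-11gr2 node3-11gr2 node4-11gr2 node1-11gr2-priv node2-11gr2-priv node3-11gr2-priv node4-11gr2-priv; do ssh $h date; done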

2.9 Oracle RPM Package Check

RPM packages required on 32-bit systems

libXp-1.0.0-8.i386.rpm

openmotif22-2.2.3-18.i386.rpm

compat-db-4.2.52-5.1.i386.rpm

compat-db42.i686 0:4.2.52-15.el6

compat-db43.i686 0:4.3.29-15.el6

kernel-headers-2.6.18-92.el5.i386.rpm

glibc-headers-2.5-24.i386.rpm

compat-gcc-34-3.4.6-4.i386.rpm

compat-gcc-34-c++-3.4.6-4.i386.rpm

compat-libstdc++-33-3.2.3-61.i386.rpm

libaio-0.3.106-3.2.i386.rpm

libgomp-4.1.2-42.el5.i386.rpm

gcc-4.1.2-42.el5.i386.rpm

binutils-2.15.92.0.2-22

RPM packages required on 64-bit systems

libXp-1.0.0-8.i386.rpm
openmotif22-2.2.3-18.i386.rpm
binutils-2.15.92.0.2-22  (x86_64)
compat-db-4.1.25-9  (i386)
compat-db-4.1.25-9  (x86_64)
compat-libstdc++-33-3.2.3-47.3.i386.rpm
control-center-2.8.0-12.rhel4.5  (x86_64)
kernel-headers-2.6.18-194.el5.x86_64.rpm
glibc-headers-2.5-49.x86_64.rpm
glibc-common-2.3.4-2.36  (x86_64)
glibc-devel-2.3.4-2.36  (x86_64)
glibc-devel-2.3.4-2.36  (i386)  
glibc-2.3.4-2.36  (i686)
glibc-2.3.4-2.36  (x86_64)
libstdc++-3.4.6-8  (i386)
libstdc++-3.4.6-8  (x86_64)
libstdc++-devel-4.1.2-48.el5.x86_64.rpm
make-3.81-3.el5.x86_64.rpm
pdksh-5.2.14-36.el5.x86_64.rpm
sysstat-7.0.2-3.el5.x86_64.rpm
libaio-0.3.105-2  (i386)
libgomp-4.1.2-42.el5.i386.rpm
gcc-4.1.2-48.el5  (x86_64)
gcc-c++-4.1.2-48.el5  (x86_64)
elfutils-libelf-devel-0.137-3.el5.x86_64
elfutils-libelf-devel-0.137-3.el5.i386
libaio-devel-0.3.106-5.i386
libaio-devel-0.3.106-5.x86_64
unixODBC.i386 0:2.2.11-10.el5
unixODBC-devel

Installation with yum

yum install binutils -y

yum install compat-libcap1 -y

yum install compat-libstdc++-33 -y

yum install compat-libstdc++-33.i686 -y

yum install gcc -y

yum install gcc-c++ -y

yum install glibc -y

yum install glibc.i686 -y

yum install glibc-devel -y

yum install glibc-devel.i686 -y

yum install ksh -y

yum install libgcc -y

yum install libgcc.i686 -y

yum install libstdc++ -y

yum install libstdc++.i686 -y

yum install libstdc++-devel -y

yum install libstdc++-devel.i686 -y

yum install libaio -y

yum install libaio.i686 -y

yum install libaio-devel -y

yum install libaio-devel.i686 -y

yum install libXext -y

yum install libXext.i686 -y

yum install libXtst -y

yum install libXtst.i686 -y

yum install libX11 -y

yum install libX11.i686 -y

yum install libXau -y

yum install libXau.i686 -y

yum install libxcb -y

yum install libxcb.i686 -y

yum install libXi -y

yum install libXi.i686 -y

yum install make -y

yum install sysstat -y

yum install unixODBC -y

yum install unixODBC-devel -y

Verify the installed packages with rpm

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \

compat-libcap1 \

compat-libstdc++-33 \

gcc \

gcc-c++ \

glibc \

glibc-common \

glibc-devel \

glibc-headers \

ksh \

libgcc \

libaio \

libaio-devel \

libstdc++ \

libstdc++-devel \

libXext \

libXtst \

libX11 \

libXau \

libxcb \

libXi \

make \

sysstat \

elfutils-libelf \

elfutils-libelf-devel \

unixODBC \

unixODBC-devel

--Configure a local yum repository

Mount the installation ISO in the virtual machine

mount /dev/cdrom /mnt

Edit the yum repository definition

[[email protected] yum.repos.d]# cat my.repo

[c6-media]

name=CentOS-$releasever - Media

baseurl=file:///mnt/Server

gpgcheck=0

enabled=1
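Assuming the ISO is mounted at /mnt and my.repo is in place as shown above, the repository can be verified before installing packages:

yum clean all

yum repolist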

2.10 UDEV Shared Disk Configuration

[[email protected] rules.d]# cat 99-oracle-asmdevices.rules

ACTION=="add", KERNEL=="/dev/sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"

ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"

ACTION=="add", KERNEL=="/dev/sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"

ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"

ACTION=="add", KERNEL=="/dev/sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"

ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"

ACTION=="add", KERNEL=="/dev/sde1",RUN+="/bin/raw /dev/raw/raw4 %N"

ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"

ACTION=="add", KERNEL=="/dev/sdf1",RUN+="/bin/raw /dev/raw/raw5 %N"

ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="81",RUN+="/bin/raw /dev/raw/raw5 %M %m"

KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"

Reload the udev rules on every node (or simply reboot the servers):

[[email protected] rules.d]# /sbin/udevcontrol reload_rules

[[email protected] rules.d]# /sbin/start_udev

Starting udev: [ OK ]

Check that the raw devices have been created:

[[email protected] rules.d]# raw -qa

/dev/raw/raw1: bound to major 8, minor 17

/dev/raw/raw2: bound to major 8, minor 33

/dev/raw/raw3: bound to major 8, minor 49

/dev/raw/raw4: bound to major 8, minor 65

/dev/raw/raw5: bound to major 8, minor 81

[[email protected] raw]# ls -lhtr

total 0

crw-rw---- 1 grid asmadmin 162, 1 Feb 16 14:07 raw1

crw-rw---- 1 grid asmadmin 162, 3 Feb 16 14:07 raw3

crw-rw---- 1 grid asmadmin 162, 4 Feb 16 14:07 raw4

crw-rw---- 1 grid asmadmin 162, 2 Feb 16 14:07 raw2

crw-rw---- 1 grid asmadmin 162, 5 Feb 16 14:07 raw5

[[email protected] rules.d]# cd /dev

[[email protected] dev]# ls -l ocr*

brw-rw---- 1 grid asmadmin 8, 32 Jul 10 17:31 ocr1

brw-rw---- 1 grid asmadmin 8, 48 Jul 10 17:31 ocr2

[[email protected] dev]# ls -l asm-disk*

brw-rw---- 1 grid asmadmin 8, 64 Jul 10 17:31 asm-disk1

brw-rw---- 1 grid asmadmin 8, 208 Jul 10 17:31 asm-disk10

brw-rw---- 1 grid asmadmin 8, 224 Jul 10 17:31 asm-disk11

brw-rw---- 1 grid asmadmin 8, 240 Jul 10 17:31 asm-disk12

brw-rw---- 1 grid asmadmin 8, 80 Jul 10 17:31 asm-disk2

brw-rw---- 1 grid asmadmin 8, 96 Jul 10 17:31 asm-disk3

brw-rw---- 1 grid asmadmin 8, 112 Jul 10 17:31 asm-disk4

brw-rw---- 1 grid asmadmin 8, 128 Jul 10 17:31 asm-disk5

brw-rw---- 1 grid asmadmin 8, 144 Jul 10 17:31 asm-disk6

brw-rw---- 1 grid asmadmin 8, 160 Jul 10 17:31 asm-disk7

brw-rw---- 1 grid asmadmin 8, 176 Jul 10 17:31 asm-disk8

brw-rw---- 1 grid asmadmin 8, 192 Jul 10 17:31 asm-disk9

2.11 Pre-installation Checks

--Run as the grid user on node1, once against each node to be checked

[grid@node1-11gr2 ~]$ cluvfy stage -post hwos -n node2-11gr2 -verbose

--Note: node3 needs the following package installed

[root@node3-11gr2 grid]# rpm -ivh cvuqdisk-1.0.9-1.rpm

Preparing... ########################################### [100%]

Using default group oinstall to install package

1:cvuqdisk ########################################### [100%]

Of course, the following commands can also be used to compare db3's system configuration against db1 and db2: settings that agree are reported as "matched", and settings that differ as "mismatched".

--Pre-installation check commands

cluvfy comp peer -refnode node1-11gr2 -n node2-11gr2 -orainv oinstall -osdba asmdba -verbose

cluvfy stage -pre nodeadd -n node2-11gr2 -fixup -verbose

cluvfy stage -post hwos -n node2-11gr2 -verbose

Review the results; the output from node 1 is shown below as an example.

[grid@node1-11gr2 ~]$ cluvfy stage -post hwos -n node2-11gr2 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "node1-11gr2"

Destination Node Reachable?

------------------------------------ ------------------------

node2-11gr2 yes

Result: Node reachability check passed from node "node1-11gr2"

Checking user equivalence...

Check: User equivalence for user "grid"

Node Name Status

------------------------------------ ------------------------

node2-11gr2 passed

Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Node Name Status

------------------------------------ ------------------------

node2-11gr2 passed

Verification of the hosts config file successful

Interface information for node "node2-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.34 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:7F 1500

eth1 10.1.1.11 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:89 1500

Check: Node connectivity for interface "eth0"

Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.150.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node1-11gr2:192.168.150.33 node2-11gr2:192.168.150.34 passed

Result: TCP connectivity check passed for subnet "192.168.150.0"

Check: Node connectivity for interface "eth1"

Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "10.0.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node1-11gr2:192.168.150.33 node2-11gr2:10.1.1.11 passed

Result: TCP connectivity check passed for subnet "10.0.0.0"

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.150.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.150.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

Check: Time zone consistency

Result: Time zone consistency check passed

Checking shared storage accessibility...

Disk Sharing Nodes (1 in count)

------------------------------------ ------------------------

/dev/sda node2-11gr2

/dev/sdb node2-11gr2

/dev/sdc node2-11gr2

/dev/sde node2-11gr2

/dev/sdd node2-11gr2

/dev/sdf node2-11gr2

/dev/sdg node2-11gr2

/dev/sdh node2-11gr2

Shared storage check was successful on nodes "node2-11gr2"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...

Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined

More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Post-check for hardware and operating system setup was successful.

[grid@node1-11gr2 ~]$ cluvfy comp peer -refnode node1-11gr2 -n node2-11gr2 -orainv oinstall -osdba asmdba -verbose

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 1.9521GB (2046916.0KB) 1.9521GB (2046916.0KB) matched

Physical memory check passed

Compatibility check: Available memory [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 1.7782GB (1864620.0KB) 711.3242MB (728396.0KB) mismatched

Available memory check failed

Compatibility check: Swap space [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 7.9974GB (8385920.0KB) 7.9974GB (8385920.0KB) matched

Swap space check passed

Compatibility check: Free disk space for "/u01/11.2.0/grid" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 35.1064GB (3.6811776E7KB) 13.1348GB (1.37728E7KB) mismatched

Free disk space check failed

Compatibility check: Free disk space for "/tmp" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 35.1064GB (3.6811776E7KB) 13.1348GB (1.37728E7KB) mismatched

Free disk space check failed

Compatibility check: User existence for "grid" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 grid(501) grid(501) matched

User existence for "grid" check passed

Compatibility check: Group existence for "oinstall" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 oinstall(501) oinstall(501) matched

Group existence for "oinstall" check passed

Compatibility check: Group existence for "asmdba" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 asmdba(506) asmdba(506) matched

Group existence for "asmdba" check passed

Compatibility check: Group membership for "grid" in "oinstall (Primary)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 yes yes matched

Group membership for "grid" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "grid" in "asmdba" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 yes yes matched

Group membership for "grid" in "asmdba" check passed

Compatibility check: Run level [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 5 5 matched

Run level check passed

Compatibility check: System architecture [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 x86_64 x86_64 matched

System architecture check passed

Compatibility check: Kernel version [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 2.6.32-300.10.1.el5uek 2.6.32-300.10.1.el5uek matched

Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 250 250 matched

Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 32000 32000 matched

Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 100 100 matched

Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 128 128 matched

Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 68719476736 68719476736 matched

Kernel param "shmmax" check passed

Compatibility check: Kernel param "shmmni" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 4096 4096 matched

Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 4294967296 4294967296 matched

Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 6815744 6815744 matched

Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 matched

Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 262144 262144 matched

Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 4194304 4194304 matched

Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 262144 262144 matched

Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 1048576 1048576 matched

Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 1048576 1048576 matched

Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "make" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 make-3.81-3.el5 make-3.81-3.el5 matched

Package existence for "make" check passed

Compatibility check: Package existence for "binutils" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 binutils-2.17.50.0.6-20.el5 binutils-2.17.50.0.6-20.el5 matched

Package existence for "binutils" check passed

Compatibility check: Package existence for "gcc (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 gcc-4.1.2-52.el5 (x86_64) gcc-4.1.2-52.el5 (x86_64) matched

Package existence for "gcc (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386) libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386) matched

Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 glibc-2.5-81 (x86_64),glibc-2.5-81 (i686) glibc-2.5-81 (x86_64),glibc-2.5-81 (i686) matched

Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386) compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386) matched

Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "elfutils-libelf (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386) elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386) matched

Package existence for "elfutils-libelf (x86_64)" check passed

Compatibility check: Package existence for "elfutils-libelf-devel" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.137-3.el5 matched

Package existence for "elfutils-libelf-devel" check passed

Compatibility check: Package existence for "glibc-common" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 glibc-common-2.5-81 glibc-common-2.5-81 matched

Package existence for "glibc-common" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 glibc-devel-2.5-81 (x86_64),glibc-devel-2.5-81 (i386) glibc-devel-2.5-81 (x86_64),glibc-devel-2.5-81 (i386) matched

Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "glibc-headers" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 glibc-headers-2.5-81 glibc-headers-2.5-81 matched

Package existence for "glibc-headers" check passed

Compatibility check: Package existence for "gcc-c++ (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 gcc-c++-4.1.2-52.el5 (x86_64) gcc-c++-4.1.2-52.el5 (x86_64) matched

Package existence for "gcc-c++ (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64) matched

Package existence for "libaio-devel (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 libgcc-4.1.2-52.el5 (x86_64),libgcc-4.1.2-52.el5 (i386) libgcc-4.1.2-52.el5 (x86_64),libgcc-4.1.2-52.el5 (i386) matched

Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 libstdc++-4.1.2-52.el5 (x86_64),libstdc++-4.1.2-52.el5 (i386) libstdc++-4.1.2-52.el5 (x86_64),libstdc++-4.1.2-52.el5 (i386) matched

Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 libstdc++-devel-4.1.2-52.el5 (x86_64) libstdc++-devel-4.1.2-52.el5 (x86_64) matched

Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 sysstat-7.0.2-11.el5 sysstat-7.0.2-11.el5 matched

Package existence for "sysstat" check passed

Compatibility check: Package existence for "ksh" [reference node: node1-11gr2]

Node Name Status Ref. node status Comment

------------ ------------------------ ------------------------ ----------

node2-11gr2 ksh-20100621-5.el5 ksh-20100621-5.el5 matched

Package existence for "ksh" check passed

Verification of peer compatibility was unsuccessful.

Checks did not pass for the following node(s):

node2-11gr2

[grid@node1-11gr2 ~]$ cluvfy stage -pre nodeadd -n node2-11gr2 -fixup -verbose

CRS integrity check passed

Checking shared resources...

Checking CRS home location...

PRVG-1013 : The path "/u01/11.2.0/grid" does not exist or cannot be created on the nodes to be added

Result: Shared resources check for node addition failed

Checking node connectivity...

Checking hosts config file...

Node Name Status

------------------------------------ ------------------------

node1-11gr2 passed

node2-11gr2 passed

node3-11gr2 passed

node4-11gr2 passed

Verification of the hosts config file successful

Interface information for node "node1-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.33 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:BF 1500

eth0 192.168.150.135 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:BF 1500

eth0 192.168.150.133 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:BF 1500

eth1 10.1.1.10 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:C9 1500

eth1 169.254.25.123 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:C9 1500

Interface information for node "node2-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.34 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:7F 1500

eth1 10.1.1.11 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:89 1500

Interface information for node "node3-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.35 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:14 1500

eth0 192.168.150.136 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:14 1500

eth1 10.1.1.12 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:1E 1500

eth1 169.254.115.161 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:1E 1500

Interface information for node "node4-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.36 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:F6 1500

eth0 192.168.150.137 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:F6 1500

eth1 10.1.1.13 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:00 1500

eth1 169.254.18.246 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:00 1500

Check: Node connectivity for interface "eth0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node1-11gr2[192.168.150.33] node1-11gr2[192.168.150.135] yes

node1-11gr2[192.168.150.33] node1-11gr2[192.168.150.133] yes

node1-11gr2[192.168.150.33] node2-11gr2[192.168.150.34] yes

node1-11gr2[192.168.150.33] node3-11gr2[192.168.150.35] yes

node1-11gr2[192.168.150.33] node3-11gr2[192.168.150.136] yes

node1-11gr2[192.168.150.33] node4-11gr2[192.168.150.36] yes

node1-11gr2[192.168.150.33] node4-11gr2[192.168.150.137] yes

node1-11gr2[192.168.150.135] node1-11gr2[192.168.150.133] yes

node1-11gr2[192.168.150.135] node2-11gr2[192.168.150.34] yes

node1-11gr2[192.168.150.135] node3-11gr2[192.168.150.35] yes

node1-11gr2[192.168.150.135] node3-11gr2[192.168.150.136] yes

node1-11gr2[192.168.150.135] node4-11gr2[192.168.150.36] yes

node1-11gr2[192.168.150.135] node4-11gr2[192.168.150.137] yes

node1-11gr2[192.168.150.133] node2-11gr2[192.168.150.34] yes

node1-11gr2[192.168.150.133] node3-11gr2[192.168.150.35] yes

node1-11gr2[192.168.150.133] node3-11gr2[192.168.150.136] yes

node1-11gr2[192.168.150.133] node4-11gr2[192.168.150.36] yes

node1-11gr2[192.168.150.133] node4-11gr2[192.168.150.137] yes

node2-11gr2[192.168.150.34] node3-11gr2[192.168.150.35] yes

node2-11gr2[192.168.150.34] node3-11gr2[192.168.150.136] yes

node2-11gr2[192.168.150.34] node4-11gr2[192.168.150.36] yes

node2-11gr2[192.168.150.34] node4-11gr2[192.168.150.137] yes

node3-11gr2[192.168.150.35] node3-11gr2[192.168.150.136] yes

node3-11gr2[192.168.150.35] node4-11gr2[192.168.150.36] yes

node3-11gr2[192.168.150.35] node4-11gr2[192.168.150.137] yes

node3-11gr2[192.168.150.136] node4-11gr2[192.168.150.36] yes

node3-11gr2[192.168.150.136] node4-11gr2[192.168.150.137] yes

node4-11gr2[192.168.150.36] node4-11gr2[192.168.150.137] yes

Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.150.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node1-11gr2:192.168.150.33 node1-11gr2:192.168.150.135 passed

node1-11gr2:192.168.150.33 node1-11gr2:192.168.150.133 passed

node1-11gr2:192.168.150.33 node2-11gr2:192.168.150.34 passed

node1-11gr2:192.168.150.33 node3-11gr2:192.168.150.35 passed

node1-11gr2:192.168.150.33 node3-11gr2:192.168.150.136 passed

node1-11gr2:192.168.150.33 node4-11gr2:192.168.150.36 passed

node1-11gr2:192.168.150.33 node4-11gr2:192.168.150.137 passed

Result: TCP connectivity check passed for subnet "192.168.150.0"

Check: Node connectivity for interface "eth1"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node1-11gr2[10.1.1.10] node2-11gr2[10.1.1.11] yes

node1-11gr2[10.1.1.10] node3-11gr2[10.1.1.12] yes

node1-11gr2[10.1.1.10] node4-11gr2[10.1.1.13] yes

node2-11gr2[10.1.1.11] node3-11gr2[10.1.1.12] yes

node2-11gr2[10.1.1.11] node4-11gr2[10.1.1.13] yes

node3-11gr2[10.1.1.12] node4-11gr2[10.1.1.13] yes

Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "10.0.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node1-11gr2:10.1.1.10 node2-11gr2:10.1.1.11 passed

node1-11gr2:10.1.1.10 node3-11gr2:10.1.1.12 passed

node1-11gr2:10.1.1.10 node4-11gr2:10.1.1.13 passed

Result: TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "192.168.150.0".

Subnet mask consistency check passed for subnet "10.0.0.0".

Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.150.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.150.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 1.9521GB (2046916.0KB) 1.5GB (1572864.0KB) passed

node2-11gr2 1.9521GB (2046916.0KB) 1.5GB (1572864.0KB) passed

Result: Total memory check passed

Check: Available memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 671.4023MB (687516.0KB) 50MB (51200.0KB) passed

node2-11gr2 1.6711GB (1752280.0KB) 50MB (51200.0KB) passed

Result: Available memory check passed

Check: Swap space

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 7.9974GB (8385920.0KB) 2.9281GB (3070374.0KB) passed

node2-11gr2 7.9974GB (8385920.0KB) 2.9281GB (3070374.0KB) passed

Result: Swap space check passed

Check: Free disk space for "node1-11gr2:/u01/11.2.0/grid,node1-11gr2:/tmp"

Path Node Name Mount point Available Required Status

---------------- ------------ ------------ ------------ ------------ ------------

/u01/11.2.0/grid node1-11gr2 / 13.123GB 7.5GB passed

/tmp node1-11gr2 / 13.123GB 7.5GB passed

Result: Free disk space check passed for "node1-11gr2:/u01/11.2.0/grid,node1-11gr2:/tmp"

Check: Free disk space for "node2-11gr2:/u01/11.2.0/grid,node2-11gr2:/tmp"

Path Node Name Mount point Available Required Status

---------------- ------------ ------------ ------------ ------------ ------------

/u01/11.2.0/grid node2-11gr2 / 35.1113GB 7.5GB passed

/tmp node2-11gr2 / 35.1113GB 7.5GB passed

Result: Free disk space check passed for "node2-11gr2:/u01/11.2.0/grid,node2-11gr2:/tmp"

Check: User existence for "grid"

Node Name Status Comment

------------ ------------------------ ------------------------

node1-11gr2 passed exists(501)

node2-11gr2 passed exists(501)

Checking for multiple users with UID value 501

Result: Check for multiple users with UID value 501 passed

Result: User existence check passed for "grid"

Check: Run level

Node Name run level Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 5 3,5 passed

node2-11gr2 5 3,5 passed

Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

node1-11gr2 hard 65536 65536 passed

node2-11gr2 hard 65536 65536 passed

Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

node1-11gr2 soft 1024 1024 passed

node2-11gr2 soft 1024 1024 passed

Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

node1-11gr2 hard 16384 16384 passed

node2-11gr2 hard 16384 16384 passed

Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

node1-11gr2 soft 2047 2047 passed

node2-11gr2 soft 2047 2047 passed

Result: Soft limits check passed for "maximum user processes"

Check: System architecture

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 x86_64 x86_64 passed

node2-11gr2 x86_64 x86_64 passed

Result: System architecture check passed

Check: Kernel version

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 2.6.32-300.10.1.el5uek 2.6.18 passed

node2-11gr2 2.6.32-300.10.1.el5uek 2.6.18 passed

Result: Kernel version check passed

Check: Kernel parameter for "semmsl"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 250 250 250 passed

node2-11gr2 250 250 250 passed

Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 32000 32000 32000 passed

node2-11gr2 32000 32000 32000 passed

Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 100 100 100 passed

node2-11gr2 100 100 100 passed

Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 128 128 128 passed

node2-11gr2 128 128 128 passed

Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 68719476736 68719476736 1048020992 passed

node2-11gr2 68719476736 68719476736 1048020992 passed

Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 4096 4096 4096 passed

node2-11gr2 4096 4096 4096 passed

Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 4294967296 4294967296 2097152 passed

node2-11gr2 4294967296 4294967296 2097152 passed

Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 6815744 6815744 6815744 passed

node2-11gr2 6815744 6815744 6815744 passed

Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed

node2-11gr2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed

Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 262144 262144 262144 passed

node2-11gr2 262144 262144 262144 passed

Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 4194304 4194304 4194304 passed

node2-11gr2 4194304 4194304 4194304 passed

Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 262144 262144 262144 passed

node2-11gr2 262144 262144 262144 passed

Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 1048576 1048576 1048576 passed

node2-11gr2 1048576 1048576 1048576 passed

Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"

Node Name Current Configured Required Status Comment

---------------- ------------ ------------ ------------ ------------ ------------

node1-11gr2 1048576 1048576 1048576 passed

node2-11gr2 1048576 1048576 1048576 passed

Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "make"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 make-3.81-3.el5 make-3.81 passed

node2-11gr2 make-3.81-3.el5 make-3.81 passed

Result: Package existence check passed for "make"

Check: Package existence for "binutils"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 binutils-2.17.50.0.6-20.el5 binutils-2.17.50.0.6 passed

node2-11gr2 binutils-2.17.50.0.6-20.el5 binutils-2.17.50.0.6 passed

Result: Package existence check passed for "binutils"

Check: Package existence for "gcc(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 gcc(x86_64)-4.1.2-52.el5 gcc(x86_64)-4.1.2 passed

node2-11gr2 gcc(x86_64)-4.1.2-52.el5 gcc(x86_64)-4.1.2 passed

Result: Package existence check passed for "gcc(x86_64)"

Check: Package existence for "libaio(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed

node2-11gr2 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed

Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "glibc(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 glibc(x86_64)-2.5-81 glibc(x86_64)-2.5-24 passed

node2-11gr2 glibc(x86_64)-2.5-81 glibc(x86_64)-2.5-24 passed

Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "compat-libstdc++-33(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed

node2-11gr2 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed

Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "elfutils-libelf(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 elfutils-libelf(x86_64)-0.137-3.el5 elfutils-libelf(x86_64)-0.125 passed

node2-11gr2 elfutils-libelf(x86_64)-0.137-3.el5 elfutils-libelf(x86_64)-0.125 passed

Result: Package existence check passed for "elfutils-libelf(x86_64)"

Check: Package existence for "elfutils-libelf-devel"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed

node2-11gr2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed

Result: Package existence check passed for "elfutils-libelf-devel"

Check: Package existence for "glibc-common"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 glibc-common-2.5-81 glibc-common-2.5 passed

node2-11gr2 glibc-common-2.5-81 glibc-common-2.5 passed

Result: Package existence check passed for "glibc-common"

Check: Package existence for "glibc-devel(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 glibc-devel(x86_64)-2.5-81 glibc-devel(x86_64)-2.5 passed

node2-11gr2 glibc-devel(x86_64)-2.5-81 glibc-devel(x86_64)-2.5 passed

Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "glibc-headers"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 glibc-headers-2.5-81 glibc-headers-2.5 passed

node2-11gr2 glibc-headers-2.5-81 glibc-headers-2.5 passed

Result: Package existence check passed for "glibc-headers"

Check: Package existence for "gcc-c++(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 gcc-c++(x86_64)-4.1.2-52.el5 gcc-c++(x86_64)-4.1.2 passed

node2-11gr2 gcc-c++(x86_64)-4.1.2-52.el5 gcc-c++(x86_64)-4.1.2 passed

Result: Package existence check passed for "gcc-c++(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed

node2-11gr2 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed

Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 libgcc(x86_64)-4.1.2-52.el5 libgcc(x86_64)-4.1.2 passed

node2-11gr2 libgcc(x86_64)-4.1.2-52.el5 libgcc(x86_64)-4.1.2 passed

Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 libstdc++(x86_64)-4.1.2-52.el5 libstdc++(x86_64)-4.1.2 passed

node2-11gr2 libstdc++(x86_64)-4.1.2-52.el5 libstdc++(x86_64)-4.1.2 passed

Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 libstdc++-devel(x86_64)-4.1.2-52.el5 libstdc++-devel(x86_64)-4.1.2 passed

node2-11gr2 libstdc++-devel(x86_64)-4.1.2-52.el5 libstdc++-devel(x86_64)-4.1.2 passed

Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 sysstat-7.0.2-11.el5 sysstat-7.0.2 passed

node2-11gr2 sysstat-7.0.2-11.el5 sysstat-7.0.2 passed

Result: Package existence check passed for "sysstat"

Check: Package existence for "ksh"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

node1-11gr2 ksh-20100621-5.el5 ksh-20060214 passed

node2-11gr2 ksh-20100621-5.el5 ksh-20060214 passed

Result: Package existence check passed for "ksh"

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

Check: Current group ID

Result: Current group ID check passed

Starting check for consistency of primary group of root user

Node Name Status

------------------------------------ ------------------------

node1-11gr2 passed

node2-11gr2 passed

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

Check: Time zone consistency

Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

The NTP configuration file "/etc/ntp.conf" is available on all nodes

NTP Configuration file check passed

Checking daemon liveness...

Check: Liveness for "ntpd"

Node Name Running?

------------------------------------ ------------------------

node1-11gr2 yes

node2-11gr2 no

Result: Liveness check failed for "ntpd"

PRVF-5508 : NTP configuration file is present on at least one node on which NTP daemon or service is not running.

Result: Clock synchronization check using Network Time Protocol(NTP) failed

Checking to make sure user "grid" is not in "root" group

Node Name Status Comment

------------ ------------------------ ------------------------

node1-11gr2 passed does not exist

node2-11gr2 passed does not exist

Result: User "grid" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined

File "/etc/resolv.conf" does not have both domain and search entries defined

Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...

domain entry in file "/etc/resolv.conf" is consistent across nodes

Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...

search entry in file "/etc/resolv.conf" is consistent across nodes

Checking DNS response time for an unreachable node

Node Name Status

------------------------------------ ------------------------

node1-11gr2 failed

node2-11gr2 failed

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: node1-11gr2,node2-11gr2

File "/etc/resolv.conf" is not consistent across nodes

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...

Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined

More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was unsuccessful on all the nodes.

2.12 Add the Node at the Grid Infrastructure (GI) Level

On db1 or db2, change to the $GRID_HOME/oui/bin directory and run the following command to perform the node addition:

[[email protected] bin]$ ./addNode.sh "CLUSTER_NEW_NODES={db3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={db3-vip}"

/u01/11.2.0/grid/oui/bin

./addNode.sh "CLUSTER_NEW_NODES={node3-11gr2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-11gr2-vip}"

./addNode.sh -silent "CLUSTER_NEW_NODES={node3-11gr2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-11gr2-vip}"

Note: addNode.sh runs its own pre-add checks first. If those checks report any error or warning, set IGNORE_PREADDNODE_CHECKS=Y before running addNode.sh; otherwise addNode.sh will fail and exit.

--Run as the grid user on node 1

export IGNORE_PREADDNODE_CHECKS=Y

cd $GRID_HOME/oui/bin

[grid@node1-11gr2 bin]$ ls -l addNode.sh

-rwxr-x--- 1 grid oinstall 622 Nov 16 2015 addNode.sh

[grid@node1-11gr2 bin]$ pwd

/u01/11.2.0/grid/oui/bin

[grid@node1-11gr2 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={node2-11gr2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-11gr2-vip}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 7505 MB Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

--Handling the error

[grid@node1-11gr2 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={node2-11gr2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-11gr2-vip}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 7505 MB Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes node2-11gr2,node3-11gr2,node4-11gr2,node2-11gr2 are available

............................................................... 100% Done.

Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.

SEVERE:Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.

--Update the inventory to fix the error above

[[email protected] bin]$ ./detachHome.sh

[[email protected] bin]$ ./attachHome.sh

/u01/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/11.2.0/grid "CLUSTER_NODES={node2-11gr2}" CRS=TRUE -local

/u01/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/11.2.0/grid "CLUSTER_NODES={node1-11gr2,node3-11gr2,node4-11gr2}" CRS=TRUE

/u01/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/11.2.0/grid "CLUSTER_NODES={node1-11gr2,node3-11gr2,node4-11gr2}" CRS=TRUE -silent
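
The node-number error usually means the local inventory no longer records the Grid home with the correct clusterwide node list, which is what the detachHome/attachHome and -updateNodeList calls above repair. The recorded node list can then be checked directly in the central inventory; a sketch, with the inventory location assumed from /etc/oraInst.loc:

cat /etc/oraInst.loc
--show the NODE_LIST recorded for the Grid home
grep -A 6 'LOC="/u01/11.2.0/grid"' /u01/app/oraInventory/ContentsXML/inventory.xml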

--Node addition log

[[email protected] bin]$ export IGNORE_PREADDNODE_CHECKS=Y

[[email protected] bin]$ ./addNode.sh "CLUSTER_NEW_NODES={node2-11gr2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-11gr2-vip}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 7497 MB Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes node3-11gr2,node4-11gr2,node2-11gr2 are available

............................................................... 100% Done.

-----------------------------------------------------------------------------

Cluster Node Addition Summary

Global Settings

Source: /u01/11.2.0/grid

New Nodes

Space Requirements

New Nodes

node2-11gr2

/: Required 9.67GB : Available 32.69GB

Installed Products

Product Names

Oracle Grid Infrastructure 11g 11.2.0.4.0

Java Development Kit 1.5.0.51.10

Installer SDK Component 11.2.0.4.0

Oracle One-Off Patch Installer 11.2.0.3.4

Oracle Universal Installer 11.2.0.4.0

Oracle RAC Required Support Files-HAS 11.2.0.4.0

Oracle USM Deconfiguration 11.2.0.4.0

Oracle Configuration Manager Deconfiguration 10.3.1.0.0

Enterprise Manager Common Core Files 10.2.0.4.5

Oracle DBCA Deconfiguration 11.2.0.4.0

Oracle RAC Deconfiguration 11.2.0.4.0

Oracle Quality of Service Management (Server) 11.2.0.4.0

Installation Plugin Files 11.2.0.4.0

Universal Storage Manager Files 11.2.0.4.0

Oracle Text Required Support Files 11.2.0.4.0

Automatic Storage Management Assistant 11.2.0.4.0

Oracle Database 11g Multimedia Files 11.2.0.4.0

Oracle Multimedia Java Advanced Imaging 11.2.0.4.0

Oracle Globalization Support 11.2.0.4.0

Oracle Multimedia Locator RDBMS Files 11.2.0.4.0

Oracle Core Required Support Files 11.2.0.4.0

Bali Share 1.1.18.0.0

Oracle Database Deconfiguration 11.2.0.4.0

Oracle Quality of Service Management (Client) 11.2.0.4.0

Expat libraries 2.0.1.0.1

Oracle Containers for Java 11.2.0.4.0

Perl Modules 5.10.0.0.1

Secure Socket Layer 11.2.0.4.0

Oracle JDBC/OCI Instant Client 11.2.0.4.0

Oracle Multimedia Client Option 11.2.0.4.0

LDAP Required Support Files 11.2.0.4.0

Character Set Migration Utility 11.2.0.4.0

Perl Interpreter 5.10.0.0.2

PL/SQL Embedded Gateway 11.2.0.4.0

OLAP SQL Scripts 11.2.0.4.0

Database SQL Scripts 11.2.0.4.0

Oracle Extended Windowing Toolkit 3.4.47.0.0

SSL Required Support Files for InstantClient 11.2.0.4.0

SQL*Plus Files for Instant Client 11.2.0.4.0

Oracle Net Required Support Files 11.2.0.4.0

Oracle Database User Interface 2.2.13.0.0

RDBMS Required Support Files for Instant Client 11.2.0.4.0

RDBMS Required Support Files Runtime 11.2.0.4.0

XML Parser for Java 11.2.0.4.0

Oracle Security Developer Tools 11.2.0.4.0

Oracle Wallet Manager 11.2.0.4.0

Enterprise Manager plugin Common Files 11.2.0.4.0

Platform Required Support Files 11.2.0.4.0

Oracle JFC Extended Windowing Toolkit 4.2.36.0.0

RDBMS Required Support Files 11.2.0.4.0

Oracle Ice Browser 5.2.3.6.0

Oracle Help For Java 4.2.9.0.0

Enterprise Manager Common Files 10.2.0.4.5

Deinstallation Tool 11.2.0.4.0

Oracle Java Client 11.2.0.4.0

Cluster Verification Utility Files 11.2.0.4.0

Oracle Notification Service (eONS) 11.2.0.4.0

Oracle LDAP administration 11.2.0.4.0

Cluster Verification Utility Common Files 11.2.0.4.0

Oracle Clusterware RDBMS Files 11.2.0.4.0

Oracle Locale Builder 11.2.0.4.0

Oracle Globalization Support 11.2.0.4.0

Buildtools Common Files 11.2.0.4.0

HAS Common Files 11.2.0.4.0

SQL*Plus Required Support Files 11.2.0.4.0

XDK Required Support Files 11.2.0.4.0

Agent Required Support Files 10.2.0.4.5

Parser Generator Required Support Files 11.2.0.4.0

Precompiler Required Support Files 11.2.0.4.0

Installation Common Files 11.2.0.4.0

Required Support Files 11.2.0.4.0

Oracle JDBC/THIN Interfaces 11.2.0.4.0

Oracle Multimedia Locator 11.2.0.4.0

Oracle Multimedia 11.2.0.4.0

Assistant Common Files 11.2.0.4.0

Oracle Net 11.2.0.4.0

PL/SQL 11.2.0.4.0

HAS Files for DB 11.2.0.4.0

Oracle Recovery Manager 11.2.0.4.0

Oracle Database Utilities 11.2.0.4.0

Oracle Notification Service 11.2.0.3.0

SQL*Plus 11.2.0.4.0

Oracle Netca Client 11.2.0.4.0

Oracle Advanced Security 11.2.0.4.0

Oracle JVM 11.2.0.4.0

Oracle Internet Directory Client 11.2.0.4.0

Oracle Net Listener 11.2.0.4.0

Cluster Ready Services Files 11.2.0.4.0

Oracle Database 11g 11.2.0.4.0

-----------------------------------------------------------------------------

Instantiating scripts for add node (Thursday, January 12, 2017 1:54:22 PM HKT)

1% Done.

Instantiation of add node scripts complete

Copying to remote nodes (Thursday, January 12, 2017 1:54:25 PM HKT)

............................................................................................... 96% Done.

Home copied to new nodes

Saving inventory on nodes (Thursday, January 12, 2017 2:14:29 PM HKT)

. 100% Done.

Save inventory complete

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/11.2.0/grid/root.sh #On nodes node2-11gr2

To execute the configuration scripts:

1. Open a terminal window

2. Log in as "root"

3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/11.2.0/grid was successful.

Please check '/tmp/silentInstall.log' for more details.

--Run the script on the newly added node

[[email protected] ~]# /u01/11.2.0/grid/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to /etc/inittab

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1-11gr2, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
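
Once root.sh succeeds on the new node, a quick sanity check there before moving on (a sketch; the full verification is done in 2.15):

/u01/11.2.0/grid/bin/crsctl check crs
/u01/11.2.0/grid/bin/olsnodes -n -s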

2.13 Adding the node at the RDBMS layer

Copy/extend the RDBMS ORACLE_HOME to node 2. Run the following commands as the oracle user on node 1:

[[email protected] bin]$ pwd

/u01/app/oracle/product/11.2.0/db_1/oui/bin

[[email protected] bin]$ export IGNORE_PREADDNODE_CHECKS=Y

[[email protected] bin]$

[[email protected] bin]$ ./addNode.sh "CLUSTER_NEW_NODES={node2-11gr2}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 7514 MB Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes node3-11gr2,node4-11gr2,node2-11gr2 are available

............................................................... 100% Done.

.

-----------------------------------------------------------------------------

Cluster Node Addition Summary

Global Settings

Source: /u01/app/oracle/product/11.2.0/db_1

New Nodes

Space Requirements

New Nodes

node2-11gr2

/: Required 4.34GB : Available 28.23GB

Installed Products

Product Names

Oracle Database 11g 11.2.0.4.0

Java Development Kit 1.5.0.51.10

Installer SDK Component 11.2.0.4.0

Oracle One-Off Patch Installer 11.2.0.3.4

Oracle Universal Installer 11.2.0.4.0

Oracle USM Deconfiguration 11.2.0.4.0

Oracle Configuration Manager Deconfiguration 10.3.1.0.0

Oracle DBCA Deconfiguration 11.2.0.4.0

Oracle RAC Deconfiguration 11.2.0.4.0

Oracle Database Deconfiguration 11.2.0.4.0

Oracle Configuration Manager Client 10.3.2.1.0

Oracle Configuration Manager 10.3.8.1.0

Oracle ODBC Driverfor Instant Client 11.2.0.4.0

LDAP Required Support Files 11.2.0.4.0

SSL Required Support Files for InstantClient 11.2.0.4.0

Bali Share 1.1.18.0.0

Oracle Extended Windowing Toolkit 3.4.47.0.0

Oracle JFC Extended Windowing Toolkit 4.2.36.0.0

Oracle Real Application Testing 11.2.0.4.0

Oracle Database Vault J2EE Application 11.2.0.4.0

Oracle Label Security 11.2.0.4.0

Oracle Data Mining RDBMS Files 11.2.0.4.0

Oracle OLAP RDBMS Files 11.2.0.4.0

Oracle OLAP API 11.2.0.4.0

Platform Required Support Files 11.2.0.4.0

Oracle Database Vault option 11.2.0.4.0

Oracle RAC Required Support Files-HAS 11.2.0.4.0

SQL*Plus Required Support Files 11.2.0.4.0

Oracle Display Fonts 9.0.2.0.0

Oracle Ice Browser 5.2.3.6.0

Oracle JDBC Server Support Package 11.2.0.4.0

Oracle SQL Developer 11.2.0.4.0

Oracle Application Express 11.2.0.4.0

XDK Required Support Files 11.2.0.4.0

RDBMS Required Support Files for Instant Client 11.2.0.4.0

SQLJ Runtime 11.2.0.4.0

Database Workspace Manager 11.2.0.4.0

RDBMS Required Support Files Runtime 11.2.0.4.0

Oracle Globalization Support 11.2.0.4.0

Exadata Storage Server 11.2.0.1.0

Provisioning Advisor Framework 10.2.0.4.3

Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0

Enterprise Manager Repository Core Files 10.2.0.4.5

Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0

Enterprise Manager Grid Control Core Files 10.2.0.4.5

Enterprise Manager Common Core Files 10.2.0.4.5

Enterprise Manager Agent Core Files 10.2.0.4.5

RDBMS Required Support Files 11.2.0.4.0

regexp 2.1.9.0.0

Agent Required Support Files 10.2.0.4.5

Oracle 11g Warehouse Builder Required Files 11.2.0.4.0

Oracle Notification Service (eONS) 11.2.0.4.0

Oracle Text Required Support Files 11.2.0.4.0

Parser Generator Required Support Files 11.2.0.4.0

Oracle Database 11g Multimedia Files 11.2.0.4.0

Oracle Multimedia Java Advanced Imaging 11.2.0.4.0

Oracle Multimedia Annotator 11.2.0.4.0

Oracle JDBC/OCI Instant Client 11.2.0.4.0

Oracle Multimedia Locator RDBMS Files 11.2.0.4.0

Precompiler Required Support Files 11.2.0.4.0

Oracle Core Required Support Files 11.2.0.4.0

Sample Schema Data 11.2.0.4.0

Oracle Starter Database 11.2.0.4.0

Oracle Message Gateway Common Files 11.2.0.4.0

Oracle XML Query 11.2.0.4.0

XML Parser for Oracle JVM 11.2.0.4.0

Oracle Help For Java 4.2.9.0.0

Installation Plugin Files 11.2.0.4.0

Enterprise Manager Common Files 10.2.0.4.5

Expat libraries 2.0.1.0.1

Deinstallation Tool 11.2.0.4.0

Oracle Quality of Service Management (Client) 11.2.0.4.0

Perl Modules 5.10.0.0.1

JAccelerator (COMPANION) 11.2.0.4.0

Oracle Containers for Java 11.2.0.4.0

Perl Interpreter 5.10.0.0.2

Oracle Net Required Support Files 11.2.0.4.0

Secure Socket Layer 11.2.0.4.0

Oracle Universal Connection Pool 11.2.0.4.0

Oracle JDBC/THIN Interfaces 11.2.0.4.0

Oracle Multimedia Client Option 11.2.0.4.0

Oracle Java Client 11.2.0.4.0

Character Set Migration Utility 11.2.0.4.0

Oracle Code Editor 1.2.1.0.0I

PL/SQL Embedded Gateway 11.2.0.4.0

OLAP SQL Scripts 11.2.0.4.0

Database SQL Scripts 11.2.0.4.0

Oracle Locale Builder 11.2.0.4.0

Oracle Globalization Support 11.2.0.4.0

SQL*Plus Files for Instant Client 11.2.0.4.0

Required Support Files 11.2.0.4.0

Oracle Database User Interface 2.2.13.0.0

Oracle ODBC Driver 11.2.0.4.0

Oracle Notification Service 11.2.0.3.0

XML Parser for Java 11.2.0.4.0

Oracle Security Developer Tools 11.2.0.4.0

Oracle Wallet Manager 11.2.0.4.0

Cluster Verification Utility Common Files 11.2.0.4.0

Oracle Clusterware RDBMS Files 11.2.0.4.0

Oracle UIX 2.2.24.6.0

Enterprise Manager plugin Common Files 11.2.0.4.0

HAS Common Files 11.2.0.4.0

Precompiler Common Files 11.2.0.4.0

Installation Common Files 11.2.0.4.0

Oracle Help for the Web 2.0.14.0.0

Oracle LDAP administration 11.2.0.4.0

Buildtools Common Files 11.2.0.4.0

Assistant Common Files 11.2.0.4.0

Oracle Recovery Manager 11.2.0.4.0

PL/SQL 11.2.0.4.0

Generic Connectivity Common Files 11.2.0.4.0

Oracle Database Gateway for ODBC 11.2.0.4.0

Oracle Programmer 11.2.0.4.0

Oracle Database Utilities 11.2.0.4.0

Enterprise Manager Agent 10.2.0.4.5

SQL*Plus 11.2.0.4.0

Oracle Netca Client 11.2.0.4.0

Oracle Multimedia Locator 11.2.0.4.0

Oracle Call Interface (OCI) 11.2.0.4.0

Oracle Multimedia 11.2.0.4.0

Oracle Net 11.2.0.4.0

Oracle XML Development Kit 11.2.0.4.0

Oracle Internet Directory Client 11.2.0.4.0

Database Configuration and Upgrade Assistants 11.2.0.4.0

Oracle JVM 11.2.0.4.0

Oracle Advanced Security 11.2.0.4.0

Oracle Net Listener 11.2.0.4.0

Oracle Enterprise Manager Console DB 11.2.0.4.0

HAS Files for DB 11.2.0.4.0

Oracle Text 11.2.0.4.0

Oracle Net Services 11.2.0.4.0

Oracle Database 11g 11.2.0.4.0

Oracle OLAP 11.2.0.4.0

Oracle Spatial 11.2.0.4.0

Oracle Partitioning 11.2.0.4.0

Enterprise Edition Options 11.2.0.4.0

-----------------------------------------------------------------------------

Instantiating scripts for add node (Friday, January 20, 2017 10:55:45 AM HKT)

. 1% Done.

Instantiation of add node scripts complete

Copying to remote nodes (Friday, January 20, 2017 10:55:49 AM HKT)

............................................................................................... 96% Done.

Home copied to new nodes

Saving inventory on nodes (Friday, January 20, 2017 11:10:49 AM HKT)

. 100% Done.

Save inventory complete

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes node2-11gr2

To execute the configuration scripts:

1. Open a terminal window

2. Log in as "root"

3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.

Please check '/tmp/silentInstall.log' for more details.

--Run the script on node 2

[[email protected] ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

2.14 Adding the database instance with DBCA

[[email protected] ~]$ dbca
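
dbca can also add the instance in silent mode; a sketch, where the instance name and the SYS password are placeholders (the GUI flow used here produces the same result):

dbca -silent -addInstance -nodeList node2-11gr2 -gdbName prod -instanceName prod2 -sysDBAUserName sys -sysDBAPassword <sys_password>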

--Query the online redo logs

set linesize 200

set pagesize 200

col member format a50

col ARCHIVED format a10

select a. GROUP#,a.type,a.STATUS,a.MEMBER,b.THREAD#,b.SEQUENCE#,b.BYTES/1024/1024 "size(M)",b.MEMBERS,b.ARCHIVED,b.STATUS

from v$Logfile a,v$log b

where a.group#=b.group#

order by b.THREAD#;

GROUP# TYPE STATUS MEMBER THREAD# SEQUENCE# size(M) MEMBERS ARCHIVED STATUS

---------- ------- ------- -------------------------------------------------- ---------- ---------- ---------- ---------- ---------- ----------------

1 ONLINE +DG1/prod/redo01.log 1 29 50 1 NO CURRENT

2 ONLINE +DG1/prod/redo02.log 1 28 50 1 YES INACTIVE

5 ONLINE +DG1/prod/redo05.log 3 21 50 1 YES INACTIVE

6 ONLINE +DG1/prod/redo06.log 3 22 50 1 NO CURRENT

8 ONLINE +DG1/prod/redo08.log 4 22 50 1 NO CURRENT

7 ONLINE +DG1/prod/redo07.log 4 21 50 1 YES INACTIVE

6 rows selected.

Note: drop the inactive database instance and add it back; before adding the instance, remove that instance's online redo logs (a SQL-level sketch follows the ASMCMD listing below).

Back up the redo logs before deleting them:

ASMCMD [+dg1/prod] > cp redo03.log redo03.log.bak

copying +dg1/prod/redo03.log -> +dg1/prod/redo03.log.bak

ASMCMD [+dg1/prod] >

ASMCMD [+dg1/prod] >

ASMCMD [+dg1/prod] > cp redo04.log redo04.log.bak

copying +dg1/prod/redo04.log -> +dg1/prod/redo04.log.bak

ASMCMD [+dg1/prod] > rm redo03.log

ASMCMD [+dg1/prod] >

ASMCMD [+dg1/prod] >

ASMCMD [+dg1/prod] > rm redo04.log

ASMCMD [+dg1/prod] > ls

CONTROLFILE/

DATAFILE/

ONLINELOG/

PARAMETERFILE/

TEMPFILE/

control01.ctl

control02.ctl

example01.dbf

redo01.log

redo02.log

redo03.log.bak

redo04.log.bak

redo05.log

redo06.log

redo07.log

redo08.log

spfileprod.ora

sysaux01.dbf

system01.dbf

temp01.dbf

undotbs01.dbf

undotbs03.dbf

undotbs04.dbf

users01.dbf
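
For reference, a SQL-level sketch of retiring the old thread before re-adding the instance (thread and group numbers are taken from the earlier query; the old undo tablespace name is an assumption, and DBCA recreates these objects when the instance is added back):

alter database disable thread 2;
alter database drop logfile group 3;
alter database drop logfile group 4;
-- only if the old instance's undo tablespace still exists
drop tablespace undotbs2 including contents and datafiles;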

2.15 Post-addition checks

Resource stat

[[email protected] ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.CRS.dg

ONLINE ONLINE node1-11gr2

ONLINE ONLINE node2-11gr2

ONLINE ONLINE node3-11gr2

ONLINE ONLINE node4-11gr2

ora.DG1.dg

ONLINE ONLINE node1-11gr2

ONLINE ONLINE node2-11gr2

ONLINE ONLINE node3-11gr2

ONLINE ONLINE node4-11gr2

ora.LISTENER.lsnr

ONLINE ONLINE node1-11gr2

ONLINE ONLINE node2-11gr2

ONLINE ONLINE node3-11gr2

ONLINE ONLINE node4-11gr2

ora.asm

ONLINE ONLINE node1-11gr2 Started

ONLINE ONLINE node2-11gr2 Started

ONLINE ONLINE node3-11gr2 Started

ONLINE ONLINE node4-11gr2 Started

ora.gsd

OFFLINE OFFLINE node1-11gr2

OFFLINE OFFLINE node2-11gr2

OFFLINE OFFLINE node3-11gr2

OFFLINE OFFLINE node4-11gr2

ora.net1.network

ONLINE ONLINE node1-11gr2

ONLINE ONLINE node2-11gr2

ONLINE ONLINE node3-11gr2

ONLINE ONLINE node4-11gr2

ora.ons

ONLINE ONLINE node1-11gr2

ONLINE ONLINE node2-11gr2

ONLINE ONLINE node3-11gr2

ONLINE ONLINE node4-11gr2

ora.registry.acfs

ONLINE ONLINE node1-11gr2

ONLINE ONLINE node2-11gr2

ONLINE ONLINE node3-11gr2

ONLINE ONLINE node4-11gr2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1-11gr2

ora.cvu

1 ONLINE ONLINE node1-11gr2

ora.node1-11gr2.vip

1 ONLINE ONLINE node1-11gr2

ora.node2-11gr2.vip

1 ONLINE ONLINE node2-11gr2

ora.node3-11gr2.vip

1 ONLINE ONLINE node3-11gr2

ora.node4-11gr2.vip

1 ONLINE ONLINE node4-11gr2

ora.oc4j

1 ONLINE ONLINE node1-11gr2

ora.prod.db

1 ONLINE ONLINE node1-11gr2 Open

2 ONLINE ONLINE node2-11gr2 Open

3 ONLINE ONLINE node3-11gr2 Open

4 ONLINE ONLINE node4-11gr2 Open

ora.scan1.vip

1 ONLINE ONLINE node1-11gr2

Opatch info

[[email protected] OPatch]$ ./opatch lsinventory

Oracle Interim Patch Installer version 11.2.0.3.4

Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home : /u01/11.2.0/grid

Central Inventory : /u01/app/oraInventory

from : /u01/11.2.0/grid/oraInst.loc

OPatch version : 11.2.0.3.4

OUI version : 11.2.0.4.0

Log file location : /u01/11.2.0/grid/cfgtoollogs/opatch/opatch2017-01-20_11-35-04AM_1.log

Lsinventory Output file location : /u01/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-01-20_11-35-04AM.txt

--------------------------------------------------------------------------------

Installed Top-level Products (1):

Oracle Grid Infrastructure 11g 11.2.0.4.0

There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes

Local node = node2-11gr2

Remote node = node1-11gr2

Remote node = node3-11gr2

Remote node = node4-11gr2

--------------------------------------------------------------------------------

OPatch succeeded.

olsnodes

[[email protected] OPatch]$ olsnodes -n -s -t

node1-11gr2 1 Active Unpinned

node2-11gr2 2 Active Unpinned

node3-11gr2 3 Active Unpinned

node4-11gr2 4 Active Unpinned

undo and redo

[[email protected] ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 20 11:36:16 2017

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

SQL> show parameter undo

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

undo_management string AUTO

undo_retention integer 900

undo_tablespace string UNDOTBS2

SQL> show parameter thread

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

parallel_threads_per_cpu integer 2

thread integer 2

SQL>

SQL> set linesize 200

SQL> set pagesize 200

SQL> col member format a50

SQL> col ARCHIVED format a10

SQL> select a. GROUP#,a.type,a.STATUS,a.MEMBER,b.THREAD#,b.SEQUENCE#,b.BYTES/1024/1024 "size(M)",b.MEMBERS,b.ARCHIVED,b.STATUS

2 from v$Logfile a,v$log b

3 where a.group#=b.group#

4 order by b.THREAD#;

GROUP# TYPE STATUS MEMBER THREAD# SEQUENCE# size(M) MEMBERS ARCHIVED STATUS

---------- ------- ------- -------------------------------------------------- ---------- ---------- ---------- ---------- ---------- ----------------

1 ONLINE +DG1/prod/redo01.log 1 7 50 1 NO CURRENT

2 ONLINE +DG1/prod/redo02.log 1 6 50 1 NO INACTIVE

3 ONLINE +DG1/prod/redo03.log 2 2 50 1 NO CURRENT

4 ONLINE +DG1/prod/redo04.log 2 0 50 1 YES UNUSED

5 ONLINE +DG1/prod/redo05.log 3 1 50 1 NO INACTIVE

6 ONLINE +DG1/prod/redo06.log 3 2 50 1 NO CURRENT

7 ONLINE +DG1/prod/redo07.log 4 1 50 1 NO INACTIVE

8 ONLINE +DG1/prod/redo08.log 4 2 50 1 NO CURRENT

8 rows selected.

SQL> show parameter cluster

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

cluster_database boolean TRUE

cluster_database_instances integer 4

cluster_interconnects string

SQL>

User equivalence

[[email protected] ~]$ cluvfy comp admprv -o db_config -d /u01/app/oracle/product/11.2.0/db_1 -n node2-11gr2 -verbose

Verifying administrative privileges

Checking user equivalence...

Check: User equivalence for user "oracle"

Node Name Status

------------------------------------ ------------------------

node2-11gr2 passed

Result: User equivalence check passed for user "oracle"

Checking administrative privileges...

Check: User existence for "oracle"

Node Name Status Comment

------------ ------------------------ ------------------------

node2-11gr2 passed exists(502)

Checking for multiple users with UID value 502

Result: Check for multiple users with UID value 502 passed

Result: User existence check passed for "oracle"

Check: Group existence for "oinstall"

Node Name Status Comment

------------ ------------------------ ------------------------

node2-11gr2 passed exists

Result: Group existence check passed for "oinstall"

Check: Membership of user "oracle" in group "oinstall" [as Primary]

Node Name User Exists Group Exists User in Group Primary Status

---------------- ------------ ------------ ------------ ------------ ------------

node2-11gr2 yes yes yes yes passed

Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed

Check: Group existence for "dba"

Node Name Status Comment

------------ ------------------------ ------------------------

node2-11gr2 passed exists

Result: Group existence check passed for "dba"

Check: Membership of user "oracle" in group "dba"

Node Name User Exists Group Exists User in Group Status

---------------- ------------ ------------ ------------ ----------------

node2-11gr2 yes yes yes passed

Result: Membership check for user "oracle" in group "dba" passed

Administrative privileges check passed

Verification of administrative privileges was successful.

Check cluster integrity

Run as the grid user

[[email protected] ~]$ cluvfy stage -post nodeadd -n node2-11gr2 -verbose

Performing post-checks for node addition

Checking node reachability...

Check: Node reachability from node "node2-11gr2"

Destination Node Reachable?

------------------------------------ ------------------------

node2-11gr2 yes

Result: Node reachability check passed from node "node2-11gr2"

Checking user equivalence...

Check: User equivalence for user "grid"

Node Name Status

------------------------------------ ------------------------

node2-11gr2 passed

Result: User equivalence check passed for user "grid"

Checking cluster integrity...

Node Name

------------------------------------

node1-11gr2

node2-11gr2

node3-11gr2

node4-11gr2

Cluster integrity check passed

Checking CRS integrity...

Clusterware version consistency passed

The Oracle Clusterware is healthy on node "node4-11gr2"

The Oracle Clusterware is healthy on node "node1-11gr2"

The Oracle Clusterware is healthy on node "node2-11gr2"

The Oracle Clusterware is healthy on node "node3-11gr2"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...

"/u01/11.2.0/grid" is not shared

Result: Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Node Name Status

------------------------------------ ------------------------

node4-11gr2 passed

node1-11gr2 passed

node2-11gr2 passed

node3-11gr2 passed

Verification of the hosts config file successful

Interface information for node "node4-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.36 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:F6 1500

eth0 192.168.150.137 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:F6 1500

eth1 10.1.1.13 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:00 1500

eth1 169.254.140.194 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:B5:C9:00 1500

Interface information for node "node1-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.33 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:BF 1500

eth0 192.168.150.135 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:BF 1500

eth0 192.168.150.133 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:BF 1500

eth1 10.1.1.10 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:C9 1500

eth1 169.254.3.123 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:07:E6:C9 1500

Interface information for node "node2-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.34 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:7F 1500

eth0 192.168.150.134 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:7F 1500

eth1 10.1.1.11 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:89 1500

eth1 169.254.92.101 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:05:36:89 1500

Interface information for node "node3-11gr2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.150.35 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:14 1500

eth0 192.168.150.136 192.168.150.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:14 1500

eth1 10.1.1.12 10.0.0.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:1E 1500

eth1 169.254.65.95 169.254.0.0 0.0.0.0 192.168.150.254 00:0C:29:86:50:1E 1500

Check: Node connectivity for interface "eth0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node4-11gr2[192.168.150.36] node4-11gr2[192.168.150.137] yes

node4-11gr2[192.168.150.36] node1-11gr2[192.168.150.33] yes

node4-11gr2[192.168.150.36] node1-11gr2[192.168.150.135] yes

node4-11gr2[192.168.150.36] node1-11gr2[192.168.150.133] yes

node4-11gr2[192.168.150.36] node2-11gr2[192.168.150.34] yes

node4-11gr2[192.168.150.36] node2-11gr2[192.168.150.134] yes

node4-11gr2[192.168.150.36] node3-11gr2[192.168.150.35] yes

node4-11gr2[192.168.150.36] node3-11gr2[192.168.150.136] yes

node4-11gr2[192.168.150.137] node1-11gr2[192.168.150.33] yes

node4-11gr2[192.168.150.137] node1-11gr2[192.168.150.135] yes

node4-11gr2[192.168.150.137] node1-11gr2[192.168.150.133] yes

node4-11gr2[192.168.150.137] node2-11gr2[192.168.150.34] yes

node4-11gr2[192.168.150.137] node2-11gr2[192.168.150.134] yes

node4-11gr2[192.168.150.137] node3-11gr2[192.168.150.35] yes

node4-11gr2[192.168.150.137] node3-11gr2[192.168.150.136] yes

node1-11gr2[192.168.150.33] node1-11gr2[192.168.150.135] yes

node1-11gr2[192.168.150.33] node1-11gr2[192.168.150.133] yes

node1-11gr2[192.168.150.33] node2-11gr2[192.168.150.34] yes

node1-11gr2[192.168.150.33] node2-11gr2[192.168.150.134] yes

node1-11gr2[192.168.150.33] node3-11gr2[192.168.150.35] yes

node1-11gr2[192.168.150.33] node3-11gr2[192.168.150.136] yes

node1-11gr2[192.168.150.135] node1-11gr2[192.168.150.133] yes

node1-11gr2[192.168.150.135] node2-11gr2[192.168.150.34] yes

node1-11gr2[192.168.150.135] node2-11gr2[192.168.150.134] yes

node1-11gr2[192.168.150.135] node3-11gr2[192.168.150.35] yes

node1-11gr2[192.168.150.135] node3-11gr2[192.168.150.136] yes

node1-11gr2[192.168.150.133] node2-11gr2[192.168.150.34] yes

node1-11gr2[192.168.150.133] node2-11gr2[192.168.150.134] yes

node1-11gr2[192.168.150.133] node3-11gr2[192.168.150.35] yes

node1-11gr2[192.168.150.133] node3-11gr2[192.168.150.136] yes

node2-11gr2[192.168.150.34] node2-11gr2[192.168.150.134] yes

node2-11gr2[192.168.150.34] node3-11gr2[192.168.150.35] yes

node2-11gr2[192.168.150.34] node3-11gr2[192.168.150.136] yes

node2-11gr2[192.168.150.134] node3-11gr2[192.168.150.35] yes

node2-11gr2[192.168.150.134] node3-11gr2[192.168.150.136] yes

node3-11gr2[192.168.150.35] node3-11gr2[192.168.150.136] yes

Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.150.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node2-11gr2:192.168.150.34 node4-11gr2:192.168.150.36 passed

node2-11gr2:192.168.150.34 node4-11gr2:192.168.150.137 passed

node2-11gr2:192.168.150.34 node1-11gr2:192.168.150.33 passed

node2-11gr2:192.168.150.34 node1-11gr2:192.168.150.135 passed

node2-11gr2:192.168.150.34 node1-11gr2:192.168.150.133 passed

node2-11gr2:192.168.150.34 node2-11gr2:192.168.150.134 passed

node2-11gr2:192.168.150.34 node3-11gr2:192.168.150.35 passed

node2-11gr2:192.168.150.34 node3-11gr2:192.168.150.136 passed

Result: TCP connectivity check passed for subnet "192.168.150.0"

Check: Node connectivity for interface "eth1"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node4-11gr2[10.1.1.13] node1-11gr2[10.1.1.10] yes

node4-11gr2[10.1.1.13] node2-11gr2[10.1.1.11] yes

node4-11gr2[10.1.1.13] node3-11gr2[10.1.1.12] yes

node1-11gr2[10.1.1.10] node2-11gr2[10.1.1.11] yes

node1-11gr2[10.1.1.10] node3-11gr2[10.1.1.12] yes

node2-11gr2[10.1.1.11] node3-11gr2[10.1.1.12] yes

Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "10.0.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node2-11gr2:10.1.1.11 node4-11gr2:10.1.1.13 passed

node2-11gr2:10.1.1.11 node1-11gr2:10.1.1.10 passed

node2-11gr2:10.1.1.11 node3-11gr2:10.1.1.12 passed

Result: TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "192.168.150.0".

Subnet mask consistency check passed for subnet "10.0.0.0".

Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.150.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.150.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking node application existence...

Checking existence of VIP node application (required)

Node Name Required Running? Comment

------------ ------------------------ ------------------------ ----------

node4-11gr2 yes yes passed

node1-11gr2 yes yes passed

node2-11gr2 yes yes passed

node3-11gr2 yes yes passed

VIP node application check passed

Checking existence of NETWORK node application (required)

Node Name Required Running? Comment

------------ ------------------------ ------------------------ ----------

node4-11gr2 yes yes passed

node1-11gr2 yes yes passed

node2-11gr2 yes yes passed

node3-11gr2 yes yes passed

NETWORK node application check passed

Checking existence of GSD node application (optional)

Node Name Required Running? Comment

------------ ------------------------ ------------------------ ----------

node4-11gr2 no no exists

node1-11gr2 no no exists

node2-11gr2 no no exists

node3-11gr2 no no exists

GSD node application is offline on nodes "node4-11gr2,node1-11gr2,node2-11gr2,node3-11gr2"

Checking existence of ONS node application (optional)

Node Name Required Running? Comment

------------ ------------------------ ------------------------ ----------

node4-11gr2 no yes passed

node1-11gr2 no yes passed

node2-11gr2 no yes passed

node3-11gr2 no yes passed

ONS node application check passed

Checking Single Client Access Name (SCAN)...

SCAN Name Node Running? ListenerName Port Running?

---------------- ------------ ------------ ------------ ------------ ------------

scan-cluster node1-11gr2 true LISTENER_SCAN1 1521 true

Checking TCP connectivity to SCAN Listeners...

Node ListenerName TCP connectivity?

------------ ------------------------ ------------------------

node2-11gr2 LISTENER_SCAN1 yes

TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "scan-cluster"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...

Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined

More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

ERROR:

PRVG-1101 : SCAN name "scan-cluster" failed to resolve

SCAN Name IP Address Status Comment

------------ ------------------------ ------------------------ ----------

scan-cluster 192.168.150.135 failed NIS Entry

ERROR:

PRVF-4657 : Name resolution setup check for "scan-cluster" (IP address: 192.168.150.135) failed

ERROR:

PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-cluster"

Verification of SCAN VIP and Listener setup failed

Checking to make sure user "grid" is not in "root" group

Node Name Status Comment

------------ ------------------------ ------------------------

node2-11gr2 passed does not exist

Result: User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...

Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...

Check: CTSS Resource running on all nodes

Node Name Status

------------------------------------ ------------------------

node2-11gr2 passed

Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...

Result: Query of CTSS for time offset passed

Check CTSS state started...

Check: CTSS state

Node Name State

------------------------------------ ------------------------

node2-11gr2 Observer

CTSS is in Observer state. Switching over to clock synchronization checks using NTP

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

The NTP configuration file "/etc/ntp.conf" is available on all nodes

NTP Configuration file check passed

Checking daemon liveness...

Check: Liveness for "ntpd"

Node Name Running?

------------------------------------ ------------------------

node2-11gr2 no

Result: Liveness check failed for "ntpd"

PRVF-5494 : The NTP Daemon or Service was not alive on all nodes

PRVF-5415 : Check to see if NTP daemon or service is running failed

Result: Clock synchronization check using Network Time Protocol(NTP) failed

PRVF-9652 : Cluster Time Synchronization Services check failed

Post-check for node addition was unsuccessful on all the nodes.
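
The post-check failure is the same NTP issue seen in the pre-check: an NTP configuration file exists but ntpd is not running on the new node, which also leaves CTSS in observer mode. A quick way to confirm on the new node, assuming the grid environment is set:

crsctl check ctss
service ntpd status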

2.16 Summary

1. Adding a node consists of three phases:

Phase 1: copy the GRID HOME to the new node.

Phase 2: copy the RDBMS HOME to the new node.

Phase 3: add the database instance with DBCA.

2. The main work of phase 1 is copying the GRID HOME to the new node, configuring and starting GRID, and updating the OCR and inventory information.

3. The main work of phase 2 is copying the RDBMS HOME to the new node and updating the inventory information.

4. The main work of phase 3 is creating the new database instance with DBCA (including the undo tablespace, redo logs, initialization parameters and so on) and updating the OCR (including registering the new database instance).

5. While a node is being added or removed, the existing nodes stay online; no downtime is needed and client workloads are not affected.

6. Back up the OCR before adding or removing a node (by default the OCR is backed up automatically every 4 hours). If the add/remove fails, restoring the previous OCR can resolve the problem in some cases; see the sketch after this list.

7. The ORACLE_BASE and ORACLE_HOME paths on the new node are created automatically during the addition and do not need to be created manually.

8. When 11.2 GRID is installed normally, the OUI GUI can configure SSH, but the addNode.sh script has no such feature, so SSH user equivalence for the oracle and grid users must be configured manually; see the sketch after this list.
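
A sketch for items 6 and 8 above: listing and taking OCR backups (run as root from the Grid home), and one way to configure SSH user equivalence with the sshUserSetup.sh script that ships with the installer media under sshsetup/ (run it once for grid and once for oracle; the hostnames are this cluster's):

/u01/11.2.0/grid/bin/ocrconfig -showbackup
/u01/11.2.0/grid/bin/ocrconfig -manualbackup
--SSH equivalence for the grid user; repeat with -user oracle
./sshUserSetup.sh -user grid -hosts "node1-11gr2 node2-11gr2 node3-11gr2 node4-11gr2" -advanced -noPromptPassphrase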
