redhat6 + 11G RAC Two-Node Deployment

 

1. Configure the Network Environment

node1

[root@node1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1

[root@node1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.41
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
ONBOOT=yes

[root@node1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.41
NETMASK=255.255.255.0
ONBOOT=yes

[root@node1 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost
::1             localhost6.localdomain6 localhost6
192.168.10.41   node1
192.168.10.43   node1-vip
10.10.10.41     node1-priv
192.168.10.42   node2
192.168.10.44   node2-vip
10.10.10.42     node2-priv
192.168.10.55   rac_scan

[root@node1 ~]# service network restart

node2 (essentially the same as node1; only the IP addresses and the hostname differ)
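For reference, a minimal sketch of the matching node2 configuration, assuming the node2 addresses already listed in /etc/hosts above (192.168.10.42 public on eth0, 10.10.10.42 private on eth1); /etc/hosts itself should carry the same entries on both nodes:

[root@node2 ~]# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node2

[root@node2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.42
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
ONBOOT=yes

[root@node2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.42
NETMASK=255.255.255.0
ONBOOT=yes

[root@node2 ~]# service network restart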

 

2. Create Users, Groups, and the oracle/grid Directories

node1

[root@node1 ~]# vi mkuser.sh
groupadd -g 200 oinstall
groupadd -g 201 dba
groupadd -g 202 oper
groupadd -g 203 asmadmin
groupadd -g 204 asmoper
groupadd -g 205 asmdba
useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

[root@node1 ~]# sh mkuser.sh

[root@node1 ~]# vi mkdir.sh
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory/
chmod -R 775 /u01/app/oraInventory/
mkdir -p /u01/11.2.0/grid
chown -R grid:oinstall /u01/11.2.0/grid/
chmod -R 775 /u01/11.2.0/grid/
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/cfgtoollogs
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

[root@node1 ~]# sh mkdir.sh

[root@node1 ~]# passwd oracle
[root@node1 ~]# passwd grid

[root@node1 ~]# id oracle
uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),205(asmdba)
[root@node1 ~]# id grid
uid=201(grid) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),203(asmadmin),204(asmoper),205(asmdba)
[root@node1 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)

 

node2 (same as node1)

3. Modify Four Files under /etc

node1

[root@node1 ~]# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144

net.core.wmem_max = 1048576

[root@node1 ~]# sysctl -p
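To confirm that the new kernel parameters are in effect, individual values can be read back; the output should match what was just set in /etc/sysctl.conf:

[root@node1 ~]# sysctl kernel.shmmax kernel.sem net.ipv4.ip_local_port_range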

[root@node1 ~]# vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240

[root@node1 ~]# vi /etc/pam.d/login
session required /lib/security/pam_limits.so

[root@node1 ~]# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

node2 (same as node1)

 

 

4. Disable the NTP Service (Oracle's Built-in Time Synchronization Will Be Used) and Disable sendmail

node1

[root@node1 ~]# chkconfig ntpd off
[root@node1 ~]# chkconfig ntpd --list
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@node1 ~]# chkconfig sendmail off
[root@node1 ~]# chkconfig sendmail --list

node2 (same as node1)
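With ntpd disabled and /etc/ntp.conf renamed on both nodes, Oracle's Cluster Time Synchronization Service (CTSS) should switch to active mode once Grid Infrastructure is installed later. A quick, assumed check for afterwards, run as the grid user; it should report that CTSS is in active mode:

[grid@node1 ~]$ crsctl check ctss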

5. Configure Environment Variables for the oracle and grid Users

 

node1

[root@node1 ~]# su - oracle
[oracle@node1 ~]$ vi .bash_profile
export EDITOR=vi
export ORACLE_SID=prod1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[oracle@node1 ~]$ . .bash_profile

[root@node1 ~]# su - grid
[grid@node1 ~]$ vi .bash_profile
export EDITOR=vi
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/11.2.0/grid
export GRID_HOME=/u01/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[grid@node1 ~]$ . .bash_profile

node2 (same as node1, except ORACLE_SID=prod2 for the oracle user and ORACLE_SID=+ASM2 for the grid user)

 

6. Partition the Disks and Create ASM Disks

node1

Check all disks on the system:

[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Partition /dev/sdb:

[root@node1 ~]# fdisk /dev/sdb

Partition /dev/sdc:

[root@node1 ~]# fdisk /dev/sdc
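The interactive fdisk dialog is not reproduced above. As a rough sketch of what is needed (exact keystrokes may vary): /dev/sdb gets one primary partition spanning the whole disk, while /dev/sdc gets three primary partitions for OCR/voting plus an extended partition carved into the logical partitions sdc5-sdc8 used later for data and recovery:

[root@node1 ~]# fdisk /dev/sdb
# n -> p -> 1 -> accept the default start/end cylinders -> w

[root@node1 ~]# fdisk /dev/sdc
# n, p, 1..3    three primary partitions (later the OCR/voting disks)
# n, e, 4       extended partition over the remaining space
# n, l (x4)     logical partitions, which become sdc5..sdc8
# w             write the table, then run partprobe (if available) or reboot
#               so the kernel re-reads the partition table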

Check the disk information again:

[root@node1 ~]# fdisk -l

Format /dev/sdb1:

[root@node1 ~]# mkfs.ext3 /dev/sdb1

Mount the new partition /dev/sdb1 on /u01 and verify the mount:

[root@node1 ~]# mount /dev/sdb1 /u01
[root@node1 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   18G  6.1G   11G  38% /
/dev/sda1                         99M   12M   82M  13% /boot
tmpfs                            782M     0  782M   0% /dev/shm
/dev/sdb1                         30G  173M   28G   1% /u01

Check physical memory and swap space:

[root@node1 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          1562       1525         37          0         11       1438
-/+ buffers/cache:         75       1486
Swap:         2047          0       2047

Create a large file to use as additional swap:

[root@node1 ~]# dd if=/dev/zero of=/u01/swapfile1 bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 5.66353 seconds, 379 MB/s

Initialize the swap file:

[root@node1 ~]# mkswap -c /u01/swapfile1
Setting up swapspace version 1, size = 2147479 kB

Enable the swap file:

[root@node1 ~]# swapon /u01/swapfile1

Check physical memory and the enlarged swap space:

[root@node1 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          1562       1525         37          0         11       1438
-/+ buffers/cache:         75       1486
Swap:         4095          0       4095

Add the newly mounted partition and the extra swap file to /etc/fstab so that they are mounted automatically after a reboot:

[root@node1 ~]# vi /etc/fstab
/dev/VolGroup00/LogVol00  /          ext3    defaults          1 1
LABEL=/boot               /boot      ext3    defaults          1 2
tmpfs                     /dev/shm   tmpfs   defaults,size=1g  0 0
devpts                    /dev/pts   devpts  gid=5,mode=620    0 0
sysfs                     /sys       sysfs   defaults          0 0
proc                      /proc      proc    defaults          0 0
/dev/VolGroup00/LogVol01  swap       swap    defaults          0 0
/dev/sdb1                 /u01       ext3    defaults          0 0
/u01/swapfile1            swap       swap    defaults          0 0
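A quick sanity check before trusting the fstab entries to a reboot: swapon -s should now list both the LVM swap volume and /u01/swapfile1, and df should show /dev/sdb1 mounted on /u01:

[root@node1 ~]# swapon -s
[root@node1 ~]# df -h /u01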


 

Check the ASMLib packages (downloaded from Oracle's website and uploaded to the Linux system):

[root@node1 ~]# cd /soft/asm
[root@node1 asm]# ls
oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.3-1.el5.i386.rpm

Make sure the oracleasm kernel package matches the running kernel version:

[root@node1 asm]# uname -a
Linux node1 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:43 EDT 2010 i686 i686 i386 GNU/Linux

Install the ASMLib packages:

[root@node1 asm]# rpm -ivh *.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

Configure and initialize oracleasm:

[root@node1 asm]# service oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

Create the oracleasm disks:

[root@node1 asm]# service oracleasm createdisk OCR_VOTE1 /dev/sdc1
Marking disk "OCR_VOTE1" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm createdisk OCR_VOTE2 /dev/sdc2
Marking disk "OCR_VOTE2" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm createdisk OCR_VOTE3 /dev/sdc3
Marking disk "OCR_VOTE3" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm createdisk ASM_DATA1 /dev/sdc5
Marking disk "ASM_DATA1" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm createdisk ASM_DATA2 /dev/sdc6
Marking disk "ASM_DATA2" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm createdisk ASM_RCY1 /dev/sdc7
Marking disk "ASM_RCY1" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm createdisk ASM_RCY2 /dev/sdc8
Marking disk "ASM_RCY2" as an ASM disk: [  OK  ]
[root@node1 asm]# service oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_RCY1
ASM_RCY2
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3

Now copy the three packages in /soft/asm on node1 to /soft/asm on node2.
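One possible way to perform the copy (the target directory is an assumption; it is created first, and the root password must be entered since root SSH equivalence is not configured between the nodes):

[root@node1 ~]# ssh node2 mkdir -p /soft/asm
[root@node1 ~]# scp /soft/asm/*.rpm node2:/soft/asm/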

 

After the copy, check the ASMLib packages on node2 and, as before, make sure they match the running kernel version:

[root@node2 asm]# uname -a
Linux node2 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:43 EDT 2010 i686 i686 i386 GNU/Linux

Install the ASMLib packages:

[root@node2 asm]# rpm -ivh *.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

Node2 must also run the oracleasm configuration:

[root@node2 asm]# service oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

Then scan for the ASM disks and list them:

[root@node2 asm]# service oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [  OK  ]
[root@node2 asm]# service oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_RCY1
ASM_RCY2
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3

(node1 and node2 are configured identically; the shared disk /dev/sdc does not need to be partitioned or labeled again on node2.)
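If an ASM disk label ever needs to be traced back to its underlying block device, oracleasm can query it (shown here for one of the labels created above):

[root@node2 asm]# service oracleasm querydisk OCR_VOTE1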

7. Establish SSH Trust between the Hosts

Set up SSH user equivalence for the oracle and grid users between the nodes (generate RSA and DSA key pairs with ssh-keygen; accept the default file locations and leave the passphrase empty when prompted).

node1 -- oracle user

[root@node1 ~]# su - oracle
[oracle@node1 ~]$ mkdir .ssh
[oracle@node1 ~]$ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .emacs  .kde  .mozilla  .ssh  .viminfo
[oracle@node1 ~]$ ssh-keygen -t rsa
[oracle@node1 ~]$ ssh-keygen -t dsa

node2 -- oracle user

[root@node2 ~]# su - oracle
[oracle@node2 ~]$ mkdir .ssh
[oracle@node2 ~]$ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .emacs  .kde  .mozilla  .ssh  .viminfo
[oracle@node2 ~]$ ssh-keygen -t rsa
[oracle@node2 ~]$ ssh-keygen -t dsa

Configure the trust relationship (on node1):

[oracle@node1 ~]$ ls .ssh
id_dsa  id_dsa.pub  id_rsa  id_rsa.pub  known_hosts
[oracle@node1 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[oracle@node1 ~]$ cat .ssh/id_dsa.pub >> .ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat .ssh/id_dsa.pub >> .ssh/authorized_keys
oracle@node2's password:
[oracle@node1 ~]$ scp .ssh/authorized_keys node2:~/.ssh
oracle@node2's password:
authorized_keys                               100% 1988     1.9KB/s   00:00

Verify the trust relationship:

[oracle@node1 ~]$ ssh node1 date
[oracle@node1 ~]$ ssh node1-priv date
[oracle@node1 ~]$ ssh node2-priv date
[oracle@node1 ~]$ ssh node2 date

[oracle@node1 ~]$ ssh node1 date
Wed Aug 27 00:48:15 CST 2014
[oracle@node1 ~]$ ssh node1-priv date
Wed Aug 27 00:48:17 CST 2014
[oracle@node1 ~]$ ssh node2 date
Wed Aug 27 00:48:18 CST 2014
[oracle@node1 ~]$ ssh node2-priv date
Wed Aug 27 00:48:21 CST 2014
[oracle@node1 ~]$ ssh node2 date;date
Wed Aug 27 00:50:28 CST 2014
Wed Aug 27 00:50:29 CST 2014
[oracle@node1 ~]$ ssh node2-priv date;date
Wed Aug 27 00:50:38 CST 2014
Wed Aug 27 00:50:38 CST 2014

[oracle@node2 ~]$ ssh node2 date
[oracle@node2 ~]$ ssh node2-priv date
[oracle@node2 ~]$ ssh node1-priv date
[oracle@node2 ~]$ ssh node1 date

[oracle@node2 ~]$ ssh node2 date
Wed Aug 27 00:49:09 CST 2014
[oracle@node2 ~]$ ssh node2-priv date
Wed Aug 27 00:49:11 CST 2014
[oracle@node2 ~]$ ssh node1 date
Wed Aug 27 00:49:15 CST 2014
[oracle@node2 ~]$ ssh node1-priv date
Wed Aug 27 00:49:19 CST 2014
[oracle@node2 ~]$ ssh node1 date;date
Wed Aug 27 00:51:28 CST 2014
Wed Aug 27 00:51:29 CST 2014
[oracle@node2 ~]$ ssh node1-priv date;date
Wed Aug 27 00:51:48 CST 2014
Wed Aug 27 00:51:48 CST 2014

node1 -- grid user

[root@node1 ~]# su - grid
[grid@node1 ~]$ mkdir .ssh
[grid@node1 ~]$ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .emacs  .kde  .mozilla  .ssh  .viminfo
[grid@node1 ~]$ ssh-keygen -t rsa
[grid@node1 ~]$ ssh-keygen -t dsa

node2 -- grid user

[root@node2 ~]# su - grid
[grid@node2 ~]$ mkdir .ssh
[grid@node2 ~]$ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .emacs  .kde  .mozilla  .ssh  .viminfo
[grid@node2 ~]$ ssh-keygen -t rsa
[grid@node2 ~]$ ssh-keygen -t dsa

Configure the trust relationship (on node1):

[grid@node1 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[grid@node1 ~]$ cat .ssh/id_dsa.pub >> .ssh/authorized_keys
[grid@node1 ~]$ ssh node2 cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[grid@node1 ~]$ ssh node2 cat .ssh/id_dsa.pub >> .ssh/authorized_keys
grid@node2's password:
[grid@node1 ~]$ scp .ssh/authorized_keys node2:~/.ssh
grid@node2's password:
authorized_keys                               100% 1984     1.9KB/s   00:00

Verify the trust relationship:

[grid@node1 ~]$ ssh node1 date
[grid@node1 ~]$ ssh node1-priv date
[grid@node1 ~]$ ssh node2-priv date
[grid@node1 ~]$ ssh node2 date

[grid@node1 ~]$ ssh node1 date
Wed Aug 27 00:57:37 CST 2014
[grid@node1 ~]$ ssh node1-priv date
Wed Aug 27 00:57:39 CST 2014
[grid@node1 ~]$ ssh node2 date
Wed Aug 27 00:57:41 CST 2014
[grid@node1 ~]$ ssh node2-priv date
Wed Aug 27 00:57:43 CST 2014
[grid@node1 ~]$ ssh node2-priv date;date
Wed Aug 27 00:57:50 CST 2014
Wed Aug 27 00:57:51 CST 2014
[grid@node1 ~]$ ssh node2 date;date
Wed Aug 27 00:58:01 CST 2014
Wed Aug 27 00:58:01 CST 2014

[grid@node2 ~]$ ssh node2 date
[grid@node2 ~]$ ssh node2-priv date
[grid@node2 ~]$ ssh node1-priv date
[grid@node2 ~]$ ssh node1 date

[grid@node2 ~]$ ssh node2 date
Wed Aug 27 00:59:01 CST 2014
[grid@node2 ~]$ ssh node2-priv date
Wed Aug 27 00:59:03 CST 2014
[grid@node2 ~]$ ssh node1 date
Wed Aug 27 00:59:05 CST 2014
[grid@node2 ~]$ ssh node1-priv date
Wed Aug 27 00:59:08 CST 2014
[grid@node2 ~]$ ssh node1-priv date;date
Wed Aug 27 00:59:12 CST 2014
Wed Aug 27 00:59:12 CST 2014
[grid@node2 ~]$ ssh node1 date;date
Wed Aug 27 00:59:25 CST 2014
Wed Aug 27 00:59:24 CST 2014

8. Verify the Environment before Installation

Verify the installation environment as the grid user (from the directory containing the unzipped Grid installation media).

[root@node1 ~]# cd /soft
[root@node1 soft]# ls
asm  linux_11gR2_database_1of2.zip  linux_11gR2_database_2of2.zip  linux_11gR2_grid.zip
[root@node1 soft]# unzip linux_11gR2_grid.zip
[root@node1 soft]# ls
asm  grid  linux_11gR2_database_1of2.zip  linux_11gR2_database_2of2.zip  linux_11gR2_grid.zip
[root@node1 soft]# chown -R grid:oinstall grid/
[root@node1 soft]# chmod -R 775 grid/
[root@node1 soft]# chown -R grid:oinstall /tmp/bootstrap/    (skip this if the directory does not exist)
[root@node1 soft]# chmod -R 775 /tmp/bootstrap/              (skip this if the directory does not exist)
[root@node1 soft]# su - grid
[grid@node1 ~]$ cd /soft/grid/
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose

Pay attention to any checks reported as "failed".

Install any packages that the verification reports as missing (on all nodes).

(node1 and node2 are handled the same way.) In the end every check on every node should report "passed"; verify this yourself.
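If cluvfy flags missing packages, install them on every node before continuing. A hedged example using yum (the package names below are only illustrative; install exactly what the report lists, via rpm -ivh from the installation DVD if no yum repository is configured):

[root@node1 ~]# yum install -y libaio-devel sysstat unixODBC unixODBC-devel
[root@node2 ~]# yum install -y libaio-devel sysstat unixODBC unixODBC-devel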

Install Grid Infrastructure
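The Grid installer itself is not shown in the original transcript; it is launched as the grid user from the unzipped media, and an X display is assumed (local console, VNC, or ssh -X). Near the end of the installation, OUI prompts for the two root scripts that follow:

[root@node1 ~]# su - grid
[grid@node1 ~]$ cd /soft/grid
[grid@node1 grid]$ ./runInstaller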

[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Node 2 must also run /u01/app/oraInventory/orainstRoot.sh.

[root@node1 ~]# /u01/11.2.0/grid/root.sh

Node 2 must also run /u01/11.2.0/grid/root.sh.

(The same applies on node2. Remember the order: node1 runs the first script, then node2 runs the first script; next node1 runs the second script, then node2 runs the second script. The order must not be changed.)

After the Grid installation completes, check whether the CRS processes are running.

node1

[root@node1 ~]# vi /etc/profile
export PATH=$PATH:/u01/11.2.0/grid/bin
[root@node1 ~]# source /etc/profile
[root@node1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@node1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1
ora....VOTE.dg ora....up.type ONLINE    ONLINE    node1
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.eons       ora.eons.type  ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    node1
ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1

node2

[root@node2 ~]# vi /etc/profile
export PATH=$PATH:/u01/11.2.0/grid/bin
[root@node2 ~]# source /etc/profile
[root@node2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@node2 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1
ora....VOTE.dg ora....up.type ONLINE    ONLINE    node1
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.eons       ora.eons.type  ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    node1
ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1


9. Install the Oracle Database Software

[root@node1 ~]# cd /soft/
[root@node1 soft]# ls
asm  grid  linux_11gR2_database_1of2.zip  linux_11gR2_database_2of2.zip  linux_11gR2_grid.zip
[root@node1 soft]# unzip linux_11gR2_database_1of2.zip
…………
[root@node1 soft]# unzip linux_11gR2_database_2of2.zip
…………
[root@node1 soft]# ls
asm  grid  database  linux_11gR2_database_1of2.zip  linux_11gR2_database_2of2.zip  linux_11gR2_grid.zip
[root@node1 soft]# chown -R oracle:oinstall database/
[root@node1 soft]# chmod -R 775 database/
[root@node1 ~]# su - oracle
[oracle@node1 ~]$ cd /soft/database/
[oracle@node1 database]$ ls
doc  install  response  rpm  runInstaller  sshsetup  stage  welcome.html

The pre-installation preparation is the same as for Grid.

[oracle@node1 database]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 7196 MB    Passed
Checking swap space: must be greater than 150 MB.  Actual 4005 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-08-27_03-43-06AM. Please wait ...[oracle@node1 database]$

node1

[root@node1 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

node2

[root@node2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

10. Create the ASM Disk Groups

[root@node1 ~]# su - grid
[grid@node1 ~]$ asmca
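asmca is a graphical tool; once the disk groups have been created in it (for example groups built from the OCR_VOTE*, ASM_DATA* and ASM_RCY* disks labeled earlier -- the exact group names depend on what is chosen in asmca), they can be listed from the command line as the grid user:

[grid@node1 ~]$ asmcmd lsdg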

11. Create the Database with DBCA

[oracle@node1 ~]$ dbca

This completes the Oracle database installation.

Verification:

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Aug 27 04:52:36 2014

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select status from gv$instance;

STATUS
------------
OPEN
OPEN

SQL> show parameter name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      prod
db_unique_name                       string      prod
global_names                         boolean     FALSE
instance_name                        string      prod1
lock_name_space                      string
log_file_name_convert                string
service_names                        string      prod
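As a final cluster-level check, the database can also be queried with srvctl from either node (the database name is prod, as shown above); the output should report one instance running on each node:

[oracle@node1 ~]$ srvctl status database -d prod
[oracle@node1 ~]$ srvctl config database -d prod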
