Oracle 11g RAC Setup (VMware Environment)

  • Oracle 11g RAC Setup (VMware Environment)

    • Installation Environment and Network Planning

      • Installation environment
      • Network planning
    • Environment Configuration
      • Connecting via SecureCRT
      • Disabling the firewall
      • Creating the required users, groups, and directories, and granting permissions
      • Node configuration checks
      • System file settings
      • Configuring IPs, hosts, and hostname
      • Configuring environment variables for the grid and oracle users
      • Configuring SSH equivalence for the oracle user
      • Configuring raw devices
      • Configuring SSH equivalence for the grid user
      • Mounting the installation media folder
      • Installing cvuqdisk for Linux
      • Manually running the CVU to verify Oracle Clusterware requirements (all nodes)
    • Installing Grid Infrastructure
      • Installation steps
      • Post-install resource checks
      • Creating ASM disk groups for data and fast recovery
    • Installing the Oracle Database Software (RAC)
      • Installation steps
      • Creating the cluster database
    • RAC Maintenance
      • Checking service status
      • Checking CRS status
      • Viewing cluster node configuration
      • Viewing the clusterware voting disk
      • Viewing the cluster SCAN VIP
      • Starting and stopping the cluster database
    • EM Management
    • Local sqlplus Connection

Installation Environment and Network Planning

Installation environment

Host operating system: Windows 10

Virtual machines: VMware Workstation 12, running two Oracle Linux R6 U5 x86_64 guests

Oracle Database software: Oracle 11gR2

Cluster software: Oracle Grid Infrastructure 11gR2

Shared storage: ASM

[root@rac1 ~]# lsb_release -a
LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 6.5
Release:        6.5
Codename:       n/a
[root@rac1 ~]# uname -r
3.8.13-16.2.1.el6uek.x86_64

Notes:

1. When installing Oracle Linux, assign two network adapters to each VM: one Host-Only adapter for communication between the two nodes, and one NAT adapter for external access; static IPs are assigned manually later. Plan at least 2.5 GB of memory and swap per host. Disk layout: 500 MB for /boot, with the remaining space managed by LVM (2.5 GB for swap, the rest for /).

The two Oracle Linux hosts are named rac1 and rac2.

Ideally, place the two guest operating systems on different physical disks; otherwise I/O will be strained.

2. Since shared ASM storage is used, and the cluster needs shared space for the registry disk (OCR) and the voting disk, create the shared storage in VMware as follows.

In the VMware installation directory, from a cmd prompt:

C:\Program Files (x86)\VMware\VMware Workstation>
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr2.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\votingdisk.vmdk
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\backup.vmdk

This creates two 1 GB OCR disks, one 1 GB voting disk, one 20 GB data disk, and one 10 GB backup disk.

Edit the vmx configuration file in the RAC1 virtual machine directory:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "F:\VMware\RAC\Sharedisk\ocr.vmdk"
scsi1:1.deviceType = "plainDisk"

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "F:\VMware\RAC\Sharedisk\votingdisk.vmdk"
scsi1:2.deviceType = "plainDisk" 

scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.filename = "F:\VMware\RAC\Sharedisk\data.vmdk"
scsi1:3.deviceType = "plainDisk"

scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.filename = "F:\VMware\RAC\Sharedisk\backup.vmdk"
scsi1:4.deviceType = "plainDisk"

scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "F:\VMware\RAC\Sharedisk\ocr2.vmdk"
scsi1:5.deviceType = "plainDisk"

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

Edit the vmx configuration file for RAC2:

scsi1.sharedBus = "virtual"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
gui.lastPoweredViewMode = "fullscreen"
checkpoint.vmState = ""
usb:0.present = "TRUE"
usb:0.deviceType = "hid"
usb:0.port = "0"
usb:0.parent = "-1"

Then, in the RAC2 virtual machine settings, manually add the five virtual disks created above, with the independent-persistent attribute.

Network planning

Hardware requirements:

- Each server node needs at least two NICs: one public network interface and one private network interface (the interconnect, or "heartbeat").

- If you install the Oracle clusterware via the OUI, the interface names used for the public and private networks must be identical on every node. For example, if node1 uses eth0 as its public interface, node2 cannot use eth1 as its public interface.

IP requirements:

DHCP is not used here; a static SCAN IP is assigned instead (the SCAN IP enables cluster load balancing: the clusterware directs connections to a node as appropriate).

Each node gets a public IP, a virtual IP (VIP), and a private IP.

The public IP, VIP, and SCAN IP must be on the same subnet.

Example of manual IP configuration without GNS:

Identity      Home Node  Host Node                       Given Name  Type     Address
RAC1          RAC1       RAC1                            rac1        Public   192.168.248.101
RAC1 VIP      RAC1       RAC1                            rac1-vip    Public   192.168.248.201
RAC1 Private  RAC1       RAC1                            rac1-priv   Private  192.168.109.101
RAC2          RAC2       RAC2                            rac2        Public   192.168.248.102
RAC2 VIP      RAC2       RAC2                            rac2-vip    Public   192.168.248.202
RAC2 Private  RAC2       RAC2                            rac2-priv   Private  192.168.109.102
SCAN IP       none       Selected by Oracle Clusterware  scan-ip     Virtual  192.168.248.110

Environment Configuration

Unless noted otherwise, perform the following steps on every node. All passwords are set to oracle.

1. Connecting via SecureCRT

  • If Backspace prints ^H in sqlplus:

    Options -> Session Options -> Terminal -> Emulation -> Mapped Keys -> Other mappings

    Check "Backspace sends delete".

  • If Delete and Home do not work in vi:

    Options -> Session Options -> Terminal -> Emulation

    Set Terminal to Linux

    Check "Select an alternate keyboard emulation" and choose Linux

2. Disabling the firewall

[root@rac1 ~]# setenforce 0
setenforce: SELinux is disabled
[root@rac1 ~]# vi /etc/sysconfig/selinux
SELINUX=disabled
[root@rac1 ~]# service iptables stop
[root@rac1 ~]# chkconfig iptables off

3. Creating the required users, groups, and directories, and granting permissions

/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

Following the official documentation, GI and DB are installed under separate users with separate privileges, which is helpful when managing multiple instances.

4. Node configuration checks

Memory: at least 2.5 GB.

Swap:

With 2.5-16 GB of RAM, swap should be at least equal to RAM.

With more than 16 GB of RAM, 16 GB of swap is sufficient.

Check the memory and swap sizes:

[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal:        2552560 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:       2621436 kB

If swap is too small, extend it as follows.

First compute the number of blocks from the desired swap file size in MB: block = size_in_MB * 1024. For example, to add a 64 MB swap file: block = 64 * 1024 = 65536.
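The block arithmetic above can be checked directly in the shell:

```shell
# dd's count below is in bs-sized (1024-byte) blocks, so a 64 MB swap file needs:
echo $((64 * 1024))
```

This prints 65536, the count value used in the dd command below.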

Then run the following:

# dd if=/dev/zero of=/swapfile bs=1024 count=65536

# mkswap /swapfile

# swapon /swapfile

# vi /etc/fstab

Add the line: /swapfile swap swap defaults 0 0

# cat /proc/swaps or # free -m    // check the swap size

# swapoff /swapfile    // disable the added swap file

5. System file settings

(1) Kernel parameters:

[root@rac1 ~]# vi /etc/sysctl.conf

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmall = 2097152
kernel.shmmax = 1306910720
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Apply the new kernel settings:

[root@rac1 ~]# sysctl -p
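The kernel.shmmax value is not arbitrary: a common rule of thumb sets it to roughly half the physical RAM, expressed in bytes. With the MemTotal of 2552560 kB shown earlier:

```shell
# half of MemTotal (in kB), converted to bytes
echo $((2552560 / 2 * 1024))
```

This prints 1306910720, matching the value used in sysctl.conf above.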

Alternatively, the preinstall package on the Oracle Linux media can make these adjustments for you:

[root@rac1 Packages]# pwd
/mnt/cdrom/Packages
[root@rac1 Packages]# ll | grep preinstall
-rw-r--r-- 1 root root 15524 Dec 25 2012 oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm

(2) Configure shell limits for the oracle and grid users:

[root@rac1 ~]# vi /etc/security/limits.conf

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

(3) Configure login:

[root@rac1 ~]# vi /etc/pam.d/login

session required pam_limits.so

(4) Install the required packages:

    binutils-2.20.51.0.2-5.11.el6 (x86_64)
    compat-libcap1-1.10-1 (x86_64)
    compat-libstdc++-33-3.2.3-69.el6 (x86_64)
    compat-libstdc++-33-3.2.3-69.el6.i686
    gcc-4.4.4-13.el6 (x86_64)
    gcc-c++-4.4.4-13.el6 (x86_64)
    glibc-2.12-1.7.el6 (i686)
    glibc-2.12-1.7.el6 (x86_64)
    glibc-devel-2.12-1.7.el6 (x86_64)
    glibc-devel-2.12-1.7.el6.i686
    ksh
    libgcc-4.4.4-13.el6 (i686)
    libgcc-4.4.4-13.el6 (x86_64)
    libstdc++-4.4.4-13.el6 (x86_64)
    libstdc++-4.4.4-13.el6.i686
    libstdc++-devel-4.4.4-13.el6 (x86_64)
    libstdc++-devel-4.4.4-13.el6.i686
    libaio-0.3.107-10.el6 (x86_64)
    libaio-0.3.107-10.el6.i686
    libaio-devel-0.3.107-10.el6 (x86_64)
    libaio-devel-0.3.107-10.el6.i686
    make-3.81-19.el6
    sysstat-9.0.4-11.el6 (x86_64)

A local yum repository is used here; configure it first:

[root@rac1 ~]# mount /dev/cdrom /mnt/cdrom/
[root@rac1 ~]# vi /etc/yum.repos.d/dvd.repo

[dvd]
name=dvd
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1

[root@rac1 ~]# yum clean all
[root@rac1 ~]# yum makecache
[root@rac1 ~]# yum install gcc gcc-c++ glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat

6. Configuring IPs, hosts, and hostname

(1) Configure IPs

The gateway is determined by the VMware network settings; eth0 connects to the external network and eth1 carries the private interconnect (heartbeat).

On rac1:

[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

IPADDR=192.168.248.101
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1

IPADDR=192.168.109.101
PREFIX=24

On rac2:

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

IPADDR=192.168.248.102
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1

IPADDR=192.168.109.102
PREFIX=24

(2) Configure hostname

On rac1:

[root@rac1 ~]# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=rac1
GATEWAY=192.168.248.2
NOZEROCONF=yes

On rac2:

[root@rac2 ~]# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=rac2
GATEWAY=192.168.248.2
NOZEROCONF=yes

(3) Configure hosts

Add the following on both rac1 and rac2:

[root@rac1 ~]# vi /etc/hosts

192.168.248.101 rac1
192.168.248.201 rac1-vip
192.168.109.101 rac1-priv
192.168.248.102 rac2
192.168.248.202 rac2-vip
192.168.109.102 rac2-priv
192.168.248.110 scan-ip

7. Configuring environment variables for the grid and oracle users

ORACLE_SID must be adjusted per node.

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1  # use +ASM2 on rac2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

Note that ORACLE_UNQNAME is the database name (selecting multiple nodes at database creation time creates one instance per node), while ORACLE_SID is the name of the local database instance.

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1  # use orcl2 on rac2
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

Run source .bash_profile to make the settings take effect.

8. Configuring SSH equivalence for the oracle user

This is a crucial step. Although the official documentation states that the OUI can configure SSH automatically when installing GI and RAC, configuring equivalence manually is preferable so that the CVU checks can be run before installation.

Generate keys as the oracle user on both nodes:

ssh-keygen -t rsa
ssh-keygen -t dsa

Then on rac1:

[oracle@rac1 ~]$ cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

[oracle@rac1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[oracle@rac1 .ssh]$ chmod 600 authorized_keys

Verify from both nodes:

ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

Note: generate the keys without a passphrase, set the authorized_keys file to mode 600, and make sure each node has connected to the other over ssh at least once.

9. Configuring raw devices

Managing storage with ASM requires raw devices; the shared disks were attached to both hosts earlier. There are two ways to configure raw devices: (1) via oracleasm, or (2) via the /etc/udev/rules.d/60-raw.rules configuration file.

Before configuring the raw devices, partition the disks:

fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
Finally, save the changes with the w command.

Repeat for the other disks, which yields the following partitions:

[root@rac1 ~]# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1 /dev/sdc /dev/sdc1 /dev/sdd /dev/sdd1 /dev/sde /dev/sde1 /dev/sdf /dev/sdf1

Add the raw device bindings:

[root@rac1 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add",KERNEL=="sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add",KERNEL=="sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add",KERNEL=="sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add",KERNEL=="sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add",KERNEL=="sdf1",RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="81",RUN+="/bin/raw /dev/raw/raw5 %M %m"

KERNEL=="raw[1-5]",OWNER="grid",GROUP="asmadmin",MODE="660"

[root@rac1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 grid asmadmin 162, 4 Apr 13 13:51 raw4
crw-rw---- 1 grid asmadmin 162, 5 Apr 13 13:51 raw5
crw-rw---- 1 root disk     162, 0 Apr 13 13:51 rawctl

Note that the rule entries are sensitive to stray whitespace around the values; extra spaces will cause errors. The resulting raw devices must be owned by grid:asmadmin, as shown above.

10. Configuring SSH equivalence for the grid user

Generate keys as the grid user on both nodes:

[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-keygen -t dsa
[grid@rac2 ~]$ ssh-keygen -t rsa
[grid@rac2 ~]$ ssh-keygen -t dsa

Then on rac1:

[grid@rac1 ~]$ cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

[grid@rac1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[grid@rac1 .ssh]$ chmod 600 authorized_keys

11. Mounting the installation media folder

Share the folder from the Windows host, then mount it inside the virtual machines:

mkdir -p /home/grid/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/grid/db
mkdir -p /home/oracle/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/oracle/db

12. Installing cvuqdisk for Linux

Install cvuqdisk on both Oracle RAC nodes. Without it, the Cluster Verification Utility cannot discover the shared disks, and every run of the CVU (whether manual or automatic at the end of the Oracle Grid Infrastructure installation) reports the error "Package cvuqdisk not installed".

Use the cvuqdisk RPM that matches your hardware architecture (x86_64 or i386).

The cvuqdisk RPM is in the rpm directory on the grid installation media.
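The installation itself is two commands per node, run as root from the rpm directory of the grid media; a minimal sketch (the exact RPM file name, cvuqdisk-1.0.9-1.rpm here, depends on your media and is an assumption):

```shell
# group that owns the Oracle inventory; the cvuqdisk installer reads this variable
export CVUQDISK_GRP=oinstall
# install the package (the file name may differ on your media)
rpm -iv cvuqdisk-1.0.9-1.rpm
```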

13. Manually running the CVU to verify Oracle Clusterware requirements (all nodes)

On rac1, run runcluvfy.sh from the grid software directory:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd db/grid/
[grid@rac1 grid]$ ls
doc      readme.html  rpm           runInstaller  stage
install  response     runcluvfy.sh  sshsetup      welcome.html
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

Review the CVU report and fix any errors.

Here every other CVU check came back "passed"; only the following error appeared:

Checking DNS response time for an unreachable node
Node Name    Status
------------ ------------------------
rac2         failed
rac1         failed

PRVF-5637 : DNS response time could not be checked on following nodes: rac2,rac1
File "/etc/resolv.conf" is not consistent across nodes

This error occurs because no DNS is configured, but it does not affect the installation. A resolv.conf error will also appear later; since a static SCAN IP is used, both can be ignored.

Installing Grid Infrastructure

1. Installation steps

The installation only needs to be run on one node; the software is copied to the other nodes automatically. Here it is run on rac1.

Start the graphical installer as the grid user:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd db/grid/
[grid@rac1 grid]$ ./runInstaller

Skip software updates.

Choose to install and configure a cluster.

Choose the custom (advanced) installation.

Select English as the language.

Define the cluster name; SCAN Name is the scan-ip defined in hosts; uncheck GNS.

The node list initially shows only the first node, rac1; click "Add" to add the second node, rac2.

Select the network interfaces.

Configure ASM using the raw devices raw1, raw2, and raw3 configured earlier, with External redundancy (i.e. no redundancy). Since this is only a test setup, a single device would also do. These devices hold the OCR registry disk and the voting disk.

Set passwords for the ASM instance: the sys user (SYSASM privilege) and the asmsnmp user (SYSDBA privilege). A single password, oracle, is used here; the installer warns that it does not meet the standard, so just click OK.

Choose not to use IPMI (Intelligent Platform Management Interface).

Review the ASM instance privilege group assignments.

Choose the grid software installation path and the base directory.

Choose the grid inventory directory.

The prerequisite check reports the resolv.conf error; since no DNS is configured, it can be ignored.

Review the grid installation summary.

Start the installation.

The software is copied to the other nodes.

When the grid installation finishes, it prompts you to run the scripts orainstRoot.sh and root.sh as root, in order (they must complete on rac1 before being run on any other node).

Run the scripts on rac1:

[root@rac1 rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 rpm]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group OCR created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 496abcfc4e214fc9bf85cf755e0cc8e2.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run the scripts on rac2:

[root@rac2 grid]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 grid]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After the scripts complete, click OK, then Next.

An error appeared at this point.

Check the log indicated in the prompt:

[grid@rac1 grid]$ vi /u01/app/oraInventory/logs/installActions2016-04-10_04-57-29PM.log
Search for the error in command mode: /ERROR
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
INFO: Checking name resolution setup for "scan-ip"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-ip" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-ip" (IP address: 192.168.248.110) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip"
INFO: Verification of SCAN VIP and Listener setup failed

The error log shows this is again due to the missing resolv.conf (DNS) configuration, so it can be ignored.

Installation complete.

The grid inventory location is shown on the final screen.

The grid clusterware installation is now finished.

2. Post-install resource checks

Run the following commands as the grid user.

[root@rac1 ~]# su - grid

Check CRS status:

[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check the Clusterware resources:

[grid@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1
ora.OCR.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1  

Check the cluster nodes:

[grid@rac1 ~]$ olsnodes -n
rac1    1
rac2    2

Check the Oracle TNS listener processes on both nodes:

[grid@rac1 ~]$ ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'
LISTENER_SCAN1
LISTENER

Confirm Oracle ASM is serving the Oracle Clusterware files:

If the OCR and voting disk files are stored on Oracle ASM, then, as the Grid Infrastructure installation owner, use the following command to confirm that the installed Oracle ASM is running:

[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.

3. Creating ASM disk groups for data and fast recovery

The official documentation specifies the sizes required for the OCR, voting disk, database, and recovery areas under the different redundancy policies.

This only needs to be done on node rac1.

Switch to the grid user:

[root@rac1 ~]# su - grid

Use asmca:

[grid@rac1 ~]$ asmca

The OCR disk group configured during the grid installation is already listed.

Add the DATA disk group: click Create and use raw device raw4.

Likewise, create the FRA disk group using raw device raw5.

The ASM disk groups are now in place.

The ASM instances are shown on the instance tab.

Installing the Oracle Database Software (RAC)

1. Installation steps

This only needs to be run on node rac1:

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd db/database
[oracle@rac1 database]$ ./runInstaller

In the graphical installer, skip software updates.

Choose to install the database software only.

Select the Oracle Real Application Clusters database installation option (the default) and make sure all nodes are checked.

The SSH Connectivity step configures oracle-user equivalence between the nodes; since this was done manually earlier, it can be skipped.

Select English as the language.

Choose the Enterprise Edition.

Choose the Oracle software paths; ORACLE_BASE and ORACLE_HOME are the ones configured earlier.

Assign the OS groups for the oracle privileges.

Run the pre-installation checks.

The two errors shown were explained earlier and can be ignored.

Review the RAC installation summary.

Start the installation; the software is copied to the other nodes automatically.

When the installation finishes, run the script as root on each node:

[root@rac1 etc]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

When the installation is complete, click Close.

The oracle software is now installed on both RAC nodes; the installation log records the details.

2. Creating the cluster database

On node rac1, run dbca as the oracle user to create the RAC database:

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ dbca

Choose Create a Database.

Choose a custom database (a general-purpose template also works).

Choose the Admin-Managed configuration type, enter the global database name orcl with instance SID prefix orcl, and select both nodes.

Accept the defaults: configure OEM and enable the automatic maintenance tasks.

Set a single password, oracle, for the sys, system, dbsnmp, and sysman users.

Use ASM storage with OMF (Oracle Managed Files), and select the DATA disk group created earlier for the data area.

Set the ASM password to oracle.

Specify the flash recovery area using the FRA disk group created earlier; do not enable archiving.

Select the components.

Select the AL32UTF8 character set.

Accept the default data storage settings.

Start creating the database; check the option to generate the database scripts.

Review the database summary.

The components are installed.

The database creation completes.

RAC Maintenance

1. Checking service status

The OFFLINE gsd resources can be ignored.

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora.FRA.dg     ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.OCR.dg     ora....up.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1

Check the running status of the cluster database:

[grid@rac1 ~]$ srvctl status database -d orcl

Instance orcl1 is running on node rac1

Instance orcl2 is running on node rac2

2. Checking CRS status

Check the CRS status on the local node:

[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check the CRS status across the cluster:

[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

3. Viewing cluster node configuration

[grid@rac1 ~]$ olsnodes
rac1
rac2

[grid@rac1 ~]$ olsnodes -n
rac1    1
rac2    2

[grid@rac1 ~]$ olsnodes -n -i -s -t
rac1    1       rac1-vip        Active  Unpinned
rac2    2       rac2-vip        Active  Unpinned

4. Viewing the clusterware voting disk

[grid@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]
Located 1 voting disk(s).

5. Viewing the cluster SCAN VIP

[grid@rac1 ~]$ srvctl config scan
SCAN name: scan-ip, Network: 1/192.168.248.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-ip/192.168.248.110

View the cluster SCAN listener:

[grid@rac1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

6. Starting and stopping the cluster database

To stop and start the database across the whole cluster, as the grid user:

[grid@rac1 ~]$ srvctl stop database -d orcl
[grid@rac1 ~]$ srvctl start database -d orcl

To shut down the clusterware, as root:

[root@rac1 bin]# pwd
/u01/app/11.2.0/grid/bin
[root@rac1 bin]# ./crsctl stop crs

Note that this actually stops the clusterware only on the node where it is run.
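To bring the clusterware stack down on every node in one step, crsctl in 11gR2 also accepts a cluster-wide form, run as root from the Grid home; a sketch:

```shell
# stops CRS, CSS and EVM on all nodes; the OHAS daemon itself keeps running
/u01/app/11.2.0/grid/bin/crsctl stop cluster -all
```

Stopping the full stack including OHAS still requires running crsctl stop crs on each node individually.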

EM Management

Run as the oracle user:

[oracle@rac1 ~]$ emctl status dbconsole
[oracle@rac1 ~]$ emctl start dbconsole
[oracle@rac1 ~]$ emctl stop dbconsole

Local sqlplus Connection

Install the Oracle client on Windows.

Edit tnsnames.ora:

D:\develop\app\orcl\product\11.2.0\client_1\network\admin\tnsnames.ora

Add the following entry:

RAC_ORCL =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.248.110)(PORT = 1521))
      )
      (CONNECT_DATA =
        (SERVER = DEDICATED)
        (SERVICE_NAME = orcl)
      )
    )

The HOST here is the scan-ip.

C:\Users\sxtcx>sqlplus sys/oracle@RAC_ORCL as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Thu Apr 14 14:37:30 2016

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name, status from v$instance;

INSTANCE_NAME                    STATUS
-------------------------------- ------------------------
orcl1                            OPEN

When a second command-line session is opened, it connects to instance orcl2, which shows that the SCAN IP provides load balancing across the cluster.





Date: 2025-01-04 11:48:00
