Changing the /etc/hosts File in Oracle RAC 11g

From the official documentation:

(1) Can I change the public hostname in my Oracle Database 10g Cluster using Oracle Clusterware?

    Hostname changes are not supported in Oracle Clusterware (CRS), unless you want to perform a deletenode followed by a new addnode operation.
The hostname is used to store, among other things, the flag files, and the Oracle Clusterware stack will not start if the hostname is changed.

(2) Does the hostname have to match the public name, or can it be anything else?

    When there is no vendor clusterware, only Oracle Clusterware, the public node name must match the host name. When vendor clusterware is present, it determines the public node names, and the installer doesn't present an opportunity to change them. So, when you have a choice, always choose the hostname.

From: Metalink Note 220970.1 RAC Frequently Asked Questions:

(1) Once a RAC environment is installed, the hostname cannot be changed. The only supported path is to delete the node, change the hostname, and then add the node back.

(2) The hostname must match the public name. This point is specifically emphasized in the installation documentation.

Reference blog: http://blog.csdn.net/tianlesoftware/article/details/6055612

Due to a mistake during the earlier installation, /etc/hosts had been configured as follows:

# management network
30.11.3.178  rappdb1
30.11.3.179  rappdb2
# public IP
30.2.21.161  rappdb1-pub
30.2.21.163  rappdb2-pub

30.2.21.162  rappdb1-vip
30.2.21.163  rappdb2-vip

172.2.21.101  rappdb1-priv
172.2.21.102  rappdb2-priv

After the installation completed, the listener turned out to be listening on the management IP (30.11.3.178) and the VIP (30.2.21.162), whereas it should listen on the public IP and the VIP. Directly editing the hosts file on a running cluster is generally not recommended.
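Mistakes like the listing above are easy to miss by eye. A small helper — hypothetical, not part of the original procedure — can flag any IP address that is mapped to more than one hostname in an /etc/hosts-style file:

```shell
#!/bin/sh
# find_dup_ips FILE -- print every IP address that maps to more than
# one hostname in an /etc/hosts-style file (hypothetical helper).
find_dup_ips() {
  awk '
    /^[[:space:]]*#/ || NF < 2 { next }      # skip comments and blanks
    { names[$1] = names[$1] " " $2 }         # collect hostnames per IP
    END {
      for (ip in names)
        if (split(names[ip], parts, " ") > 1)
          print "duplicate IP " ip ":" names[ip]
    }
  ' "$1"
}
```

Run against the listing above, it would flag 30.2.21.163, which appears for both rappdb2-pub and rappdb2-vip.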

The repair procedure is as follows:

1. Deconfigure CRS first
Run on every node except the last one:

/oracle/asm/crs/install/rootcrs.pl -verbose -deconfig -force

Using configuration parameter file: /oracle/asm/crs/install/crsconfig_params
Network exists: 1/8.8.6.0/255.255.255.0/en0, type static
VIP exists: /dbrac1-vip/8.8.6.11/8.8.6.0/255.255.255.0/en0, hosting node dbrac1
VIP exists: /dbrac2-vip/8.8.6.21/8.8.6.0/255.255.255.0/en0, hosting node dbrac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'dbrac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'dbrac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbrac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'dbrac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'dbrac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.ASMCRS.dg' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'dbrac1'
CRS-2677: Stop of 'ora.racdb.db' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.ASMVG1.dg' on 'dbrac1'
CRS-2677: Stop of 'ora.ASMVG1.dg' on 'dbrac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'dbrac1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'dbrac2'
CRS-2676: Start of 'ora.oc4j' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.ASMCRS.dg' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'dbrac1'
CRS-2677: Stop of 'ora.asm' on 'dbrac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'dbrac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.crf' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.asm' on 'dbrac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'dbrac1'
CRS-2677: Stop of 'ora.crf' on 'dbrac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'dbrac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'dbrac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'dbrac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'dbrac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'dbrac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'dbrac1'
CRS-2677: Stop of 'ora.cssd' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'dbrac1'
CRS-2677: Stop of 'ora.gipcd' on 'dbrac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dbrac1'
CRS-2677: Stop of 'ora.gpnpd' on 'dbrac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbrac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
This may take several minutes. Please wait ...
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
Successfully deconfigured Oracle clusterware stack on this node

Run on the last node:

/oracle/asm/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

Using configuration parameter file: /oracle/asm/crs/install/crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/8.8.6.0/255.255.255.0/en0, type static
VIP exists: /dbrac2-vip/8.8.6.21/8.8.6.0/255.255.255.0/en0, hosting node dbrac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'dbrac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'dbrac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'dbrac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.ASMCRS.dg' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'dbrac2'
CRS-2677: Stop of 'ora.racdb.db' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.ASMVG1.dg' on 'dbrac2'
CRS-2677: Stop of 'ora.ASMVG1.dg' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.ASMCRS.dg' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'dbrac2'
CRS-2677: Stop of 'ora.asm' on 'dbrac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'dbrac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.asm' on 'dbrac2'
CRS-2677: Stop of 'ora.evmd' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'dbrac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'dbrac2'
CRS-2677: Stop of 'ora.cssd' on 'dbrac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dbrac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'dbrac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dbrac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'dbrac2'
CRS-2676: Start of 'ora.diskmon' on 'dbrac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'dbrac2' succeeded
CRS-4611: Successful deletion of voting disk +ASMCRS.
ASM de-configuration trace file location: /tmp/asmcadc_clean2012-11-20_10-05-15-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2012-11-20_10-05-15-AM.log for details.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbrac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.crf' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'dbrac2'
CRS-2673: Attempting to stop 'ora.asm' on 'dbrac2'
CRS-2677: Stop of 'ora.mdnsd' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.crf' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'dbrac2'
CRS-2677: Stop of 'ora.ctssd' on 'dbrac2' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'dbrac2'
CRS-2677: Stop of 'ora.cssd' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'dbrac2'
CRS-2677: Stop of 'ora.gipcd' on 'dbrac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dbrac2'
CRS-2677: Stop of 'ora.gpnpd' on 'dbrac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbrac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
This may take several minutes. Please wait ...
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
Successfully deconfigured Oracle clusterware stack on this node

2. Change the hostnames (not required in this case)
Change the hostname with smitty tcpip:
Node 1: rename dbrac1 to dbrac100
Node 2: rename dbrac2 to dbrac200

3. Edit /etc/hosts

# management network
#30.11.3.178  rappdb1
#30.11.3.179  rappdb2
# public IP
30.2.21.161  rappdb1
30.2.21.163  rappdb2

30.2.21.162  rappdb1-vip
30.2.21.163  rappdb2-vip

172.2.21.101  rappdb1-priv
172.2.21.102  rappdb2-priv

4. Update the 11g Grid parameter file
Edit the relevant entries in $ORACLE_HOME/crs/install/crsconfig_params:

ORACLE_HOME=/oracle/asm
ORACLE_BASE=/oracle/grid
OLD_CRS_HOME=

JREDIR=/oracle/asm/jdk/jre/
JLIBDIR=/oracle/asm/jlib

VNDR_CLUSTER=false
OCR_LOCATIONS=NO_VAL
CLUSTER_NAME=dbrac-cluster
HOST_NAME_LIST=dbrac100,dbrac200
NODE_NAME_LIST=dbrac100,dbrac200
PRIVATE_NAME_LIST=
VOTING_DISKS=NO_VAL
#VF_DISCOVERY_STRING=%s_vfdiscoverystring%
ASM_UPGRADE=false
ASM_SPFILE=
ASM_DISK_GROUP=ASMCRS
ASM_DISCOVERY_STRING=
ASM_DISKS=/dev/rhdisk3
ASM_REDUNDANCY=EXTERNAL
CRS_STORAGE_OPTION=1
CSS_LEASEDURATION=400
CRS_NODEVIPS="dbrac100-vip/255.255.255.0/en0,dbrac200-vip/255.255.255.0/en0"
NODELIST=dbrac100,dbrac200
NETWORKS="en0"/8.8.6.0:public,"en1"/7.7.9.0:cluster_interconnect
SCAN_NAME=dbrac-scan
SCAN_PORT=1521
GPNP_PA=
OCFS_CONFIG=

# GNS consts
GNS_CONF=false
GNS_ADDR_LIST=
GNS_DOMAIN_LIST=
GNS_ALLOW_NET_LIST=
GNS_DENY_NET_LIST=
GNS_DENY_ITF_LIST=

#### Required by OUI add node
NEW_HOST_NAME_LIST=
NEW_NODE_NAME_LIST=
NEW_PRIVATE_NAME_LIST=
NEW_NODEVIPS="dbrac100-vip/255.255.255.0/en0,dbrac200-vip/255.255.255.0/en0"

############### OCR constants
# GPNPCONFIGDIR is handled differently in dev (T_HAS_WORK for all)
# GPNPGCONFIGDIR in dev expands to T_HAS_WORK_GLOBAL
GPNPCONFIGDIR=$ORACLE_HOME
GPNPGCONFIGDIR=$ORACLE_HOME
OCRLOC=
OLRLOC=
OCRID=
CLUSTER_GUID=

CLSCFG_MISSCOUNT=

#### IPD/OS
CRFHOME="/oracle/asm"

Note: in 10g RAC, the equivalent parameter file is $ORA_CRS_HOME/install/rootconfig.
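Before rerunning root.sh, it helps to re-read just the entries that must agree with the new /etc/hosts. A minimal review helper (hypothetical; the key list reflects this article's case, and the path is this environment's):

```shell
#!/bin/sh
# show_crs_names FILE -- print the crsconfig_params entries that must
# match the corrected /etc/hosts (hypothetical helper).
show_crs_names() {
  grep -E '^(CLUSTER_NAME|HOST_NAME_LIST|NODE_NAME_LIST|NODELIST|CRS_NODEVIPS|NETWORKS|SCAN_NAME)=' "$1"
}
```

For example: `show_crs_names /oracle/asm/crs/install/crsconfig_params`.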

5. Run root.sh on each node in turn
Run root.sh on node 1.
Then run root.sh on the other node.

6. Re-register resources
1. As the grid user, mount the ASMVG1 disk group with asmca.
2. As the oracle user, register the database and instances with the cluster:
./srvctl add database -d racdb -o /oracle/db/product/11.2
./srvctl add instance -d racdb -i racdb1 -n dbrac100     
./srvctl add instance -d racdb -i racdb2 -n dbrac200
As the root user:
./srvctl add scan -n  dbrac-scan -S 8.8.6.0/255.255.255.0

3. As the grid user, register the listeners with netca.

7. Final status check
$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.ASMCRS.dg  ora....up.type ONLINE    ONLINE    dbrac100    
ora.ASMVG1.dg  ora....up.type ONLINE    ONLINE    dbrac100    
ora....ER.lsnr ora....er.type ONLINE    ONLINE    dbrac100    
ora.asm        ora.asm.type   ONLINE    ONLINE    dbrac100    
ora....SM1.asm application    ONLINE    ONLINE    dbrac100    
ora....00.lsnr application    ONLINE    ONLINE    dbrac100    
ora....100.gsd application    OFFLINE   OFFLINE               
ora....100.ons application    ONLINE    ONLINE    dbrac100    
ora....100.vip ora....t1.type ONLINE    ONLINE    dbrac100    
ora....SM2.asm application    ONLINE    ONLINE    dbrac200    
ora....00.lsnr application    ONLINE    ONLINE    dbrac200    
ora....200.gsd application    OFFLINE   OFFLINE               
ora....200.ons application    ONLINE    ONLINE    dbrac200    
ora....200.vip ora....t1.type ONLINE    ONLINE    dbrac200    
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    dbrac100    
ora.ons        ora.ons.type   ONLINE    ONLINE    dbrac100    
ora.racdb.db   ora....se.type ONLINE    ONLINE    dbrac100    
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    dbrac100
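In the output above only the gsd resources are OFFLINE, which is normal in 11gR2 (GSD exists only for pre-11g compatibility and is left disabled by default). For larger clusters, saved `crs_stat -t` output can be scanned mechanically for unexpected states; the following filter is a hypothetical sketch that assumes the five-column layout shown above:

```shell
#!/bin/sh
# check_crs_stat FILE -- print resources whose Target and State columns
# disagree in saved `crs_stat -t` output; gsd resources are excluded
# because they are OFFLINE by default in 11gR2. (Hypothetical helper.)
check_crs_stat() {
  awk 'NR > 2 && NF >= 4 && $1 !~ /gsd/ && $3 != $4 {
         print "unexpected: " $1 " target=" $3 " state=" $4
       }' "$1"
}
```

Usage: `crs_stat -t > /tmp/crs_stat.out && check_crs_stat /tmp/crs_stat.out`; empty output means every non-gsd resource matches its target.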

Date: 2024-10-14 04:24:36
