Oracle RAC: Rebuilding the OCR and Voting Disk

Ha, I just said the previous post would be my last, but I got bored and put together another test.

Environment:

OS: Red Hat Enterprise Linux 5.8

DB: Oracle 10.2.0.5 on raw devices

We should get into the habit of backing up the OCR and voting disk regularly. Even without a backup, though, both can be rebuilt, much like a control file, but the process is fairly involved. The detailed steps follow.

First, back up the OCR and voting disk:

[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1048296
         Used space (kbytes)      :       3292
         Available space (kbytes) :    1045004
         ID                       : 1891074113
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

[root@rac1 ~]# crsctl query css votedisk
0.     0    /dev/raw/raw3
1.     0    /dev/raw/raw4
2.     0    /dev/raw/raw5

[root@rac1 ~]# dd if=/dev/raw/raw3 of=/opt/votedisk.bak
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 462.285 seconds, 2.3 MB/s
[root@rac1 ~]# ocrconfig -export /opt/ocr.bak
[root@rac1 ~]# cd /opt/
[root@rac1 opt]# ls
ocr.bak  ORCLfmap  votedisk.bak
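
Only the first voting disk was copied above. In 10g the voting disks are identical mirrored copies, so one image is enough to restore from, but for completeness the others could be backed up the same way; a sketch (the output file names are just illustrative):

[root@rac1 ~]# dd if=/dev/raw/raw4 of=/opt/votedisk2.bak
[root@rac1 ~]# dd if=/dev/raw/raw5 of=/opt/votedisk3.bak

If these backups were ever needed, the OCR export would be reloaded with ocrconfig -import /opt/ocr.bak, and a voting disk image written back with dd run in the opposite direction.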

Delete the old configuration by running rootdelete.sh (under $ORA_CRS_HOME/install) on each node:

[root@rac1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories

[root@rac2 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
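
The "Error while stopping resources. Possible cause: CRSD is down" messages above simply mean the stack was already stopped. If CRS were still running, it would need to be stopped on each node before rootdelete.sh; a minimal sketch, assuming $ORA_CRS_HOME points at the CRS home:

[root@rac1 ~]# $ORA_CRS_HOME/bin/crsctl stop crs
[root@rac2 ~]# $ORA_CRS_HOME/bin/crsctl stop crs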

Then, on the installation node only, run:
[root@rac1 install]# ./rootdeinstall.sh
Removing contents from OCR mirror device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.521651 seconds, 20.1 MB/s
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.496207 seconds, 21.1 MB/s

Re-run root.sh, first on rac1 and then on rac2:

[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid syntax used for option "nodevips". Check usage (vipca -help) for proper syntax.

Configure the VIPs. The silent vipca run at the end of root.sh failed, so first register the public and cluster-interconnect interfaces with oifcfg:

[root@rac2 crs]# oifcfg getif
[root@rac2 crs]# oifcfg iflist
eth0  192.168.56.0
eth1  192.168.11.0
[root@rac2 crs]# oifcfg setif -global eth0/192.168.56.0:public
[root@rac2 crs]# oifcfg setif -global eth1/192.168.11.0:cluster_interconnect
[root@rac2 crs]# oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.11.0  global  cluster_interconnect
Then run vipca (as root) to configure the VIPs.
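
vipca is a graphical tool, so it needs an X display; the DISPLAY value and the CRS-home variable below are assumptions:

[root@rac2 crs]# export DISPLAY=:0.0
[root@rac2 crs]# $ORA_CRS_HOME/bin/vipca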

Check the result:

[root@rac2 crs]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2 

Configure the listeners. Move the existing listener.ora out of the way on each node:

[root@rac1 admin]# mv listener.ora listener.ora.bak

[root@rac2 admin]# mv listener.ora listener.ora.bak

Now create the listeners with netca; afterwards they show up as CRS resources:

[root@rac1 admin]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

Register the database with CRS:

[oracle@rac1 ~]$ srvctl add database -d zhdb -o $ORACLE_HOME
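
The instances have to be registered as well before CRS can manage them. The instance names zhdb1 and zhdb2 here are assumptions, inferred from the ora....b2.inst resource in the crs_stat output below:

[oracle@rac1 ~]$ srvctl add instance -d zhdb -i zhdb1 -n rac1
[oracle@rac1 ~]$ srvctl add instance -d zhdb -i zhdb2 -n rac2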

Start the database:

[oracle@rac1 ~]$ srvctl start database -d zhdb

[oracle@rac1 dbs]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora.zhdb.db    application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2  

With that, the OCR and voting disk have been rebuilt successfully.
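
As a final sanity check, the commands from the start of this post should now succeed again:

[root@rac1 ~]# ocrcheck
[root@rac1 ~]# crsctl query css votedisk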
