Oracle 11g R2 RAC: Removing One Node

Test scenario:

A two-node RAC with hostnames db1 and db2; node db2 is to be removed. This example performs the removal with the cluster in a normal, healthy state.

1. On db1 and db2, check that CSS is healthy; output like the following indicates a normal state.

[root@db1 ~]# su - grid
[grid@db1 ~]$ olsnodes -t -s
db1     Active  Unpinned
db2     Active  Unpinned
[grid@db1 ~]$

If a node is reported as Pinned, unpin it first by running the following on db1:

[grid@db1 ~]$ crsctl unpin css -n db2
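Before making any changes, it is also worth confirming that the whole clusterware stack is healthy on both nodes; a quick optional sanity check with the standard crsctl commands:

[grid@db1 ~]$ crsctl check cluster -all
[grid@db1 ~]$ crsctl check crs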

2. Remove the db2 instance with DBCA

Remove the db2 instance from any retained node:
[root@db1 ~]# su - oracle
[oracle@db1 ~]$ dbca
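DBCA can also delete the instance non-interactively. A minimal sketch of the silent form, assuming the instance running on db2 is named orcl2 (adjust the instance name and SYS credentials for your environment):

$ dbca -silent -deleteInstance -nodeList db2 -gdbName orcl -instanceName orcl2 -sysDBAUserName sys -sysDBAPassword <sys_password>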

1) Verify that the db2 instance has been removed

Check the active instances:
$ sqlplus / as sysdba    
SQL> select thread#,status,instance from v$thread;

   THREAD# STATUS INSTANCE
---------- ------ ------------------------------
         1 OPEN   orcl1
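DBCA should also have disabled the dropped instance's redo thread and removed its undo tablespace. To double-check, queries along these lines should show only thread 1 enabled and a single running instance:

SQL> select thread#, status, enabled from v$thread;
SQL> select inst_id, instance_name from gv$instance;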

2) Check the database configuration:

[oracle@db1 ~]$ srvctl config database -d orcl
Database unique name: orcl    
Database name: orcl    
Oracle home: /u01/app/oracle/product/11.2.0/db_1    
Oracle user: oracle    
Spfile: +DATA/orcl/spfileorcl.ora    
Domain:    
Start options: open    
Stop options: immediate    
Database role: PRIMARY    
Management policy: AUTOMATIC    
Server pools: orcl    
Database instances: orcl1    
Disk Groups: DATA,RECOVERY    
Mount point paths:    
Services:    
Type: RAC    
Database is administrator managed
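A status check should likewise report that only orcl1 is running (optional):

[oracle@db1 ~]$ srvctl status database -d orcl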

3. Disable and stop the db2 listener

[root@db1 ~]# su - grid
[grid@db1 ~]$ srvctl disable listener -l listener -n db2
[grid@db1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) db2,db1
End points: TCP:1521
[grid@db1 ~]$
[grid@db1 ~]$ srvctl stop listener -l listener -n db2
[grid@db1 ~]$
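To confirm the listener is down on db2 before proceeding, an optional check:

[grid@db1 ~]$ srvctl status listener -l listener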

4. On db2, update the inventory node list as the oracle user

# su - oracle    
$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={db2}" -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed  
The inventory pointer is located at /etc/oraInst.loc    
The inventory is located at /u01/app/oraInventory    
'UpdateNodeList' was successful.
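To confirm the update took effect, the node list recorded for this home in the local inventory can be inspected directly (an optional check; the grep pattern below is just an illustration):

$ grep -A3 'db_1' /u01/app/oraInventory/ContentsXML/inventory.xml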

5. Deinstall the database software on db2

Run on db2:

# su - oracle    
$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...  
Please wait ...    
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################    
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1    
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database    
Oracle Base selected for deinstall is: /u01/app/oracle    
Checking for existence of central inventory location /u01/app/oraInventory    
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid    
The following nodes are part of this cluster: db2    
Checking for sufficient temp space availability on node(s) : 'db2'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2015-12-29_11-35-16-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2015-12-29_11-35-19-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2015-12-29_11-35-22-AM.log

Enterprise Manager Configuration Assistant END  
Oracle Configuration Manager check START    
OCM check log file location : /u01/app/oraInventory/logs//ocm_check7428.log    
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################    
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid    
The cluster node(s) on which the Oracle home deinstallation will be performed are:db2    
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'db2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1    
Inventory Location where the Oracle home registered is: /u01/app/oraInventory    
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)  
No Enterprise Manager ASM targets to update    
No Enterprise Manager listener targets to migrate    
Checking the config status for CCR    
Oracle Home exists with CCR directory, but CCR is not configured    
CCR check is finished    
Do you want to continue (y - yes, n - no)? [n]: y    
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-12-29_11-35-12-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-12-29_11-35-12-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2015-12-29_11-35-22-AM.log

Updating Enterprise Manager ASM targets (if any)  
Updating Enterprise Manager listener targets (if any)    
Enterprise Manager Configuration Assistant END    
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2015-12-29_11-47-34-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2015-12-29_11-47-34-AM.log

De-configuring Local Net Service Names configuration file...  
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...  
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START  
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean7428.log    
Oracle Configuration Manager clean END    
Setting the force flag to false    
Setting the force flag to cleanup the Oracle Base    
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-12-29_11-34-55AM' on node 'db2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################    
Cleaning the config for CCR    
As CCR is not configured, so skipping the cleaning of CCR configuration    
CCR clean is finished    
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.  
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

6. From the retained node db1, stop the db2 nodeapps

[grid@db1 bin]$ srvctl stop nodeapps -n db2 -f

Note that db2's ONS and VIP have been stopped:
[grid@db1 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------    
ora.CRS.dg     ora....up.type ONLINE    ONLINE    db1        
ora.DATA.dg    ora....up.type ONLINE    ONLINE    db1        
ora....ER.lsnr ora....er.type ONLINE    ONLINE    db1        
ora....N1.lsnr ora....er.type ONLINE    ONLINE    db1        
ora....VERY.dg ora....up.type ONLINE    ONLINE    db1        
ora.asm        ora.asm.type   ONLINE    ONLINE    db1        
ora.cvu        ora.cvu.type   ONLINE    ONLINE    db1        
ora....SM1.asm application    ONLINE    ONLINE    db1        
ora....B1.lsnr application    ONLINE    ONLINE    db1        
ora.db1.gsd    application    OFFLINE   OFFLINE              
ora.db1.ons    application    ONLINE    ONLINE    db1        
ora.db1.vip    ora....t1.type ONLINE    ONLINE    db1        
ora....SM2.asm application    ONLINE    ONLINE    db2        
ora....B2.lsnr application    OFFLINE   OFFLINE              
ora.db2.gsd    application    OFFLINE   OFFLINE              
ora.db2.ons    application    OFFLINE   OFFLINE              
ora.db2.vip    ora....t1.type OFFLINE   OFFLINE              
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              
ora....network ora....rk.type ONLINE    ONLINE    db1        
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    db2        
ora.ons        ora.ons.type   ONLINE    ONLINE    db1        
ora.orcl.db    ora....se.type ONLINE    ONLINE    db1        
ora....ry.acfs ora....fs.type ONLINE    ONLINE    db1        
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    db1
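Rather than scanning the full crs_stat listing, a targeted check of db2's node applications gives the same answer (optional):

[grid@db1 ~]$ srvctl status nodeapps -n db2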

7. On db1, update the inventory node list as the oracle user

Run on each retained node (here, db1):

# su - oracle    
$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={db1}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed  
The inventory pointer is located at /etc/oraInst.loc    
The inventory is located at /u01/app/oraInventory    
'UpdateNodeList' was successful.

8. Deconfigure the clusterware on db2

Run as root on db2:

# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params  
Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
VIP exists: /db1-vip/192.168.0.8/192.168.0.0/255.255.255.0/eth0, hosting node db1
VIP exists: /db2-vip/192.168.0.9/192.168.0.0/255.255.255.0/eth0, hosting node db2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
PRKO-2426 : ONS is already stopped on node(s): db2
PRKO-2425 : VIP is already stopped on node(s): db2
PRKO-2440 : Network resource is already stopped.

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'db2'
CRS-2677: Stop of 'ora.registry.acfs' on 'db2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db2'
CRS-2673: Attempting to stop 'ora.crsd' on 'db2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'db2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'db2'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'db2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'db2'
CRS-2673: Attempting to stop 'ora.RECOVERY.dg' on 'db2'
CRS-2677: Stop of 'ora.DATA.dg' on 'db2' succeeded
CRS-2677: Stop of 'ora.RECOVERY.dg' on 'db2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'db2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'db1'
CRS-2677: Stop of 'ora.CRS.dg' on 'db2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'db2'
CRS-2677: Stop of 'ora.asm' on 'db2' succeeded
CRS-2676: Start of 'ora.oc4j' on 'db1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'db2' has completed
CRS-2677: Stop of 'ora.crsd' on 'db2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'db2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'db2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'db2'
CRS-2673: Attempting to stop 'ora.evmd' on 'db2'
CRS-2673: Attempting to stop 'ora.asm' on 'db2'
CRS-2677: Stop of 'ora.ctssd' on 'db2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'db2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'db2' succeeded
CRS-2677: Stop of 'ora.asm' on 'db2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'db2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'db2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'db2'
CRS-2677: Stop of 'ora.cssd' on 'db2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'db2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'db2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'db2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'db2'
CRS-2677: Stop of 'ora.gpnpd' on 'db2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db2' has completed
CRS-4133: Oracle High Availability Services has been stopped.    
Removing Trace File Analyzer    
Successfully deconfigured Oracle clusterware stack on this node
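At this point the clusterware stack on db2 has been deconfigured and shut down. A check run from db2 now is expected to report that CRS is not running:

# /u01/app/11.2.0/grid/bin/crsctl check crs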

9. On db1, delete node db2 from the cluster

# /u01/app/11.2.0/grid/bin/crsctl delete node -n db2

CRS-4661: Node db2 successfully deleted.

[root@db1 ~]# /u01/app/11.2.0/grid/bin/olsnodes -t -s
db1     Active  Unpinned
[root@db1 ~]#

10. On db2, update the inventory node list as the grid user

Run on db2:

# su - grid    
$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={db2}" CRS=true -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed  
The inventory pointer is located at /etc/oraInst.loc    
The inventory is located at /u01/app/oraInventory    
'UpdateNodeList' was successful.

11. Deinstall the Grid Infrastructure software on db2

Run on db2:

# su - grid    
$ /u01/app/11.2.0/grid/deinstall/deinstall -local

The tool prompts interactively; press Enter at each prompt to accept the defaults. Near the end it pauses and prints a script that must be run as root in a separate terminal:

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "db2".

/tmp/deinstall2015-12-29_00-43-59PM/perl/bin/perl -I/tmp/deinstall2015-12-29_00-43-59PM/perl/lib -I/tmp/deinstall2015-12-29_00-43-59PM/crs/install /tmp/deinstall2015-12-29_00-43-59PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-12-29_00-43-59PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Open a new terminal and run the printed script as root:

/tmp/deinstall2015-12-29_00-43-59PM/perl/bin/perl -I/tmp/deinstall2015-12-29_00-43-59PM/perl/lib -I/tmp/deinstall2015-12-29_00-43-59PM/crs/install /tmp/deinstall2015-12-29_00-43-59PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-12-29_00-43-59PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2015-12-29_00-43-59PM/response/deinstall_Ora11g_gridinfrahome1.rsp  
****Unable to retrieve Oracle Clusterware home.    
Start Oracle Clusterware stack and try again.    
CRS-4047: No Oracle Clusterware components configured.    
CRS-4000: Command Stop failed, or completed with errors.    
Either /etc/oracle/ocr.loc does not exist or is not readable    
Make sure the file exists and it has read and execute access    
Either /etc/oracle/ocr.loc does not exist or is not readable    
Make sure the file exists and it has read and execute access    
CRS-4047: No Oracle Clusterware components configured.    
CRS-4000: Command Modify failed, or completed with errors.    
CRS-4047: No Oracle Clusterware components configured.    
CRS-4000: Command Delete failed, or completed with errors.    
CRS-4047: No Oracle Clusterware components configured.    
CRS-4000: Command Stop failed, or completed with errors.    
################################################################    
# You must kill processes or reboot the system to properly #    
# cleanup the processes started by Oracle clusterware          #    
################################################################    
ACFS-9313: No ADVM/ACFS installation detected.    
Either /etc/oracle/olr.loc does not exist or is not readable    
Make sure the file exists and it has read and execute access    
Either /etc/oracle/olr.loc does not exist or is not readable    
Make sure the file exists and it has read and execute access    
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed    
Successfully deconfigured Oracle clusterware stack on this node

Once the script finishes, return to the original terminal and press Enter so the paused deinstall session can continue.

Remove the directory: /tmp/deinstall2015-12-29_00-43-59PM on node:    
Setting the force flag to false    
Setting the force flag to cleanup the Oracle Base    
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-12-29_00-43-59PM' on node 'db2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################    
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1    
Oracle Clusterware is stopped and successfully de-configured on node "db2"    
Oracle Clusterware is stopped and de-configured successfully.    
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'db2' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'db2' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'db2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.    
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

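As the messages above instruct, finish the cleanup on db2 as root once the deinstall session has ended; the three removals can be combined into one pass (exactly the files the tool listed):

[root@db2 ~]# rm -rf /etc/oraInst.loc /opt/ORCLfmap /etc/oratab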

12. On db1, update the inventory node list as the grid user

Run on db1:

# su - grid    
$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={db1}" CRS=true

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed  
The inventory pointer is located at /etc/oraInst.loc    
The inventory is located at /u01/app/oraInventory    
'UpdateNodeList' was successful.

13. Verify that node db2 has been removed

On the retained node db1:

[grid@db1 ~]$ cluvfy stage -post nodedel -n db2

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.

[grid@db1 ~]$ crsctl status resource -t

The resource listing should now show all remaining resources on db1 only, with no ora.db2.* entries left.
