Basic Commands for Administering 11g RAC (Repost)

The following subprograms and commands are deprecated in Oracle Clusterware 11g Release 2 (11.2):
    crs_stat
    crs_register
    crs_unregister
    crs_start
    crs_stop
    crs_getperm
    crs_profile
    crs_relocate
    crs_setperm
    crsctl check crsd
    crsctl check cssd
    crsctl check evmd
    crsctl debug log
    crsctl set css votedisk
    crsctl start resources
    crsctl stop resources
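
Most of the deprecated crs_* utilities have direct replacements in the consolidated 11.2 crsctl interface. The sketch below maps the old commands to their 11.2 equivalents; the helper function name `crs_replacement` is purely illustrative (it is not an Oracle tool), and the mappings follow the Oracle Clusterware 11.2 documentation:

```shell
#!/bin/bash
# Map a deprecated pre-11.2 CRS command to its 11.2 replacement.
# The function name crs_replacement is illustrative, not an Oracle utility.
crs_replacement() {
    case "$1" in
        crs_stat)           echo "crsctl status resource" ;;
        crs_register)       echo "crsctl add resource" ;;
        crs_unregister)     echo "crsctl delete resource" ;;
        crs_start)          echo "crsctl start resource" ;;
        crs_stop)           echo "crsctl stop resource" ;;
        crs_getperm)        echo "crsctl getperm resource" ;;
        crs_setperm)        echo "crsctl setperm resource" ;;
        crs_relocate)       echo "crsctl relocate resource" ;;
        "crsctl debug log") echo "crsctl set log" ;;
        *)                  echo "no direct replacement" ;;
    esac
}

crs_replacement crs_stat    # -> crsctl status resource
```

For example, where you previously ran `crs_stat -t`, in 11.2 you would run `crsctl status resource -t` for the same tabular resource listing.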

Checking the Health of the Cluster - (Clusterized Command)

Run the following command as the grid user.

[grid@racnode1 ~]$ crsctl check cluster 
    CRS-4537: Cluster Ready Services is online 
    CRS-4529: Cluster Synchronization Services is online 
    CRS-4533: Event Manager is online
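
The three CRS-45xx lines above lend themselves to simple monitoring scripts. The sketch below counts the "is online" lines and fails unless all three stack components report online. The sample output is embedded in a heredoc so the parsing logic can be demonstrated without a live cluster; on a real node you would pipe `crsctl check cluster` into the function instead:

```shell
#!/bin/bash
# Check that all three clusterware stack components report "is online".
# Sample output is embedded for offline demonstration; live usage:
#   crsctl check cluster | check_stack
check_stack() {
    local online
    online=$(grep -c "is online$")       # count healthy components on stdin
    if [ "$online" -eq 3 ]; then
        echo "cluster stack healthy"
    else
        echo "cluster stack degraded ($online/3 online)"
        return 1
    fi
}

check_stack <<'EOF'
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
EOF
```

The non-zero return code on a degraded stack makes the function convenient to use from cron or a monitoring agent.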

All Oracle Instances - (Database Status)

[grid@racnode1 ~]$ srvctl status database -d racdb 
    Instance racdb1 is running on node racnode1 
    Instance racdb2 is running on node racnode2
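
The per-instance status lines are easy to scan mechanically, e.g. to alert on any instance that is down. A minimal sketch, assuming the standard "Instance <name> is not running on node <node>" wording srvctl uses for a down instance (the sample output with racdb2 down is hypothetical, embedded so the logic runs without a cluster):

```shell
#!/bin/bash
# Print the names of instances srvctl reports as not running.
# Sample output embedded for demonstration; live usage:
#   srvctl status database -d racdb | down_instances
down_instances() {
    # field 2 of "Instance racdb2 is not running on node racnode2" is the name
    grep "is not running" | awk '{print $2}'
}

down_instances <<'EOF'
Instance racdb1 is running on node racnode1
Instance racdb2 is not running on node racnode2
EOF
```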

Single Oracle Instance - (Status of a Specific Instance)

[grid@racnode1 ~]$ srvctl status instance -d racdb -i racdb1 
    Instance racdb1 is running on node racnode1

Node Applications - (Status)

[grid@racnode1 ~]$ srvctl status nodeapps 
    VIP racnode1-vip is enabled 
    VIP racnode1-vip is running on node: racnode1 
    VIP racnode2-vip is enabled 
    VIP racnode2-vip is running on node: racnode2 
    Network is enabled 
    Network is running on node: racnode1 
    Network is running on node: racnode2 
    GSD is disabled 
    GSD is not running on node: racnode1 
    GSD is not running on node: racnode2 
    ONS is enabled 
    ONS daemon is running on node: racnode1 
    ONS daemon is running on node: racnode2 
    eONS is enabled 
    eONS daemon is running on node: racnode1 
    eONS daemon is running on node: racnode2

Node Applications - (Configuration)

[grid@racnode1 ~]$ srvctl config nodeapps 
    VIP exists.:racnode1 
    VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0 
    VIP exists.:racnode2 
    VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0 
    GSD exists. 
    ONS daemon exists. Local port 6100, remote port 6200 
    eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016

Database - (Configuration)

[grid@racnode1 ~]$ srvctl config database -d racdb -a 
    Database unique name: racdb 
    Database name: racdb 
    Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1 
    Oracle user: oracle 
    Spfile: +RACDB_DATA/racdb/spfileracdb.ora 
    Domain: idevelopment.info 
    Start options: open 
    Stop options: immediate 
    Database role: PRIMARY 
    Management policy: AUTOMATIC 
    Server pools: racdb 
    Database instances: racdb1,racdb2 
    Disk Groups: RACDB_DATA,FRA 
    Services:  
    Database is enabled 
    Database is administrator managed

ASM - (Status)

[grid@racnode1 ~]$ srvctl status asm 
    ASM is running on racnode1,racnode2

ASM - (Configuration)

$ srvctl config asm -a 
    ASM home: /u01/app/11.2.0/grid 
    ASM listener: LISTENER 
    ASM is enabled.

TNS Listener - (Status)

[grid@racnode1 ~]$ srvctl status listener 
    Listener LISTENER is enabled 
    Listener LISTENER is running on node(s): racnode1,racnode2

TNS Listener - (Configuration)

[grid@racnode1 ~]$ srvctl config listener -a 
    Name: LISTENER 
    Network: 1, Owner: grid 
    Home: <crs>  
     /u01/app/11.2.0/grid on node(s) racnode2,racnode1 
    End points: TCP:1521

SCAN - (Status)

[grid@racnode1 ~]$ srvctl status scan 
    SCAN VIP scan1 is enabled 
    SCAN VIP scan1 is running on node racnode1

SCAN - (Configuration)

[grid@racnode1 ~]$ srvctl config scan 
    SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0 
    SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187

VIP - (Status of a Specific Node)

[grid@racnode1 ~]$ srvctl status vip -n racnode1 
    VIP racnode1-vip is enabled 
    VIP racnode1-vip is running on node: racnode1

[grid@racnode1 ~]$ srvctl status vip -n racnode2 
    VIP racnode2-vip is enabled 
    VIP racnode2-vip is running on node: racnode2

VIP - (Configuration of a Specific Node)

[grid@racnode1 ~]$ srvctl config vip -n racnode1 
    VIP exists.:racnode1 
    VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[grid@racnode1 ~]$ srvctl config vip -n racnode2 
    VIP exists.:racnode2 
    VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Node Application Configuration - (VIP, GSD, ONS, Listener)

[grid@racnode1 ~]$ srvctl config nodeapps -a -g -s -l 
    -l option has been deprecated and will be ignored. 
    VIP exists.:racnode1 
    VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0 
    VIP exists.:racnode2 
    VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0 
    GSD exists. 
    ONS daemon exists. Local port 6100, remote port 6200 
    Name: LISTENER 
    Network: 1, Owner: grid 
    Home: <crs>  
     /u01/app/11.2.0/grid on node(s) racnode2,racnode1 
    End points: TCP:1521

Verifying Clock Synchronization Across All Cluster Nodes

[grid@racnode1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes... 
    Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes... 
    Check: CTSS Resource running on all nodes  
      Node Name                             Status                     
      ------------------------------------  ------------------------ 
      racnode1                              passed
                                       
    Result: CTSS resource check passed

Querying CTSS for time offset on all nodes... 
    Result: Query of CTSS for time offset passed

Check CTSS state started... 
    Check: CTSS state 
      Node Name                             State                    
      ------------------------------------  ------------------------ 
      racnode1                              Active                   
    CTSS is in Active state. Proceeding with check of clock time offsets on all nodes... 
    Reference Time Offset Limit: 1000.0 msecs 
    Check: Reference Time Offset 
      Node Name     Time Offset               Status                   
      ------------  ------------------------  ------------------------ 
      racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes:  "[racnode1]"  
    Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
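
Because cluvfy ends with a one-line summary, the result can also be checked mechanically. A minimal sketch, with the summary line embedded in a heredoc so it runs offline; in practice you would pipe the output of `cluvfy comp clocksync -verbose` into the function:

```shell
#!/bin/bash
# Succeed only if cluvfy's summary reports a successful verification.
# Sample summary line embedded for demonstration; live usage:
#   cluvfy comp clocksync -verbose | clocksync_ok
clocksync_ok() {
    if grep -q "^Verification of Clock Synchronization.*was successful"; then
        echo "clocksync OK"
    else
        echo "clocksync FAILED"
        return 1
    fi
}

clocksync_ok <<'EOF'
Verification of Clock Synchronization across the cluster nodes was successful.
EOF
```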

The following stop/start operations must be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the crsctl stop cluster command on the racnode1 node to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster 
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded        <-- Notice racnode1 VIP moved to racnode2
CRS-2676: Start of 'ora.scan1.vip' on 'racnode2' succeeded           <-- Notice SCAN moved to racnode2
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode2' succeeded <-- Notice LISTENER_SCAN1 moved to racnode2
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

Note: After running the crsctl stop cluster command, if any of the resources managed by Oracle Clusterware are still running, the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.

Also note that the Oracle Clusterware stack can be stopped on all servers in the cluster by specifying the -all option. The following command stops the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting the Oracle Clusterware Stack on the Local Server

Use the crsctl start cluster command on the racnode1 node to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster 
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1' 
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded 
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1' 
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1' 
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded 
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded 
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1' 
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded 
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1' 
CRS-2672: Attempting to start 'ora.asm' on 'racnode1' 
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded 
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded 
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1' 
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

Note: The Oracle Clusterware stack can be started on all servers in the cluster by specifying the -all option.

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by spaces:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

Starting/Stopping All Instances with SRVCTL

Finally, all instances and their associated services can be started/stopped with the following commands:

[oracle@racnode1 ~]$ srvctl stop database -d racdb 
[oracle@racnode1 ~]$ srvctl start database -d racdb
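
For a full restart, these two commands are often wrapped in a small script. The sketch below is a dry run that only builds and prints the srvctl commands, so the sequencing is visible without touching a cluster; the database name racdb matches the examples above, and the `-o immediate` stop option mirrors the database's configured stop option shown earlier:

```shell
#!/bin/bash
# Dry-run sketch of a full database restart via srvctl.
# It prints the commands instead of executing them; remove the echo
# (and run as the oracle user) to actually perform the restart.
restart_database() {
    local db="$1"
    echo "srvctl stop database -d $db -o immediate"
    echo "srvctl start database -d $db"
}

restart_database racdb
```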

http://blog.itpub.net/29337971/viewspace-1079546/
