11g RAC R2 Daily Health Check -- Grid

1. Inspecting the RAC Database

1.1 List the Databases

[grid@node1 ~]$ srvctl config database
racdb
[grid@node1 ~]$

1.2 List the Database Instances

[grid@node1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node node1
Instance racdb2 is running on node node2
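
If services are configured on the database, adding the -v flag prints them alongside each instance (a hedged variant; this database has no services defined, so the extra detail would be empty):

[grid@node1 ~]$ srvctl status database -d racdb -v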

1.3 Database Configuration

[grid@node1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA
Services:
Database is enabled
Database is administrator managed
[grid@node1 ~]$

2. Inspecting Grid

2.1 Cluster Name

[grid@node1 ~]$ cemutlo -n
scan-cluster
[grid@node1 ~]$
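
Alongside the cluster name, a daily check usually records the Grid software versions as well; a minimal sketch (both crsctl queries exist in 11.2, output omitted):

[grid@node1 ~]$ crsctl query crs activeversion
[grid@node1 ~]$ crsctl query crs softwareversion node1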

2.2 Check the Cluster Stack Status

[grid@node1 ~]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@node1 ~]$
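
To check only the local node's stack, including the Oracle High Availability Services daemon, crsctl check crs can be used instead (sketch, output omitted):

[grid@node1 ~]$ crsctl check crs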

2.3 Cluster Resources

[grid@node1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.eons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.scan2.vip
      1        ONLINE  ONLINE       node1
ora.scan3.vip
      1        ONLINE  ONLINE       node1
[grid@node1 ~]$

Note: ora.gsd and ora.oc4j being OFFLINE is normal in 11.2 (GSD exists only to support 9i databases, and the OC4J resource is disabled by default). However, instance 2 of ora.racdb.db showing TARGET ONLINE but STATE OFFLINE means the second database instance was down when this snapshot was taken and should be investigated.

A more detailed view of the resources on host node1 (the -init flag shows the lower-stack daemons managed by OHASD):

[grid@node1 ~]$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       node1                    Started
ora.crsd
      1        ONLINE  ONLINE       node1
ora.cssd
      1        ONLINE  ONLINE       node1
ora.cssdmonitor
      1        ONLINE  ONLINE       node1
ora.ctssd
      1        ONLINE  ONLINE       node1                    ACTIVE:0
ora.diskmon
      1        ONLINE  ONLINE       node1
ora.evmd
      1        ONLINE  ONLINE       node1
ora.gipcd
      1        ONLINE  ONLINE       node1
ora.gpnpd
      1        ONLINE  ONLINE       node1
ora.mdnsd
      1        ONLINE  ONLINE       node1
[grid@node1 ~]$

A more detailed view of the resources on host node2:

[grid@node2 ~]$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       node2                    Started
ora.crsd
      1        ONLINE  ONLINE       node2
ora.cssd
      1        ONLINE  ONLINE       node2
ora.cssdmonitor
      1        ONLINE  ONLINE       node2
ora.ctssd
      1        ONLINE  ONLINE       node2                    ACTIVE:-11700
ora.diskmon
      1        ONLINE  ONLINE       node2
ora.evmd
      1        ONLINE  ONLINE       node2
ora.gipcd
      1        ONLINE  ONLINE       node2
ora.gpnpd
      1        ONLINE  ONLINE       node2
ora.mdnsd
      1        ONLINE  ONLINE       node2
[grid@node2 ~]$
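
The node list itself can be cross-checked with olsnodes, which prints the node number, active/inactive status, and pinned state of every cluster member (sketch; these flags exist in 11.2, output omitted):

[grid@node1 ~]$ olsnodes -n -s -t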

2.4 Check the Node Applications

[grid@node1 ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
eONS is enabled
eONS daemon is running on node: node1
eONS daemon is running on node: node2
[grid@node1 ~]$
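
The configuration behind these node applications (VIP addresses, network, ONS) can be listed with the matching config command (sketch, output omitted):

[grid@node1 ~]$ srvctl config nodeapps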

2.5 Check SCAN

Check the SCAN IP address configuration:
[grid@node1 ~]$ srvctl config scan
SCAN name: scan-cluster.com, Network: 1/192.168.0.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-cluster/192.168.0.24
SCAN VIP name: scan2, IP: /scan-cluster/192.168.0.25
SCAN VIP name: scan3, IP: /scan-cluster/192.168.0.26
[grid@node1 ~]$

Check the actual distribution and status of the SCAN IP addresses:
[grid@node1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
[grid@node1 ~]$

Check the SCAN listener configuration:
[grid@node1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
[grid@node1 ~]$

Check the SCAN listener status:
[grid@node1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
[grid@node1 ~]$
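
Because clients reach the cluster through DNS round-robin on the SCAN name, it is also worth confirming that the name server returns all three addresses shown above (a hedged check using this cluster's SCAN name):

[grid@node1 ~]$ nslookup scan-cluster.com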

2.6 Check the VIPs and Listeners

Check the VIP configuration:
[grid@node1 ~]$ srvctl config vip -n node1
VIP exists.:node1
VIP exists.: /node1-vip/192.168.0.21/255.255.255.0/eth0
[grid@node1 ~]$ srvctl config vip -n node2
VIP exists.:node2
VIP exists.: /node2-vip/192.168.0.31/255.255.255.0/eth0
[grid@node1 ~]$

Check the VIP status:
[grid@node1 ~]$ srvctl status nodeapps
or
[grid@node1 ~]$ srvctl status vip -n node1
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
[grid@node1 ~]$ srvctl status vip -n node2
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
[grid@node1 ~]$

Check the local listener configuration:
[grid@node1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521

Check the local listener status:
[grid@node1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node1,node2
[grid@node1 ~]$
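
To see which instances and service names the local listener has actually registered, lsnrctl can be run as the grid user (sketch, output omitted):

[grid@node1 ~]$ lsnrctl status LISTENER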

2.7 Check ASM

Check the ASM status:
[grid@node1 ~]$ srvctl status asm -a
ASM is running on node1,node2
ASM is enabled.

Check the ASM configuration:
[grid@node1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
[grid@node1 ~]$

Check the disk groups:
[grid@node1 ~]$ srvctl status diskgroup -g DATA
Disk Group DATA is running on node1,node2
[grid@node1 ~]$
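
Disk group capacity is usually recorded as well; asmcmd lsdg prints the total and free space of every mounted disk group (sketch; run as the grid user with the ASM environment set, output omitted):

[grid@node1 ~]$ asmcmd lsdg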

2.8 Check Clock Synchronization Across Cluster Nodes

Check clock synchronization on node node1:
[grid@node1 ~]$ cluvfy comp clocksync -verbose
.......
Verification of Clock Synchronization across the cluster nodes was successful.
[grid@node1 ~]$
Check clock synchronization on node node2:
[grid@node2 ~]$ cluvfy comp clocksync -verbose
..............
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  node2         -89900.0                  failed
Result: PRVF-9661 : Time offset is NOT within the specified limits on the following nodes:
"[node2]" 

PRVF-9652 : Cluster Time Synchronization Services check failed

Verification of Clock Synchronization across the cluster nodes was unsuccessful on all the specified nodes.
[grid@node2 ~]$

  Note: the server clock on node2 has a problem; its measured offset (-89900.0 ms) is far outside the 1000 ms reference limit, which is consistent with the non-zero ACTIVE:-11700 state detail reported for ora.ctssd on node2 in section 2.3.
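
The CTSS mode and current offset can also be queried directly with crsctl (the command exists in 11.2; on clusters where NTP is configured, CTSS runs in observer mode instead of active mode):

[grid@node2 ~]$ crsctl check ctss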

  At this point, the Grid health check is essentially complete.
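
For daily use, the checks above can be collected into one script run as the grid user on any node; a minimal sketch using only commands shown in this article (DB_NAME is an assumed variable to adapt per environment):

#!/bin/bash
# Minimal daily Grid health-check sketch; run as the grid user
# with the Grid Infrastructure environment set.
DB_NAME=racdb   # assumed database unique name -- change per environment

echo "== Cluster name =="
cemutlo -n

echo "== Cluster stack =="
crsctl check cluster -all

echo "== Cluster resources =="
crsctl status res -t

echo "== Node applications =="
srvctl status nodeapps

echo "== SCAN and SCAN listeners =="
srvctl status scan
srvctl status scan_listener

echo "== Local listeners =="
srvctl status listener

echo "== ASM and disk groups =="
srvctl status asm -a
srvctl status diskgroup -g DATA

echo "== Database ${DB_NAME} =="
srvctl status database -d "${DB_NAME}"

echo "== Clock synchronization =="
cluvfy comp clocksync -verbose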
