RAC Maintenance Commands

I. Node Layer (olsnodes)

rac1-> olsnodes -help

Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]

where

-n print node number with the node name

-p print private interconnect name with the node name

-i print virtual IP name with the node name

<node> print information for the specified node

-l print information for the local node

-g turn on logging

-v run in verbose mode

rac1-> olsnodes -p -n -i

rac1    1       rac1-priv       rac1-vip

rac2    2       rac2-priv       rac2-vip
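Based on the options listed in the help above, a couple of other handy combinations (a sketch only, not taken from the original session) are:

olsnodes -l -n        # print only the local node together with its node number

olsnodes -n -v        # same listing as the example above, but in verbose mode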

II. Network Layer (oifcfg)

rac1-> oifcfg -help

Name:

oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]

oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...

oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]

oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]

oifcfg [-help]

<nodename> - name of the host, as known to a communications network

<if_name>  - name by which the interface is configured in the system

<subnet>   - subnet address of the interface

<if_type>  - type of the interface { cluster_interconnect | public | storage }

rac1-> oifcfg iflist -n -p

eth0  10.10.10.0  PRIVATE  255.255.255.0

eth1  192.168.1.0  PRIVATE  255.255.255.0

rac1-> oifcfg getif

eth0  10.10.10.0  global  cluster_interconnect

eth1  192.168.1.0  global  public

rac1-> oifcfg getif -node rac1

rac1-> oifcfg getif -global rac1

eth0  10.10.10.0  global  cluster_interconnect

eth1  192.168.1.0  global  public

rac1-> oifcfg getif -type public

eth1  192.168.1.0  global  public

rac1-> oifcfg setif -global [email protected]/10.0.0.0:public

rac1-> oifcfg getif -type public

eth1  192.168.1.0  global  public

[email protected]  10.0.0.0  global  public

rac1-> oifcfg delif -global [email protected]/10.0.0.0

rac1-> oifcfg getif -type public

eth1  192.168.1.0  global  public
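The same setif/delif pair is what you would use to redefine an existing interface, for example when the public subnet changes. A minimal sketch, assuming the public network were to move from 192.168.1.0 to a hypothetical 192.168.2.0 on the same eth1:

oifcfg delif -global eth1/192.168.1.0

oifcfg setif -global eth1/192.168.2.0:public

oifcfg getif -type public        # verify the new definition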

III. Cluster Layer

1. crsctl

rac1-> crsctl

Usage: crsctl check  crs          - checks the viability of the CRS stack

crsctl check  cssd         - checks the viability of CSS

crsctl check  crsd         - checks the viability of CRS

crsctl check  evmd         - checks the viability of EVM

crsctl set    css <parameter> <value> - sets a parameter override

crsctl get    css <parameter> - gets the value of a CSS parameter

crsctl unset  css <parameter> - sets CSS parameter to its default

crsctl query  css votedisk    - lists the voting disks used by CSS

crsctl add    css votedisk <path> - adds a new voting disk

crsctl delete css votedisk <path> - removes a voting disk

crsctl enable  crs    - enables startup for all CRS daemons

crsctl disable crs    - disables startup for all CRS daemons

crsctl start crs  - starts all CRS daemons.

crsctl stop  crs  - stops all CRS daemons. Stops CRS resources in case of cluster.

crsctl start resources  - starts CRS resources.

crsctl stop resources  - stops  CRS resources.

crsctl debug statedump evm  - dumps state info for evm objects

crsctl debug statedump crs  - dumps state info for crs objects

crsctl debug statedump css  - dumps state info for css objects

crsctl debug log css [module:level]{,module:level} ...

- Turns on debugging for CSS

crsctl debug trace css - dumps CSS in-memory tracing cache

crsctl debug log crs [module:level]{,module:level} ...

- Turns on debugging for CRS

crsctl debug trace crs - dumps CRS in-memory tracing cache

crsctl debug log evm [module:level]{,module:level} ...

- Turns on debugging for EVM

crsctl debug trace evm - dumps EVM in-memory tracing cache

crsctl debug log res <resname:level> turns on debugging for resources

crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed

crsctl query crs activeversion - lists the CRS software operating version

crsctl lsmodules css - lists the CSS modules that can be used for debugging

crsctl lsmodules crs - lists the CRS modules that can be used for debugging

crsctl lsmodules evm - lists the EVM modules that can be used for debugging

If necessary any of these commands can be run with additional tracing by

adding a "trace" argument at the very front.

Example: crsctl trace check css

[root@rac1 bin]# ./crsctl stop  crs

Stopping resources.

Successfully stopped CRS resources

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.
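To bring the stack back up afterwards and confirm its health, the corresponding commands from the help listing above would be run as root (a sketch only; the check output is not reproduced here because it varies by release):

./crsctl start crs

./crsctl check crs

./crsctl check cssd

./crsctl check crsd

./crsctl check evmd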

[root@rac1 bin]# ./crsctl query css votedisk

0.     0    /ocfs/clusterware/votingdisk

located 1 votedisk(s).

[root@rac1 bin]# ./crsctl add    css votedisk /ocfs/clusterware/votingdisk01 -force

Now formatting voting disk: /ocfs/clusterware/votingdisk01

successful addition of votedisk /ocfs/clusterware/votingdisk01.

[root@rac1 bin]# ./crsctl add    css votedisk /ocfs/clusterware/votingdisk02 -force

Now formatting voting disk: /ocfs/clusterware/votingdisk02

successful addition of votedisk /ocfs/clusterware/votingdisk02.

[root@rac1 bin]# ./crsctl query css votedisk

0.     0    /ocfs/clusterware/votingdisk

1.     0    /ocfs/clusterware/votingdisk01

2.     0    /ocfs/clusterware/votingdisk02
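An added voting disk can be removed again with the delete form shown in the help; in 10g this is normally done while the CRS stack is down, which is why the stack was stopped and -force was used in the additions above. A sketch:

./crsctl delete css votedisk /ocfs/clusterware/votingdisk02 -force

./crsctl query css votedisk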

2. ocrdump

rac2-> ocrdump -help

Name:

ocrdump - Dump contents of Oracle Cluster Registry to a file.

Synopsis:

ocrdump [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>] [-xml] [-noheader]

Description:

Default filename is OCRDUMPFILE. Examples are:

prompt> ocrdump

writes cluster registry contents to OCRDUMPFILE in the current directory

prompt> ocrdump MYFILE

writes cluster registry contents to MYFILE in the current directory

prompt> ocrdump -stdout -keyname SYSTEM

writes the subtree of SYSTEM in the cluster registry to stdout

prompt> ocrdump -stdout -xml

writes cluster registry contents to stdout in xml format

Notes:

The header information will be retrieved based on best effort basis.

A log file will be created in

$ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log. Make sure

you have file creation privileges in the above directory before

running this tool.

rac2-> ocrdump /tmp/ocr.out -keyname SYSTEM.css -xml

rac2-> ocrdump /tmp/ocr_a.out  -xml

rac2-> ocrdump -stdout -keyname SYSTEM.css -xml|more
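ocrdump can also read a physical OCR backup directly via the -backupfile option from the synopsis, which is useful for inspecting an automatic backup without restoring it. A sketch, assuming one of the automatic backups shown in the ocrconfig -showbackup output later in this article:

ocrdump /tmp/ocr_backup.out -backupfile /u01/app/oracle/product/10.2.0/crs_1/cdata/crs/backup00.ocr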

3. ocrcheck

rac2-> ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          2

Total space (kbytes)     :     262144

Used space (kbytes)      :       4344

Available space (kbytes) :     257800

ID                       :  582586001

Device/File Name         : /ocfs/clusterware/ocr

Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

4. ocrconfig

rac2-> ocrconfig

Name:

ocrconfig - Configuration tool for Oracle Cluster Registry.

Synopsis:

ocrconfig [option]

option:

-export <filename> [-s online]

- Export cluster register contents to a file

-import <filename>                  - Import cluster registry contents from a file

-upgrade [<user> [<group>]]

- Upgrade cluster registry from previous version

-downgrade [-version <version string>]

- Downgrade cluster registry to the specified version

-backuploc <dirname>                - Configure periodic backup location

-showbackup                         - Show backup information

-restore <filename>                 - Restore from physical backup

-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file

-overwrite                          - Overwrite OCR configuration on disk

-repair ocr|ocrmirror <filename>    - Repair local OCR configuration

-help                               - Print out this help information

Note:

A log file will be created in

$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure

you have file creation privileges in the above directory before

running this tool.

rac2-> ocrconfig -showbackup

rac1     2011/07/29 20:18:22     /u01/app/oracle/product/10.2.0/crs_1/cdata/crs

rac1     2011/07/29 16:18:33     /u01/app/oracle/product/10.2.0/crs_1/cdata/crs

rac1     2011/07/29 16:18:33     /u01/app/oracle/product/10.2.0/crs_1/cdata/crs

rac1     2011/07/29 16:18:33     /u01/app/oracle/product/10.2.0/crs_1/cdata/crs

[root@rac1 bin]# cd /u01/app/oracle/product/10.2.0/crs_1/cdata/crs

[root@rac1 crs]# ll

total 16368

-rw-r--r--  1 root root 4554752  Jul 29 20:18 backup00.ocr

-rw-r--r--  1 root root 4055040  Jul 29 16:18 backup01.ocr

-rw-r--r--  1 root root 4055040  Jul 29 16:18 day.ocr

-rw-r--r--  1 root root 4055040  Jul 29 16:18 week.ocr

rac2-> ocrconfig -export /tmp/bak_ocr -s online

PROT-20: Insufficient permission to proceed. Require privileged user

[root@rac2 bin]# ./ocrconfig -export /tmp/bak_ocr -s online

[root@rac2 bin]# ll /tmp/bak_ocr

-rw-r--r--  1 root root 98692  Jul 30 00:07 /tmp/bak_ocr
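The recovery counterparts are -restore (for the automatic physical backups) and -import (for a logical export such as the one just taken). A sketch only; in 10g the CRS stack should be stopped on all nodes before restoring or importing, and both commands are run as root:

./ocrconfig -restore /u01/app/oracle/product/10.2.0/crs_1/cdata/crs/backup00.ocr

./ocrconfig -import /tmp/bak_ocr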

IV. Application Layer

1. crs_stat

rac2-> crs_stat -help

Usage:  crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]

crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]

crs_stat -p [resource_name [...]] [-q]

crs_stat [-a] application -g

crs_stat [-a] application -r [-c cluster_member]

crs_stat -f [resource_name [...]] [-q] [-c cluster_member]

crs_stat -ls [resource_name [...]] [-q]

rac2-> crs_stat -t -v

Name           Type           R/RA   F/FT   Target    State     Host

----------------------------------------------------------------------

ora…..XFF.cs application    0/0    0/1    ONLINE    ONLINE    rac1

ora….db1.srv application    0/0    0/0    ONLINE    ONLINE    rac1

ora.devdb.db   application    0/1    0/1    ONLINE    ONLINE    rac1

ora….b1.inst application    0/5    0/0    ONLINE    ONLINE    rac1

ora….b2.inst application    0/5    0/0    ONLINE    ONLINE    rac2

ora….SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1

ora….C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1

ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1

ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1

ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1

ora….SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2

ora….C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2

ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2

ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2

ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2

rac2-> crs_stat -p ora.devdb.db

NAME=ora.devdb.db

TYPE=application

ACTION_SCRIPT=/u01/app/oracle/product/10.2.0/crs_1/bin/racgwrap

ACTIVE_PLACEMENT=0

AUTO_START=1

CHECK_INTERVAL=600

DESCRIPTION=CRS application for the Database

FAILOVER_DELAY=0

FAILURE_INTERVAL=60

FAILURE_THRESHOLD=1

HOSTING_MEMBERS=

OPTIONAL_RESOURCES=

PLACEMENT=balanced

REQUIRED_RESOURCES=

RESTART_ATTEMPTS=1

SCRIPT_TIMEOUT=600

START_TIMEOUT=0

STOP_TIMEOUT=0

UPTIME_THRESHOLD=7d

USR_ORA_ALERT_NAME=

USR_ORA_CHECK_TIMEOUT=0

USR_ORA_CONNECT_STR=/ as sysdba

USR_ORA_DEBUG=0

USR_ORA_DISCONNECT=false

USR_ORA_FLAGS=

USR_ORA_IF=

USR_ORA_INST_NOT_SHUTDOWN=

USR_ORA_LANG=

USR_ORA_NETMASK=

USR_ORA_OPEN_MODE=

USR_ORA_OPI=false

USR_ORA_PFILE=

USR_ORA_PRECONNECT=none

USR_ORA_SRV=

USR_ORA_START_TIMEOUT=0

USR_ORA_STOP_MODE=immediate

USR_ORA_STOP_TIMEOUT=0

USR_ORA_VIP=

2. srvctl

rac2-> srvctl

Usage: srvctl <command> <object> [<options>]

Commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config

Objects: database|instance|service|nodeapps|asm|listener

For detailed help on each command and object, use:

srvctl <command> <object> -h
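Before the configuration queries below, note that srvctl is also the normal way to start and stop RAC resources. A minimal sketch (these commands are not from the original session; the -o stop options are assumptions based on standard 10g srvctl syntax):

srvctl status database -d devdb

srvctl stop instance -d devdb -i devdb2 -o immediate

srvctl start instance -d devdb -i devdb2

srvctl stop database -d devdb -o immediate

srvctl start database -d devdb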

rac2-> srvctl config database

devdb

rac2-> srvctl config database -d devdb

rac1 devdb1 /u01/app/oracle/product/10.2.0/db_1

rac2 devdb2 /u01/app/oracle/product/10.2.0/db_1

rac2-> srvctl config database -d devdb -a

rac1 devdb1 /u01/app/oracle/product/10.2.0/db_1

rac2 devdb2 /u01/app/oracle/product/10.2.0/db_1

DB_NAME: devdb

ORACLE_HOME: /u01/app/oracle/product/10.2.0/db_1

SPFILE: +DG1/devdb/spfiledevdb.ora

DOMAIN: null

DB_ROLE: null

START_OPTIONS: null

POLICY:  AUTOMATIC

ENABLE FLAG: DB ENABLED

rac2-> srvctl config nodeapps -n rac1

rac1 devdb1 /u01/app/oracle/product/10.2.0/db_1

rac2-> srvctl config nodeapps -h

Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-o] [-s] [-l]

-n <node>           Node name

-a                  Display TNS entries

-g                  Display GSD configuration

-s                  Display ONS daemon configuration

-l                  Display listener configuration

-h                  Print usage

rac2-> srvctl config nodeapps -n rac2 -l

Listener exists.

rac2-> srvctl config nodeapps -n rac1 -a

VIP exists.: /rac1-vip/192.168.1.21/255.255.255.0/eth0:eth1

rac2-> srvctl config listener -n rac2

rac2 LISTENER_RAC2

rac2-> srvctl config asm -n rac1

+ASM1 /u01/app/oracle/product/10.2.0/db_1

rac2-> srvctl config service -h

Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]

-d <name>           Unique name for the database

-s <service>        Service name

-a                  Additional attributes

-S <level>          Additional information for EM console

-h                  Print usage

rac2-> srvctl config service -d devdb -a

XFF PREF: devdb1 AVAIL: devdb2 TAF: basic

-- Disable auto-start of the XFF service

rac2-> srvctl disable service -h

Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]

-d <name>           Unique name for the database

-s "<serv,...>"     Comma separated service names

-i <inst>           Instance name

-h                  Print usage

rac2-> srvctl disable service -d devdb -s XFF -i devdb1

rac2-> srvctl config service -d devdb -a

XFF PREF: devdb1 AVAIL: devdb2 TAF: basic

Service XFF is disabled on instance devdb1.

rac2-> srvctl enable service -h

Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]

-d <name>           Unique name for the database

-s "<serv,...>"     Comma separated service names

-i <inst>           Instance name

-h                  Print usage

rac2-> srvctl enable service -d devdb -s XFF -i devdb1

rac2-> srvctl config service -d devdb -a

XFF PREF: devdb1 AVAIL: devdb2 TAF: basic

-- Add the xff2 service

rac2-> srvctl add service -h

Usage: srvctl add service -d <name> -s <service_name> -r "<preferred_list>" [-a "<available_list>"] [-P <TAF_policy>]

-d <name>           Unique name for the database

-s <service>        Service name

-r "<pref_list>"    List of preferred instances

-a "<avail_list>"   List of available instances

-P <TAF_policy>     TAF policy (NONE, BASIC, or PRECONNECT)

Usage: srvctl add service -d <name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}

-d <name>           Unique name for the database

-s <service>        Service name

-u                  Add a new instance to the service configuration

-r <new_pref_inst>  Name of the new preferred instance

-a <new_avail_inst> Name of the new available instance

-h                  Print usage

rac2-> srvctl add service -d devdb -s xff2 -r devdb2 -a devdb1  -P BASIC

rac2-> srvctl config service -d devdb -a

XFF PREF: devdb1 AVAIL: devdb2 TAF: basic

xff2 PREF: devdb2 AVAIL: devdb1 TAF: BASIC

rac2-> crs_stat -t -v

Name           Type           R/RA   F/FT   Target    State     Host

----------------------------------------------------------------------

ora…..XFF.cs application    0/0    0/1    ONLINE    ONLINE    rac1

ora….db1.srv application    0/0    0/0    ONLINE    ONLINE    rac2

ora.devdb.db   application    0/1    0/1    ONLINE    ONLINE    rac1

ora….b1.inst application    0/5    0/0    ONLINE    ONLINE    rac1

ora….b2.inst application    0/5    0/0    ONLINE    ONLINE    rac2

ora….xff2.cs application    0/0    0/1    OFFLINE   OFFLINE

ora….db2.srv application    0/0    0/0    OFFLINE   OFFLINE

ora….SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1

ora….C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1

ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1

ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1

ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1

ora….SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2

ora….C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2

ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2

ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2

ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2

-- Enable auto-start of the xff2 service

rac2->  srvctl enable service -d devdb -s xff2 -i devdb1

rac2->  srvctl enable service -d devdb -s xff2 -i devdb2

-- Start the xff2 service

rac2-> srvctl start service -d devdb -s xff2 -i devdb1

rac2-> crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

ora…..XFF.cs application    ONLINE    ONLINE    rac1

ora….db1.srv application    ONLINE    ONLINE    rac2

ora.devdb.db   application    ONLINE    ONLINE    rac1

ora….b1.inst application    ONLINE    ONLINE    rac1

ora….b2.inst application    ONLINE    ONLINE    rac2

ora….xff2.cs application    ONLINE    ONLINE    rac2

ora….db2.srv application    ONLINE    ONLINE    rac1

ora….SM1.asm application    ONLINE    ONLINE    rac1

ora….C1.lsnr application    ONLINE    ONLINE    rac1

ora.rac1.gsd   application    ONLINE    ONLINE    rac1

ora.rac1.ons   application    ONLINE    ONLINE    rac1

ora.rac1.vip   application    ONLINE    ONLINE    rac1

ora….SM2.asm application    ONLINE    ONLINE    rac2

ora….C2.lsnr application    ONLINE    ONLINE    rac2

ora.rac2.gsd   application    ONLINE    ONLINE    rac2

ora.rac2.ons   application    ONLINE    ONLINE    rac2

ora.rac2.vip   application    ONLINE    ONLINE    rac2

rac2-> srvctl status  service -d devdb -v

Service XFF is running on instance devdb2

Service xff2 is running on instance devdb1
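Since relocate appears in the srvctl command list shown earlier, a running service can also be moved between instances without stopping it. A sketch only; the exact syntax should be confirmed with srvctl relocate service -h on your release:

srvctl relocate service -d devdb -s xff2 -i devdb1 -t devdb2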

-- Stop the xff2 service

rac2-> srvctl stop service -d devdb -s xff2 -i devdb1

rac2-> crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

ora…..XFF.cs application    ONLINE    ONLINE    rac1

ora….db1.srv application    ONLINE    ONLINE    rac2

ora.devdb.db   application    ONLINE    ONLINE    rac1

ora….b1.inst application    ONLINE    ONLINE    rac1

ora….b2.inst application    ONLINE    ONLINE    rac2

ora….xff2.cs application    ONLINE    ONLINE    rac2

ora….db2.srv application    OFFLINE   OFFLINE

ora….SM1.asm application    ONLINE    ONLINE    rac1

ora….C1.lsnr application    ONLINE    ONLINE    rac1

ora.rac1.gsd   application    ONLINE    ONLINE    rac1

ora.rac1.ons   application    ONLINE    ONLINE    rac1

ora.rac1.vip   application    ONLINE    ONLINE    rac1

ora….SM2.asm application    ONLINE    ONLINE    rac2

ora….C2.lsnr application    ONLINE    ONLINE    rac2

ora.rac2.gsd   application    ONLINE    ONLINE    rac2

ora.rac2.ons   application    ONLINE    ONLINE    rac2

ora.rac2.vip   application    ONLINE    ONLINE    rac2

rac2-> srvctl status  service -d devdb -v

Service XFF is running on instance devdb2

Service xff2 is not running.

-- Remove the xff2 service

rac2-> srvctl remove service -h

Usage: srvctl remove service -d <name> -s <service_name> [-i <inst_name>] [-f]

-d <name>           Unique name for the database

-s <service>        Service name

-i <inst>           Instance name

-f                  Force remove

-h                  Print usage

rac2-> srvctl remove service -d devdb -s xff2

xff2 PREF: devdb2 AVAIL: devdb1

Remove service xff2 from the database devdb? (y/[n]) y

rac2-> crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

ora…..XFF.cs application    ONLINE    ONLINE    rac1

ora….db1.srv application    ONLINE    ONLINE    rac2

ora.devdb.db   application    ONLINE    ONLINE    rac1

ora….b1.inst application    ONLINE    ONLINE    rac1

ora….b2.inst application    ONLINE    ONLINE    rac2

ora….SM1.asm application    ONLINE    ONLINE    rac1

ora….C1.lsnr application    ONLINE    ONLINE    rac1

ora.rac1.gsd   application    ONLINE    ONLINE    rac1

ora.rac1.ons   application    ONLINE    ONLINE    rac1

ora.rac1.vip   application    ONLINE    ONLINE    rac1

ora….SM2.asm application    ONLINE    ONLINE    rac2

ora….C2.lsnr application    ONLINE    ONLINE    rac2

ora.rac2.gsd   application    ONLINE    ONLINE    rac2

ora.rac2.ons   application    ONLINE    ONLINE    rac2

ora.rac2.vip   application    ONLINE    ONLINE    rac2

Note: use the -h or help option of each of these utilities for detailed usage information.

Source: http://www.xifenfei.com/2011/08/1384.html
