[Oracle 11g Grid] Is there a difference between crsctl start cluster and crsctl start crs?
Q: crsctl start cluster is a new 11.2 feature. How does it differ from crsctl start crs?
crsctl start/stop crs starts and stops the clusterware stack on the local node, including starting the ohasd process. This command can only manage the local node.
[root@vmrac1 ~]# crsctl start crs -h
Usage:
  crsctl start crs [-excl [-nocrs] | -nowait]
     Start OHAS on this server
where
     -excl        Start Oracle Clusterware in exclusive mode
     -nocrs       Start Oracle Clusterware in exclusive mode without starting CRS
     -nowait      Do not wait for OHAS to start
crsctl start/stop cluster - manages starting/stopping the Oracle Clusterware stack on the local node if you specify neither -all nor -n, and on remote nodes if -n or -all is given, NOT including the OHASD process. You can't start/stop the clusterware stack without the OHASD process running.
crsctl start/stop cluster can manage both the local clusterware stack and the whole cluster:
-all starts the clusterware on every node in the cluster, i.e. the entire cluster.
-n starts the clusterware on the named nodes.
However, it does not include the OHASD process: you can't start/stop the clusterware stack without the OHASD process running.
[root@vmrac1 ~]# crsctl start cluster -h
Usage:
  crsctl start cluster [[-all]|[-n <server>[...]]]
     Start CRS stack
where
     Default       Start local server
     -all          Start all servers
     -n            Start named servers
     server [...]  One or more blank-separated server names
Although crsctl start/stop crs manages the entire Oracle Clusterware stack, it only does so on the local node and does not let you manage remote nodes. crsctl start/stop cluster, by contrast, can manage every node in the cluster, but only while the OHASD process is running. In other words, crsctl start/stop cluster can manage all nodes, remote or local, only when the OHASD process (the Oracle High Availability Services daemon) is running on those nodes.
Let's run a test to verify this.
First, stop the CRS on node 2 and make sure no OHASD process is left on that node.
[root@vmrac2 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'vmrac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'vmrac2'
.....
CRS-2673: Attempting to stop 'ora.DATANEW.dg' on 'vmrac2'
.....
CRS-2677: Stop of 'ora.gipcd' on 'vmrac2' succeeded
.....
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'vmrac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
As you can see, crsctl stop crs brought the entire local clusterware stack down.
To be absolutely sure, though, it is advisable to check at the OS level whether any clusterware processes are still running:
[root@vmrac2 ~]# ps -ef|grep ohasd
root      3747     1  0 Jun19 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
[root@vmrac2 ~]# ps -ef|grep d.bin
root     30644 27369  0 13:08 pts/2    00:00:00 grep d.bin
------ At this point we can confirm that the whole cluster stack is down.
[root@vmrac2 ~]# ps -ef|grep ohasd
root      3747     1  0 Jun19 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
------ It does not matter that this script process is still there. In fact, if this sh process were missing, ohasd.bin could not be started at all, and you would have to investigate why the S96ohasd init script is not being executed.
This background script cannot be removed with kill; a replacement process is respawned automatically.
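The respawning is done by init itself: on a typical 11.2 installation on Linux (an assumption here; check your own system), the Grid installer adds a respawn entry to /etc/inittab along the lines of:

```
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
```

Because the action is respawn, init restarts the script every time it exits, which is exactly the behavior observed below with kill -9.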
[root@vmrac2 ~]# ps -ef|grep ohasd
root      3747     1  0 Jun19 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      4888  4812  0 13:39 pts/1   00:00:00 grep ohasd
[root@vmrac2 ~]# kill -9 3747
[root@vmrac2 ~]# ps -ef|grep ohasd
root      4895     1  0 13:39 ?       00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      4920  4812  0 13:39 pts/1   00:00:00 grep ohasd
[root@vmrac2 ~]# kill -9 4895
[root@vmrac2 ~]# ps -ef|grep ohasd
root      4933     1  0 13:40 ?       00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      4958  4812  0 13:40 pts/1   00:00:00 grep ohasd
The test goes as follows:
The clusterware on node 2 is down, while node 1's is still up.
On node 1:
Use crsctl start cluster to start node 2's clusterware.
[root@vmrac1 ~]# crsctl start cluster -n vmrac2
CRS-4405: The following nodes are unknown to Oracle High Availability Services: vmrac2
------ The error is clear: the ohasd process does not exist on node vmrac2, so node 1 cannot start the clusterware on node 2.
[root@vmrac1 ~]# crsctl start cluster -all
CRS-4690: Oracle Clusterware is already running on 'vmrac1'
CRS-4000: Command Start failed, or completed with errors.
-------- Again, because the ohasd process is not running on vmrac2, node 1 cannot start node 2's clusterware.
[root@vmrac1 ~]#
On node 2:
[root@vmrac2 ~]# crsctl start cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Start failed, or completed with errors.
------ crsctl start cluster defaults to the local node; because the ohasd process is not running on vmrac2, the clusterware on node 2 cannot be started.
[root@vmrac2 ~]# crsctl start cluster -n vmrac1
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Start failed, or completed with errors.
------ Again, with no ohasd process on vmrac2 the node cannot communicate with the other cluster nodes, so it cannot start the clusterware on node 1 either (this is just a test; node 1's clusterware is in fact already running).
[root@vmrac2 ~]# crsctl start cluster -all
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Start failed, or completed with errors.
------ Same cause: without ohasd on vmrac2 there is no inter-node communication, so the command fails (again, node 1's clusterware is actually already up).
The tests above show that in order to manage remote cluster nodes with crsctl start cluster, ohasd (the Oracle High Availability Services daemon) must be running on all cluster nodes. If it is not, you will get errors such as:
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Start failed, or completed with errors.
Let's now look a little more closely at why the ohasd process is required before crsctl start cluster can manage the various remote nodes.
int socket(int domain, int type, int protocol)
domain: the protocol family our networked program uses (AF_UNIX, AF_INET, and so on). AF_UNIX can only be used for communication between processes on a single Unix host, while AF_INET is for the Internet and therefore allows communication between remote hosts.
type: the communication semantics the program uses (SOCK_STREAM, SOCK_DGRAM, and so on). SOCK_STREAM means TCP, which provides an ordered, reliable, bidirectional, connection-oriented byte stream. SOCK_DGRAM means UDP, which provides only fixed-length, unreliable, connectionless messages.
So the socket() system call takes three arguments:
1. domain specifies the communication domain, e.g. PF_UNIX (Unix domain), PF_INET (IPv4), or PF_INET6 (IPv6).
2. type specifies the communication type, most commonly SOCK_STREAM (connection-oriented and reliable, e.g. TCP) or SOCK_DGRAM (connectionless and unreliable, e.g. UDP).
3. protocol specifies the protocol to use. Although different protocol arguments can in principle be given for the same protocol family (or communication domain), there is usually only one: for TCP you can specify IPPROTO_TCP, for UDP IPPROTO_UDP. You do not have to set this argument explicitly; passing 0 selects the default protocol for the first two arguments.
Here is a trace of a crsctl start cluster invocation; the trace file looks like this:
.....
6009 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
6009 bind(3, {sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
6009 getsockname(3, {sa_family=AF_INET6, sin6_port=htons(19527), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [15812988179826343964]) = 0
6009 getpeername(3, 0x7fff0257b028, [15812988179826343964]) = -1 ENOTCONN (Transport endpoint is not connected)
6009 getsockopt(3, SOL_SOCKET, SO_SNDBUF, [168803484727246848], [4]) = 0
6009 getsockopt(3, SOL_SOCKET, SO_RCVBUF, [168803484727246848], [4]) = 0
6009 fcntl(3, F_SETFD, FD_CLOEXEC) = 0
6009 fcntl(3, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
6009 times({tms_utime=2, tms_stime=2, tms_cutime=0, tms_cstime=0}) = 465828655
6009 access("/var/tmp/.oracle", F_OK) = 0
6009 chmod("/var/tmp/.oracle", 01777) = 0
6009 socket(PF_FILE, SOCK_STREAM, 0) = 4
6009 access("/var/tmp/.oracle/sOHASD_UI_SOCKET", F_OK) = 0
6009 connect(4, {sa_family=AF_FILE, path="/var/tmp/.oracle/sOHASD_UI_SOCKET"...}, 110) = -1 ECONNREFUSED (Connection refused)
6009 access("/var/tmp/.oracle/sOHASD_UI_SOCKET", F_OK) = 0
6009 nanosleep({0, 100000000}, {16, 140733232680592}) = 0
6009 close(4) = 0
6009 socket(PF_FILE, SOCK_STREAM, 0) = 4
.......
------ Now let's look on node 1 to see which process is using this socket file:
[root@vmrac1 ~]# lsof /var/tmp/.oracle/sOHASD_UI_SOCKET
COMMAND     PID USER   FD  TYPE             DEVICE    SIZE NODE NAME
ohasd.bin 29191 root 634u  unix 0xffff81005b939700 9176933      /var/tmp/.oracle/sOHASD_UI_SOCKET
[root@vmrac1 ~]# ls -l /var/tmp/.oracle/sOHASD_UI_SOCKET
srwxrwxrwx 1 root root 0 Jun 19 13:27 /var/tmp/.oracle/sOHASD_UI_SOCKET
And what is the situation on node 2?
[root@vmrac2 ~]# ls -l /var/tmp/.oracle/sOHASD_UI_SOCKET
srwxrwxrwx 1 root root 0 Jun 19 13:27 /var/tmp/.oracle/sOHASD_UI_SOCKET
[root@vmrac2 ~]# lsof /var/tmp/.oracle/sOHASD_UI_SOCKET
Now I run crsctl start crs and keep watching:
[root@vmrac2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@vmrac2 ~]# ps -ef|grep 6560
root      6560     1  2 14:53 ?        00:00:01 /u02/app/11.2.0.3/grid/bin/ohasd.bin reboot
root      6877  4812  0 14:54 pts/1    00:00:00 grep 6560
Meanwhile, in another window on node 2, quickly watch the socket file /var/tmp/.oracle/sOHASD_UI_SOCKET:
[root@vmrac2 ~]# lsof /var/tmp/.oracle/sOHASD_UI_SOCKET
[root@vmrac2 ~]# lsof /var/tmp/.oracle/sOHASD_UI_SOCKET
[root@vmrac2 ~]# lsof /var/tmp/.oracle/sOHASD_UI_SOCKET
COMMAND    PID USER   FD  TYPE             DEVICE      SIZE NODE NAME
ohasd.bin 6560 root 631u  unix 0xffff8100792f71c0 151906805      /var/tmp/.oracle/sOHASD_UI_SOCKET
Here we can see that the ohasd process uses the socket file /var/tmp/.oracle/sOHASD_UI_SOCKET to set up communication between cluster nodes. That explains why, without the ohasd process, the crsctl start cluster command cannot be used to manage the remote nodes of the cluster.