11g RAC Node 2 Host Down (Two-Node RAC)

-- Node 2 database alert log

Mon Jul 01 06:38:22 2019
SUCCESS: diskgroup SAS_ARCH was dismounted
Mon Jul 01 06:38:22 2019
Shutting down instance (abort)
License high water mark = 1923
USER (ospid: 82381): terminating the instance
Mon Jul 01 06:38:22 2019
opiodr aborting process unknown ospid (12589) as a result of ORA-1092
Mon Jul 01 06:38:22 2019
opiodr aborting process unknown ospid (45276) as a result of ORA-1092
Mon Jul 01 06:38:22 2019
opiodr aborting process unknown ospid (107399) as a result of ORA-1092
Instance terminated by USER, pid = 82381
Mon Jul 01 06:38:24 2019
Instance shutdown complete
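
Note that this was a user-initiated shutdown abort (USER ospid 82381 terminating the instance), not an internal crash; it matches the OS-level shutdown recorded in the host log below.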

-- Host system log

Jul 1 06:35:01 test2 auditd[16253]: Audit daemon rotating log files
Jul 1 06:38:19 test2 init: oracle-ohasd main process (15639) killed by TERM signal
Jul 1 06:38:19 test2 init: oracle-tfa main process (15638) killed by TERM signal
Jul 1 06:38:19 test2 init: tty (/dev/tty2) main process (16997) killed by TERM signal
Jul 1 06:38:19 test2 init: tty (/dev/tty3) main process (16999) killed by TERM signal
Jul 1 06:38:19 test2 init: tty (/dev/tty4) main process (17004) killed by TERM signal
Jul 1 06:38:19 test2 init: tty (/dev/tty5) main process (17006) killed by TERM signal
Jul 1 06:38:19 test2 init: tty (/dev/tty6) main process (17008) killed by TERM signal
Jul 1 06:38:19 test2 gnome-session[17110]: WARNING: Failed to send buffer
Jul 1 06:38:19 test2 gnome-session[17110]: WARNING: Failed to send buffer
Jul 1 06:38:23 test2 ntpd[90741]: Deleting interface #15 bond0:1, 10.1.11.103#123, interface stats: received=1410, sent=0, dropped=0, active_time=56169415 secs
Jul 1 06:38:39 test2 pulseaudio[17164]: pid.c: Failed to open PID file '/var/lib/gdm/.pulse/45593399e441b14e2757581a00000028-runtime/pid': No such file or directory
Jul 1 06:38:39 test2 pulseaudio[17164]: pid.c: Failed to open PID file '/var/lib/gdm/.pulse/45593399e441b14e2757581a00000028-runtime/pid': No such file or directory
Jul 1 06:38:46 test2 ntpd[90741]: Deleting interface #14 bond1:1, 169.254.7.117#123, interface stats: received=0, sent=0, dropped=0, active_time=56169467 secs
Jul 1 06:38:51 test2 abrtd: Got signal 15, exiting
Jul 1 06:38:51 test2 xinetd[45495]: Exiting...
Jul 1 06:38:51 test2 acpid: exiting
Jul 1 06:38:51 test2 ntpd[90741]: ntpd exiting on signal 15
Jul 1 06:38:53 test2 init: Disconnected from system bus
Jul 1 06:38:53 test2 rtkit-daemon[17166]: Demoting known real-time threads.
Jul 1 06:38:53 test2 rtkit-daemon[17166]: Demoted 0 threads.
Jul 1 06:38:53 test2 auditd[16253]: The audit daemon is exiting.
Jul 1 06:38:53 test2 kernel: type=1305 audit(1561934333.370:37053744): audit_pid=0 old=16253 auid=4294967295 ses=4294967295 res=1
Jul 1 06:38:53 test2 kernel: type=1305 audit(1561934333.475:37053745): audit_enabled=0 old=1 auid=4294967295 ses=4294967295 res=1
Jul 1 06:38:53 test2 kernel: Kernel logging (proc) stopped.
Jul 1 06:38:53 test2 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="16275" x-info="http://www.rsyslog.com"] exiting on signal 15.
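
Everything here is services exiting on SIGTERM (signal 15) between 06:38:19 and 06:38:53, i.e. an orderly OS shutdown sequence rather than a kernel panic or power loss.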

--- Node 2 Grid alert log (alertbapdb2.log under /u01/11.2.0/grid/log/test2)
2019-07-01 06:34:49.606:
[client(75150)]CRS-0009:log file "/u01/11.2.0/grid/log/test2/client/olsnodes.log" reopened
2019-07-01 06:34:49.606:
[client(75150)]CRS-0019:file rotation terminated. log file: "/u01/11.2.0/grid/log/test2/client/olsnodes.log"
2019-07-01 06:38:33.151:
[/u01/11.2.0/grid/bin/orarootagent.bin(106660)]CRS-5822:Agent '/u01/11.2.0/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:5:52057} in /u01/11.2.0/grid/log/test2/agent/crsd/orarootagent_root//orarootagent_root.log.
LFI-01523: rename() failed.
2019-07-01 06:38:33.887:
[ctssd(104917)]CRS-2405:The Cluster Time Synchronization Service on host test2 is shutdown by user
2019-07-01 06:38:33.892:
[mdnsd(103640)]CRS-5602:mDNS service stopping by request.
2019-07-01 06:38:45.860:
[cssd(103758)]CRS-1603:CSSD on node test2 shutdown by user.
2019-07-01 06:38:45.970:
[ohasd(103446)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE
2019-07-01 06:38:46.064:
[cssd(103758)]CRS-1660:The CSS daemon shutdown has completed
2019-07-01 06:38:49.592:
[gpnpd(103651)]CRS-2329:GPNPD on node test2 shutdown.
2019-07-01 09:28:04.022:
[ohasd(17090)]CRS-2112:The OLR service started on node test2.
2019-07-01 09:28:04.069:
[ohasd(17090)]CRS-1301:Oracle High Availability Service started on node test2.

RAC cluster membership depends on a few necessary conditions: synchronized time, the disk heartbeat (voting disks), and the network heartbeat (interconnect). If any one of them is lost, the node cannot stay in the cluster.
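
The commands below are one way to confirm each of the three from a surviving node. This is a sketch, not output from this incident: crsctl and oifcfg ship with Grid Infrastructure, the GRID_HOME path is taken from the logs above, and the peer address is a placeholder.

-- Verify time, disk heartbeat, and link heartbeat (run as root or the grid owner)
/u01/11.2.0/grid/bin/crsctl check ctss          # time: CTSS state and mode
/u01/11.2.0/grid/bin/crsctl query css votedisk  # disk heartbeat: voting disks online
/u01/11.2.0/grid/bin/oifcfg getif               # link heartbeat: interconnect interfaces
ping -c 3 <peer-private-ip>                     # <peer-private-ip> is a placeholder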

--- Node 1 alert log

Mon Jul 01 06:38:24 2019
Reconfiguration started (old inc 16, new inc 18)
List of instances:
1 (myinst: 1)
Global Resource Directory frozen
* dead instance detected - domain 0 invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Mon Jul 01 06:38:25 2019
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Mon Jul 01 06:38:25 2019
LMS 3: 2 GCS shadows cancelled, 1 closed, 0 Xw survived
Mon Jul 01 06:38:25 2019
Mon Jul 01 06:38:25 2019
LMS 2: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Mon Jul 01 06:38:36 2019
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Mon Jul 01 06:38:36 2019
Instance recovery: looking for dead threads
Beginning instance recovery of 1 threads
Mon Jul 01 06:38:52 2019
parallel recovery started with 32 processes
Started redo scan
Completed redo scan
read 12123 KB redo, 6138 data blocks need recovery
Mon Jul 01 06:38:55 2019
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Fix write in gcs resources
Mon Jul 01 06:39:07 2019
Reconfiguration complete
Mon Jul 01 06:39:32 2019
Started redo application at
Thread 2: logseq 218275, block 1708335
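
Node 1's alert log shows the expected surviving-node sequence: the dead instance is detected, the Global Resource Directory is frozen and rebuilt, and SMON performs instance recovery of node 2's redo thread (thread 2).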

--- Root cause:
2019-07-01 06:38:33.887:
[ctssd(104917)]CRS-2405:The Cluster Time Synchronization Service on host test2 is shutdown by user

The Cluster Time Synchronization Service on host test2 was shut down by the user.

The two hosts' BIOS (hardware clock) times were inconsistent, as comparing hwclock against date on each node shows:

$ su - root
Password:
# hwclock
Mon 01 Jul 2019 11:27:27 AM CST -0.485777 seconds
# date
Mon Jul 1 10:44:03 CST 2019

# hwclock
Mon 01 Jul 2019 10:42:33 AM CST -0.219479 seconds
# date
Mon Jul 1 10:42:36 CST 2019
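
Once the drifting node's system time is correct, the BIOS clock can be written back from the OS. A minimal sketch, run as root (hwclock --systohc is the standard util-linux invocation):

-- Write the corrected system time back to the hardware/BIOS clock, then recheck
hwclock --systohc
hwclock
date

After this, hwclock and date should agree to within a second or two, as on the healthy node above.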

-- Time synchronization setup

-- Node 1: cat /etc/ntp.conf

server pbsntp01.sx.com iburst
server pbsntp02.sx.com iburst

-- Node 2, after the fix: cat /etc/ntp.conf
server 10.0.10.2 iburst
#server pbsntp02.sx.com iburst
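
For the change to take effect ntpd must be restarted, and for a large offset a one-shot step with ntpdate avoids a long slew. A sketch for this RHEL/OEL 6-era host, using the 10.0.10.2 server from the config above:

-- Restart NTP against the new server and verify
service ntpd stop
ntpdate -u 10.0.10.2     # one-shot step sync while ntpd is down
service ntpd start
ntpq -p                  # the selected peer is marked with an asterisk

Oracle's 11.2 install guide also asks for ntpd to run with the -x (slew-only) flag, set on the OPTIONS line in /etc/sysconfig/ntpd on this OS family, so that the clock is never stepped backwards on a live cluster node.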
