Three-node ES cluster reports "no known master node" after restart

1. Problem

I had been looking into how to do monitoring for ES and wanted to save myself some work: instead of pulling numbers through the API and computing metrics myself, I went looking for a ready-made plugin or monitoring tool where installing an agent would be enough. That led me to X-Pack. After installing the plugin, the ES cluster had to be restarted. It was a production cluster, and I figured that since it was a cluster, restarting one node at a time would be safe. I overestimated it: after restarting one node, the whole cluster went down...

2. Procedure

1) Operating system

[user@172-0-0-233 bin]$ cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core) 

2) Elasticsearch version

[user@172-0-0-233 bin]$ ./elasticsearch --version
Version: 5.0.2, Build: f6b4951/2016-11-24T10:07:18.101Z, JVM: 1.8.0_131

3) Killing the process

ps -ef | grep elasticsearch   # find the PID of the ES process
kill -9 <pid>                 # force kill with SIGKILL

I regretted this the moment I did it: not every service should be stopped this way, and I am not sure whether this step contributed to the cluster going down.
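For the record, the clean way to stop an Elasticsearch node is to send it SIGTERM so it can shut down in an orderly fashion; kill -9 skips that cleanup entirely. A minimal sketch, assuming a tarball install where the PID was recorded at startup (the PID file path here is illustrative, not from the original post):

# start detached and write the PID to a file (tarball install)
./bin/elasticsearch -d -p /data/elasticsearch/es.pid

# graceful stop: SIGTERM triggers an orderly shutdown
kill -SIGTERM $(cat /data/elasticsearch/es.pid)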

4) Error log

[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-painless]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [percolator]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [reindex]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty3]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty4]
[2019-10-17T08:43:39,084][INFO ][o.e.p.PluginsService     ] [node-1] no plugins loaded
[2019-10-17T08:43:41,612][INFO ][o.e.n.Node               ] [node-1] initialized
[2019-10-17T08:43:41,613][INFO ][o.e.n.Node               ] [node-1] starting ...
[2019-10-17T08:43:41,812][INFO ][o.e.t.TransportService   ] [node-1] publish_address {172.0.0.16:9300}, bound_addresses {172.30.36.146:9300}
[2019-10-17T08:43:41,817][INFO ][o.e.b.BootstrapCheck     ] [node-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks

[2019-10-17T08:44:11,833][WARN ][o.e.n.Node               ] [node-1] timed out while waiting for initial discovery state - timeout: 30s
[2019-10-17T08:44:11,839][INFO ][o.e.h.HttpServer         ] [node-1] publish_address {172.0.0.16:9200}, bound_addresses {172.30.36.146:9200}
[2019-10-17T08:44:11,839][INFO ][o.e.n.Node               ] [node-1] started
[2019-10-17T08:44:12,001][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,001][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,003][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,010][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,010][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] no known master node, scheduling a retry
[2019-10-17T08:44:12,228][DEBUG][o.e.a.a.i.c.TransportCreateIndexAction] [node-1] no known master node, scheduling a retry
... (the same "no known master node, scheduling a retry" DEBUG line repeats dozens of times; repetitions omitted) ...
[2019-10-17T08:44:42,012][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2019-10-17T08:44:42,012][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [node-1] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2019-10-17T08:44:42,013][WARN ][r.suppressed             ] path: /_cluster/state/metadata, params: {metric=metadata}
org.elasticsearch.discovery.MasterNotDiscoveredException
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:214) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:350) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:240) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.cluster.service.ClusterService$NotifyTimeout.run(ClusterService.java:957) [elasticsearch-5.0.2.jar:5.0.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.0.2.jar:5.0.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
... (the timed-out retry and the identical MasterNotDiscoveredException stack trace repeat for the remaining requests; repetitions omitted) ...
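The flood of CreateIndex retries suggests clients were still trying to write during the outage, and the /_cluster/state/metadata requests (perhaps from a monitoring UI) fail because every master-level API needs an elected master. A quick way to confirm this state from the shell (commands assumed, not from the original post; the address is node-1 from the config below):

# master-level APIs return 503 master_not_discovered_exception while no master is elected
curl -s 'http://172.0.0.16:9200/_cat/master?v'

# the node-local root endpoint still answers, confirming the process itself is up
curl -s 'http://172.0.0.16:9200/'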

5) Configuration file (elasticsearch.yml)

cluster.name: lile
node.name: node-1
bootstrap.memory_lock: true
network.host: 172.0.0.16
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.0.0.16","172.0.0.17","172.0.0.18"]
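# quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2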
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs

3. Solution

Restarting in every possible way did nothing. Everything I found online said a restart would fix it, but no amount of restarting helped. When I set discovery.zen.minimum_master_nodes to 1, the nodes did start successfully, but all three of them became master (a split brain). Later I came across the parameter below; after adding it on every node and restarting them all, the cluster came back.

discovery.zen.ping_timeout: 60s
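With the longer ping timeout on all three nodes, a full restart let the election go through. A quick way to verify recovery (commands not from the original post):

curl -s 'http://172.0.0.16:9200/_cat/health?v'   # status should reach yellow, then green
curl -s 'http://172.0.0.16:9200/_cat/nodes?v'    # all three nodes listed, the elected master marked with *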

4. Analysis

I have not dug into this in detail yet. My feeling is that the window the nodes get to discover each other was too short, so they never found one another; with minimum_master_nodes at 2, at least two master-eligible nodes have to see each other before a master can be elected. The default discovery.zen.ping_timeout is only 3s, so raising it to 60s gives slow peers enough time to answer the discovery pings.
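Not in the original post, but relevant for next time: the documented rolling-restart procedure for this generation of ES disables shard allocation before stopping each node, so the cluster does not start reshuffling shards while the node is down. A sketch against the 5.x cluster settings API:

# before stopping a node, pause shard allocation
curl -s -XPUT 'http://172.0.0.16:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{ "transient": { "cluster.routing.allocation.enable": "none" } }'

# ... restart the node, wait for it to rejoin ...

# then re-enable allocation
curl -s -XPUT 'http://172.0.0.16:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{ "transient": { "cluster.routing.allocation.enable": "all" } }'

This does not address the discovery timeout itself, but it makes a one-node-at-a-time restart far less disruptive.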

Original post (in Chinese): https://www.cnblogs.com/lemon-le/p/11707138.html
