Ceph cluster version:
ceph -v
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)
Check the service status with ceph -w:
mds cluster is degraded
monmap e1: 3 mons at {ceph-6-11=172.16.6.11:6789/0,ceph-6-12=172.16.6.12:6789/0,ceph-6-13=172.16.6.13:6789/0}
       election epoch 454, quorum 0,1,2 ceph-6-11,ceph-6-12,ceph-6-13
fsmap e1928: 1/1/1 up {0=ceph-6-13=up:rejoin}, 2 up:standby
osdmap e4107: 90 osds: 90 up, 90 in
       flags sortbitwise,require_jewel_osds
pgmap v24380658: 5120 pgs, 4 pools, 14837 GB data, 5031 kobjects
       44476 GB used, 120 TB / 163 TB avail
       5120 active+clean
Service log:
fault with nothing to send, going to standby
2017-05-08 00:21:32.423571 7fb859159700  1 heartbeat_map is_healthy 'MDSRank' had timed out after 15
2017-05-08 00:21:32.423578 7fb859159700  1 mds.beacon.ceph-6-12 _send skipping beacon, heartbeat map not healthy
2017-05-08 00:21:33.006114 7fb85e264700  1 heartbeat_map is_healthy 'MDSRank' had timed out after 15
2017-05-08 00:21:34.902990 7fb858958700 -1 mds.ceph-6-12 *** got signal Terminated ***
2017-05-08 00:21:36.423632 7fb859159700  1 heartbeat_map is_healthy 'MDSRank' had timed out after 15
2017-05-08 00:21:36.423640 7fb859159700  1 mds.beacon.ceph-6-12 _send skipping beacon, heartbeat map not healthy
2017-05-08 00:21:36.904448 7fb85c260700  1 mds.0.1929 rejoin_joint_start
2017-05-08 00:21:36.906440 7fb85995a700  1 heartbeat_map reset_timeout 'MDSRank' had timed out after 15
2017-05-08 00:21:36.906502 7fb858958700  1 mds.ceph-6-12 suicide. wanted state up:rejoin
2017-05-08 00:21:37.906842 7fb858958700  1 mds.0.1929 shutdown: shutting down rank 0
2017-05-08 01:04:36.411123 7f2886f60180  0 set uid:gid to 167:167 (ceph:ceph)
2017-05-08 01:04:36.411140 7f2886f60180  0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-mds, pid 1132028
2017-05-08 01:04:36.411734 7f2886f60180  0 pidfile_write: ignore empty --pid-file
2017-05-08 01:04:37.291720 7f2880f40700  1 mds.ceph-6-12 handle_mds_map standby
2017-05-08 01:04:44.618574 7f2880f40700  1 mds.0.1955 handle_mds_map i am now mds.0.1955
2017-05-08 01:04:44.618588 7f2880f40700  1 mds.0.1955 handle_mds_map state change up:boot --> up:replay
2017-05-08 01:04:44.618602 7f2880f40700  1 mds.0.1955 replay_start
2017-05-08 01:04:44.618627 7f2880f40700  1 mds.0.1955 recovery set is
Observed symptoms:
At this point the directory where CephFS is mounted can still be entered and directory listings still work, but no new files can be created.
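A quick way to reproduce the symptom from a client (the mount point /mnt/cephfs below is an assumption; substitute your own):
ls /mnt/cephfs               # listing the directory still works
touch /mnt/cephfs/testfile   # creating a file hangs or fails while the MDS cluster is degraded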
Troubleshooting and resolution:
Reference documents:
http://tracker.ceph.com/issues/19118
http://tracker.ceph.com/issues/18730
Reading through these reports, this turns out to be a bug in the newer release. We had recently upgraded the cluster from 10.2.5 to 10.2.7, and the upgrade had been completed less than a week before the incident:
Root cause analysis: when CephFS stores a large amount of data, the MDS daemons must synchronize state and exchange data with each other. The MDS beacon check uses a default timeout of 15 seconds; if no beacon message is received within that window, the daemon is kicked out of the cluster. This default is too short under heavy load: a busy MDS that is slow to respond gets marked laggy and removed, the heartbeat then finds the daemon is still alive and re-adds it, and shortly afterwards it is kicked out again, repeating over and over. While this flapping goes on, the cluster reports "mds cluster is degraded" and the service log shows "heartbeat_map is_healthy 'MDSRank' had timed out after 15".
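To confirm the flapping pattern, you can watch the MDS state transitions and grep the MDS log for the timeout message (the log path below assumes the default /var/log/ceph location):
ceph -w | grep -i mds                                                 # repeated up:rejoin / degraded transitions
grep 'heartbeat_map is_healthy' /var/log/ceph/ceph-mds.*.log | tail  # the 15-second timeout entries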
Solutions:
Solution 1:
This is an emergency workaround: keep one MDS node running and temporarily stop the MDS service on all the other nodes. With a single MDS working alone there is no inter-MDS heartbeat monitoring, so the problem is avoided (see the command sketch below). Once this step is done, proceed with Solution 2 for a permanent fix.
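A minimal sketch of the workaround, assuming systemd-managed Jewel daemons (unit names may differ on your distribution):
# on every MDS node except the one kept alive, e.g. on ceph-6-12:
systemctl stop ceph-mds@ceph-6-12
# then verify that exactly one active MDS remains:
ceph mds stat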
Solution 2: increase the timeout threshold from 15 to 300 seconds. The parameter is as follows:
Apply the change on all MDS nodes:
mds beacon grace
Description: how many seconds without a beacon message before the MDS is considered laggy (and may be replaced).
Type: Float
Default: 15
Reference document:
http://docs.ceph.org.cn/cephfs/mds-config-ref/
How to change the parameter:
The value can be written into the Ceph configuration file, although we did not manage to get that method working in our tests; a sketch of what it would look like follows.
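For reference only (untested by us, as noted above), the config-file form would look like the snippet below. Placing the option under [global] so that both the monitors and the MDS daemons pick it up is an assumption, and a daemon restart would be needed for it to take effect:
[global]
mds beacon grace = 300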
Check the current configuration:
[root@ceph-6-11 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-6-11.asok config show | grep mds | grep beacon_grace
    "mds_beacon_grace": "15",
Changing it online through the admin socket worked:
[root@ceph-6-11 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-6-11.asok config set mds_beacon_grace 300
{ "success": "mds_beacon_grace = '300' (unchangeable) " }
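An alternative that pushes the value to all monitors in one step is injectargs; we only used the admin-socket form above, so treat this as an untested-by-us equivalent:
ceph tell mon.* injectargs '--mds_beacon_grace=300'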
Verification:
[root@ceph-6-11 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-6-11.asok config show | grep mds | grep beacon_grace
    "mds_beacon_grace": "300",    # <<=== the parameter has been updated
With the parameter changed, all previously stopped MDS nodes can be started again. Now, shutting down any active MDS node lets its state synchronize to the other nodes: another MDS takes over and keeps serving requests, and CephFS access remains unaffected, as sketched in the failover test below.
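A minimal failover test along those lines, reusing the assumed systemd unit names (pick whichever node is currently active; ceph-6-13 here is only an example):
systemctl start ceph-mds@ceph-6-12   # bring the previously stopped daemons back
systemctl stop ceph-mds@ceph-6-13    # deliberately stop the active MDS
ceph mds stat                        # a standby should take over rank 0 while clients keep working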