Summary of a Ceph monitor failure and OSD disk crash caused by an IP change

The company moved offices and every server's IP address changed. After reconfiguring the IPs on the Ceph servers and starting them, the monitor process failed to start: it kept trying to bind to the old IP address, which of course could never succeed. At first I assumed the servers' IP configuration was wrong, but after changing the hostname, ceph.conf and so on with no effect, step-by-step analysis showed that the monmap still held the old IP addresses. Ceph reads the monmap to start the monitor process, so the monmap has to be rewritten. The procedure:

#If a quorum were still available, the current map could be retrieved with
# ceph mon getmap -o monmap.bin
#Since the monitors cannot start here, build a fresh monmap with the new monitor locations instead
# monmaptool --create --add mon0 192.168.32.2:6789 --add osd1 192.168.32.3:6789 --add osd2 192.168.32.4:6789 --fsid 61a520db-317b-41f1-9752-30cedc5ffb9a --clobber monmap.bin

#Check new contents
# monmaptool --print monmap.bin

#Inject the monmap into each monitor (the monitors must be stopped)
# ceph-mon -i mon0 --inject-monmap monmap.bin
# ceph-mon -i osd1 --inject-monmap monmap.bin
# ceph-mon -i osd2 --inject-monmap monmap.bin

After restarting the monitors, everything was back to normal.
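Once the monitors are back, it is worth double-checking that the injected map actually took effect; a quick sanity check using only standard mon commands:

#Confirm that all three monitors form a quorum on the new addresses
# ceph mon stat
#Print the active monmap and verify that mon0/osd1/osd2 now show the 192.168.32.x addresses
# ceph mon dump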

But then the problem described in the previous post showed up: one OSD disk went down. After searching around, all I could find was a note on the Ceph website saying it is a Ceph bug. With no way to repair it, I removed the OSD and reinstalled it:

# service ceph stop osd.4
#There is no need to run ceph osd crush remove osd.4
# ceph auth del osd.4
# ceph osd rm 4

# umount /cephmp1
# mkfs.xfs -f /dev/sdc
# mount /dev/sdc /cephmp1
#Running ceph-deploy osd create here failed to set up the OSD properly, so prepare + activate is used instead
# ceph-deploy osd prepare osd2:/cephmp1:/dev/sdf1
# ceph-deploy osd activate osd2:/cephmp1:/dev/sdf1
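
To bring the rebuilt OSD back and follow the rebalance, a minimal sketch (assuming the same sysvinit-style ceph service used elsewhere in this post, and that the rebuilt OSD is still osd.4):

#Start the rebuilt OSD
# service ceph start osd.4
#Check that osd.4 is reported up and in
# ceph osd tree
#Watch recovery/backfill progress until the cluster settles
# ceph -w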

After the restart the OSD ran fine. Ceph rebalances the data automatically; the final state was:

[[email protected] ~]# ceph -s
    cluster 61a520db-317b-41f1-9752-30cedc5ffb9a
     health HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec
     monmap e3: 3 mons at {mon0=192.168.32.2:6789/0,osd1=192.168.32.3:6789/0,osd2=192.168.32.4:6789/0}, election epoch 76, quorum 0,1,2 mon0,osd1,osd2
     osdmap e689: 6 osds: 6 up, 6 in
      pgmap v189608: 704 pgs, 5 pools, 34983 MB data, 8966 objects
            69349 MB used, 11104 GB / 11172 GB avail
                 695 active+clean
                   9 incomplete

Nine PGs were left in the incomplete state.

[[email protected] ~]# ceph health detail
HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec; 1 osds have slow requests
pg 5.95 is stuck inactive for 838842.634721, current state incomplete, last acting [1,4]
pg 5.66 is stuck inactive since forever, current state incomplete, last acting [4,0]
pg 5.de is stuck inactive for 808270.105968, current state incomplete, last acting [0,4]
pg 5.f5 is stuck inactive for 496137.708887, current state incomplete, last acting [0,4]
pg 5.11 is stuck inactive since forever, current state incomplete, last acting [4,1]
pg 5.30 is stuck inactive for 507062.828403, current state incomplete, last acting [0,4]
pg 5.bc is stuck inactive since forever, current state incomplete, last acting [4,1]
pg 5.a7 is stuck inactive for 499713.993372, current state incomplete, last acting [1,4]
pg 5.22 is stuck inactive for 496125.831204, current state incomplete, last acting [0,4]
pg 5.95 is stuck unclean for 838842.634796, current state incomplete, last acting [1,4]
pg 5.66 is stuck unclean since forever, current state incomplete, last acting [4,0]
pg 5.de is stuck unclean for 808270.106039, current state incomplete, last acting [0,4]
pg 5.f5 is stuck unclean for 496137.708958, current state incomplete, last acting [0,4]
pg 5.11 is stuck unclean since forever, current state incomplete, last acting [4,1]
pg 5.30 is stuck unclean for 507062.828475, current state incomplete, last acting [0,4]
pg 5.bc is stuck unclean since forever, current state incomplete, last acting [4,1]
pg 5.a7 is stuck unclean for 499713.993443, current state incomplete, last acting [1,4]
pg 5.22 is stuck unclean for 496125.831274, current state incomplete, last acting [0,4]
pg 5.de is incomplete, acting [0,4]
pg 5.bc is incomplete, acting [4,1]
pg 5.a7 is incomplete, acting [1,4]
pg 5.95 is incomplete, acting [1,4]
pg 5.66 is incomplete, acting [4,0]
pg 5.30 is incomplete, acting [0,4]
pg 5.22 is incomplete, acting [0,4]
pg 5.11 is incomplete, acting [4,1]
pg 5.f5 is incomplete, acting [0,4]
2 ops are blocked > 8388.61 sec
1 ops are blocked > 4194.3 sec
2 ops are blocked > 8388.61 sec on osd.0
1 ops are blocked > 4194.3 sec on osd.0
1 osds have slow requests
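
For scripting against these PGs, the incomplete IDs can be pulled straight out of the health output; a small sketch, assuming the "pg X.Y is incomplete" line format shown above:

# ceph health detail | awk '$1 == "pg" && /incomplete/ {print $2}' | sort -u
#Or list the stuck PGs directly
# ceph pg dump_stuck inactive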

More searching turned up nothing. Here is a quote from someone who hit the same problem:

I already tried "ceph pg repair 4.77", stop/start OSDs, "ceph osd lost", "ceph pg force_create_pg 4.77".
Most scary thing is "force_create_pg" does not work. At least it should be a way to wipe out a incomplete PG
without destroying a whole pool.

I tried the methods above; none of them worked. For now there is no solution, which is rather frustrating.
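For the record, this is roughly what was attempted on one of the PGs (5.de as the example, with osd.0 and osd.4 as its acting set); the commands are standard, but none of them cleared the incomplete state here:

#Ask the primary to repair the PG
# ceph pg repair 5.de
#Restart the OSDs in the acting set
# service ceph restart osd.0
# service ceph restart osd.4
#Mark the suspect OSD as lost (it must be stopped first; this is destructive)
# ceph osd lost 4 --yes-i-really-mean-it
#Try to recreate the PG from scratch
# ceph pg force_create_pg 5.de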

PS: commonly used PG operations

[[email protected] ~]# ceph pg map 5.de
osdmap e689 pg 5.de (5.de) -> up [0,4] acting [0,4]
[[email protected] ~]# ceph pg 5.de query
[[email protected] ~]# ceph pg scrub 5.de
instructing pg 5.de on osd.0 to scrub
[[email protected] ~]# ceph pg 5.de mark_unfound_lost revert
pg has no unfound objects
#ceph pg dump_stuck stale
#ceph pg dump_stuck inactive
#ceph pg dump_stuck unclean
[[email protected] ~]# ceph osd lost 1
Error EPERM: are you SURE?  this might mean real, permanent data loss.  pass --yes-i-really-mean-it if you really do.
[[email protected] ~]#
[[email protected] ~]# ceph osd lost 4 --yes-i-really-mean-it
osd.4 is not down or doesn't exist
[[email protected] ~]# service ceph stop osd.4
=== osd.4 ===
Stopping Ceph osd.4 on osd2...kill 22287...kill 22287...done
[[email protected] ~]# ceph osd lost 4 --yes-i-really-mean-it
marked osd lost in epoch 690
[[email protected] mnt]# ceph pg repair 5.de
instructing pg 5.de on osd.0 to repair
[[email protected] mnt]# ceph pg repair 5.de
instructing pg 5.de on osd.0 to repair
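
When several PGs are stuck at once, querying them in a batch and keeping the output makes later inspection easier; a minimal sketch (the PG IDs are the nine incomplete ones listed earlier):

#Dump the full peering state of every incomplete PG to /tmp for offline inspection
# for pg in 5.95 5.66 5.de 5.f5 5.11 5.30 5.bc 5.a7 5.22; do ceph pg $pg query > /tmp/pg-$pg.json; done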