1. Error one: degraded data redundancy (pgs undersized)
[root@ct ceph]# ceph -s
  cluster:
    id:     dfb110f9-e0e0-4544-9f13-9141750ee9f6
    health: HEALTH_WARN
            Degraded data redundancy: 192 pgs undersized

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: ct(active), standbys: c2, c1
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   3 pools, 192 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     102 active+undersized
             90  stale+active+undersized
Checking the OSD status shows that the OSD on c2 is not connected:
[root@ct ceph]# ceph osd status
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| 0 | ct | 1026M | 1022G | 0 | 0 | 0 | 0 | exists,up |
| 1 | c1 | 1026M | 1022G | 0 | 0 | 0 | 0 | exists,up |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
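Only OSDs on ct and c1 appear in the table, so c2 is the host whose OSD is down. A quick way to script this comparison is to pull the host column out of `ceph osd status` and diff it against the expected host list. This is a sketch, not part of the original post: the table above is embedded as sample input (on a live cluster you would pipe the real command output in), and the host names ct/c1/c2 are taken from this cluster.

```shell
#!/bin/sh
# Sketch: spot which host has no OSD listed in `ceph osd status`.
# Sample input below is the table from above; feed live output instead.
expected="ct c1 c2"
listed=$(awk -F'|' '/exists/ { gsub(/ /, "", $3); print $3 }' <<'EOF'
| 0  | ct   | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
| 1  | c1   | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
EOF
)
missing=""
for h in $expected; do
  echo "$listed" | grep -qx "$h" || missing="$missing $h"
done
echo "missing OSD host(s):$missing"
```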
Solution:
Restarting the OSD on c2 resolves this:
[root@c2 ~]# systemctl restart ceph-osd.target
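After the restart, confirm the OSD rejoined and the undersized-PG warning cleared: `systemctl status ceph-osd.target` on c2 should show the unit active, and `ceph -s` should report all OSDs up/in and the PGs active+clean. The same check can be scripted by totaling the undersized PGs from `ceph -s` text. A minimal sketch (the sample input is the `pgs:` section from the output above; pipe live `ceph -s` output in instead):

```shell
#!/bin/sh
# Sketch: total the undersized PGs reported in `ceph -s` output.
# Expect this to reach 0 once the OSD on c2 is back up.
count_undersized() {
  grep -Eo '[0-9]+ [a-z+]*undersized' | awk '{ s += $1 } END { print s + 0 }'
}
before=$(count_undersized <<'EOF'
pgs: 102 active+undersized
     90 stale+active+undersized
EOF
)
echo "undersized PGs before restart: $before"
```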
2. Error two: clock skew detected on the monitors
[root@ct ceph]# ceph -s
  cluster:
    id:     44d72edb-4085-4cfc-8652-eb670472f169
    health: HEALTH_WARN
            clock skew detected on mon.c1, mon.c2

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: c1(active), standbys: c2, ct
    osd: 3 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 1023 GiB / 1024 GiB avail
    pgs:
Solution:
(1) Restart the NTP service on the control node:
[root@ct ceph]# systemctl restart ntpd
(2) Re-sync the compute nodes' clocks against the control node:
[root@c1 ~]# ntpdate 192.168.100.10
(3) Restart the mon service on the control node:
[root@ct ceph]# systemctl restart ceph-mon.target
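The warning comes from the monitors comparing their clocks against the lead mon: Ceph flags skew beyond `mon_clock_drift_allowed` (0.05 s by default). After the fix, verify with `ntpq -p` on each node (the offset column should be near 0) and `ceph time-sync-status` (per-mon skew as seen by the lead mon). The sketch below only illustrates the threshold logic offline; the 120 ms skew value is made up, not from this cluster:

```shell
#!/bin/sh
# Sketch of the check behind the clock-skew warning: compare a measured
# skew against mon_clock_drift_allowed (0.05 s default), in milliseconds.
allowed_ms=50    # mon_clock_drift_allowed (0.05 s) in milliseconds
skew_ms=120      # hypothetical skew measured for mon.c1 (made-up number)
if [ "$skew_ms" -gt "$allowed_ms" ]; then
  verdict="clock skew detected"
else
  verdict="clocks within tolerance"
fi
echo "$verdict"
```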
Source: https://blog.51cto.com/14557736/2476285
Posted: 2024-10-09 21:10:08