Ceph health check errors

1. Error 1: Degraded data redundancy (pgs undersized)

[root@ct ceph]# ceph -s
  cluster:
    id:     dfb110f9-e0e0-4544-9f13-9141750ee9f6
    health: HEALTH_WARN
            Degraded data redundancy: 192 pgs undersized

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: ct(active), standbys: c2, c1
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   3 pools, 192 pgs
    objects: 0  objects, 0 B
    usage:   2.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     102 active+undersized
             90  stale+active+undersized
Checking the OSD status shows that the OSD on c2 is not up:

[root@ct ceph]# ceph osd status
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| id | host |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| 0  |  ct  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
| 1  |  c1  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
+----+------+-------+-------+--------+---------+--------+---------+-----------+

Solution:
Restart the OSD service on c2:

[root@c2 ~]# systemctl restart ceph-osd.target
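A minimal sketch (not from the original post) of how to spot the down OSD automatically: parse the `ceph osd status` table with awk and print the host of any OSD whose state column does not contain "up". The sample lines are hard-coded here for illustration; on a live cluster you would pipe the real command output instead.

```shell
# Hypothetical helper: list hosts whose OSD is not reporting "up".
# Live usage (assumption, adjust NR to skip the table header rows):
#   ceph osd status | awk -F'|' '$10 !~ /up/ && $3 ~ /[a-z]/ {gsub(/ /,"",$3); print $3}'
sample='| 0  |  ct  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
| 1  |  c1  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
| 2  |  c2  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists    |'
# Field 10 (split on "|") is the state column; field 3 is the host.
down_hosts=$(printf '%s\n' "$sample" | awk -F'|' '$10 !~ /up/ {gsub(/ /,"",$3); print $3}')
echo "$down_hosts"
```

With the sample table above this prints `c2`, matching the missing OSD in the error scenario.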

2. Error 2: clock skew detected on monitors

[root@ct ceph]# ceph -s
  cluster:
    id:     44d72edb-4085-4cfc-8652-eb670472f169
    health: HEALTH_WARN
            clock skew detected on mon.c1, mon.c2

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: c1(active), standbys: c2, ct
    osd: 3 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   1.0 GiB used, 1023 GiB / 1024 GiB avail
    pgs: 

Solution:
(1) Restart the NTP service on the controller node:

[root@ct ceph]# systemctl restart ntpd

(2) Resync time from the controller on each compute node:

[root@c1 ~]# ntpdate 192.168.100.10

(3) Restart the mon service on the controller node:

[root@ct ceph]# systemctl restart ceph-mon.target
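For context, the monitors raise "clock skew detected" when the time offset between mons exceeds `mon_clock_drift_allowed` (0.05 s by default). A minimal sketch (not from the original post) of that threshold check, using two hypothetical timestamps; on a live cluster you can inspect the actual per-mon offsets with `ceph time-sync-status`.

```shell
# Hypothetical timestamps (seconds) from two monitors, for illustration only.
local_ts=1000.000
mon_ts=1000.120
# Absolute offset between the two clocks.
skew=$(awk -v a="$local_ts" -v b="$mon_ts" 'BEGIN {d = a - b; if (d < 0) d = -d; printf "%.3f", d}')
# Compare against the default mon_clock_drift_allowed of 0.05 s.
verdict=$(awk -v s="$skew" 'BEGIN {if (s > 0.05) print "clock skew"; else print "ok"}')
echo "$skew $verdict"
```

Here the 0.120 s offset exceeds the 0.05 s default, so the check reports a skew; after the NTP resync above, the offset should drop below the threshold and the warning clears.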

Original source: https://blog.51cto.com/14557736/2476285

Date: 2024-10-09 21:10:08
