Troubleshooting a "pg 1.277 is active+clean+inconsistent, acting" fault

Error reported by ceph health detail
ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 1.277 is active+clean+inconsistent, acting [12,1,10]
1 scrub errors
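
Before running a repair it is worth checking which objects the deep-scrub actually flagged. A minimal inspection sketch, assuming a Jewel-or-later release; the pool name rbd is an assumption, since the pool is not named in the original output:

rados list-inconsistent-pg rbd                          # lists inconsistent PGs in the pool, e.g. ["1.277"]
rados list-inconsistent-obj 1.277 --format=json-pretty  # shows which objects and shards mismatch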

Fix
[root@node nova]# ceph pg repair 1.277
instructing pg 1.277 on osd.145 to repair
[root@node nova]# service ceph-osd@145 restart
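
Note that ceph pg repair only schedules the repair; it runs asynchronously and can take a while on a large PG. A hedged way to follow its progress, using standard commands:

ceph -w | grep 1.277               # watch the cluster log for a "1.277 repair ok" message
ceph pg 1.277 query | grep state   # the PG should end up back in active+clean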

Verification
[root@node ceph-145]# ceph health detail
HEALTH_OK
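
HEALTH_OK only means no scrub errors remain recorded. To confirm the data really is consistent again, the same PG can be deep-scrubbed once more and the health re-checked after the scrub completes:

ceph pg deep-scrub 1.277
ceph health detail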

Original article: http://blog.51cto.com/swq499809608/2065071
