Ceph: Adding and Removing an OSD

Proper Way to Remove an OSD
  Run 'ceph osd tree' to list the OSDs and find the ID of the failed OSD (or the OSD to be removed).
  #  ceph osd tree
  As an example, we will remove osd.8. Log in to the OSD node first, then perform the following steps.
  #  ceph osd crush reweight osd.8 0
  Once the command is executed, monitor the status with 'ceph -w' or 'ceph -s'. When the cluster is back in a healthy state (all PGs must be active+clean), we can start to remove the OSD.
  #  ceph osd out 8
  #  systemctl stop ceph-osd@8
  #  ceph osd crush remove osd.8
  #  ceph auth del osd.8
  #  ceph osd rm 8
  #  umount /var/lib/ceph/osd/ceph-8
  You may wipe the disk by running
  #  sgdisk --zap-all -- /dev/vdb
  #  sgdisk --clear --mbrtogpt /dev/vdb
  or simply go back to the Ceph admin node and run
  #  ceph-deploy disk zap Ceph01:vdb
  If this is a failed hard drive, you may need to power off the node to replace it with a good one.
  If ceph.conf contains an entry for the OSD we have just removed, update ceph.conf accordingly and then push the updated file to all the Ceph nodes.
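  The removal steps above can be sketched as a single dry-run helper. This is a minimal illustration, not part of the official Ceph tooling: the osd_removal_plan function (a name of our own) only prints the command sequence for a given OSD ID so you can review it before running anything against the cluster.

  ```shell
  #!/bin/sh
  # Hypothetical dry-run helper: prints the removal sequence for one OSD id.
  # It deliberately does NOT execute anything; review the output, then run
  # the commands yourself once the cluster is healthy (active+clean).
  osd_removal_plan() {
      id="$1"
      cat <<EOF
  ceph osd crush reweight osd.${id} 0
  ceph osd out ${id}
  systemctl stop ceph-osd@${id}
  ceph osd crush remove osd.${id}
  ceph auth del osd.${id}
  ceph osd rm ${id}
  umount /var/lib/ceph/osd/ceph-${id}
  EOF
  }

  osd_removal_plan 8
  ```

  For the final step in this section, the updated ceph.conf can be distributed from the admin node with 'ceph-deploy --overwrite-conf config push <node>'.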

Add OSD
  The simplest way to add an OSD is to use the 'ceph-deploy' command from the admin node, as below:
  #  ceph-deploy disk zap Ceph01:/dev/vdb
  #  ceph-deploy --overwrite-conf osd create Ceph01:vdb:[journal]
  You may also add an OSD directly from the OSD node:
  #  ceph-disk prepare /dev/vdb [journal]
  #  ceph-disk activate /dev/vdb1
  Note: Avoid the 'Adding OSD Manually' procedure on the official Ceph website; it can leave the block device with the wrong ownership (e.g., /dev/sdb not owned by ceph:ceph).
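  Whether removing or adding an OSD, each step should only proceed once the cluster has rebalanced. Below is a minimal sketch of that check, assuming you feed it the output of 'ceph health'; the function name cluster_is_healthy is our own, not a Ceph command.

  ```shell
  #!/bin/sh
  # Hypothetical check: succeeds only for HEALTH_OK, i.e. when all
  # placement groups are back to active+clean and it is safe to continue.
  cluster_is_healthy() {
      case "$1" in
          HEALTH_OK*) return 0 ;;
          *)          return 1 ;;
      esac
  }

  # On a live cluster this would be: cluster_is_healthy "$(ceph health)"
  if cluster_is_healthy "HEALTH_OK"; then
      echo "safe to proceed"
  fi
  ```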

Date: 2024-10-11 07:46:51
