Expanding Ceph: attaching backend storage and adding an OSD in Fuel OpenStack

Environment: a dedicated Ceph node is already deployed. We attach backend storage, add an OSD, and expand the Ceph cluster.

Capacity before the expansion:

    root@node-99:~# ceph -s
        cluster 32bf310c-358b-47bc-afc7-25b961477c84
         health HEALTH_WARN
                too many PGs per OSD (480 > max 300)
         monmap e1: 1 mons at {node-99=192.168.1.4:6789/0}
                election epoch 1, quorum 0 node-99
         osdmap e313: 4 osds: 4 up, 4 in
          pgmap v34509: 640 pgs, 10 pools, 127 MB data, 86 objects
                8851 MB used, 2542 GB / 2653 GB avail
                     640 active+clean

Create the OSD. If no UUID is given, the OSD generates one automatically when it starts. The following command prints the new OSD number, which the later steps use:

    ceph osd create [{uuid} [{id}]]

It can be run as-is, without a UUID or ID. Run it on the mon_host node, which you can find in the Ceph configuration:

    root@node-99:~# cat /etc/ceph/ceph.conf
    [global]
    fsid = 32bf310c-358b-47bc-afc7-25b961477c84
    mon_initial_members = node-99
    mon_host = 192.168.1.4

    root@node-99:~# ceph osd create
    4

The number 4 is generated automatically; osd.0 through osd.3 were already in use here.

Log in to the host where the storage will be mounted and the OSD created. In this example it is node-92; adjust for your own environment:

    root@node-99:~# ssh node-92
    Warning: Permanently added 'node-92,192.168.0.9' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-92-generic x86_64)
     * Documentation:  https://help.ubuntu.com/
    Last login: Tue Jun  6 17:24:08 2017 from 192.168.0.6
    root@node-92:~#

Create the mount directory. Note the path, and that ceph-4 matches the ID 4 generated above:

    root@node-92:~# mkdir /var/lib/ceph/osd/ceph-4

Mount the storage disk on ceph-4. Format the disk before mounting; ext4 is recommended, for example: mkfs.ext4 /dev/mapper/mpath1-part1

    root@node-92:~# mount -o user_xattr /dev/mapper/mpath1-part1 /var/lib/ceph/osd/ceph-4

Initialize the OSD data directory:

    root@node-92:~# ceph-osd -i 4 --mkfs --mkkey

Register the OSD authentication key:

    root@node-92:~# ceph auth add osd.4 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-4/keyring
    added key for osd.4

Add the OSD to the CRUSH map so that it starts receiving data. Here osd.4 is the name and 1.0 is the weight:

    root@node-92:~# ceph osd crush add osd.4 1.0 host=node-92

Start the OSD. This brings up osd.4; note the command format, which is the Ubuntu (Upstart) style:

    root@node-92:~# start ceph-osd id=4

Check the cluster state again:

    root@node-92:~# ceph -s
        cluster 32bf310c-358b-47bc-afc7-25b961477c84
         health HEALTH_WARN
                10 pgs stuck unclean
                recovery 2/258 objects degraded (0.775%)
                recovery 2/258 objects misplaced (0.775%)
                too many PGs per OSD (382 > max 300)
         monmap e1: 1 mons at {node-99=192.168.1.4:6789/0}
                election epoch 1, quorum 0 node-99
         osdmap e320: 5 osds: 5 up, 5 in; 10 remapped pgs
          pgmap v34544: 640 pgs, 10 pools, 127 MB data, 86 objects
                11112 MB used, 5410 GB / 5677 GB avail
                2/258 objects degraded (0.775%)
                2/258 objects misplaced (0.775%)
                     630 active+clean
                       5 active+remapped
                       5 active

Compare with the earlier output: there are now 5 OSDs up and in, and available capacity has grown from 2653 GB to 5677 GB. The expansion succeeded.

Important: everything works until the system is rebooted. After a reboot the OSD will fail to come back unless the mount and the OSD start are made automatic, for example:

    root@node-92:~# echo "mount -o user_xattr /dev/mapper/mpath1-part1 /var/lib/ceph/osd/ceph-4" >> /etc/rc.local
    root@node-92:~# echo "start ceph-osd id=4" >> /etc/rc.local

Reboot the system to test:

    root@node-92:~# init 6
    root@node-92:~# Connection to node-92 closed by remote host.
    Connection to node-92 closed.
    root@node-99:~#
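After node-92 comes back up, it is worth confirming that the OSD rejoined on its own. The following is a minimal check, using only standard commands that already appear in, or follow directly from, this walkthrough:

    # Confirm the OSD data directory was mounted at boot.
    mountpoint /var/lib/ceph/osd/ceph-4

    # Confirm osd.4 shows as "up" under host node-92 in the CRUSH tree.
    ceph osd tree

    # Confirm all 5 OSDs are up and in; the cluster should return to
    # active+clean once recovery finishes.
    ceph -s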
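The rc.local approach above does work on this Ubuntu 14.04 deployment, but a more conventional way to make the mount survive reboots is an /etc/fstab entry keyed on the filesystem UUID, so the mount no longer depends on the multipath device name staying stable. The sketch below is an alternative suggestion, not part of the original walkthrough; <your-uuid> is a placeholder that must be replaced with the value blkid prints:

    # Look up the filesystem UUID of the OSD partition (the same example
    # multipath device used throughout this walkthrough).
    blkid /dev/mapper/mpath1-part1

    # Append an fstab entry so the OSD directory is mounted at boot.
    # <your-uuid> is a placeholder; substitute the UUID blkid printed.
    echo "UUID=<your-uuid> /var/lib/ceph/osd/ceph-4 ext4 defaults,user_xattr 0 2" >> /etc/fstab

    # Verify the entry mounts cleanly without rebooting.
    mount -a && mountpoint /var/lib/ceph/osd/ceph-4

With the fstab entry in place, only the "start ceph-osd id=4" line is still needed in /etc/rc.local.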