Ceph: Adding a New OSD Node
1. Operations on the new OSD node
1.1 Configure the Ceph yum repository
cat /etc/yum.repos.d/ceph-aliyun.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
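With the repo file in place, you can confirm that yum actually sees the new repositories before installing anything. Note that the priority= settings only take effect if the yum priorities plugin is present; the plugin package name below is the standard CentOS 7 one and is an assumption, not part of the original post.
# Priorities plugin, required for the priority= settings to be honored (assumed CentOS 7 package name)
yum -y install yum-plugin-priorities
# Confirm the three Ceph repositories are visible
yum repolist enabled | grep -i ceph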
1.2 Install the Ceph packages
yum -y install ceph ceph-radosgw
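After installation it is worth confirming that the new node got the same Ceph release as the rest of the cluster (jewel here). This quick check is an addition to the original post:
# Verify the installed release on the new OSD node
ceph --version
# Should report a 10.2.x (jewel) version string, matching the existing nodes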
2. Operations on the admin (ceph-deploy) node
# ssh-copy-id hz01-dev-ops-wanl-01                    (push the admin node's SSH key to the new node)
# cd /my-cluster                                      (ceph-deploy must be run from the directory that holds ceph.conf)
# ceph-deploy disk list hz01-dev-ops-wanl-01          (list the disks on the new node)
# ceph-deploy disk zap hz01-dev-ops-wanl-01:vdb       (wipe the new data disk)
# ceph-deploy osd prepare hz01-dev-ops-wanl-01:vdb    (partition the disk and prepare the OSD)
# ceph-deploy osd activate hz01-dev-ops-wanl-01:vdb1  (activate the data partition created by prepare)
# ceph-deploy admin hz01-dev-ops-wanl-01              (push ceph.conf and the admin keyring to the new node)
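The admin keyring pushed by ceph-deploy admin is only readable by root by default. The following step comes from the upstream ceph-deploy quick start rather than the original post, and is only needed if ceph commands on the new node complain about the keyring:
# On the new node, make the admin keyring readable so "ceph -s" works there
ssh hz01-dev-ops-wanl-01 "chmod +r /etc/ceph/ceph.client.admin.keyring"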
2.1 Check the cluster status with ceph -s:
    cluster e2ca994a-00c4-477f-9390-ea3f931c5062
     health HEALTH_OK
     monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:6789/0,hz-01-ops-tc-ceph-04=172.16.2.181:6789/0}
            election epoch 14, quorum 0,1,2 hz-01-ops-tc-ceph-03,hz-01-ops-tc-ceph-04,hz-01-ops-tc-ceph-02
     osdmap e45: 5 osds: 5 up, 5 in
            flags sortbitwise,require_jewel_osds
      pgmap v688: 64 pgs, 1 pools, 0 bytes data, 0 objects
            170 MB used, 224 GB / 224 GB avail
                  64 active+clean
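Besides the full ceph -s output, a couple of narrower standard commands (not in the original post) confirm the same information and are easier to script:
# Just the health line
ceph health
# OSD map summary; should now show 5 osds: 5 up, 5 in
ceph osd stat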
2.2 Check the OSD layout with ceph osd tree:
ID WEIGHT  TYPE NAME                          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.21950 root default
-2 0.04390     host hz-01-ops-tc-ceph-01
 0 0.04390         osd.0                           up  1.00000          1.00000
-3 0.04390     host hz-01-ops-tc-ceph-02
 1 0.04390         osd.1                           up  1.00000          1.00000
-4 0.04390     host hz-01-ops-tc-ceph-03
 2 0.04390         osd.2                           up  1.00000          1.00000
-5 0.04390     host hz-01-ops-tc-ceph-04
 3 0.04390         osd.3                           up  1.00000          1.00000
-6 0.04390     host hz01-dev-ops-wanl-01
 4 0.04390         osd.4                           up  1.00000          1.00000
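osd.4 on the new host is up and in, so the cluster will start rebalancing placement groups onto it. Two standard commands (additions to the original post) are handy for watching that process:
# Per-OSD utilization, including the new osd.4
ceph osd df
# Follow cluster events live while data rebalances (Ctrl-C to stop)
ceph -w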
Original article: http://blog.51cto.com/molewan/2061112