In this example the Ceph Luminous (L) cluster uses FileStore, not BlueStore.
1. View the device classes; there is only one class, hdd. Luminous added a new property to every OSD: the device class. By default, an OSD automatically sets its device class to hdd, ssd, or nvme based on the hardware properties exposed by the Linux kernel, unless a class has already been set. These classes appear in the CLASS column of ceph osd tree. (The lab environment has no SSD disks; in production, SSDs are recognized directly and an ssd class is created automatically, so steps 2 through 4 are unnecessary.) Cluster topology before the change:
[root@ceph1 ceph-install]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
 -1       0.76163 root default
 -9       0.25388     rack rack01
 -3       0.25388         host ceph1
  0   hdd 0.07809             osd.0      up  1.00000 1.00000
  1   hdd 0.07809             osd.1      up  1.00000 1.00000
  6   hdd 0.09769             osd.6      up  1.00000 1.00000
-10       0.25388     rack rack02
 -5       0.25388         host ceph2
  2   hdd 0.07809             osd.2      up  1.00000 1.00000
  3   hdd 0.07809             osd.3      up  1.00000 1.00000
  7   hdd 0.09769             osd.7      up  1.00000 1.00000
-11       0.25388     rack rack03
 -7       0.25388         host ceph3
  4   hdd 0.07809             osd.4      up  1.00000 1.00000
  5   hdd 0.07809             osd.5      up  1.00000 1.00000
  8   hdd 0.09769             osd.8      up  1.00000 1.00000
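The auto-detected class comes from the kernel's rotational flag for the backing device. A minimal sketch for seeing what Ceph detected for a given OSD (run from any node with the admin keyring; the grep fields are standard OSD metadata keys):

# Show the metadata Ceph gathered for osd.6; "rotational": "1" means the
# device was detected as spinning, so the auto-assigned class is hdd.
ceph osd metadata 6 | grep -E '"rotational"|"osd_objectstore"'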
2. Remove osd.6, osd.7, and osd.8 from the hdd class
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.6
done removing class of osd(s): 6
[root@ceph1 ceph-install]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
 -1       0.76163 root default
 -9       0.25388     rack rack01
 -3       0.25388         host ceph1
  6       0.09769             osd.6      up  1.00000 1.00000
  0   hdd 0.07809             osd.0      up  1.00000 1.00000
  1   hdd 0.07809             osd.1      up  1.00000 1.00000
-10       0.25388     rack rack02
 -5       0.25388         host ceph2
  2   hdd 0.07809             osd.2      up  1.00000 1.00000
  3   hdd 0.07809             osd.3      up  1.00000 1.00000
  7   hdd 0.09769             osd.7      up  1.00000 1.00000
-11       0.25388     rack rack03
 -7       0.25388         host ceph3
  4   hdd 0.07809             osd.4      up  1.00000 1.00000
  5   hdd 0.07809             osd.5      up  1.00000 1.00000
  8   hdd 0.09769             osd.8      up  1.00000 1.00000
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.7
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.8
3. Add osd.6, osd.7, and osd.8 to the ssd class
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.6
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.7
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.8
[root@ceph1 ceph]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
 -1       0.76163 root default
 -9       0.25388     rack rack01
 -3       0.25388         host ceph1
  0   hdd 0.07809             osd.0      up  1.00000 1.00000
  1   hdd 0.07809             osd.1      up  1.00000 1.00000
  6   ssd 0.09769             osd.6      up  1.00000 1.00000
-10       0.25388     rack rack02
 -5       0.25388         host ceph2
  2   hdd 0.07809             osd.2      up  1.00000 1.00000
  3   hdd 0.07809             osd.3      up  1.00000 1.00000
  7   ssd 0.09769             osd.7      up  1.00000 1.00000
-11       0.25388     rack rack03
 -7       0.25388         host ceph3
  4   hdd 0.07809             osd.4      up  1.00000 1.00000
  5   hdd 0.07809             osd.5      up  1.00000 1.00000
  8   ssd 0.09769             osd.8      up  1.00000 1.00000
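As a side note, both subcommands appear to accept a list of OSDs (the plural "osd(s)" in the output above hints at this), so steps 2 and 3 can likely be collapsed. A sketch, equivalent to the individual commands:

# Strip the old class, then assign ssd, for all three OSDs at once;
# a class must be removed before a different one can be set.
ceph osd crush rm-device-class osd.6 osd.7 osd.8
ceph osd crush set-device-class ssd osd.6 osd.7 osd.8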
4. View the device classes again; there are now two classes
[root@ceph1 ceph]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]
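Behind the scenes, CRUSH maintains a "shadow" hierarchy per device class, which is what lets a rule target one class without disturbing the main tree. An optional check (this flag should be available on Luminous):

# List the CRUSH tree including per-class shadow roots such as
# default~hdd and default~ssd; the rule created in step 5 draws
# from default~ssd only.
ceph osd crush tree --show-shadow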
5. Create an ssd CRUSH rule
[root@ceph1 ceph]# ceph osd crush rule create-replicated rule-ssd default host ssd
[root@ceph1 ceph]# ceph osd crush rule ls
replicated_rule
rule-ssd
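The positional arguments are <rule-name> <root> <failure-domain-type> <device-class>: rule-ssd replicates across distinct hosts under the default root, restricted to OSDs of class ssd. To inspect the compiled rule (a verification step, not in the original flow):

# The "take" step should show "item_name": "default~ssd", confirming the
# rule is pinned to the ssd shadow root with host as the failure domain.
ceph osd crush rule dump rule-ssd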
6. Create a storage pool that uses the rule-ssd rule:
[root@ceph1 ceph]# ceph osd pool create ssdpool 64 64 rule-ssd
View the pool:
[root@ceph1 ceph]# ceph osd pool ls detail | grep ssdpool
pool 15 'ssdpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 316 flags hashpspool stripe_width 0
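To double-check that placement really lands on the ssd OSDs, CRUSH can be asked where a hypothetical object would go (test-obj is an arbitrary name, not an existing object):

# Compute the PG and acting set for a made-up object name; every OSD
# listed should be one of the ssd OSDs (6, 7, or 8 in this cluster).
ceph osd map ssdpool test-obj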
Update the client.cinder permissions
[root@ceph1 ceph]# ceph auth caps client.cinder mon 'allow r' osd 'allow rwx pool=ssdpool,allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
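Before dumping everything, a targeted check that the updated caps took effect:

# Print just client.cinder's key and caps
ceph auth get client.cinder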
View all the auth entries
[root@ceph1 ceph]# ceph auth list
installed auth entries:

mds.ceph1
        key: AQDvL21d035tKhAAg6jY/iSoo511H+Psbp8xTw==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
osd.0
        key: AQBzKm1dmT3FNhAAmsEpJv9I6CkYmD2Kfk3Wrw==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQCxKm1dfLZdIBAAVD/B9RdlTr3ZW7d39PuZ4g==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQCKK21dKPAbFhAA8yQ8v3/+kII5gAsNga/M+w==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.3
        key: AQCtK21dHMZiBBAAoz7thWgs4sFHgPBTkd4pGw==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.4
        key: AQDEK21dKL4XFhAAsx39rOmszOtVHfx/W/UMQQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.5
        key: AQDZK21duaoQBBAAB1Vu1c3L8JNGj6heq6p2yw==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.6
        key: AQAqG7Nd1dvbGxAA/H2w7FAVSWI2wSaU2TSCOw==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.7
        key: AQCnIrRdAJHSFRAA+oDUal2jQR5Z3OxlB2UjZw==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.8
        key: AQC8IrRdJb8ZMhAAm1SSjGFhl2PuwwpGaIdouQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQC6mmJdfBzyHhAAE1GazlHqH2uD35vpL6Do1w==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQC7mmJdCG1wJBAAVmRYWiDqFSRCHVQhEUdGqQ==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQC8mmJdVUCSIhAA8foLa1zmMmzNyBAkregvBw==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQC9mmJd+n5JIxAAYpyAJRVbRnZBJBdpSPCAAA==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
        key: AQC+mmJdC+mxIBAAVVDJiKRyS+4vdX2r8nMOLA==
        caps: [mon] allow profile bootstrap-rgw
client.cinder
        key: AQDOdW5do2jzEhAA/v/VYEBHOUk440mpP6GMBg==
        caps: [mon] allow r
        caps: [osd] allow rwx pool=ssdpool,allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images
client.glance
        key: AQAVdm5dojfsLxAAAtt+eX7psQC7pXpisqsvBg==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
mgr.ceph1
        key: AQAjMG1deO05IxAALhbrB66XWKVCjWXraUwL0w==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.ceph2
        key: AQAkMG1dhl5COBAALHSHl0MXA5xvrQCCXzBR0g==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.ceph3
        key: AQAmMG1dJ1fJFBAAF0is+UiuKZjwGRkBWg6W4A==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
7. Add the new backend to the OpenStack cinder-volume configuration and create a volume
Add the following to /etc/cinder/cinder.conf so that Cinder drives two Ceph pools, one backed by hdd (volumes) and one by ssd (ssdpool):
[DEFAULT]
enabled_backends = lvm,ceph,ssd

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = fcb30733-4a1a-4635-ba07-9d89cf54a530
volume_backend_name=ceph

[ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ssdpool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = fcb30733-4a1a-4635-ba07-9d89cf54a530
volume_backend_name=ssd
Restart the cinder-volume service
systemctl restart openstack-cinder-volume.service
Create a new Cinder volume type
cinder type-create ssd
cinder type-key ssd set volume_backend_name=ssd
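To verify the type is wired to the right backend, its extra specs should show volume_backend_name=ssd, matching the [ssd] section of cinder.conf. Either command below works as a quick check:

# Show the extra specs attached to the ssd volume type
cinder extra-specs-list
openstack volume type show ssd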
Check whether the cinder-volume backends started successfully (each backend registers as host@backend):
[root@controller cinder]# openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary           | Host            | Zone | Status  | State | Updated At                 |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller      | nova | enabled | up    | 2019-10-26T15:16:16.000000 |
| cinder-volume    | controller@...  | nova | enabled | down  | 2019-03-03T09:20:58.000000 |
| cinder-volume    | controller@lvm  | nova | enabled | up    | 2019-10-26T15:16:19.000000 |
| cinder-volume    | controller@ceph | nova | enabled | up    | 2019-10-26T15:16:19.000000 |
| cinder-volume    | controller@ssd  | nova | enabled | up    | 2019-10-26T15:16:14.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+
Create a volume
[root@controller cinder]# openstack volume create --type ssd --size 1 disk20191026
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-10-26T15:17:46.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | ecff02cc-7d5c-42cc-986e-06e9552426db |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | disk20191026                         |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | ssd                                  |
| updated_at          | None                                 |
| user_id             | f8b392b9ca95447c91913007d05ccc4f     |
+---------------------+--------------------------------------+
[root@controller cinder]# openstack volume list | grep disk20191026
| ecff02cc-7d5c-42cc-986e-06e9552426db | disk20191026 | available |    1 |             |
On the Ceph side, check that the volume was created in ssdpool:
[root@ceph1 ceph]# rbd -p ssdpool ls
volume-ecff02cc-7d5c-42cc-986e-06e9552426db
The RBD image name embeds the Cinder volume UUID shown above: each image is named volume-<volume id>.
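That correspondence can be checked in one step. A sketch that assumes the openstack CLI and the Ceph admin keyring live on the same node (in this lab they sit on different hosts):

# Derive the RBD image name from the Cinder volume id and inspect it
rbd info ssdpool/volume-$(openstack volume show disk20191026 -f value -c id)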
Notes:
After modifying the Ceph configuration, and when creating new OSDs, the following commands come in handy:
ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3
ceph-deploy osd create ceph1 --data /dev/sde --journal /dev/sdf1
The ceph.conf used in this example:
[root@ceph1 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbab2f3-f90c-439d-86d7-9c0f3603303c
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 172.16.3.61,172.16.3.62,172.16.3.63
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 10
mon clock drift warn backoff = 30
osd pool default pg num = 64
osd pool default pgp num = 64
osd_crush_update_on_start = false
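Note that osd_crush_update_on_start = false stops restarted OSDs from re-declaring their CRUSH location, which protects the hand-built rack hierarchy. Device classes have an analogous startup option; if a reassigned class ever reverts to hdd after an OSD restart, disabling it should help (based on the upstream option, not something exercised in this example):

[osd]
# Stop OSDs from re-detecting and overwriting their device class at boot
osd_class_update_on_start = false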
Original article: https://www.cnblogs.com/cloud-datacenter/p/12231275.html