Ceph SSD/HDD separation and using the pools from OpenStack

In this example the Ceph Luminous (L) cluster uses FileStore rather than BlueStore.

1. Check the device classes; there is only hdd. Luminous added a new property to every OSD: the device class. By default, an OSD automatically sets its device class to hdd, ssd, or nvme (if one has not been set already) based on the hardware properties exposed by the Linux kernel. The device classes are listed by ceph osd tree. (This lab environment has no SSD disks; in a production environment with real SSDs the ssd class is detected and created automatically, so steps 2 through 4 are not needed.) Cluster topology before the change:


[root@ceph1 ceph-install]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       0.76163 root default
 -9       0.25388     rack rack01
 -3       0.25388         host ceph1
  0   hdd 0.07809             osd.0       up  1.00000 1.00000
  1   hdd 0.07809             osd.1       up  1.00000 1.00000
  6   hdd 0.09769             osd.6       up  1.00000 1.00000
-10       0.25388     rack rack02
 -5       0.25388         host ceph2
  2   hdd 0.07809             osd.2       up  1.00000 1.00000
  3   hdd 0.07809             osd.3       up  1.00000 1.00000
  7   hdd 0.09769             osd.7       up  1.00000 1.00000
-11       0.25388     rack rack03
 -7       0.25388         host ceph3
  4   hdd 0.07809             osd.4       up  1.00000 1.00000
  5   hdd 0.07809             osd.5       up  1.00000 1.00000
  8   hdd 0.09769             osd.8       up  1.00000 1.00000
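Only the hdd class shows up in the CLASS column. A quicker cross-check is the class listing used again in step 4; before the change it is expected to contain nothing but hdd:

# expected to return just ["hdd"] at this point
ceph osd crush class ls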

2. Remove osd.6, osd.7, and osd.8 from the hdd class


[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.6
done removing class of osd(s): 6
[root@ceph1 ceph-install]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       0.76163 root default
 -9       0.25388     rack rack01
 -3       0.25388         host ceph1
  6       0.09769             osd.6       up  1.00000 1.00000
  0   hdd 0.07809             osd.0       up  1.00000 1.00000
  1   hdd 0.07809             osd.1       up  1.00000 1.00000
-10       0.25388     rack rack02
 -5       0.25388         host ceph2
  2   hdd 0.07809             osd.2       up  1.00000 1.00000
  3   hdd 0.07809             osd.3       up  1.00000 1.00000
  7   hdd 0.09769             osd.7       up  1.00000 1.00000
-11       0.25388     rack rack03
 -7       0.25388         host ceph3
  4   hdd 0.07809             osd.4       up  1.00000 1.00000
  5   hdd 0.07809             osd.5       up  1.00000 1.00000
  8   hdd 0.09769             osd.8       up  1.00000 1.00000
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.7
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.8
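The plural in the confirmation message ("osd(s)") suggests both class commands accept several OSDs at once; a minimal sketch of doing steps 2 and 3 in a single pass each (not how the original post ran it):

# strip the current class from all three OSDs, then tag them as ssd
ceph osd crush rm-device-class osd.6 osd.7 osd.8
ceph osd crush set-device-class ssd osd.6 osd.7 osd.8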

3. Assign osd.6, osd.7, and osd.8 to the ssd class


[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.6
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.7
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.8
[root@ceph1 ceph]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       0.76163 root default
 -9       0.25388     rack rack01
 -3       0.25388         host ceph1
  0   hdd 0.07809             osd.0       up  1.00000 1.00000
  1   hdd 0.07809             osd.1       up  1.00000 1.00000
  6   ssd 0.09769             osd.6       up  1.00000 1.00000
-10       0.25388     rack rack02
 -5       0.25388         host ceph2
  2   hdd 0.07809             osd.2       up  1.00000 1.00000
  3   hdd 0.07809             osd.3       up  1.00000 1.00000
  7   ssd 0.09769             osd.7       up  1.00000 1.00000
-11       0.25388     rack rack03
 -7       0.25388         host ceph3
  4   hdd 0.07809             osd.4       up  1.00000 1.00000
  5   hdd 0.07809             osd.5       up  1.00000 1.00000
  8   ssd 0.09769             osd.8       up  1.00000 1.00000
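Luminous can also list the members of a class directly, which is a handy check that exactly osd.6, osd.7 and osd.8 ended up in ssd:

# should list OSD ids 6, 7 and 8
ceph osd crush class ls-osd ssd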

4. Check the device classes again; there are now two classes


[root@ceph1 ceph]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]

5. Create an ssd CRUSH rule


[root@ceph1 ceph]# ceph osd crush rule create-replicated rule-ssd default host ssd
[root@ceph1 ceph]# ceph osd crush rule ls
replicated_rule
rule-ssd
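To confirm that rule-ssd really restricts placement to the ssd device class, the rule can be dumped; in Luminous the take step of a class-aware rule references the class-specific shadow root, which should appear here as default~ssd:

ceph osd crush rule dump rule-ssd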

6. Create a storage pool that uses the rule-ssd rule:


[root@ceph1 ceph]# ceph osd pool create ssdpool 64 64 rule-ssd

Check the pool:

[root@ceph1 ceph]# ceph osd pool ls detail | grep ssdpool
pool 15 'ssdpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 316 flags hashpspool stripe_width 0
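crush_rule 1 in the pool detail is the id of rule-ssd (replicated_rule is id 0). Two additional checks, assuming the standard Luminous CLI, that the pool's PGs map only onto osd.6, osd.7 and osd.8:

# should print: crush_rule: rule-ssd
ceph osd pool get ssdpool crush_rule
# the ACTING sets should contain only OSDs 6, 7 and 8
ceph pg ls-by-pool ssdpool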

Update the client.cinder permissions:

[root@ceph1 ceph]# ceph auth caps client.cinder mon 'allow r' osd 'allow rwx pool=ssdpool,allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
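ceph auth caps changes only the capabilities, not the key, so nothing has to be re-copied to the OpenStack nodes. A narrower check than the full listing below is to query this one client:

# prints the key and the updated caps for client.cinder only
ceph auth get client.cinder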

Check the auth entries (full listing):


[root@ceph1 ceph]# ceph auth list
installed auth entries:

mds.ceph1
    key: AQDvL21d035tKhAAg6jY/iSoo511H+Psbp8xTw==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: AQBzKm1dmT3FNhAAmsEpJv9I6CkYmD2Kfk3Wrw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQCxKm1dfLZdIBAAVD/B9RdlTr3ZW7d39PuZ4g==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQCKK21dKPAbFhAA8yQ8v3/+kII5gAsNga/M+w==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.3
    key: AQCtK21dHMZiBBAAoz7thWgs4sFHgPBTkd4pGw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.4
    key: AQDEK21dKL4XFhAAsx39rOmszOtVHfx/W/UMQQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.5
    key: AQDZK21duaoQBBAAB1Vu1c3L8JNGj6heq6p2yw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.6
    key: AQAqG7Nd1dvbGxAA/H2w7FAVSWI2wSaU2TSCOw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.7
    key: AQCnIrRdAJHSFRAA+oDUal2jQR5Z3OxlB2UjZw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.8
    key: AQC8IrRdJb8ZMhAAm1SSjGFhl2PuwwpGaIdouQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQC6mmJdfBzyHhAAE1GazlHqH2uD35vpL6Do1w==
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQC7mmJdCG1wJBAAVmRYWiDqFSRCHVQhEUdGqQ==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
    key: AQC8mmJdVUCSIhAA8foLa1zmMmzNyBAkregvBw==
    caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
    key: AQC9mmJd+n5JIxAAYpyAJRVbRnZBJBdpSPCAAA==
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
    key: AQC+mmJdC+mxIBAAVVDJiKRyS+4vdX2r8nMOLA==
    caps: [mon] allow profile bootstrap-rgw
client.cinder
    key: AQDOdW5do2jzEhAA/v/VYEBHOUk440mpP6GMBg==
    caps: [mon] allow r
    caps: [osd] allow rwx pool=ssdpool,allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images
client.glance
    key: AQAVdm5dojfsLxAAAtt+eX7psQC7pXpisqsvBg==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
mgr.ceph1
    key: AQAjMG1deO05IxAALhbrB66XWKVCjWXraUwL0w==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.ceph2
    key: AQAkMG1dhl5COBAALHSHl0MXA5xvrQCCXzBR0g==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.ceph3
    key: AQAmMG1dJ1fJFBAAF0is+UiuKZjwGRkBWg6W4A==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *

7. Add the new backend to the OpenStack cinder-volume configuration and create a volume

Add the following to /etc/cinder/cinder.conf so that Cinder uses two Ceph pools, one backed by HDDs (volumes) and one by SSDs (ssdpool):


[DEFAULT]
enabled_backends = lvm,ceph,ssd

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = fcb30733-4a1a-4635-ba07-9d89cf54a530
volume_backend_name = ceph

[ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ssdpool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = fcb30733-4a1a-4635-ba07-9d89cf54a530
volume_backend_name = ssd
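The rbd_secret_uuid has to match a libvirt secret that holds the client.cinder key on every nova-compute node. A minimal sketch of defining it (the secret.xml file name and description are assumptions, not taken from the original post):

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>fcb30733-4a1a-4635-ba07-9d89cf54a530</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret fcb30733-4a1a-4635-ba07-9d89cf54a530 --base64 $(ceph auth get-key client.cinder)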

Restart the cinder-volume service:


systemctl restart openstack-cinder-volume.service

Create a new Cinder volume type and map it to the backend:


cinder type-create ssd

cinder type-key ssd set volume_backend_name=ssd
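The same type and mapping can be created and verified with the unified openstack client, a hedged equivalent of the two cinder commands above:

openstack volume type create ssd
openstack volume type set --property volume_backend_name=ssd ssd
# the properties field should show volume_backend_name='ssd'
openstack volume type show ssd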

Check whether the cinder-volume backends are up:


[root@controller cinder]# openstack volume service list
+------------------+---------------------+------+---------+-------+----------------------------+
| Binary           | Host                | Zone | Status  | State | Updated At                 |
+------------------+---------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller          | nova | enabled | up    | 2019-10-26T15:16:16.000000 |
| cinder-volume    | [email protected]   | nova | enabled | down  | 2019-03-03T09:20:58.000000 |
| cinder-volume    | [email protected]   | nova | enabled | up    | 2019-10-26T15:16:19.000000 |
| cinder-volume    | [email protected]   | nova | enabled | up    | 2019-10-26T15:16:19.000000 |
| cinder-volume    | [email protected]   | nova | enabled | up    | 2019-10-26T15:16:14.000000 |
+------------------+---------------------+------+---------+-------+----------------------------+

Create a volume with the new ssd type:


[root@controller cinder]# openstack volume create --type ssd --size 1 disk20191026
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-10-26T15:17:46.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | ecff02cc-7d5c-42cc-986e-06e9552426db |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | disk20191026                         |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | ssd                                  |
| updated_at          | None                                 |
| user_id             | f8b392b9ca95447c91913007d05ccc4f     |
+---------------------+--------------------------------------+
[root@controller cinder]# openstack volume list | grep disk20191026
| ecff02cc-7d5c-42cc-986e-06e9552426db | disk20191026 | available | 1 | |

On the Ceph side, check that the volume was created in ssdpool:


[root@ceph1 ceph]# rbd -p ssdpool ls
volume-ecff02cc-7d5c-42cc-986e-06e9552426db

The UUID in the RBD image name matches the ID of the volume created above, so the volume was indeed placed in ssdpool.
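rbd info shows the size, object order and enabled features of the image, which is a further sanity check against the 1 GB volume created above:

rbd info ssdpool/volume-ecff02cc-7d5c-42cc-986e-06e9552426db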

Notes:

When pushing a modified Ceph configuration and creating new OSDs, the following commands are used:


ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3

ceph-deploy osd create ceph1 --data /dev/sde --journal /dev/sdf1

The ceph.conf used in this example is as follows:


[root@ceph1 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbab2f3-f90c-439d-86d7-9c0f3603303c
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 172.16.3.61,172.16.3.62,172.16.3.63
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 10
mon clock drift warn backoff = 30
osd pool default pg num = 64
osd pool default pgp num = 64
osd_crush_update_on_start = false
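Because osd_crush_update_on_start = false, an OSD created later (for example with the ceph-deploy command above) will not insert itself into the CRUSH tree on startup; it has to be placed and classified by hand. A minimal sketch, with osd.9 and its weight as hypothetical values:

# place the new OSD under an existing host bucket with the chosen weight
ceph osd crush add osd.9 0.09769 host=ceph1
# tag it with the right device class so rule-ssd can see it
ceph osd crush set-device-class ssd osd.9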

Original article: https://www.cnblogs.com/cloud-datacenter/p/12231275.html
