Integrating OpenStack Kilo with Ceph

Environment for this walkthrough:

OpenStack (Kilo): one controller node and one compute node, with the dashboard installed; virtual machines can be created normally (for the setup process, see the official guide: http://docs.openstack.org/kilo/install-guide/install/yum/content/)

Ceph: three machines in total, two storage nodes plus one deploy machine (for the setup process, see the official docs: http://ceph.com/)

Next, install Cinder on the controller node. Run the following on the controller:

## Create the database and grant privileges

[root@controller ~]# mysql

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 2439

Server version: 5.5.47-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    ->   IDENTIFIED BY 'awcloud';

Query OK, 0 rows affected (0.15 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    ->   IDENTIFIED BY 'awcloud';

Query OK, 0 rows affected (0.01 sec)
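
Optionally, confirm the grants took effect by connecting as the new user:

[root@controller ~]# mysql -ucinder -pawcloud -e "SHOW DATABASES;"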

## Create the user, service, and endpoints

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# openstack user create --password-prompt cinder

[root@controller ~]# openstack role add --project service --user cinder admin

[root@controller ~]# openstack service create --name cinder \

>   --description "OpenStack Block Storage" volume

[root@controller ~]# openstack service create --name cinderv2 \

>   --description "OpenStack Block Storage" volumev2

[root@controller ~]# openstack endpoint create \

>   --publicurl http://controller:8776/v2/%\(tenant_id\)s \

>   --internalurl http://controller:8776/v2/%\(tenant_id\)s \

>   --adminurl http://controller:8776/v2/%\(tenant_id\)s \

>   --region RegionOne \

>   volume

[root@controller ~]# openstack endpoint create \

>   --publicurl http://controller:8776/v2/%\(tenant_id\)s \

>   --internalurl http://controller:8776/v2/%\(tenant_id\)s \

>   --adminurl http://controller:8776/v2/%\(tenant_id\)s \

>   --region RegionOne \

>   volumev2
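
Optionally, verify the catalog entries:

[root@controller ~]# openstack service list

[root@controller ~]# openstack endpoint list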

Install the Cinder packages:

[root@controller ~]# yum install openstack-cinder python-cinderclient python-oslo-db -y

Edit the configuration file:

[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bk

[root@controller ~]# vim /etc/cinder/cinder.conf

[root@controller ~]# egrep -v "^#|^$" /etc/cinder/cinder.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 192.168.8.199

verbose = True

[BRCD_FABRIC_EXAMPLE]

[CISCO_FABRIC_EXAMPLE]

[database]

connection = mysql://cinder:awcloud@controller/cinder

[fc-zone-manager]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = awcloud

[matchmaker_redis]

[matchmaker_ring]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = guest

rabbit_password = guest

[profiler]

[oslo_concurrency]

lock_path = /var/lock/cinder
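
Before starting the services, populate the Block Storage database (the referenced Kilo install guide includes this step):

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder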

Start and enable the services:

[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

### Configure the controller node as a Ceph client

Install the Ceph client packages on both the controller and the compute node:

[root@controller ~]# yum install python-rbd ceph-common -y

[root@compute ~]# yum install python-rbd ceph-common -y

Copy the admin keyring and the Ceph configuration file to the controller node:

[root@node1 ceph]# scp ceph.client.admin.keyring ceph.conf 192.168.8.199:/etc/ceph/

Now check that ceph commands succeed on the controller node:

[root@controller ~]# ceph -s

cluster 3155ed83-9e92-43da-90f1-c7715148f48f

health HEALTH_OK

monmap e1: 1 mons at {node1=192.168.8.35:6789/0}

election epoch 2, quorum 0 node1

osdmap e47: 2 osds: 2 up, 2 in

pgmap v1325: 64 pgs, 1 pools, 0 bytes data, 0 objects

80896 kB used, 389 GB / 389 GB avail

64 active+clean

## Create the pools for Cinder, Nova, and Glance

[root@controller ~]# ceph osd pool create volumes 50

pool 'volumes' created

[root@controller ~]# ceph osd pool create images 50

pool 'images' created

[root@controller ~]# ceph osd pool create backups 50

pool 'backups' created

[root@controller ~]# ceph osd pool create vms 50

pool 'vms' created
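
A placement-group count of 50 is fine for this small two-OSD test cluster; size it appropriately for production. To confirm the pools exist, list them:

[root@controller ~]# ceph osd lspools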

Create the cephx credentials for the Ceph clients:

[root@controller ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

[root@controller ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

[root@controller ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

## Create the per-user keyring files

[root@controller ceph]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring

[client.glance]

key = AQANyXRXb5l7CRAA2yVyM92BIm+U3QDseZGqow==

[root@controller ceph]# chown glance:glance /etc/ceph/ceph.client.glance.keyring

[root@controller ceph]# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring

[client.cinder]

key = AQDkyHRXvOTwARAAbRha/MtmqPcJm0RF9jcrsQ==

[root@controller ceph]# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

[root@controller ceph]# ceph auth get-or-create client.cinder-backup | tee /etc/ceph/ceph.client.cinder-backup.keyring

[client.cinder-backup]

key = AQAVyXRXQDKFBRAAtY9DuiGGRSTBDu0MRckXbA==

[root@controller ceph]# chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

## Copy the /etc/ceph/ceph.client.cinder.keyring credential file to the compute node

[root@controller ceph]# scp /etc/ceph/ceph.client.cinder.keyring compute:/etc/ceph/
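
The compute node also needs the cluster configuration file so that librbd can find the monitors (assuming the same layout as on the controller):

[root@controller ceph]# scp /etc/ceph/ceph.conf compute:/etc/ceph/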

## Define the libvirt secret on the compute node

[root@compute ~]# uuidgen

457eb676-33da-42ec-9a8c-9293d545c337

[root@compute ~]# cat > secret.xml <<EOF

<secret ephemeral='no' private='no'>

<uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>

<usage type='ceph'>

<name>client.cinder secret</name>

</usage>

</secret>

EOF

[root@compute ~]# virsh secret-define --file secret.xml

[root@compute ~]# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
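
The last command assumes a client.cinder.key file on the compute node holding the key for client.cinder; per the Ceph docs, it can be exported from a node that has admin credentials, for example:

[root@controller ~]# ceph auth get-key client.cinder | ssh compute tee client.cinder.key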

### Integrate Glance with Ceph

[root@controller ~]# vi /etc/glance/glance-api.conf

[DEFAULT]

...

default_store=rbd

rbd_store_user=glance

rbd_store_pool=images

show_image_direct_url=True
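
Note: Glance in Kilo reads its store options through the glance_store library, so on a stock Kilo install these settings normally go in a [glance_store] section rather than [DEFAULT]; a minimal equivalent sketch:

[glance_store]

stores = rbd

default_store = rbd

rbd_store_user = glance

rbd_store_pool = images

rbd_store_ceph_conf = /etc/ceph/ceph.conf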

[root@controller ceph]# systemctl restart openstack-glance-api.service

[root@controller ceph]# systemctl restart openstack-glance-registry.service

## Integrate Cinder with Ceph

[root@controller ceph]# vim /etc/cinder/cinder.conf

[DEFAULT]

volume_driver=cinder.volume.drivers.rbd.RBDDriver

rbd_pool=volumes

rbd_ceph_conf=/etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot=false

rbd_max_clone_depth=5

glance_api_version=2

rbd_user=cinder

rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337

[root@controller ceph]# systemctl restart openstack-cinder-api.service

[root@controller ceph]# systemctl restart openstack-cinder-volume.service
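
If openstack-cinder-volume has not been enabled yet, enable and start it first, then check that the services report as up:

[root@controller ceph]# systemctl enable openstack-cinder-volume.service

[root@controller ceph]# systemctl start openstack-cinder-volume.service

[root@controller ceph]# cinder service-list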

## Integrate cinder-backup with Ceph

Add the following, also in /etc/cinder/cinder.conf:

[DEFAULT]

backup_driver=cinder.backup.drivers.ceph

backup_ceph_conf=/etc/ceph/ceph.conf

backup_ceph_user=cinder-backup

backup_ceph_chunk_size=134217728

backup_ceph_pool=backups

backup_ceph_stripe_unit=0

backup_ceph_stripe_count=0

restore_discard_excess_bytes=true

[root@controller ceph]# systemctl restart openstack-cinder-backup.service

Integrate Nova with Ceph

On the compute node, edit /etc/nova/nova.conf. In Kilo these options belong in the [libvirt] section (the old libvirt_-prefixed [DEFAULT] names have been removed):

[root@compute ~]# vim /etc/nova/nova.conf

[libvirt]

images_type=rbd

images_rbd_pool=vms

images_rbd_ceph_conf=/etc/ceph/ceph.conf

rbd_user=cinder

rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337

inject_password=false

inject_key=false

inject_partition=-2

live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

[root@compute ~]# systemctl restart openstack-nova-compute.service

That completes the integration. Now verify it:

Create a virtual machine whose disk is backed by a Cinder volume. If it fails, check the volume log:

[root@controller ~]# tailf /var/log/cinder/volume.log

2016-06-24 03:21:00.458 58907 ERROR oslo_messaging.rpc.dispatcher [req-41df406d-44b9-4e59-b317-faafcdd880c7 9d20f58520ad43658dceda03cf4e266c dce7915317f14e6aacad0b6ef84c4483 - - -] Exception during message handling: [Errno 13] Permission denied: '/var/lock/cinder'

Check whether the directory exists:

[root@controller cinder]# ll /var/lock/cinder

ls: cannot access /var/lock/cinder: No such file or directory

## Create the directory

[root@controller cinder]# mkdir /var/lock/cinder -p

[root@controller cinder]# chown cinder:cinder /var/lock/cinder/
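
After creating the directory, restart the Cinder services so they pick up the now-writable lock path, then retry the volume creation:

[root@controller cinder]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service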

Create an instance, then verify with the rbd command:

[root@controller images]# rbd ls volumes

volume-8a1ff9c3-0dbd-41d7-a46b-ebaa45bc2230

The newly created virtual machine's disk now lives in the Ceph cluster.

References:

http://docs.ceph.com/docs/master/rbd/rbd-openstack/

http://docs.openstack.org/kilo/install-guide/install/yum/content/cinder-install-controller-node.html

