OpenStack (9): Block Storage service (this service is optional)
The Block Storage service (cinder) provides block storage to instances. Storage allocation and consumption are determined by the block storage driver, or by the drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and more.
Typically, the Block Storage API and scheduler services run on the controller node. Depending on the driver in use, the volume service can run on the controller node, a compute node, or a standalone storage node.
For more information, see the Configuration Reference.
Block Storage service overview
Install and configure the controller node
Prerequisites
Create the database and grant privileges
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinderpass';
Source the admin credentials to gain access to admin-only CLI commands
source admin-ocata.sh
Create a cinder user
openstack user create --domain default --password-prompt cinder
Add the admin role to the cinder user
openstack role add --project service --user cinder admin
Create the cinderv2 and cinderv3 service entities:
# The Block Storage service requires two service entities
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the Block Storage service API endpoints
openstack endpoint create --region RegionOne volumev2 public http://192.168.10.233:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://192.168.10.233:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://192.168.10.233:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://192.168.10.233:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://192.168.10.233:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://192.168.10.233:8776/v3/%\(project_id\)s
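The six endpoint-create commands above follow a single pattern; as a minimal sketch, they can be generated in one loop (echoed as a dry run, so it runs without an OpenStack client; drop the echo to execute for real):

```shell
# Print the six "openstack endpoint create" commands shown above.
# Dry run: each command is echoed, not executed.
make_endpoints() {
  for svc in volumev2 volumev3; do
    ver=${svc#volume}            # volumev2 -> v2, volumev3 -> v3
    for iface in public internal admin; do
      echo openstack endpoint create --region RegionOne \
        "$svc" "$iface" "http://192.168.10.233:8776/${ver}/%(project_id)s"
    done
  done
}
make_endpoints
```

The commands above backslash-escape the parentheses (`%\(project_id\)s`) because they are typed unquoted on the command line; here the URL sits inside double quotes, so no escaping is needed.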
Install and configure components
Install the packages
yum install openstack-cinder
Edit /etc/cinder/cinder.conf
In the [database] section, configure database access
[database]
connection = mysql+pymysql://cinder:cinderpass@192.168.10.233/cinder
In the [DEFAULT] section, configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@192.168.10.233
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.10.233:5000
auth_url = http://192.168.10.233:35357
memcached_servers = 192.168.10.233:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
In the [DEFAULT] section, configure my_ip to use the IP address of the management interface on the controller node.
[DEFAULT]
my_ip = 192.168.10.201
In the [oslo_concurrency] section, configure the lock path
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
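Before populating the database it is worth confirming that each key landed in the right section. Below is a small sketch of such a check, run against an inline sample that mirrors the settings above (the helper name conf_has and the sample path are mine, and RABBIT_PASS stands in for the real RabbitMQ password); point it at /etc/cinder/cinder.conf on a real node:

```shell
# conf_has <file> <section> <key>: succeed if the key appears in the section.
conf_has() {
  awk -v s="[$2]" -v k="$3" '
    $0 == s { in_s = 1; next }     # entered the target section
    /^\[/   { in_s = 0 }           # any other section header ends it
    in_s && $1 == k { found = 1 }
    END { exit !found }' "$1"
}

# Inline sample mirroring the controller cinder.conf built above.
cat > /tmp/cinder.conf.sample <<'EOF'
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:RABBIT_PASS@192.168.10.233
[database]
connection = mysql+pymysql://cinder:cinderpass@192.168.10.233/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

conf_has /tmp/cinder.conf.sample database connection && echo "connection OK"
conf_has /tmp/cinder.conf.sample oslo_concurrency lock_path && echo "lock_path OK"
```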
Populate the Block Storage database:
su -s /bin/sh -c "cinder-manage db sync" cinder
Configure the compute node to use Block Storage
Edit /etc/nova/nova.conf and add the following to it:
[cinder]
os_region_name = RegionOne
Finalize installation
Restart the Compute API service
systemctl restart openstack-nova-api.service
Start the Block Storage services and configure them to start at boot
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
Verify on the controller
# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller1 | nova | enabled | up | 2019-09-24T07:54:44.000000 |
+------------------+-------------+------+---------+-------+----------------------------+
Restart the compute node services
systemctl restart libvirtd.service openstack-nova-compute.service
Install and configure a storage node
Prerequisites
Install the supporting utility packages
yum install lvm2
Start the LVM metadata service and configure it to start at boot:
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
Create the LVM physical volume
pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created
Create the LVM volume group
# vgcreate cinder-volumes /dev/sdc
Volume group "cinder-volumes" successfully created
Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems on both the underlying operating system and project volumes. You must therefore reconfigure LVM to scan only the device that contains the cinder-volumes volume group. Edit /etc/lvm/lvm.conf and complete the following steps:
In the devices section, add a filter that accepts the /dev/sdc device and rejects all other devices:
devices {
...
filter = [ "a/sdc/", "r/.*/" ]
}
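LVM evaluates filter rules left to right and the first matching rule wins: "a/sdc/" accepts device paths matching sdc, and "r/.*/" rejects everything else. A toy shell model of just that evaluation (not LVM itself):

```shell
# Toy model of the two-rule filter above: first matching rule decides.
lvm_filter() {
  case "$1" in
    *sdc*) echo accept ;;   # a/sdc/  -> accept anything matching sdc
    *)     echo reject ;;   # r/.*/   -> reject everything else
  esac
}
lvm_filter /dev/sdc    # -> accept
lvm_filter /dev/sda    # -> reject
```

Note that if the operating system disk also sits on LVM, its device needs its own accept rule as well, or the OS volume group disappears from scans.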
Install and configure components
Install the packages
yum install openstack-cinder targetcli python-keystone
Edit /etc/cinder/cinder.conf and complete the following actions:
In the [database] section, configure database access:
[database]
connection = mysql+pymysql://cinder:cinderpass@192.168.10.233/cinder
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@192.168.10.233
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.10.233:5000
auth_url = http://192.168.10.233:35357
memcached_servers = 192.168.10.233:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
my_ip = 192.168.10.254
In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. If the [lvm] section does not exist, create it:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = openstack-lvm
In the [DEFAULT] section, enable the LVM backend:
[DEFAULT]
enabled_backends = lvm
In the [DEFAULT] section, configure the location of the Image service API:
[DEFAULT]
glance_api_servers = http://192.168.10.233:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
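Every name listed in enabled_backends must have a matching configuration section. A quick cross-check, sketched here against an inline sample of the settings above (the /tmp/storage.conf path is just for the demonstration; on a real node run it against /etc/cinder/cinder.conf):

```shell
# Inline sample mirroring the storage-node settings above.
cat > /tmp/storage.conf <<'EOF'
[DEFAULT]
enabled_backends = lvm
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
EOF

# For each backend name, verify a [name] section exists.
backends=$(awk -F' *= *' '$1 == "enabled_backends" { print $2 }' /tmp/storage.conf)
for b in $(echo "$backends" | tr ',' ' '); do
  grep -q "^\[$b\]" /tmp/storage.conf && echo "[$b] present" || echo "[$b] MISSING"
done
```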
Finalize installation
Start the Block Storage volume service and its dependencies and configure them to start at boot:
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
Verify operation
source admin-ocata.sh
openstack volume service list
Create a volume
Before creation
# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 1 0 wz--n- <500.00g 24.76g
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
cinder-volumes-pool cinder-volumes twi-a-tz-- 475.00g 0.00 10.42
After creation
# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 2 0 wz--n- <500.00g 24.76g
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
cinder-volumes-pool cinder-volumes twi-aotz-- 475.00g 0.00 10.42
volume-165b2120-122d-4ef9-a54f-be40b2b868f7 cinder-volumes Vwi-a-tz-- 1.00g cinder-volumes-pool 0.00
Using NFS as backend storage
NFS server
# cat /etc/exports
/data/images *(rw,no_root_squash)
/data/cinderdata *(rw,no_root_squash)
systemctl restart nfs
Edit the main cinder configuration file and add the following options
vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = lvm,nfs
[nfs]
volume_backend_name = openstack-nfs                    # backend name, referenced later when associating a volume type
volume_driver = cinder.volume.drivers.nfs.NfsDriver    # the NFS driver
nfs_shares_config = /etc/cinder/nfs_shares             # file listing the NFS shares to mount
nfs_mount_point_base = $state_path/mnt                 # base directory for NFS mount points
Create the NFS shares file
cat /etc/cinder/nfs_shares
node:/data/cinderdata
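The 32-hex-character directory that will appear under nfs_mount_point_base is, as far as I can tell, the md5 hash of the share string; a sketch of that derivation (the exact string cinder hashes, hostname vs. IP form, may differ, so compare the result against your own node):

```shell
# Compute the md5-based mount directory name for an NFS share string,
# matching how the cinder NFS driver is assumed to name its mount points.
share="192.168.10.254:/data/cinderdata"
echo -n "$share" | md5sum | awk '{print $1}'
```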
Verify the NFS automatic mount
# df -TH | tail -1
192.168.10.254:/data/cinderdata nfs4 246G 1.1G 232G 1% /var/lib/cinder/mnt/7571d2e2e11249e13cf8b9a32562e72f
Verify NFS and LVM
# cinder service-list
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1 | nova | enabled | up    | 2019-09-25T02:32:39.000000 | -               |
| cinder-volume    | storage@lvm | nova | enabled | up    | 2019-09-25T02:32:43.000000 | -               |
| cinder-volume    | storage@nfs | nova | enabled | up    | 2019-09-25T02:32:41.000000 | -               |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
Create volume types and associate them with backends
Create the types
cinder type-create lvm
cinder type-create nfs
Associate the volume types with the backends
cinder type-key lvm set volume_backend_name=openstack-lvm
cinder type-key nfs set volume_backend_name=openstack-nfs
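The association works because the scheduler compares the type's volume_backend_name extra spec against each backend's volume_backend_name with an exact, case-sensitive string match, which is why the names set in cinder.conf must match the type-key values letter for letter. A one-line model of that comparison:

```shell
# Exact string comparison models the scheduler's backend-name match.
backend="openstack-lvm"   # volume_backend_name from [lvm] in cinder.conf
spec="openstack-lvm"      # value set with: cinder type-key lvm set volume_backend_name=...
[ "$backend" = "$spec" ] && echo match || echo "no match"
```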
Use and extend the volume inside the instance
View the new volume
# fdisk -l | grep vdb
Disk /dev/vdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Format it
# mkfs.ext4 /dev/vdb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Mount it and copy in a file
# mount /dev/vdb /mnt/
# df -TH | grep vdb
/dev/vdb ext4 1.1G 2.7M 951M 1% /mnt
cp /etc/issue /mnt/
Extend the volume
First detach the volume (be sure to unmount it inside the instance before detaching)
Increase its size
Reattach the volume
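From the controller, the three steps above can be sketched as follows (the volume name vol1 and server name vm1 are hypothetical placeholders; the commands are echoed as a dry run, so remove the echo to execute against a real cloud):

```shell
# Dry-run sketch of detach -> extend -> reattach for an unmounted volume.
extend_cycle() {  # extend_cycle <volume> <server> <new_size_gb>
  echo openstack server remove volume "$2" "$1"
  echo openstack volume set --size "$3" "$1"
  echo openstack server add volume "$2" "$1"
}
extend_cycle vol1 vm1 2
```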
View the resized volume
# fdisk -l | grep vdb
Disk /dev/vdb: 2147 MB, 2147483648 bytes, 4194304 sectors
Mount it and check
# mount /dev/vdb /mnt
# df -TH | tail -1
/dev/vdb ext4 1.1G 34M 1.1G 4% /mnt
# ll /mnt
total 4
-rw-r--r-- 1 root root 23 Sep 25 03:05 issue
Check the filesystem before growing it
# e2fsck -f /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdb: 12/65536 files (0.0% non-contiguous), 12956/262144 blocks
# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vdb to 524288 (4k) blocks.
The filesystem on /dev/vdb is now 524288 blocks long.
Check the new size
# df -TH | tail -1
/dev/vdb ext4 2.1G 3.2M 2.0G 1% /mnt
Check the file from before
# ll /mnt/
total 20
-rw-r--r-- 1 root root 23 Sep 25 05:11 issue
drwx------ 2 root root 16384 Sep 25 05:11 lost+found
# cat /mnt/issue
\S
Kernel \r on an \
Original post: https://www.cnblogs.com/fina/p/11596082.html