Ceph Block Storage Management

I. Checking Ceph and Cluster Parameters

1. Check the Ceph installation status:

Command: ceph -s or ceph status (both produce the same output)

Example:

root@node1:/home/ceph/ceph-cluster# ceph -s

cluster 2f54214e-b6aa-44a4-910c-52442795b037

health HEALTH_OK

monmap e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

osdmap e56: 5 osds: 5 up, 5 in

pgmap v289: 192 pgs, 3 pools, 70988 kB data, 26 objects

376 MB used, 134 GB / 134 GB avail

192 active+clean

2. Check the cluster health status:

Command: ceph -w

Example:

root@node1:/home/ceph/ceph-cluster# ceph -w

cluster 2f54214e-b6aa-44a4-910c-52442795b037

health HEALTH_OK

monmap e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

osdmap e56: 5 osds: 5 up, 5 in

pgmap v289: 192 pgs, 3 pools, 70988 kB data, 26 objects

376 MB used, 134 GB / 134 GB avail

192 active+clean

2016-09-08 09:40:18.084097 mon.0 [INF] pgmap v323: 192 pgs: 192 active+clean; 8 bytes data, 182 MB used, 134 GB / 134 GB avail

3. Check the Ceph monitor quorum status

Command: ceph quorum_status --format json-pretty

Example:

root@node1:/home/ceph/ceph-cluster# ceph quorum_status --format json-pretty

{ "election_epoch": 1,

"quorum": [

0],

"quorum_names": [

"node1"],

"quorum_leader_name": "node1",

"monmap": { "epoch": 1,

"fsid": "2f54214e-b6aa-44a4-910c-52442795b037",

"modified": "0.000000",

"created": "0.000000",

"mons": [

{ "rank": 0,

"name":"node1",

"addr":"192.168.2.13:6789\/0"}]}}

4. Dump the Ceph monitor map

Command: ceph mon dump

Example:

root@node1:/home/ceph/ceph-cluster# ceph mon dump

dumped monmap epoch 1

epoch 1

fsid 2f54214e-b6aa-44a4-910c-52442795b037

last_changed 0.000000

created 0.000000

0: 192.168.2.13:6789/0 mon.node1

5. Check cluster usage

Command: ceph df

Example:

root@node1:/home/ceph/ceph-cluster# ceph df

GLOBAL:

SIZE     AVAIL     RAW USED     %RAW USED

134G      134G         376M          0.27

POOLS:

NAME         ID     USED      %USED     MAX AVAIL     OBJECTS

data         0           0         0        45882M           0

metadata     1           0         0        45882M           0

rbd          2      70988k      0.05        45882M          26

6. Check Ceph monitor, OSD, and PG (placement group) status

Commands: ceph mon stat, ceph osd stat, ceph pg stat

Example:

root@node1:/home/ceph/ceph-cluster# ceph mon stat

e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

root@node1:/home/ceph/ceph-cluster# ceph osd stat

osdmap e56: 5 osds: 5 up, 5 in

root@node1:/home/ceph/ceph-cluster# ceph pg stat

v289: 192 pgs: 192 active+clean; 70988 kB data, 376 MB used, 134 GB / 134 GB avail

7. List PGs

Command: ceph pg dump

Example:

root@node1:/home/ceph/ceph-cluster# ceph pg dump

dumped all in format plain

version 289

stamp 2016-09-08 08:44:35.249418

last_osdmap_epoch 56

last_pg_scan 1

full_ratio 0.95

nearfull_ratio 0.85

……………

8. List Ceph storage pools

Command: ceph osd lspools

Example:

root@node1:/home/ceph/ceph-cluster# ceph osd lspools

0 data,1 metadata,2 rbd,

9. Check the OSD CRUSH map

Command: ceph osd tree

Example:

root@node1:/home/ceph/ceph-cluster# ceph osd tree

# id      weight   type name     up/down       reweight

-1  0.15       root default

-2  0.06              host node2

0   0.03                     osd.0      up   1

3   0.03                     osd.3      up   1

-3  0.06              host node3

1   0.03                     osd.1      up   1

4   0.03                     osd.4      up   1

-4  0.03              host node1

2   0.03                     osd.2      up   1

10. List the cluster's authentication keys:

Command: ceph auth list

Example:

root@node1:/home/ceph/ceph-cluster# ceph auth list

installed auth entries:

osd.0

key:AQCM089X8OHnIhAAnOnRZMuyHVcXa6cnbU2kCw==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.1

key:AQCU089X0KSCIRAAZ3sAKh+Fb1EYV/ROkBd5mA==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.2

key:AQAb1c9XWIuxEBAA3PredgloaENDaCIppxYTbw==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.3

key:AQBF1c9XuBOpMBAAx8ELjaH0b1qwqKNwM17flA==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.4

key:AQBc1c9X4LXCEBAAcq7UVTayMo/e5LBykmZZKg==

caps: [mon] allow profile osd

caps: [osd] allow *

client.admin

key:AQAd089XMI14FRAAdcm/woybc8fEA6dH38AS6g==

caps: [mds] allow

caps: [mon] allow *

caps: [osd] allow *

client.bootstrap-mds

key:AQAd089X+GahIhAAgC+1MH1v0enAGzKZKUfblg==

caps: [mon] allow profile bootstrap-mds

client.bootstrap-osd

key: AQAd089X8B5wHBAAnrM0MQK3to1iBitDzk+LYA==

caps: [mon] allow profile bootstrap-osd

II. Advanced Block Storage Management

1. Create a block device

Command: rbd create {image-name} --size {megabytes} --pool {pool-name} --image-format 2

Note: --image-format 2 selects image format 2; if the flag is omitted the image defaults to format 1. Snapshot protection is only supported on format 2 images. Format 1 is deprecated and format 2 is what you would normally use; the example below omits the flag and therefore creates a format 1 image, purely for demonstration.

Example:

root@node1:/home/ceph/ceph-cluster# rbd create zhangbo --size 2048 --pool rbd
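
Since snapshot protection and cloning (covered in part III) only work on format 2 images, in practice the image would usually be created with --image-format 2. A minimal sketch, assuming the same rbd pool and a hypothetical image name zhangbo2:

# create a 2048 MB format 2 image in the rbd pool
rbd create zhangbo2 --size 2048 --pool rbd --image-format 2

# verify that the "format" field reports 2
rbd info zhangbo2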

2. List block devices

Command: rbd ls {pool-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd ls rbd

zhangbo

3. Retrieve image information

Command: rbd --image {image-name} info

rbd info {image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd --image zhangbo info

rbd image 'zhangbo':

size 2048 MB in 512 objects

order 22 (4096 kB objects)

block_name_prefix: rb.0.5e56.2ae8944a

format: 1

root@node1:/home/ceph/ceph-cluster# rbd info zhangbo

rbd image 'zhangbo':

size 2048 MB in 512 objects

order 22 (4096 kB objects)

block_name_prefix: rb.0.5e56.2ae8944a

format: 1

4. Resize an image

Command: rbd resize --image {image-name} --size {megabytes}

Example:

root@node1:/home/ceph/ceph-cluster# rbd resize --image zhangbo --size 4096

Resizing image: 100% complete...done.

root@node1:/home/ceph/ceph-cluster# rbd info zhangbo

rbd image 'zhangbo':

size 4096 MB in 1024 objects

order 22 (4096 kB objects)

block_name_prefix: rb.0.5e56.2ae8944a

format: 1

5. Delete a block device

Command: rbd rm {image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd rm zhangbo

Removing image: 100% complete...done.

root@node1:/home/ceph/ceph-cluster# rbd ls

6. Map a block device:

Command: rbd map {image-name} --pool {pool-name} --id {user-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd map zhangbo --pool rbd --id admin

7. Show mapped block devices

Command: rbd showmapped

Example:

root@node1:/home/ceph/ceph-cluster# rbd showmapped

id pool image   snap device

0  rbd  zhangbo -   /dev/rbd0

8. Unmap a block device:

Command: rbd unmap /dev/rbd/{pool-name}/{image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd unmap /dev/rbd/rbd/zhangbo

root@node1:/home/ceph/ceph-cluster# rbd showmapped

9. Format the device:

Command: mkfs.ext4 /dev/rbd0

Example:

root@node1:/home/ceph/ceph-cluster# mkfs.ext4 /dev/rbd0

mke2fs 1.42.9 (4-Feb-2014)

Discarding device blocks: done

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=1024 blocks, Stripe width=1024 blocks

262144 inodes, 1048576 blocks

52428 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1073741824

32 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

10. Mount the device

Command: mount /dev/rbd0 /mnt/{directory-name}

Example:

root@node1:/home/ceph/ceph-cluster# mount /dev/rbd0 /mnt/ceph-zhangbo/

root@node1:/home/ceph/ceph-cluster# df -h

Filesystem      Size  Used Avail Use% Mounted on

udev            989M  4.0K 989M    1% /dev

tmpfs           201M  1.1M 200M    1% /run

/dev/sda5        19G  4.0G  14G   23% /

none            4.0K     0 4.0K    0% /sys/fs/cgroup

none            5.0M     0 5.0M    0% /run/lock

none           1001M   76K 1001M   1% /run/shm

none            100M   32K 100M    1% /run/user

/dev/sda1       9.3G   60M 8.8G    1% /boot

/dev/sda6        19G  67M   18G    1% /home

/dev/sdc1        27G  169M  27G    1% /var/lib/ceph/osd/ceph-2

/dev/rbd0       3.9G  8.0M 3.6G    1% /mnt/ceph-zhangbo

11. Configure automatic mounting at boot (map and mount the RBD device automatically when the system starts)

vim /etc/ceph/rbdmap

{poolname}/{imagename} id=client,keyring=/etc/ceph/ceph.client.keyring

rbd/zhangbo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

vim /etc/fstab

/dev/rbd/rbd/zhangbo /mnt/ceph-zhangbo ext4 defaults,noatime,_netdev 0 0
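
With those two files in place, the rbdmap service shipped with ceph-common is what actually maps the listed images at boot, and the _netdev option makes the mount wait for the network. A minimal sketch, assuming an Ubuntu host where the rbdmap init script is installed (service names and init systems vary between releases):

# map every image listed in /etc/ceph/rbdmap now, and check the result
service rbdmap start     # on systemd hosts: systemctl start rbdmap
rbd showmapped

# make sure the service also runs at boot
update-rc.d rbdmap defaults     # on systemd hosts: systemctl enable rbdmap

# test the new fstab entry without rebooting
mount -a
df -h /mnt/ceph-zhangbo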

12. Expand an image

Command: rbd resize rbd/zhangbo --size 4096

rbd resize --image zhangbo --size 4096

Online filesystem expansion is supported: resize2fs /dev/rbd0
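
Put together, growing a mounted ext4-backed RBD image online might look like the following sketch; the 8192 MB target is an assumed example value, and resize2fs applies only to ext2/3/4 (an XFS filesystem would use xfs_growfs instead):

# grow the RBD image to 8 GB
rbd resize --image zhangbo --size 8192

# grow the ext4 filesystem on the mapped device while it stays mounted
resize2fs /dev/rbd0

# confirm the new capacity
df -h /mnt/ceph-zhangbo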

13. Complete workflow for using a block device (a consolidated sketch follows this list):

1. rbd create zhangbo --size 2048 --pool rbd

2. rbd map zhangbo --pool rbd --id admin

3. mkfs.ext4 /dev/rbd0

4. mount /dev/rbd0 /mnt/ceph-zhangbo/

5. Configure automatic mounting at boot

6. Grow the filesystem online

rbd resize rbd/zhangbo --size 4096

resize2fs /dev/rbd0

7. umount /mnt/ceph-zhangbo

8. rbd unmap /dev/rbd/rbd/zhangbo

9. Remove the automatic-mount entries added earlier

10. rbd rm zhangbo
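
The same end-to-end flow as a single shell sketch; the mount point, sizes, and --id admin are carried over from the examples above, and the boot-time mount configuration (steps 5 and 9) is omitted here:

# create, map, format and mount a 2 GB image
rbd create zhangbo --size 2048 --pool rbd
rbd map zhangbo --pool rbd --id admin
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/ceph-zhangbo
mount /dev/rbd0 /mnt/ceph-zhangbo/

# grow the image, then grow the filesystem online
rbd resize rbd/zhangbo --size 4096
resize2fs /dev/rbd0

# tear everything down again
umount /mnt/ceph-zhangbo
rbd unmap /dev/rbd/rbd/zhangbo
rbd rm zhangbo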

III. Snapshots and Clones

1. Create a snapshot:

Command: rbd --pool {pool-name} snap create --snap {snap-name} {image-name}

rbd snap create {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap create rbd/zhangbo@zhangbo_snap

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME            SIZE

2 zhangbo_snap 1024 MB

2. Roll back to a snapshot

Command: rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}

rbd snap rollback {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap rollback rbd/zhangbo@zhangbo_snap

Rolling back to snapshot: 100% complete...done.

3. Purge snapshots (delete all snapshots of the image)

Command: rbd --pool {pool-name} snap purge {image-name}

rbd snap purge {pool-name}/{image-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

2 zhangbo_snap  1024 MB

3 zhangbo_snap1 1024 MB

4 zhangbo_snap2 1024 MB

5 zhangbo_snap3 1024 MB

root@node1:~# rbd snap purge rbd/zhangbo

Removing all snapshots: 100% complete...done.

root@node1:~# rbd snap ls rbd/zhangbo

root@node1:~#

4. Delete a snapshot (delete a specific snapshot)

Command: rbd snap rm {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

10 zhangbo_snap1 1024 MB

11 zhangbo_snap2 1024 MB

12 zhangbo_snap3 1024 MB

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

10 zhangbo_snap1 1024 MB

12 zhangbo_snap3 1024 MB

5. List snapshots:

Command: rbd --pool {pool-name} snap ls {image-name}

rbd snap ls {pool-name}/{image-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

16 zhangbo_snap1 1024 MB

17 zhangbo_snap2 1024 MB

18 zhangbo_snap3 1024 MB

6. Protect a snapshot:

Command: rbd --pool {pool-name} snap protect --image {image-name} --snap {snapshot-name}

rbd snap protect {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd snap protect rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

rbd: snapshot 'zhangbo_snap2' is protected from removal.

2016-09-08 14:05:03.874498 7f35bddad7c0 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

7. Unprotect a snapshot

Command: rbd --pool {pool-name} snap unprotect --image {image-name} --snap {snapshot-name}

rbd snap unprotect {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd snap unprotect rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

22 zhangbo_snap1 1024 MB

24 zhangbo_snap3 1024 MB

8. Clone a snapshot (the snapshot must be protected before it can be cloned)

Note: a snapshot is read-only, whereas a clone built on a snapshot is readable and writable.

Command: rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}

Example:

root@node1:~# rbd clone rbd/zhangbo@zhangbo_snap2 rbd/zhangbo-snap-clone

root@node1:~# rbd ls

zhangbo

zhangbo-snap-clone

9. Create a layered snapshot and clone (a worked sketch follows these commands)

Commands: rbd create zhangbo --size 1024 --image-format 2

rbd snap create {pool-name}/{image-name}@{snap-name}

rbd snap protect {pool-name}/{image-name}@{snapshot-name}

rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}
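
A worked sketch of the same sequence, reusing the image and snapshot names from the earlier examples (it assumes no image named zhangbo exists yet; the parent must be format 2 and its snapshot must be protected before cloning):

# format 2 parent image, snapshot it, protect the snapshot, then clone it
rbd create zhangbo --size 1024 --image-format 2
rbd snap create rbd/zhangbo@zhangbo_snap2
rbd snap protect rbd/zhangbo@zhangbo_snap2
rbd clone rbd/zhangbo@zhangbo_snap2 rbd/zhangbo-snap-clone

# the clone now appears as a child of the protected snapshot
rbd children rbd/zhangbo@zhangbo_snap2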

10. List the clones (children) of a snapshot:

Command: rbd --pool {pool-name} children --image {image-name} --snap {snap-name}

rbd children {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd children rbd/zhangbo@zhangbo_snap2

rbd/zhangbo-snap-clone
