Functional and Performance Testing of a Jewel-Release Ceph Cluster

References

http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos
https://www.linuxidc.com/Linux/2017-09/146760.htm
http://s3browser.com/
http://docs.ceph.org.cn/man/8/rbd/
https://hub.packtpub.com/working-ceph-block-device/#
https://github.com/s3fs-fuse/s3fs-fuse
https://blog.csdn.net/miaodichiyou/article/details/76050361
http://mathslinux.org/?p=717
http://elf8848.iteye.com/blog/2089055

Test Objectives

Map and mount block storage with rbd and test its performance
Map and mount striped block storage with rbd-nbd and test its performance
Test object-storage reads and writes with S3 Browser
Mount object storage with s3fs
Write through object storage and read through block storage

Part I: Map and mount block storage with rbd and test performance

1. Create images

[[email protected] cluster]# ceph osd pool create test_pool 100
pool 'test_pool' created
[[email protected] cluster]# rados lspools
rbd
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys
default.rgw.buckets.index
default.rgw.buckets.data
test_pool
[[email protected] cluster]# rbd list
[[email protected] cluster]# rbd create test_pool/testimage1 --size 40960
[[email protected] cluster]# rbd create test_pool/testimage2 --size 40960
[[email protected] cluster]# rbd create test_pool/testimage3 --size 40960
[[email protected] cluster]# rbd create test_pool/testimage4 --size 40960
[[email protected] cluster]# rbd list
[[email protected] cluster]# rbd list test_pool
testimage1
testimage2
testimage3
testimage4
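
For reference, --size is interpreted in megabytes by default, so 40960 creates a 40 GB image. A quick way to confirm sizes and formats is a long listing (a sketch, not part of the original transcript):

# long listing shows size, parent, and format for every image in the pool
rbd ls -l test_pool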

2. Map the images

[[email protected] cluster]# rbd map test_pool/testimage1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address
[[email protected] cluster]# dmesg |tail
[113320.926463] rbd: loaded (major 252)
[113320.931044] libceph: mon2 172.20.1.141:6789 session established
[113320.931364] libceph: client4193 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113320.936922] rbd: image testimage1: image uses unsupported features: 0x38
[113339.870548] libceph: mon1 172.20.1.140:6789 session established
[113339.870906] libceph: client4168 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113339.877109] rbd: image testimage1: image uses unsupported features: 0x38
[113381.405453] libceph: mon2 172.20.1.141:6789 session established
[113381.405784] libceph: client4202 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113381.411625] rbd: image testimage1: image uses unsupported features: 0x38

Fix for the error: disable the features the kernel does not support

[[email protected] cluster]# rbd info test_pool/testimage1
rbd image 'testimage1':
size 40960 MB in 10240 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10802ae8944a
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
[[email protected] cluster]# rbd feature disable test_pool/testimage1
rbd: at least one feature name must be specified
[[email protected] cluster]# rbd feature disable test_pool/testimage1 fast-diff
[[email protected] cluster]# rbd feature disable test_pool/testimage1 object-map
[[email protected] cluster]# rbd feature disable test_pool/testimage1 exclusive-lock
[[email protected] cluster]# rbd feature disable test_pool/testimage1 deep-flatten
[[email protected] cluster]# rbd info test_pool/testimage1
rbd image 'testimage1':
size 40960 MB in 10240 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10802ae8944a
format: 2
features: layering
flags:
[[email protected] cluster]# rbd map test_pool/testimage1
/dev/rbd0

Apply the same feature-disable and map steps to testimage2, testimage3, and testimage4; the final mappings are as follows:

[[email protected] cluster]# rbd showmapped
id pool image snap device
0 test_pool testimage1 - /dev/rbd0
1 test_pool testimage2 - /dev/rbd1
2 test_pool testimage3 - /dev/rbd2
3 test_pool testimage4 - /dev/rbd3
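
The one-feature-at-a-time disabling above can be avoided: several features can be disabled in a single command, or an image can be created with only kernel-supported features in the first place. A hedged sketch (the image names here are examples, not from the original run):

# disable several features at once
rbd feature disable test_pool/testimage2 exclusive-lock object-map fast-diff deep-flatten
# or create a new image with only the layering feature, which the 3.10 kernel rbd client supports
rbd create test_pool/testimage_k --size 40960 --image-feature layering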

Note: shrink the images to 10 GB

[[email protected] ceph-disk0]# rbd resize -p test_pool --image testimage1 -s 10240 --allow-shrink
Resizing image: 100% complete...done.
[[email protected] ceph-disk0]# rbd resize -p test_pool --image testimage2 -s 10240 --allow-shrink
Resizing image: 100% complete...done.
[[email protected] ceph-disk0]# rbd resize -p test_pool --image testimage3 -s 10240 --allow-shrink
Resizing image: 100% complete...done.
[[email protected] ceph-disk0]# rbd resize -p test_pool --image testimage4 -s 10240 --allow-shrink
Resizing image: 100% complete...done.

3. Format and mount

[[email protected] ceph-disk0]# mkfs.xfs /dev/rbd0
[[email protected] ceph-disk0]# mkfs.xfs -f /dev/rbd0
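
The mount step itself is not shown in the transcript; based on the df output later in this post, the device was mounted roughly as follows (a sketch):

# create the mount point and mount the freshly formatted RBD device
mkdir -p /mnt/ceph-disk0
mount /dev/rbd0 /mnt/ceph-disk0
df -h /mnt/ceph-disk0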

4. dd test

[[email protected] ceph-disk0]# dd if=/dev/zero of=/mnt/ceph-disk0/file0 count=1000 bs=4M conv=fsync
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 39.1407 s, 107 MB/s
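
The conv=fsync flag makes dd flush to the cluster before reporting, so the figure reflects actual write throughput rather than page-cache speed. A direct-I/O variant that bypasses the page cache entirely could look like this (a sketch, not part of the original run):

# direct I/O avoids the page cache; useful as a cross-check against the conv=fsync number
dd if=/dev/zero of=/mnt/ceph-disk0/file1 count=1000 bs=4M oflag=direct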

Part II: Map and mount striped block storage with rbd-nbd and test performance

1. Create images
According to the official documentation, striping tests require the --stripe-unit and --stripe-count parameters.
The plan is to test block-storage performance with object sizes of 4M and 4K at stripe count 1, and with an object size of 32M at stripe counts of 8 and 16.

[[email protected] ceph-disk0]# rbd create test_pool/testimage5 --size 10240 --stripe-unit 2097152 --stripe-count 16
[[email protected] ceph-disk0]# rbd info test_pool/testimage5
rbd image 'testimage5':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10c52ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 2048 kB
stripe count: 16
[[email protected] ceph-disk0]# rbd create test_pool/testimage6 --size 10240 --stripe-unit 4096 --stripe-count 4
[[email protected] ceph-disk0]# rbd info test_pool/testimage6
rbd image 'testimage6':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10c82ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 4096 bytes
stripe count: 4

[[email protected] ceph-disk0]# rbd create test_pool/testimage7 --size 10240 --object-size 32M --stripe-unit 4194304 --stripe-count 4
[[email protected] ceph-disk0]# rbd info test_pool/testimage7
rbd image 'testimage7':
size 10240 MB in 320 objects
order 25 (32768 kB objects)
block_name_prefix: rbd_data.107e238e1f29
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 4096 kB
stripe count: 4
[[email protected] ceph-disk0]# rbd create test_pool/testimage8 --size 10240 --object-size 32M --stripe-unit 2097152 --stripe-count 16
[[email protected] ceph-disk0]# rbd info test_pool/testimage8
rbd image 'testimage8':
size 10240 MB in 320 objects
order 25 (32768 kB objects)
block_name_prefix: rbd_data.109d2ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 2048 kB
stripe count: 16

[[email protected] ceph-disk0]# rbd create test_pool/testimage11 --size 10240 --object-size 4M
[[email protected] ceph-disk0]# rbd create test_pool/testimage12 --size 10240 --object-size 4K
[[email protected] ceph-disk0]# rbd info test_pool/testimage11
rbd image 'testimage11':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10ac238e1f29
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
[[email protected] ceph-disk0]# rbd info test_pool/testimage12
rbd image 'testimage12':
size 10240 MB in 2621440 objects
order 12 (4096 bytes objects)
block_name_prefix: rbd_data.10962ae8944a
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:

2. Map the images

[[email protected] mnt]# rbd map test_pool/testimage8
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (22) Invalid argument
[[email protected] mnt]# dmesg |tail
[118760.024660] XFS (rbd0): Log I/O Error Detected. Shutting down filesystem
[118760.024710] XFS (rbd0): Please umount the filesystem and rectify the problem(s)
[118760.024766] XFS (rbd0): Unable to update superblock counters. Freespace may not be correct on next mount.
[118858.837102] XFS (rbd0): Mounting V5 Filesystem
[118858.872345] XFS (rbd0): Ending clean mount
[173522.968410] rbd: rbd0: encountered watch error: -107
[176701.031429] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[176827.317008] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[177423.107103] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[177452.820032] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)

3. Troubleshooting: the kernel rbd client does not support this striping layout, so rbd-nbd is needed
rbd-nbd supports all of the newer image features, so none of them need to be disabled before mapping. However, the stock CentOS kernel does not ship the nbd module, so it has to be built from the kernel sources; see https://blog.csdn.net/miaodichiyou/article/details/76050361 for reference.

[[email protected] ~]# wget http://vault.centos.org/7.5.1804/updates/Source/SPackages/kernel-3.10.0-862.2.3.el7.src.rpm
[[email protected] ~]# rpm -ivh kernel-3.10.0-862.2.3.el7.src.rpm
[[email protected] ~]# cd /root/rpmbuild/
[[email protected] rpmbuild]# cd SOURCES/
[[email protected] SOURCES]# tar Jxvf linux-3.10.0-862.2.3.el7.tar.xz -C /usr/src/kernels/
[[email protected] SOURCES]# cd /usr/src/kernels/
[[email protected] kernels]# mv 3.10.0-862.6.3.el7.x86_64 3.10.0-862.6.3.el7.x86_64-old
[[email protected] kernels]# mv linux-3.10.0-862.2.3.el7 3.10.0-862.6.3.el7.x86_64
[[email protected] 3.10.0-862.6.3.el7.x86_64]# cd 3.10.0-862.6.3.el7.x86_64
[[email protected] 3.10.0-862.6.3.el7.x86_64]# mkdir mrproper
[[email protected] 3.10.0-862.6.3.el7.x86_64]# cp ../3.10.0-862.6.3.el7.x86_64-old/Module.symvers ./
[[email protected] 3.10.0-862.6.3.el7.x86_64]# cp /boot/config-3.10.0-862.2.3.el7.x86_64 ./.config
[[email protected] 3.10.0-862.6.3.el7.x86_64]# yum install elfutils-libelf-devel
[[email protected] 3.10.0-862.6.3.el7.x86_64]# make prepare
[[email protected] 3.10.0-862.6.3.el7.x86_64]# make scripts
[[email protected] 3.10.0-862.6.3.el7.x86_64]# make CONFIG_BLK_DEV_NBD=m M=drivers/block
[[email protected] 3.10.0-862.6.3.el7.x86_64]# modinfo nbd
[[email protected] 3.10.0-862.6.3.el7.x86_64]# cp drivers/block/nbd.ko /lib/modules/3.10.0-862.2.3.el7.x86_64/kernel/drivers/block/
[[email protected] 3.10.0-862.6.3.el7.x86_64]# depmod -a
[[email protected] 3.10.0-862.6.3.el7.x86_64]# modprobe nbd
[[email protected] 3.10.0-862.6.3.el7.x86_64]# lsmod |grep nbd
nbd 17554 5
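
By default the nbd module creates a limited number of /dev/nbd* devices; if more mappings are needed, the module can be reloaded with a higher count (a sketch; nbds_max is a standard nbd module parameter):

# reload nbd with 16 device nodes and make the setting persistent
modprobe -r nbd
modprobe nbd nbds_max=16
echo "options nbd nbds_max=16" > /etc/modprobe.d/nbd.conf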

4. Map images with rbd-nbd
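
The transcript below maps testimage17, whose creation is not shown; judging from its rbd info output (8 MB objects, 1024 kB stripe unit, stripe count 8), it was presumably created along these lines (a reconstruction, not from the original log):

# reconstructed creation command for testimage17, inferred from the rbd info output below
rbd create test_pool/testimage17 --size 10240 --object-size 8M --stripe-unit 1048576 --stripe-count 8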

[[email protected] ~]# rbd-nbd map test_pool/testimage17
/dev/nbd0
[[email protected] ~]# rbd info test_pool/testimage17
rbd image 'testimage17':
size 10240 MB in 1280 objects
order 23 (8192 kB objects)
block_name_prefix: rbd_data.112d74b0dc51
format: 2
features: layering, striping
flags:
stripe unit: 1024 kB
stripe count: 8
[[email protected] ~]# mkfs.xfs /dev/nbd0
meta-data=/dev/nbd0 isize=512 agcount=4, agsize=655360 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[[email protected] ~]# mount /dev/nbd0 /mnt/ceph-8M/
[[email protected] ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 100G 3.5G 96G 4% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 12M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 497M 150M 348M 31% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/sdb1 95G 40G 56G 42% /var/lib/ceph/osd/ceph-0
/dev/rbd0 10G 7.9G 2.2G 79% /mnt/ceph-disk0
/dev/rbd1 10G 7.9G 2.2G 79% /mnt/ceph-4M
/dev/nbd0 10G 33M 10G 1% /mnt/ceph-8M

5. dd performance test
Object size 8 MB:

[[email protected] ~]# dd if=/dev/zero of=/mnt/ceph-8M/file0-1 count=800 bs=10M conv=fsync
800+0 records in
800+0 records out
8388608000 bytes (8.4 GB) copied, 50.964 s, 165 MB/s
[[email protected] ~]# dd if=/dev/zero of=/mnt/ceph-8M/file0-1 count=80 bs=100M conv=fsync
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 26.3178 s, 319 MB/s

Object size 32 MB:

[[email protected] ceph-32M]# rbd info test_pool/testimage18
rbd image 'testimage18':
size 40960 MB in 1280 objects
order 25 (32768 kB objects)
block_name_prefix: rbd_data.11052ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 2048 kB
stripe count: 8

[[email protected] ceph-32M]# dd if=/dev/zero of=/mnt/ceph-32M/file0-1 count=2000 bs=10M conv=fsync
2000+0 records in
2000+0 records out
20971520000 bytes (21 GB) copied, 67.4266 s, 311 MB/s
[[email protected] ceph-32M]# dd if=/dev/zero of=/mnt/ceph-32M/file0-1 count=20000 bs=1M conv=fsync
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 61.7757 s, 339 MB/s
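
The runs above only measure writes; sequential reads can be checked the same way, dropping the page cache first so the data actually comes from the cluster (a sketch, not part of the original tests):

# flush the page cache, then read the file back and let dd report throughput
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/ceph-32M/file0-1 of=/dev/null bs=10M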

6. Summary of test methods

Object sizes and stripe counts: 4M (count=1), 4K (count=1), 32M (count=8 and 16).
dd block sizes: 1M and 100M.
The commands for each case are listed below; a scripted version of the whole matrix follows the list.

32M object size (mounted at /mnt/ceph-32M-8 and /mnt/ceph-32M-16):

rbd create test_pool/testimage8 --size 10240 --object-size 32M --stripe-unit 2097152 --stripe-count 16
dd if=/dev/zero of=/mnt/ceph-32M-16/file32M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-32M-16/file32M count=8000 bs=1M conv=fsync

rbd create test_pool/testimage19 --size 10240 --object-size 32M --stripe-unit 4194304 --stripe-count 8
dd if=/dev/zero of=/mnt/ceph-32M-8/file32M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-32M-8/file32M count=8000 bs=1M conv=fsync

4M object size (mounted at /mnt/ceph-4M):

rbd create test_pool/testimage11 --size 10240 --object-size 4M
dd if=/dev/zero of=/mnt/ceph-4M/file4M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-4M/file4M count=8000 bs=1M conv=fsync

4K object size (mounted at /mnt/ceph-4K):

rbd create test_pool/testimage12 --size 10240 --object-size 4K
dd if=/dev/zero of=/mnt/ceph-4K/file4K count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-4K/file4K count=8000 bs=1M conv=fsync
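
The whole matrix above can be wrapped in a small script so each run is executed and timed consistently (a sketch; mount points as listed above, test files removed after each run):

#!/bin/bash
# run the dd write matrix: each mount point with a 1M and a 100M block size (8 GB written per run)
for mp in /mnt/ceph-32M-16 /mnt/ceph-32M-8 /mnt/ceph-4M /mnt/ceph-4K; do
    for bs in 1M 100M; do
        if [ "$bs" = "1M" ]; then cnt=8000; else cnt=80; fi
        echo "== $mp bs=$bs =="
        dd if=/dev/zero of=$mp/ddtest count=$cnt bs=$bs conv=fsync 2>&1 | tail -n1
        rm -f $mp/ddtest
    done
done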

7. Summary of dd test results

8. Random-write testing with fio
Install fio first:

yum install libaio-devel
wget http://brick.kernel.dk/snaps/fio-2.1.10.tar.gz
tar zxf fio-2.1.10.tar.gz
cd fio-2.1.10/
make
make install

32M object size, stripe count 8 (/dev/nbd4):

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=272729KB/s, minb=272729KB/s, maxb=272729KB/s, mint=15379msec, maxt=15379msec
Disk stats (read/write):
nbd4: ios=0/32280, merge=0/0, ticks=0/36624, in_queue=36571, util=97.61%

fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60
Run status group 0 (all jobs):
WRITE: io=4000.0MB, aggrb=326504KB/s, minb=326504KB/s, maxb=326504KB/s, mint=12545msec, maxt=12545msec
Disk stats (read/write):
nbd4: ios=0/31391, merge=0/0, ticks=0/1592756, in_queue=1597878, util=97.04%

32M object size, stripe count 16 (/dev/nbd3):

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd3 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd3 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

4M object size (/dev/rbd1):

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/rbd1 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/rbd1 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

4K object size (/dev/rbd2):

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=400M -filename=/dev/rbd2 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=400M -filename=/dev/rbd2 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60
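
The commands above all use 1 MB and 100 MB blocks at queue depth 1; for small-block IOPS behaviour, a 4 KB random-write run with a deeper queue is a common complement (a sketch, not part of the original test set; device name assumed):

# 4k random writes at iodepth=32 to gauge IOPS rather than throughput
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 4k randwrite test" -iodepth=32 -runtime=60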

9. Summary of fio test results

Part III: Test object-storage reads and writes with S3 Browser

1. Create an object-storage user and access keys

[[email protected] cluster]# radosgw-admin user create --uid=test --display-name="test" --access-key=123456 --secret=123456
[[email protected] cluster]# radosgw-admin user info --uid=test
{
"user_id": "test",
"display_name": "test",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "test",
"access_key": "123456",
"secret_key": "123456"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}
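
Before configuring any client, it is worth confirming that the RGW endpoint answers; the endpoint below is taken from the s3fs mount later in this post (a sketch):

# an anonymous request to the radosgw endpoint should return a ListAllMyBucketsResult XML document
curl http://172.20.1.139:7480
# list the buckets owned by the test user (empty until one is created from a client)
radosgw-admin bucket list --uid=test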

2. Install and configure S3 Browser
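
The original post presumably documented this step with screenshots; the relevant account settings in S3 Browser would be roughly the following (a sketch; "S3 Compatible Storage" is the account type used for a radosgw endpoint):

# S3 Browser -> Accounts -> Add New Account (values from this cluster):
#   Account Type : S3 Compatible Storage
#   REST Endpoint: 172.20.1.139:7480
#   Access Key ID: 123456
#   Secret Key   : 123456
#   Use SSL      : unchecked (the endpoint is plain HTTP on port 7480)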

3. Create a bucket and test upload/download

Part IV: Mount object storage with s3fs and test reads and writes

The goal is to write files through the object-storage interface and read the directory through the rbd interface.
1. Installation and deployment
https://github.com/s3fs-fuse/s3fs-fuse/releases

Install as described in the project README. On CentOS 7:

sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

Then compile from master via the following commands:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install

[[email protected] ~]# wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.83.tar.gz
[[email protected] ~]# ls
[[email protected] ~]# tar zxvf v1.83.tar.gz
[[email protected] s3fs-fuse-1.83]# cd s3fs-fuse-1.83/
[[email protected] s3fs-fuse-1.83]# ls
[[email protected] s3fs-fuse-1.83]# vi README.md
[[email protected] s3fs-fuse-1.83]# yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
[[email protected] s3fs-fuse-1.83]# ./autogen.sh
[[email protected] s3fs-fuse-1.83]# ls
[[email protected] s3fs-fuse-1.83]# ./configure
[[email protected] s3fs-fuse-1.83]# make
[[email protected] s3fs-fuse-1.83]# make install
[[email protected] s3fs-fuse-1.83]# mkdir /mnt/s3
[[email protected] s3fs-fuse-1.83]# vi /root/.passwd-s3fs
[[email protected] s3fs-fuse-1.83]# chmod 600 /root/.passwd-s3fs
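
The password file s3fs reads must contain the access key and secret key separated by a colon; with the user created earlier it would look like this (a sketch):

# /root/.passwd-s3fs -- format is ACCESS_KEY_ID:SECRET_ACCESS_KEY
123456:123456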

2. Mount

[[email protected] ~]# s3fs testbucket /mnt/s3 -o url=http://172.20.1.139:7480 -o umask=0022 -o use_path_request_style
[[email protected] ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 95G 75G 21G 79% /var/lib/ceph/osd/ceph-0
/dev/rbd1 10G 7.9G 2.2G 79% /mnt/ceph-4M
/dev/rbd2 10G 814M 9.2G 8% /mnt/ceph-4K
/dev/nbd3 10G 7.9G 2.2G 79% /mnt/ceph-32M-16
/dev/nbd4 10G 33M 10G 1% /mnt/ceph-32M-8
s3fs 256T 0 256T 0% /mnt/s3
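
To make the mount persistent across reboots, s3fs also supports an fstab entry (a sketch, equivalent to the manual mount above):

# /etc/fstab entry equivalent to the s3fs command above
testbucket /mnt/s3 fuse.s3fs _netdev,url=http://172.20.1.139:7480,umask=0022,use_path_request_style,passwd_file=/root/.passwd-s3fs 0 0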

3. Verify reads and writes

[[email protected] ~]# ls /mnt/s3/images/
kernel-3.10.0-862.2.3.el7.src.rpm nbd.ko test.jpg
[[email protected] ~]# cp /etc/hosts
hosts hosts.allow hosts.deny
[[email protected] ~]# cp /etc/hosts /mnt/s3/images/
[[email protected] ~]# ls /mnt/s3/images/
hosts kernel-3.10.0-862.2.3.el7.src.rpm nbd.ko test.jpg
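
The copied file can also be cross-checked from the RADOS side: objects uploaded through RGW land in the default.rgw.buckets.data pool under the bucket's marker prefix (a sketch):

# the uploaded object appears in the RGW data pool, named with the bucket marker as a prefix
rados -p default.rgw.buckets.data ls | grep hosts
# bucket statistics show object count and total size
radosgw-admin bucket stats --bucket=testbucket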

Original post: http://blog.51cto.com/jerrymin/2139046
