Network configuration
10.0.0.100 cephdeploy
10.0.0.110 cephmon1
10.0.0.120 cephmon2
10.0.0.130 cephosd1
10.0.0.140 cephosd2
10.0.0.150 cephosd3
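These names must resolve on every node; a minimal sketch, assuming you use /etc/hosts rather than DNS:
# run on every node to append the cluster hostnames to /etc/hosts
cat << 'EOF' | sudo tee -a /etc/hosts
10.0.0.100 cephdeploy
10.0.0.110 cephmon1
10.0.0.120 cephmon2
10.0.0.130 cephosd1
10.0.0.140 cephosd2
10.0.0.150 cephosd3
EOF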
Add the release key and the Ceph package repository
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Install ceph-deploy
sudo apt-get update && sudo apt-get install ceph-deploy
Install NTP and OpenSSH
sudo apt-get install ntp
sudo apt-get install openssh-server
Create a deployment user
Note: {username} is your own choice, but do not use "ceph" as the username. It is reserved for running the Ceph daemons, and using it here will make the installation fail.
ssh {user}@{ceph-node}
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}
Make sure the newly created user on every Ceph node has sudo privileges.
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
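A quick sanity check that passwordless sudo works for the new user (not part of the official procedure):
ssh {username}@{ceph-node} 'sudo -n whoami'   # should print "root" without a sudo password prompt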
Enable passwordless SSH login
ssh-keygen
ssh-copy-id {username}@cephmon1
ssh-copy-id {username}@cephmon2
ssh-copy-id {username}@cephosd1
ssh-copy-id {username}@cephosd2
ssh-copy-id {username}@cephosd3
Edit ~/.ssh/config and add the following:
Host cephmon1
    Hostname cephmon1
    User {username}
Host cephmon2
    Hostname cephmon2
    User {username}
Host cephosd1
    Hostname cephosd1
    User {username}
Host cephosd2
    Hostname cephosd2
    User {username}
Host cephosd3
    Hostname cephosd3
    User {username}
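With the keys and config in place, a quick check that every node is reachable without a password:
for h in cephmon1 cephmon2 cephosd1 cephosd2 cephosd3; do
    ssh $h hostname   # should print each hostname without prompting
done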
Create a working directory on the admin node for the files ceph-deploy generates:
mkdir my-cluster
cd my-cluster
If the installation fails, wipe the nodes and start over with:
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
ceph-deploy purge {ceph-node} [{ceph-node}]
Create the cluster
ceph-deploy new cephmon1
Set the replica count to 2 by adding to the [global] section of ceph.conf:
osd pool default size = 2
and the default PG counts:
osd pool default pg num = 128
osd pool default pgp num = 128
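After these edits the [global] section should look roughly like this (the fsid and auth lines are generated by ceph-deploy new; the fsid below is a placeholder):
[global]
fsid = {your-fsid}
mon_initial_members = cephmon1
mon_host = 10.0.0.110
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd pool default pg num = 128
osd pool default pgp num = 128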
Install Ceph on every node
ceph-deploy install cephmon1 cephmon2 cephosd1 cephosd2 cephosd3
Configure the initial monitor(s) and gather all keys:
ceph-deploy mon create-initial
Once this completes, the current directory should contain these keyrings:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
Add OSDs. On each OSD node, format the data disks first (one mkfs.xfs invocation per device):
ssh cephosd1
sudo mkfs.xfs /dev/sdb
sudo mkfs.xfs /dev/sdc
exit
ssh cephosd2
sudo mkfs.xfs /dev/sdb
sudo mkfs.xfs /dev/sdc
exit
ssh cephosd3
sudo mkfs.xfs /dev/sdb
sudo mkfs.xfs /dev/sdc
exit
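The same formatting can be done in one loop from the admin node (a convenience sketch; assumes the passwordless SSH and sudo set up above):
for h in cephosd1 cephosd2 cephosd3; do
    for d in /dev/sdb /dev/sdc; do
        ssh $h "sudo mkfs.xfs -f $d"   # -f overwrites any existing filesystem
    done
done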
Prepare the OSDs
ceph-deploy osd prepare cephosd1:/dev/sdb cephosd1:/dev/sdc \
    cephosd2:/dev/sdb cephosd2:/dev/sdc cephosd3:/dev/sdb cephosd3:/dev/sdc
Activate the OSDs
ceph-deploy osd activate cephosd1:/dev/sdb1 cephosd1:/dev/sdc1 \
    cephosd2:/dev/sdb1 cephosd2:/dev/sdc1 cephosd3:/dev/sdb1 cephosd3:/dev/sdc1
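Alternatively, ceph-deploy osd create combines the prepare and activate steps into one:
ceph-deploy osd create cephosd1:/dev/sdb cephosd1:/dev/sdc \
    cephosd2:/dev/sdb cephosd2:/dev/sdc cephosd3:/dev/sdb cephosd3:/dev/sdc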
Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you can run Ceph CLI commands without having to specify the monitor address and ceph.client.admin.keyring every time:
ceph-deploy admin cephmon1 cephmon2 cephosd1 cephosd2 cephosd3
or, including the admin node and overwriting any existing config:
ceph-deploy --overwrite-conf admin cephdeploy cephmon1 cephmon2 cephosd1 cephosd2 cephosd3
On each node, run:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Check the cluster's health
ceph health
ceph status
ceph osd stat
ceph osd dump
ceph mon dump
ceph quorum_status
ceph mds stat
ceph mds dump
or:
ceph -w
ceph -s
Set the PG count for a pool
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
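Verify the change took effect:
ceph osd pool get rbd pg_num    # pg_num: 128
ceph osd pool get rbd pgp_num   # pgp_num: 128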
Restart Ceph services (Upstart, on Ubuntu)
restart ceph-all
restart ceph-mon-all
restart ceph-osd-all
restart ceph-mon id=<hostname>
restart ceph-osd id=<id>
View the OSDs
ceph osd tree
Mounting an RBD image
Create a pool
rados mkpool cinder
Create a 10 GB image (the --size argument is in MB; 10240 MB = 10 GB)
rbd create cinder --size 10240 -p cinder
Map the image to a block device
sudo rbd map cinder --pool cinder
If the image turns out to be too small, resize it (here to 20 GB); resize2fs then grows an ext filesystem already on the device
rbd resize --size 20480 cinder -p cinder
blockdev --getsize64 /dev/rbd0
resize2fs /dev/rbd0
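resize2fs only applies to ext filesystems like the ext4 used below; if you format the image with XFS instead, grow it while mounted:
sudo xfs_growfs /cinder   # XFS is grown online, by mount point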
Verify
rbd showmapped
rbd --pool cinder ls
Format it
mkfs.ext4 /dev/rbd0
Mount it
mkdir /cinder
mount /dev/rbd0 /cinder
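To re-map the image automatically at boot, an entry can go in /etc/ceph/rbdmap, which the rbdmap init script shipped with the rbd package reads at startup (a sketch; the keyring path assumes the admin keyring distributed earlier):
# /etc/ceph/rbdmap — one image per line: {pool}/{image} id={user},keyring={path}
cinder/cinder id=admin,keyring=/etc/ceph/ceph.client.admin.keyring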
Mounting CephFS (uses the kernel client; no separate Ceph client install is needed)
ceph osd pool create cephfs_data 10
ceph osd pool create cephfs_metadata 10
ceph fs new leadorfs cephfs_metadata cephfs_data
ceph mds stat
Read the secret from ceph.client.admin.keyring
Then mount
mount -t ceph 10.0.0.110:6789:/ /cinder/ -v -o name=admin,secret=AQCdcz1Xykm5FxAAS1o66IMWJas+Uih5ShTijw==
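To keep the key out of the shell history, mount.ceph also accepts a secretfile; the key can be extracted with ceph-authtool (the /etc/ceph/admin.secret path is an assumption):
ceph-authtool /etc/ceph/ceph.client.admin.keyring -p -n client.admin | sudo tee /etc/ceph/admin.secret
sudo mount -t ceph 10.0.0.110:6789:/ /cinder/ -v -o name=admin,secretfile=/etc/ceph/admin.secret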
Add new OSDs
ceph-deploy osd prepare {ceph-node}:{disk}
ceph-deploy osd activate {ceph-node}:{disk-partition}
Add a metadata server
ceph-deploy mds create cephmon1
Add a new monitor
ceph-deploy mon add cephmon2
After adding it, sync the configuration file to the nodes
ceph-deploy --overwrite-conf admin [{ceph-node}]
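After the new monitor has joined, confirm it is in quorum:
ceph quorum_status --format json-pretty   # the new mon should appear in quorum_names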