I. Pre-installation preparation:
1. OS: CentOS 7.4 x64
[root@ceph-node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
2. Hosts:

Hostname   | Address    | Role
-----------|------------|--------------------
ceph-node1 | 10.0.70.40 | Deploy, mon1, osd*2
ceph-node2 | 10.0.70.41 | mon2, osd*2
ceph-node3 | 10.0.70.42 | mon3, osd*2
3. Three hosts, each with two data disks (each disk larger than 100 GB)
[root@ceph-node1 ~]# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0             2:0    1     4K  0 disk
sda             8:0    0  1000G  0 disk
├─sda1          8:1    0     1G  0 part /boot
└─sda2          8:2    0   999G  0 part
  ├─cl-root   253:0    0    50G  0 lvm  /
  ├─cl-swap   253:1    0   7.9G  0 lvm  [SWAP]
  └─cl-home   253:2    0 941.1G  0 lvm  /home
sdb             8:16   0  1000G  0 disk
sdc             8:32   0  1000G  0 disk
Name resolution
ssh-keygen                      # set up passwordless SSH from ceph-node1 to the other ceph nodes
ssh-copy-id root@ceph-node1
ssh-copy-id root@ceph-node2
ssh-copy-id root@ceph-node3
vim /etc/hosts                  # then copy the entries to every node
10.0.70.40 ceph-node1
10.0.70.41 ceph-node2
10.0.70.42 ceph-node3
scp /etc/hosts root@ceph-node2:/etc/
scp /etc/hosts root@ceph-node3:/etc/
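Before moving on, it is worth verifying that every node from the table above is resolvable. A minimal sketch; the `check_hosts` helper and its hard-coded node list are illustrative, not part of Ceph:

```shell
# Hypothetical helper: confirm each node in the table has a matching
# line in a hosts file before running ceph-deploy.
check_hosts() {
  local file=$1 rc=0 entry
  for entry in "10.0.70.40 ceph-node1" "10.0.70.41 ceph-node2" "10.0.70.42 ceph-node3"; do
    grep -q "^${entry}" "$file" || { echo "missing: $entry"; rc=1; }
  done
  return $rc
}
# usage: check_hosts /etc/hosts
```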
Yum repository configuration
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
scp /etc/yum.repos.d/ceph.repo root@ceph-node2:/etc/yum.repos.d/    # copy the repo file to every node
scp /etc/yum.repos.d/ceph.repo root@ceph-node3:/etc/yum.repos.d/
yum install hdparm ceph ceph-radosgw rdate -y    # install the Ceph packages (on all nodes)
hdparm -W 0 /dev/sda                             # disable the disk write cache
rdate -s time-a.nist.gov                         # sync the clock
echo rdate -s time-a.nist.gov >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
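The ceph.repo contents above can also be written non-interactively with a heredoc instead of vim, which is easier to script across nodes. A sketch; the `write_ceph_repo` helper and its directory argument are illustrative:

```shell
# Hypothetical helper: write the ceph.repo shown above into a given
# repo directory (normally /etc/yum.repos.d) without opening an editor.
write_ceph_repo() {
  cat > "$1/ceph.repo" <<'EOF'
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF
}
# usage: write_ceph_repo /etc/yum.repos.d
```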
II. Cluster deployment
1. Deploy from the admin node
yum install ceph-deploy -y
mkdir ~/cluster    # create a directory to hold the config files and keys
cd ~/cluster       # run every ceph-deploy command from this directory
ceph-deploy new ceph-node1 ceph-node2 ceph-node3    # generate ceph.conf and keys, with these nodes as MONs
echo "osd_pool_default_size = 2" >> ~/cluster/ceph.conf      # default replica count
echo "public_network = 10.0.70.20/24" >> ~/cluster/ceph.conf # add public_network according to your own IP range
echo "mon_clock_drift_allowed = 2" >> ~/cluster/ceph.conf    # widen the allowed clock drift between MONs
cat ~/cluster/ceph.conf
osd_pool_default_size = 2
public_network = 10.0.70.20/24
mon_clock_drift_allowed = 2
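Blindly appending with `>>` duplicates settings if the preparation steps are re-run. A small guard makes the appends idempotent; `set_conf` is a hypothetical helper, not a ceph-deploy feature:

```shell
# Hypothetical helper: append a line to a conf file only if that exact
# line is not already present, so re-running the setup is harmless.
set_conf() {
  local file=$1 line=$2
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
# usage: set_conf ~/cluster/ceph.conf "osd_pool_default_size = 2"
```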
Deploy the monitors
ceph-deploy mon create-initial
Deploy the OSDs:
ceph-deploy --overwrite-conf osd prepare ceph-node1:/dev/sdb ceph-node1:/dev/sdc ceph-node2:/dev/sdb ceph-node2:/dev/sdc ceph-node3:/dev/sdb ceph-node3:/dev/sdc --zap-disk
ceph-deploy --overwrite-conf osd activate ceph-node1:/dev/sdb1 ceph-node1:/dev/sdc1 ceph-node2:/dev/sdb1 ceph-node2:/dev/sdc1 ceph-node3:/dev/sdb1 ceph-node3:/dev/sdc1
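The long node:disk argument lists above can be generated instead of typed by hand, which helps when the node or disk count grows. A sketch; the `osd_args` helper is illustrative and its node/disk lists mirror the table above:

```shell
# Hypothetical helper: build the "node:/dev/disk" argument list for
# ceph-deploy from the cluster layout described earlier.
osd_args() {
  local out="" node disk
  for node in ceph-node1 ceph-node2 ceph-node3; do
    for disk in /dev/sdb /dev/sdc; do
      out="$out $node:$disk"
    done
  done
  echo "${out# }"
}
# usage: ceph-deploy --overwrite-conf osd prepare $(osd_args) --zap-disk
```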
Check the cluster status
[root@ceph-node1 cluster]# ceph -s
    cluster 466e0a3e-f351-46f3-94a2-5ea976c26fd8
     health HEALTH_WARN
            15 pgs peering
            2 pgs stuck unclean
            too few PGs per OSD (21 < min 30)
     monmap e1: 3 mons at {ceph-node1=10.0.70.40:6789/0,ceph-node2=10.0.70.41:6789/0,ceph-node3=10.0.70.42:6789/0}
            election epoch 4, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e47: 6 osds: 6 up, 6 in; 9 remapped pgs
            flags sortbitwise,require_jewel_osds
      pgmap v125: 64 pgs, 1 pools, 0 bytes data, 0 objects
            203 MB used, 5966 GB / 5967 GB avail
                  49 active+clean
                   9 remapped+peering
                   6 peering
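The "too few PGs per OSD (21 < min 30)" warning is usually addressed by raising the pool's pg_num. The common rule of thumb is (OSDs * 100) / replica count, rounded up to a power of two; a sketch of that arithmetic (the `pg_count` helper is illustrative):

```shell
# Hypothetical helper: suggest a pg_num using the usual sizing rule:
# (number of OSDs * 100) / replica count, rounded up to a power of two.
pg_count() {
  local osds=$1 replicas=$2
  local target=$(( osds * 100 / replicas ))
  local pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}
# usage: pg_count 6 2
```

With this cluster's 6 OSDs and a replica count of 2 the target is 300, suggesting pg_num 512, which would then be applied to the pool (e.g. `ceph osd pool set rbd pg_num 512` and the matching pgp_num change).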
Check the OSDs
[root@ceph-node1 cluster]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 5.82715 root default
-2 1.94238     host ceph-node1
 0 0.97119         osd.0            up  1.00000          1.00000
 1 0.97119         osd.1            up  1.00000          1.00000
-3 1.94238     host ceph-node2
 2 0.97119         osd.2            up  1.00000          1.00000
 3 0.97119         osd.3            up  1.00000          1.00000
-4 1.94238     host ceph-node3
 4 0.97119         osd.4            up  1.00000          1.00000
 5 0.97119         osd.5            up  1.00000          1.00000
Problem 1:
[ceph-1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.ceph-1 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
Solution:
Make sure each machine's hostname matches its entry in /etc/hosts.
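A quick way to confirm the fix on each node is to check that the local hostname actually appears in the hosts file. A sketch; `hostname_in_hosts` is an illustrative helper name:

```shell
# Hypothetical helper: succeed only if the given hostname appears as a
# whole word in the given hosts file.
hostname_in_hosts() {
  local name=$1 file=$2
  grep -qw "$name" "$file"
}
# usage: hostname_in_hosts "$(hostname)" /etc/hosts || echo "hostname not in /etc/hosts"
```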
Date: 2024-10-10 05:51:08