Prerequisite: a Ceph cluster is already deployed.
Because lab resources were limited, in this experiment the Ceph cluster runs on the k8s master node.
I. Create a Ceph storage pool
Run the following command on a mon node of the Ceph cluster:
ceph osd pool create k8s-volumes 64 64
Check the replica count:
[root@master ceph]# ceph osd pool get k8s-volumes size
size: 3
Choose the PG count according to the following formula:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
Round the result to the nearest power of two. For example, with 2 OSDs in total, a replication count of 3, and 1 pool, the formula gives 66.66; the nearest power of two is 64, so each pool is assigned 64 PGs.
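The calculation above can be sketched in shell; the variable names are illustrative, not from the original post:

```shell
# Inputs from the example: 2 OSDs, replication 3, 1 pool.
osd_count=2
replication=3
pool_count=1

# (OSD * 100 / replicas) / pools; bash arithmetic truncates, so 66.66 -> 66.
raw=$(( osd_count * 100 / replication / pool_count ))

# Walk powers of two up to the raw value, then pick whichever of
# pg and pg*2 is closer to raw (raw > 1.5*pg means pg*2 is closer).
pg=1
while [ $(( pg * 2 )) -le "$raw" ]; do pg=$(( pg * 2 )); done
if [ $(( 2 * raw )) -gt $(( 3 * pg )) ]; then pg=$(( pg * 2 )); fi
echo "$pg"
```

For the example inputs this prints 64, matching the PG count used when creating the pool.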
II. Install ceph-common on all k8s nodes
1. Configure domestic yum mirrors and the Ceph repository
cp -r /etc/yum.repos.d/ /etc/yum-repos-d-bak
yum install -y wget
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
cat <<EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
2. Install ceph-common
yum -y install ceph-common
3. Copy the configuration file /etc/ceph/ceph.conf from the Ceph mon node into /etc/ceph on every k8s node.
4. Copy /etc/ceph/ceph.client.admin.keyring from the Ceph mon node into /etc/ceph on every k8s node.
5. Obtain the base64-encoded key on the k8s master node
[root@master ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
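Note the straight single quotes around the awk program: the curly quotes that often sneak in when copy-pasting from a web page will break the command. The same pipeline can be tried safely against a dummy keyring (the key below is a made-up example, not a real credential):

```shell
# Write a throwaway keyring with a fake key for demonstration.
cat <<EOF > /tmp/demo.keyring
[client.admin]
        key = AQBdummyExampleKey0123456789abcdefghij==
EOF

# Extract the last field of the "key = ..." line and base64-encode it,
# exactly as done against the real /etc/ceph/ceph.client.admin.keyring.
encoded=$(grep key /tmp/demo.keyring | awk '{printf "%s", $NF}' | base64)
echo "$encoded"
```

Decoding the result with `base64 -d` should return the original key string, which is a quick sanity check before pasting the value into the Secret.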
6. Create the Ceph secret on the k8s master node
cat <<EOF > /root/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
EOF
kubectl apply -f ceph-secret.yaml
7. Because this k8s cluster was deployed with kubeadm, kube-controller-manager runs as a container, which does not include ceph-common, so an out-of-tree (external) storage provisioner plugin is used instead.
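The original post does not show the external provisioner itself. A minimal sketch, assuming the rbd-provisioner from the kubernetes-incubator/external-storage project (the image tag, namespace, and service account are assumptions to adapt; the provisioner also needs RBAC permissions not shown here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner   # needs RBAC for PVs, PVCs, StorageClasses
      containers:
      - name: rbd-provisioner
        image: quay.io/external_storage/rbd-provisioner:latest
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
```

If this external provisioner is used, the StorageClass's `provisioner` field would typically be `ceph.com/rbd` rather than the in-tree `kubernetes.io/rbd` shown below.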
8. Create the StorageClass
cat <<EOF > /root/ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.137:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s-volumes
  userId: admin
  userSecretName: ceph-secret
EOF
Note: `monitors` must be the full IP:port of your Ceph mon; the address above appears to be missing an octet, so substitute your own mon address.
kubectl apply -f ceph-storageclass.yaml
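With the StorageClass in place, a PersistentVolumeClaim can request an RBD volume from it. A minimal example (the claim name and size are illustrative, not from the original post):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc-demo
spec:
  accessModes:
    - ReadWriteOnce          # RBD block volumes support single-node read-write
  storageClassName: ceph-storage-class
  resources:
    requests:
      storage: 1Gi
```

After `kubectl apply`, the claim should move to `Bound` once the provisioner creates the backing RBD image in the k8s-volumes pool.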
Original article: https://www.cnblogs.com/boshen-hzb/p/10548895.html
Date: 2024-11-05 18:47:52