Creating dynamic storage based on Ceph RBD under the Kubernetes harbor namespace

[root@bs-k8s-ceph ~]# ceph osd pool create harbor 128
Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
//I never found the cause of this error; after waiting a short while the same command simply succeeded.
[root@bs-k8s-ceph ~]# ceph osd pool create harbor 128
pool 'harbor' created
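Before creating the client key it may be worth confirming the pool exists and tagging it for RBD use (on Luminous and later the application tag avoids a HEALTH_WARN). These checks are not part of the original transcript:

# optional sanity check after pool creation
ceph osd pool ls detail | grep harbor        # pool should show up with pg_num 128
ceph osd pool application enable harbor rbd  # tag the pool for RBD use (Luminous+)
ceph -s                                      # overall cluster health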
[root@bs-k8s-ceph ceph]# ceph auth get-or-create client.harbor mon 'allow r' osd 'allow class-read, allow rwx pool=harbor' -o ceph.client.harbor.keyring
[root@bs-k8s-ceph ceph]# ceph auth get client.harbor
exported keyring for client.harbor
[client.harbor]
    key = AQDoCklen6e4NxAAVXmy/PG+R5iH8fNzMhk6Jg==
    caps mon = "allow r"
    caps osd = "allow class-read, allow rwx pool=harbor"
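The new client.harbor identity can be exercised directly against the pool before handing it to Kubernetes — an illustrative check, assuming the keyring written above is placed at /etc/ceph/ceph.client.harbor.keyring:

# optional: create, list and remove a throwaway image with the harbor credentials
rbd create harbor/keytest --size 128 --id harbor --keyring /etc/ceph/ceph.client.harbor.keyring
rbd ls harbor --id harbor --keyring /etc/ceph/ceph.client.harbor.keyring
rbd rm harbor/keytest --id harbor --keyring /etc/ceph/ceph.client.harbor.keyring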

[root@bs-k8s-node01 ~]# ceph auth get-key client.admin | base64
QVFDNmNVSmV2eU8yRnhBQVBxYzE5Mm5PelNnZk5acmg5aEFQYXc9PQ==
[root@bs-k8s-node01 ~]# ceph auth get-key client.harbor | base64
QVFEb0NrbGVuNmU0TnhBQVZYbXkvUEcrUjVpSDhmTnpNaGs2Smc9PQ==

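These base64 strings are what goes into the key: fields of the Secret manifests further down. To double-check a value, decoding it should give back exactly what ceph auth get-key printed:

# verify the encoded harbor key round-trips
echo 'QVFEb0NrbGVuNmU0TnhBQVZYbXkvUEcrUjVpSDhmTnpNaGs2Smc9PQ==' | base64 -d
# expected: AQDoCklen6e4NxAAVXmy/PG+R5iH8fNzMhk6Jg==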
[root@bs-k8s-master01 ~]# kubectl get nodes
The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
[root@bs-hk-hk01 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 日 2020-02-16 17:16:43 CST; 12min ago
  Process: 1168 ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid (code=exited, status=134)
 Main PID: 1168 (code=exited, status=134)

2月 15 20:22:54 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202254 (1184) : Server k8s_api_nodes...ue.
2月 15 20:25:15 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202515 (1183) : Server k8s_api_nodes...ue.
2月 15 20:25:15 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202515 (1184) : Server k8s_api_nodes...ue.
2月 15 20:26:03 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202603 (1184) : Server k8s_api_nodes...ue.
2月 15 20:26:03 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202603 (1183) : Server k8s_api_nodes...ue.
2月 15 20:26:13 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202613 (1183) : Server k8s_api_nodes...ue.
2月 15 20:26:13 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202613 (1184) : Server k8s_api_nodes...ue.
2月 16 17:16:43 bs-hk-hk01 systemd[1]: haproxy.service: main process exited, code=exited, st...n/a
2月 16 17:16:44 bs-hk-hk01 systemd[1]: Unit haproxy.service entered failed state.
2月 16 17:16:44 bs-hk-hk01 systemd[1]: haproxy.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

[root@bs-hk-hk01 ~]# systemctl start haproxy
[root@bs-hk-hk01 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2020-02-16 17:30:03 CST; 1s ago
  Process: 4196 ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q (code=exited, status=0/SUCCESS)
 Main PID: 4212 (haproxy)
   CGroup: /system.slice/haproxy.service
           ├─4212 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy....
           ├─4216 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy....
           └─4217 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy....

2月 16 17:30:00 bs-hk-hk01 systemd[1]: Starting HAProxy Load Balancer...
2月 16 17:30:03 bs-hk-hk01 systemd[1]: Started HAProxy Load Balancer.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [WARNING] 046/173004 (4212) : config : 'option for...de.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [WARNING] 046/173004 (4212) : config : 'option for...de.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [WARNING] 046/173004 (4212) : Proxy 'stats': in mu...st.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [NOTICE] 046/173004 (4212) : New worker #1 (4216) forked
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [NOTICE] 046/173004 (4212) : New worker #2 (4217) forked
Hint: Some lines were ellipsized, use -l to show in full.
[root@bs-hk-hk01 ~]# systemctl enable haproxy
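With haproxy running and enabled again, the apiserver VIP should answer; a quick hedged check (VIP and port taken from the error above, the curl response depends on the apiserver's anonymous-auth setting):

# confirm the load balancer front end is listening and the VIP responds
ss -lntp | grep 8443
curl -k https://20.0.0.250:8443/version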

[root@bs-k8s-master01 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE    VERSION
bs-k8s-master01   Ready    master   7d6h   v1.17.2
bs-k8s-master02   Ready    master   7d6h   v1.17.2
bs-k8s-master03   Ready    master   7d6h   v1.17.2
bs-k8s-node01     Ready    <none>   7d6h   v1.17.2
bs-k8s-node02     Ready    <none>   7d6h   v1.17.2
bs-k8s-node03     Ready    <none>   7d6h   v1.17.2
[root@bs-k8s-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS             RESTARTS   AGE
default       rbd-provisioner-75b85f85bd-8ftdm            1/1     Running            11         7d6h
kube-system   calico-node-4jxbp                           1/1     Running            4          7d6h
kube-system   calico-node-7t9cj                           1/1     Running            7          7d6h
kube-system   calico-node-cchgl                           1/1     Running            14         7d6h
kube-system   calico-node-czj76                           1/1     Running            6          7d6h
kube-system   calico-node-lxb2s                           1/1     Running            14         7d6h
kube-system   calico-node-nmg9t                           1/1     Running            8          7d6h
kube-system   coredns-7f9c544f75-bwx9p                    1/1     Running            4          7d6h
kube-system   coredns-7f9c544f75-q58mr                    1/1     Running            3          7d6h
kube-system   dashboard-metrics-scraper-6b66849c9-qtwzx   1/1     Running            2          7d5h
kube-system   etcd-bs-k8s-master01                        1/1     Running            17         7d6h
kube-system   etcd-bs-k8s-master02                        1/1     Running            7          7d6h
kube-system   etcd-bs-k8s-master03                        1/1     Running            32         7d6h
kube-system   kube-apiserver-bs-k8s-master01              1/1     Running            28         7d6h
kube-system   kube-apiserver-bs-k8s-master02              1/1     Running            15         7d6h
kube-system   kube-apiserver-bs-k8s-master03              1/1     Running            62         7d6h
kube-system   kube-controller-manager-bs-k8s-master01     1/1     Running            32         7d6h
kube-system   kube-controller-manager-bs-k8s-master02     1/1     Running            27         7d6h
kube-system   kube-controller-manager-bs-k8s-master03     1/1     Running            31         7d6h
kube-system   kube-proxy-26ffm                            1/1     Running            3          7d6h
kube-system   kube-proxy-298tr                            1/1     Running            5          7d6h
kube-system   kube-proxy-hzsmb                            1/1     Running            3          7d6h
kube-system   kube-proxy-jb4sq                            1/1     Running            4          7d6h
kube-system   kube-proxy-pt94r                            1/1     Running            4          7d6h
kube-system   kube-proxy-wljwv                            1/1     Running            4          7d6h
kube-system   kube-scheduler-bs-k8s-master01              1/1     Running            32         7d6h
kube-system   kube-scheduler-bs-k8s-master02              1/1     Running            21         7d6h
kube-system   kube-scheduler-bs-k8s-master03              1/1     Running            31         7d6h
kube-system   kubernetes-dashboard-887cbd9c6-j7ptq        1/1     Running            22         7d5h
[root@bs-k8s-master01 harbor]# pwd
/data/k8s/harbor
[root@bs-k8s-master01 rbd]# kubectl apply -f ceph-harbor-namespace.yaml
namespace/harbor created
[root@bs-k8s-master01 rbd]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   7d8h
harbor            Active   16s
kube-node-lease   Active   7d8h
kube-public       Active   7d8h
kube-system       Active   7d8h
[root@bs-k8s-master01 rbd]# cat ceph-harbor-namespace.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-02-16
#FileName:                   ceph-harbor-namespace.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: harbor
[root@bs-k8s-master01 rbd]# kubectl apply -f external-storage-rbd-provisioner.yaml
serviceaccount/rbd-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner configured
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.apps/rbd-provisioner created
[root@bs-k8s-master01 rbd]# kubectl get pods -n harbor -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE            NOMINATED NODE   READINESS GATES
rbd-provisioner-75b85f85bd-dhnr4   1/1     Running   0          3m48s   10.209.46.84   bs-k8s-node01   <none>           <none>
[root@bs-k8s-master01 rbd]# cat external-storage-rbd-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: harbor
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: harbor
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: harbor
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: harbor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: harbor

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
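For later troubleshooting, the rollout status and logs of the provisioner are the first things to check — hedged commands, names taken from the manifest above:

kubectl -n harbor rollout status deployment/rbd-provisioner
kubectl -n harbor logs deployment/rbd-provisioner --tail=20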
[root@bs-k8s-master01 harbor]# kubectl apply -f ceph-harbor-secret.yaml
secret/ceph-harbor-admin-secret created
secret/ceph-harbor-harbor-secret created
[root@bs-k8s-master01 harbor]# kubectl get secret -n harbor
NAME                          TYPE                                  DATA   AGE
ceph-harbor-admin-secret      kubernetes.io/rbd                     1      23s
ceph-harbor-harbor-secret     kubernetes.io/rbd                     1      23s
default-token-8k9gs           kubernetes.io/service-account-token   3      8m49s
rbd-provisioner-token-mhl29   kubernetes.io/service-account-token   3      5m24s
[root@bs-k8s-master01 harbor]# cat ceph-harbor-secret.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-02-16
#FileName:                   ceph-harbor-secret.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Secret
metadata:
  name: ceph-harbor-admin-secret
  namespace: harbor
data:
  key: QVFDNmNVSmV2eU8yRnhBQVBxYzE5Mm5PelNnZk5acmg5aEFQYXc9PQ==
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-harbor-harbor-secret
  namespace: harbor
data:
  key: QVFEb0NrbGVuNmU0TnhBQVZYbXkvUEcrUjVpSDhmTnpNaGs2Smc9PQ==
type: kubernetes.io/rbd
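An equivalent way to produce these Secrets without hand-encoding anything is to let kubectl do the base64 step — a hedged alternative, assuming the ceph CLI and admin keyring are reachable from wherever this runs:

# alternative: create the same secrets directly from the ceph keys (kubectl handles the encoding)
kubectl -n harbor create secret generic ceph-harbor-admin-secret \
  --type=kubernetes.io/rbd --from-literal=key="$(ceph auth get-key client.admin)"
kubectl -n harbor create secret generic ceph-harbor-harbor-secret \
  --type=kubernetes.io/rbd --from-literal=key="$(ceph auth get-key client.harbor)"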
[root@bs-k8s-master01 harbor]# kubectl apply -f ceph-harbor-storageclass.yaml
storageclass.storage.k8s.io/ceph-harbor created
[root@bs-k8s-master01 harbor]# kubectl get sc
NAME          PROVISIONER    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-harbor   ceph.com/rbd   Retain          Immediate           false                  11s
ceph-rbd      ceph.com/rbd   Retain          Immediate           false                  25h
[root@bs-k8s-master01 harbor]# cat ceph-harbor-storageclass.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-02-16
#FileName:                   ceph-harbor-storageclass.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-harbor
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 20.0.0.206:6789,20.0.0.207:6789,20.0.0.208:6789
  adminId: admin
  adminSecretName: ceph-harbor-admin-secret
  adminSecretNamespace: harbor
  pool: harbor
  fsType: xfs
  userId: harbor
  userSecretName: ceph-harbor-harbor-secret
  imageFormat: "2"
  imageFeatures: "layering"
[root@bs-k8s-master01 harbor]# kubectl apply -f ceph-harbor-pvc.yaml
persistentvolumeclaim/pvc-ceph-harbor created
[root@bs-k8s-master01 harbor]# kubectl get pv -n harbor
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-494a130d-018c-4be3-9b31-e951cc4367a5   20Gi       RWO            Retain           Bound    default/wp-pv-claim      ceph-rbd                23h
pvc-4df6a301-c9f3-4694-8271-d1d0184c00aa   1Gi        RWO            Retain           Bound    harbor/pvc-ceph-harbor   ceph-harbor             6s
pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16   1Gi        RWO            Retain           Bound    default/ceph-pvc         ceph-rbd                26h
pvc-ac7d3a09-123e-4614-886c-cded8822a078   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   ceph-rbd                23h
[root@bs-k8s-master01 harbor]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc         Bound    pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16   1Gi        RWO            ceph-rbd       26h
mysql-pv-claim   Bound    pvc-ac7d3a09-123e-4614-886c-cded8822a078   20Gi       RWO            ceph-rbd       23h
wp-pv-claim      Bound    pvc-494a130d-018c-4be3-9b31-e951cc4367a5   20Gi       RWO            ceph-rbd       23h
[root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ceph-harbor   Bound    pvc-4df6a301-c9f3-4694-8271-d1d0184c00aa   1Gi        RWO            ceph-harbor    24s
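Behind the bound PVC the provisioner creates an RBD image in the harbor pool (this provisioner typically names them kubernetes-dynamic-pvc-<uuid>); it can be confirmed from a ceph node — illustrative, the actual image name will differ:

# list the images the provisioner created in the harbor pool (run on a ceph node)
rbd ls -p harbor
rbd info harbor/<image-name>   # substitute one of the listed image names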
[root@bs-k8s-master01 harbor]# cat ceph-harbor-pvc.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-02-16
#FileName:                   ceph-harbor-pvc.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ceph-harbor
  namespace: harbor
spec:
  storageClassName: ceph-harbor
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
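To confirm the claim is actually usable before wiring it into Harbor, a throwaway pod can mount it — a minimal sketch, pod name and mount path are arbitrary:

# one-off pod that mounts pvc-ceph-harbor; delete it afterwards
kubectl -n harbor apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-ceph-harbor-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "df -h /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-ceph-harbor
EOF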

//At this point, dynamic PV provisioning under the harbor namespace is complete.

Original article: https://www.cnblogs.com/zisefeizhu/p/12318246.html
