k8s practice 17: integrating Kubernetes with NFS storage for dynamic, on-demand PV creation, allocation and binding to PVCs

1.
Initial thoughts

Earlier posts tested deploying and configuring PV && PVC, so that a pod's application data is stored in a PVC and decoupled from the pod itself.
Those steps were entirely manual: create the PV by hand, then create the PVC by hand. That works when the cluster only has a few pods.
Now imagine a cluster with more than 1000 pods, each of which needs a PVC for its data. Creating every PV and PVC one by one would be an unmanageable amount of work.
It would be far better if the user only had to define a PVC when creating a pod, and the cluster created a matching PV from that request, i.e. dynamic PV && PVC creation, allocation and binding.
Kubernetes supports this by integrating with a storage backend and provisioning PVs dynamically.
That is what this post tests.

2.
Test environment

This is a lab environment; NFS is used as a simple storage backend for the test.

3.
NFS deployment

See the earlier post for the NFS setup:
pod应用数据存储解耦pv&&pvc (decoupling pod application data storage with PV && PVC)
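For completeness, a minimal NFS server setup sketch on CentOS, assuming the same server address (192.168.32.130) and export path (/mnt/k8s) that are used later in this post; adjust the export options to your own environment:

yum -y install nfs-utils rpcbind
mkdir -p /mnt/k8s
echo '/mnt/k8s *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now rpcbind nfs-server
exportfs -arv                 # re-export everything in /etc/exports
showmount -e 192.168.32.130   # verify the export is visible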

4.
storage classes

Official documentation:
https://kubernetes.io/docs/concepts/storage/storage-classes/
Kubernetes uses storage classes to integrate with storage backends and dynamically create, allocate and bind PV && PVC.
Many storage types have built-in provisioners, e.g. CephFS, GlusterFS and others; see the official documentation for the full list.
There is no built-in provisioner for NFS, so an external plugin is required.
External provisioners:
https://github.com/kubernetes-incubator/external-storage
NFS plugin documentation:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server has to supply the storage.

5.
NFS provisioner configuration files

[[email protected] nfs]# ls
class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml

5.1
class.yaml

[[email protected] nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false"

Create a StorageClass:
kind: StorageClass

The new StorageClass is named managed-nfs-storage:
  name: managed-nfs-storage

provisioner literally means "supplier"; in practice (as I understand it) it names the provisioner program that serves this StorageClass. The value must match the PROVISIONER_NAME environment variable in deployment.yaml:
provisioner: fuseim.pri/ifs

[[email protected] nfs]# kubectl apply -f class.yaml
storageclass.storage.k8s.io "managed-nfs-storage" created
[[email protected] nfs]# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   7s

5.2
deployment.yaml

[[email protected] nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
[[email protected] nfs]#

Create a ServiceAccount named nfs-client-provisioner:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

Container name and image used by the pod:

containers:
  - name: nfs-client-provisioner
    image: quay.io/external_storage/nfs-client-provisioner:latest

Mount path inside the pod:

volumeMounts:
  - name: nfs-client-root
    mountPath: /persistentvolumes

Environment variables read by the pod; change these to your local NFS server address and path:

env:
  - name: PROVISIONER_NAME
    value: fuseim.pri/ifs
  - name: NFS_SERVER
    value: 10.10.10.60
  - name: NFS_PATH
    value: /ifs/kubernetes

The NFS volume definition; change this to your local NFS server address and path as well:

volumes:
  - name: nfs-client-root
    nfs:
      server: 10.10.10.60
      path: /ifs/kubernetes

The modified deployment.yaml; only the NFS address and directory were changed:

[[email protected] nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.32.130
            - name: NFS_PATH
              value: /mnt/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.32.130
            path: /mnt/k8s
[[email protected] nfs]# kubectl apply -f deployment.yaml
serviceaccount "nfs-client-provisioner" created
deployment.extensions "nfs-client-provisioner" created
[[email protected] nfs]#
[[email protected] nfs]# kubectl get pod
NAME                                      READY     STATUS    RESTARTS   AGE
nfs-client-provisioner-65bf6bd464-qdzcj   1/1       Running   0          1m
[[email protected] nfs]# kubectl describe pod nfs-client-provisioner-65bf6bd464-qdzcj
Name:               nfs-client-provisioner-65bf6bd464-qdzcj
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-master3/192.168.32.130
Start Time:         Wed, 24 Jul 2019 14:44:11 +0800
Labels:             app=nfs-client-provisioner
                    pod-template-hash=65bf6bd464
Annotations:        <none>
Status:             Running
IP:                 172.30.35.3
Controlled By:      ReplicaSet/nfs-client-provisioner-65bf6bd464
Containers:
  nfs-client-provisioner:
    Container ID:   docker://67329cd9ca608223cda961a1bfe11524f2586e8e1ccba45ad57b292b1508b575
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 24 Jul 2019 14:45:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.32.130
      NFS_PATH:          /mnt/k8s
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-4n4jn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.32.130
    Path:      /mnt/k8s
    ReadOnly:  false
  nfs-client-provisioner-token-4n4jn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-4n4jn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                  Message
  ----    ------     ----  ----                  -------
  Normal  Scheduled  2m    default-scheduler     Successfully assigned default/nfs-client-provisioner-65bf6bd464-qdzcj to k8s-master3
  Normal  Pulling    2m    kubelet, k8s-master3  pulling image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Pulled     54s   kubelet, k8s-master3  Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Created    54s   kubelet, k8s-master3  Created container
  Normal  Started    54s   kubelet, k8s-master3  Started container
[[email protected] nfs]#
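The provisioner pod is running. If a PVC created later gets stuck in Pending, the provisioner's log is the first place to look; for example, using the pod name from above:

kubectl logs nfs-client-provisioner-65bf6bd464-qdzcj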

5.3
rbac.yaml
This grants the ServiceAccount nfs-client-provisioner the permissions it needs.
The ServiceAccount itself was already created when deployment.yaml was applied.

[[email protected] nfs]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[[email protected] nfs]#
[[email protected] nfs]# kubectl apply -f rbac.yaml
serviceaccount "nfs-client-provisioner" unchanged
clusterrole.rbac.authorization.k8s.io "nfs-client-provisioner-runner" created
clusterrolebinding.rbac.authorization.k8s.io "run-nfs-client-provisioner" created
role.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
rolebinding.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
[[email protected] nfs]#

Check the resources that were created:

[[email protected] nfs]# kubectl get clusterrole |grep nfs
nfs-client-provisioner-runner                                           2m
[[email protected] nfs]# kubectl get role |grep nfs
leader-locking-nfs-client-provisioner   2m
[[email protected] nfs]# kubectl get rolebinding |grep nfs
leader-locking-nfs-client-provisioner   2m
[[email protected] nfs]# kubectl get clusterrolebinding |grep nfs
run-nfs-client-provisioner              2m
[[email protected] nfs]#
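As an optional spot check, impersonation can confirm that the ServiceAccount actually has the intended rights (assuming the default namespace used here):

kubectl auth can-i create persistentvolumes --as=system:serviceaccount:default:nfs-client-provisioner
kubectl auth can-i update persistentvolumeclaims --as=system:serviceaccount:default:nfs-client-provisioner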

6.
Testing

Test with the upstream test-claim.yaml:

[[email protected] nfs]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
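The volume.beta.kubernetes.io/storage-class annotation is the older way of selecting a StorageClass; on current clusters the spec field can be used instead. A minimal equivalent sketch:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi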

PV and PVC state before applying test-claim.yaml:

[[email protected] nfs]# kubectl get pv
No resources found.
[[email protected] nfs]# kubectl get pvc
No resources found.
[[email protected] nfs]#

Apply it:

[[email protected] nfs]# kubectl apply -f test-claim.yaml
persistentvolumeclaim "test-claim" created

PV and PVC state after applying:

[[email protected] nfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS          REASON    AGE
pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound     default/test-claim   managed-nfs-storage             6s
[[email protected] nfs]# kubectl get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound     pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   8s
[[email protected] nfs]#

Success. With the NFS storage class in place, a user only has to request a PVC and the system automatically creates a PV and binds it to the claim.
Check the storage directory on the NFS server; the provisioner creates one subdirectory per claim, named ${namespace}-${pvcName}-${pvName}:

[[email protected] k8s]# pwd
/mnt/k8s
[[email protected] k8s]# ls
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[[email protected] k8s]#

Check the mount directory inside the provisioner pod:

[[email protected] nfs]# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[[email protected] nfs]#

7.
Test with the upstream test-pod.yaml

[[email protected] nfs]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[[email protected] nfs]#
[[email protected] nfs]# kubectl apply -f test-pod.yaml
pod "test-pod" created
[[email protected] nfs]# kubectl get pod
NAME                                      READY     STATUS      RESTARTS   AGE
test-pod                                  0/1       Completed   0          1m

After the pod starts, it creates a file named SUCCESS in /mnt.
/mnt is the directory in the pod where the PVC is mounted.
The SUCCESS file created by test-pod can be seen in the NFS server directory:

[[email protected] default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89]# pwd
/mnt/k8s/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[[email protected] default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89]# ls
SUCCESS

Check from inside the nfs-client-provisioner pod:

[[email protected] nfs]# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
SUCCESS

8.
A question after testing

Deleting the pod leaves the data stored in the PVC intact, but deleting the PVC removes both the directory and the data in it.
To guard against user mistakes, can a backup be kept?
Yes, it can.

[[email protected] nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false"

archiveOnDelete: "false"
This parameter can be set to false or true.
archiveOnDelete literally means "archive on delete": false means do not archive, i.e. the data is deleted together with the PV; true means archive, i.e. the directory is renamed (given an archived- prefix) and the data is kept.

Modify it and test:
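A sketch of the modified class.yaml, assuming only the archiveOnDelete value is flipped to "true" (the describe output below confirms the parameter). Since StorageClass parameters are immutable, the class has to be deleted and re-created rather than updated in place:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true"

kubectl delete -f class.yaml
kubectl apply -f class.yaml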

[[email protected] nfs]# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   1m
[[email protected] nfs]# kubectl describe storageclass
Name:            managed-nfs-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"managed-nfs-storage","namespace":""},"parameters":{"archiveOnDelete":"true"},"provisioner":"fuseim.pri/ifs"}

Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Delete the pod and the PVC:

[[email protected] nfs]# kubectl get pod
NAME                                      READY     STATUS      RESTARTS   AGE
test-pod                                  0/1       Completed   0          6s
[[email protected] nfs]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS          REASON    AGE
persistentvolume/pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound     default/test-claim   managed-nfs-storage             17s

NAME                               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound     pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   17s
[[email protected] nfs]# kubectl delete -f test-pod.yaml
pod "test-pod" deleted
[[email protected] nfs]# kubectl delete -f test-claim.yaml
persistentvolumeclaim "test-claim" deleted
[[email protected] nfs]# kubectl get pv,pvc
No resources found.
[[email protected] nfs]#

Check the NFS server storage path; the data was archived automatically (the directory was renamed with an archived- prefix):

[[email protected] archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89]# pwd
/mnt/k8s/archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89
[[email protected] archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89]# ls
SUCCESS

Remember to set archiveOnDelete: "true" if you want this safety net.

9.
With the NFS storage class deployed, users can request PVCs on their own.
There is no longer any need to hand-create a PV for every PVC request, one by one.
It is still slightly inconvenient, though: could the PVC be requested automatically when the pod is created, instead of having to create the PVC first and then mount it into the pod?
That is exactly what volumeClaimTemplates in a StatefulSet does (see the sketch below).
I will test it in the next post.
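As a preview, a minimal StatefulSet sketch using volumeClaimTemplates against the managed-nfs-storage class created above; the names web and www are purely illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 1Gi

Each replica gets its own PVC (www-web-0, www-web-1), which the provisioner backs with a dynamically created PV.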

Original article: https://blog.51cto.com/goome/2423200
