Example: deploying ZooKeeper as a stateful service on k8s

Let's first set the configuration file aside:

apiVersion: apps/v1
kind: StatefulSet   # fixed hostnames; stateful services use this. Caveat: a pod's hostname only resolves once the pod is Running, which creates a chicken-and-egg problem: when sed replaces hostnames at startup the pod is not yet Running, so it can resolve its own hostname but not the other pods'. That is why the ZooKeeper setup below switches to IPs.
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper  # hence the 3 generated pods are named zookeeper-0, zookeeper-1, zookeeper-2
  replicas: 3
  revisionHistoryLimit: 10
  selector:  # a StatefulSet must have this
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath:
          path: /var/log/zookeeper
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:3.4.10
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          timeoutSeconds: 3
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 2
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_NAME  # declare a k8s built-in variable; after the pod is created, echo ${MY_POD_NAME} inside it returns the hostname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper # my cluster name; from any of the generated pods you can ping zookeeper, which acts as the cluster name for the 3 pods. Successive pings may return different addresses, and nslookup zookeeper returns the pod IPs of all 3 pods, 3 records in total.
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper
  clusterIP: None  # this line is required (it makes the service headless)
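
Assuming the two manifests above are saved in one file, say zookeeper.yaml (the filename is illustrative), they are applied with:

kubectl apply -f zookeeper.yaml

After that, the 3 pods come up: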
[root@host1 src]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
default       zookeeper-0                                1/1     Running   0          12m     192.168.55.69    host3   <none>           <none>
default       zookeeper-1                                1/1     Running   0          12m     192.168.31.93    host4   <none>           <none>
default       zookeeper-2                                1/1     Running   0          12m     192.168.55.70    host3   <none>           <none>
bash-4.3# nslookup zookeeper
nslookup: can't resolve '(null)': Name does not resolve

Name:      zookeeper
Address 1: 192.168.55.70 zookeeper-2.zookeeper.default.svc.cluster.local
Address 2: 192.168.55.69 zookeeper-0.zookeeper.default.svc.cluster.local
Address 3: 192.168.31.93 zookeeper-1.zookeeper.default.svc.cluster.local
bash-4.3# ping zookeeper-0.zookeeper
PING zookeeper-0.zookeeper (192.168.55.69): 56 data bytes
64 bytes from 192.168.55.69: seq=0 ttl=63 time=0.109 ms
64 bytes from 192.168.55.69: seq=1 ttl=63 time=0.212 ms
^C
--- zookeeper-0.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.160/0.212 ms
bash-4.3# ping zookeeper-1.zookeeper
PING zookeeper-1.zookeeper (192.168.31.93): 56 data bytes
64 bytes from 192.168.31.93: seq=0 ttl=62 time=0.535 ms
64 bytes from 192.168.31.93: seq=1 ttl=62 time=0.507 ms
64 bytes from 192.168.31.93: seq=2 ttl=62 time=0.587 ms
^C
--- zookeeper-1.zookeeper ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.507/0.543/0.587 ms
bash-4.3# ping zookeeper-2.zookeeper
PING zookeeper-2.zookeeper (192.168.55.70): 56 data bytes
64 bytes from 192.168.55.70: seq=0 ttl=64 time=0.058 ms
64 bytes from 192.168.55.70: seq=1 ttl=64 time=0.081 ms
^C
--- zookeeper-2.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.069/0.081 ms
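
Note that besides the service record, each pod gets a stable per-pod DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, as the nslookup output above shows; the server.N entries later in this article rely on exactly these names.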

The commonly used variables that k8s provides (via the Downward API) are as follows:

        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
spec.nodeName: the name of the node (host machine) the pod is scheduled on

status.podIP: the pod's IP
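
To spot-check such variables, echo them inside a running pod (a quick sketch; it assumes MY_POD_NAME and MY_POD_IP are declared in the container's env as above, and that the zookeeper-0 pod from earlier exists):

kubectl exec zookeeper-0 -- sh -c 'echo ${MY_POD_NAME} ${MY_POD_IP}'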

Now let's look at the configuration file:

[root@docker06 conf]# cat zoo.cfg |grep -v ^#|grep -v ^$
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress= docker06
server.1=docker05:2888:3888
server.2=docker06:2888:3888
server.3=docker04:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

We need to change it into something like this:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress= docker06  # the 3 server lines below are fixed; it is mainly this line that must be changed to the local pod's MY_POD_IP. We could mount the config file via a ConfigMap, then sed this line inside the pod
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

For reference, consider the following approach:

First mount the configuration files into the pod through a ConfigMap, e.g. fix-ip.sh:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/var/lib/redis/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "[email protected]"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
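
To illustrate the sed in fix-ip.sh: it rewrites only the first IPv4 address on the nodes.conf line that contains "myself" (the node's own entry). Assuming a hypothetical POD_IP of 10.244.1.5, a nodes.conf line like

07c37dfeb235213a872192d90877d0cd55635b91 10.244.0.9:6379@16379 myself,master - 0 0 1 connected

would become

07c37dfeb235213a872192d90877d0cd55635b91 10.244.1.5:6379@16379 myself,master - 0 0 1 connected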

Then execute this script when the pod starts:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 10.11.100.85/library/redis
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]  #此处先执行了那个脚本,然后启动的redis
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
          readOnly: false
        - name: data
          mountPath: /var/lib/redis
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
#          items:
#          - key: redis.conf
#            path: redis.conf
#          - key: fix-ip.sh
#            path: fix-ip.sh
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 150Mi

Note: a config file mounted from a ConfigMap is read-only and cannot be modified in place with sed. You can mount the ConfigMap into a temporary directory and copy the files elsewhere before running sed, but this has its own problem: if you update the ConfigMap dynamically, only the files in the temporary directory change, not the copies.
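
A minimal sketch of that copy-then-sed workaround as a container entrypoint (the /tmp/config mount path is illustrative, not from the original article):

#!/bin/sh
# the ConfigMap is mounted read-only at /tmp/config; copy the file somewhere writable
cp /tmp/config/zoo.cfg /conf/zoo.cfg
# the copy can now be edited in place
sed -i "s/PODIP/${MY_POD_IP}/g" /conf/zoo.cfg
exec "$@"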

The actual production setup:

1. Rebuild the image

[root@host1 zookeeper]# ll
total 4
drwxr-xr-x 2 root root  45 May 24 15:48 conf
-rw-r--r-- 1 root root 143 May 23 06:19 Dockerfile
drwxr-xr-x 2 root root  20 May 24 15:48 scripts

[root@host1 zookeeper]# cd conf
[root@host1 conf]# ll
total 8
-rw-r--r-- 1 root root 1503 May 23 04:15 log4j.properties
-rw-r--r-- 1 root root  324 May 24 15:48 zoo.cfg

[root@host1 conf]# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=PODIP  # use the IP here and hostnames below; the reason is explained earlier in this article
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

[root@host1 conf]# cd ../scripts/
[root@host1 scripts]# ll
total 4
-rwxr-xr-x 1 root root 177 May 24 15:48 sed.sh

[root@host1 scripts]# cat sed.sh
#!/bin/bash
MY_ID=`echo ${MY_POD_NAME} |awk -F'-' '{print $NF}'`
MY_ID=`expr ${MY_ID} + 1`
echo ${MY_ID} > /data/myid
sed -i 's/PODIP/'${MY_POD_IP}'/g' /conf/zoo.cfg
exec "$@"

[root@host1 scripts]# cd ..
[root@host1 zookeeper]# ls
conf  Dockerfile  scripts
[root@host1 zookeeper]# cat Dockerfile
FROM harbor.test.com/middleware/zookeeper:3.4.10
MAINTAINER [email protected]

ARG zookeeper_version=3.4.10

COPY conf /conf/
COPY scripts /

Running docker build like this produces the image harbor.test.com/middleware/zookeeper:v3.4.10.
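
Concretely (a sketch, run from the directory containing the Dockerfile):

docker build -t harbor.test.com/middleware/zookeeper:v3.4.10 .
docker push harbor.test.com/middleware/zookeeper:v3.4.10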

Then we start the pods via the YAML below:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
 # podManagementPolicy: Parallel  # this setting makes the 3 pods start in parallel instead of in order 0, 1, 2
  serviceName: zookeeper
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath:
          path: /var/log/zookeeper
      - name: volume-data
        hostPath:
          path: /opt/zookeeper/data
      terminationGracePeriodSeconds: 10
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:v3.4.10
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
        #- name: volume-data  # do not mount /data to a hostPath here; if two pods land on the same node they would overwrite each other's data, including myid
         # mountPath: /data
        command:
          - /bin/bash
          - -c
          - -x
          - |
            /sed.sh  # this script writes the pod IP into the zoo.cfg config file, then writes /data/myid
            sleep 10
            zkServer.sh start-foreground
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper
  clusterIP: None
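
Once the pods are Running, the ensemble can be sanity-checked, for example (a sketch; the stat four-letter command assumes nc is available in the image):

kubectl exec zookeeper-0 -- cat /data/myid
kubectl exec zookeeper-0 -- sh -c 'echo stat | nc 127.0.0.1 2181'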

Original article: https://blog.51cto.com/4169523/2465894
