CoreDNS Deployment Based on Kubernetes v1.14.0

1、Notes before deployment:

1.1、Unless otherwise specified, all operations in this document are performed on the k8s-operation node.

The manifest YAML files for the add-ons shipped with Kubernetes reference the gcr.io Docker registry, which is blocked in mainland China, so the image must be replaced manually with another registry address.

1.2、The k8s-master nodes do not run the container service or the routing service, yet they still need to reach the pod network and the cluster service network. There are two options: (1) add static routes on the upstream router so it can reach both networks; (2) add static routes on the k8s-master servers themselves. Taking the static routes on a k8s-master server as an example:

ip route add 10.48.0.0/12 via 192.168.31.252
ip route add 10.64.0.0/16 via 192.168.31.252
To re-add these routes automatically when the server reboots, append the two ip route commands to /etc/rc.local and make it executable (a sketch follows below):
vim /etc/rc.local
chmod +x /etc/rc.local
192.168.31.252 is the k8s-vip address; the VIP nodes run both the container service and the routing service.
For the static routes on the upstream router that let outside traffic reach the container network, it is also recommended to point at the VIP, so that if one VIP node fails the other takes over and none of the configuration has to change.
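
A minimal sketch of what /etc/rc.local might contain after the edit (same VIP and CIDRs as above):

#!/bin/bash
# Re-add static routes to the pod network (10.48.0.0/12) and the
# cluster service network (10.64.0.0/16) via the k8s-vip on every boot.
ip route add 10.48.0.0/12 via 192.168.31.252
ip route add 10.64.0.0/16 via 192.168.31.252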

1.3、Mark the k8s-vip and k8s-ingress nodes as unschedulable:

kubectl cordon k8s-vip-01
kubectl cordon k8s-vip-02
kubectl cordon k8s-ingress-01
kubectl cordon k8s-ingress-02
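
To confirm, the four cordoned nodes should now report SchedulingDisabled in the STATUS column:

kubectl get nodes k8s-vip-01 k8s-vip-02 k8s-ingress-01 k8s-ingress-02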

2、Prepare CoreDNS

cd /apps/work/k8s/kubernetes
tar -xvf  kubernetes-src.tar.gz
cd cluster/addons/dns/coredns

3、Modify the CoreDNS configuration

export CLUSTER_DNS_DOMAIN="niuke.local"
export CLUSTER_DNS_SVC_IP="10.64.0.2"
sed -i -e "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/" -e "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/" coredns.yaml
vi coredns.yaml
Replace the image line with:
 image: coredns/coredns
and delete the loop line from the Corefile. (When a node's /etc/resolv.conf points back at the cluster DNS, the loop plugin detects the forwarding loop and CoreDNS exits; removing it here avoids that crash loop.)
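
The same two edits can be scripted; a sketch, assuming the upstream v1.14 manifest references the k8s.gcr.io/coredns:1.3.1 image:

sed -i -e 's#image: k8s.gcr.io/coredns:1.3.1#image: coredns/coredns#' coredns.yaml
sed -i -e '/^\s*loop$/d' coredns.yaml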

4、coredns.yaml

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes niuke.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # The upstream manifest leaves replicas unset so that the Addon Manager does
  # not reconcile it and DNS horizontal autoscaling can tune it at runtime;
  # it is pinned to 3 here for this deployment.
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.64.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

5、Create a HorizontalPodAutoscaler that scales CoreDNS on request count (to be applied at a later stage)

vi hpa-coredns.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: coredns
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: coredns_dns_request_count
      targetAverageValue: 1000
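
A Pods-type metric such as coredns_dns_request_count is only visible to the HPA if a custom-metrics adapter is installed; assuming, for example, prometheus-adapter scraping the :9153 metrics endpoint exposed above and publishing the CoreDNS request counter under that name (this prerequisite is an assumption, not part of the manifest). With the adapter in place, apply the manifest:

kubectl apply -f hpa-coredns.yaml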

6、Create the CoreDNS service

kubectl create -f coredns.yaml
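
To watch the rollout complete before moving on:

kubectl rollout status deployment/coredns -n kube-system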

7、Verify CoreDNS functionality

# kubectl get all -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/coredns-6cbf85dbc6-hcqd7                   1/1     Running   1          47d
pod/coredns-6cbf85dbc6-rdcww                   1/1     Running   0          2d18h
pod/coredns-6cbf85dbc6-wcqmd                   1/1     Running   0          32d
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns               ClusterIP   10.64.0.2       <none>        53/UDP,53/TCP,9153/TCP   47d
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                    3/3     3            3           47d
NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-6cbf85dbc6                   3         3         3       47d
NAME                                          REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/coredns   Deployment/coredns   11%/80%   2         10        3          53d
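
If any CoreDNS pod is not Ready, its logs usually show the cause (for example an unreachable upstream resolver):

kubectl logs -n kube-system -l k8s-app=kube-dns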

Create a new test Deployment:
kubectl run myip --image=cloudnativelabs/whats-my-ip --replicas=3 --port=8080
Expose the Deployment to create the myip Service:
kubectl expose deployment myip --port=8080 --target-port=8080 --type=NodePort
kubectl get pod -o wide
kubectl get services --all-namespaces
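DNS can also be checked from inside the cluster before touching the node resolver; a throwaway-pod sketch (the busybox:1.28 image tag is an assumption):

kubectl run -it --rm busybox --image=busybox:1.28 --restart=Never -- nslookup myip.default.svc.niuke.local
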
Note: k8s-vip will mainly serve monitoring and other internal access later on; external business traffic is exposed mainly through DNS + Nginx.
Edit /etc/resolv.conf on k8s-vip:
options ndots:5
; generated by /usr/sbin/dhclient-script
nameserver 10.64.0.2
search kube-system.svc.niuke.local svc.niuke.local niuke.local

Install the dig utility:
yum install -y bind-utils
Test whether the CoreDNS service responds correctly:

# dig @10.64.0.2 www.baid.com

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> @10.64.0.2 www.baid.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9443
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.baid.com.                  IN      A

;; ANSWER SECTION:
www.baid.com.           30      IN      A       47.254.33.193

;; Query time: 38 msec
;; SERVER: 10.64.0.2#53(10.64.0.2)
;; WHEN: Tue Jun 11 10:04:17 CST 2019
;; MSG SIZE  rcvd: 69
Check that the myip Service created above resolves:
# dig @10.64.0.2 myip.default.svc.niuke.local

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> @10.64.0.2 myip.default.svc.niuke.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62331
;; flags: qr rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myip.default.svc.niuke.local.  IN      A

;; ANSWER SECTION:
myip.default.svc.niuke.local. 1 IN      A       10.64.160.236

;; Query time: 1 msec
;; SERVER: 10.64.0.2#53(10.64.0.2)
;; WHEN: Tue Jun 11 10:06:13 CST 2019
;; MSG SIZE  rcvd: 101

# curl myip.default.svc.niuke.local:8080
HOSTNAME:myip-7ddc5b85f4-6h47f IP:10.65.1.250
# curl myip.default.svc.niuke.local:8080
HOSTNAME:myip-7ddc5b85f4-69jlx IP:10.65.4.243
# curl myip.default.svc.niuke.local:8080
HOSTNAME:myip-7ddc5b85f4-9wxd4 IP:10.65.3.149
Name resolution works and the service is reachable; successive requests land on different pods.
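
The test resources can be removed once verification is done:

kubectl delete service myip
kubectl delete deployment myip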

Next: Kubernetes production installation and deployment — kubernetes-dashboard deployment based on Kubernetes v1.14.0

Original article: https://blog.51cto.com/juestnow/2407119
