Kubernetes 1.8 High-Availability Installation (Part 6)

6. Installing kube-dns

Download kube-dns.yaml

# Fetch the template file
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.sed
mv kube-dns.yaml.sed kube-dns.yaml

# Substitute the placeholders: the cluster DNS service IP and the cluster domain
sed -i 's/$DNS_SERVER_IP/10.96.0.12/g' kube-dns.yaml
sed -i 's/$DNS_DOMAIN/cluster.local/g' kube-dns.yaml
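
Note that the IP substituted for $DNS_SERVER_IP must lie inside the cluster's service CIDR and must match the --cluster-dns value each kubelet was started with; otherwise pods are handed a resolver address that nothing answers on. As a sketch, assuming the kubeadm-style systemd drop-in used for the kubelet earlier in this series (the file path is an assumption):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt; path assumed)
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.12 --cluster-domain=cluster.local"

If this needs changing, reload and restart the kubelet on every node afterwards: systemctl daemon-reload && systemctl restart kubelet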

# Create the kube-dns resources
kubectl create -f kube-dns.yaml
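
A quick sanity check once the objects are created (pod names and counts will vary):

# The Deployment should report ready pods; the Service should hold the clusterIP set above
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc kube-dns -n kube-system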

kube-dns.yaml (the full template, placeholders not yet substituted):

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. So that Addon Manager does not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: foxchan/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only set up the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: foxchan/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/$DNS_DOMAIN/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: foxchan/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.$DNS_DOMAIN,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.$DNS_DOMAIN,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
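
With the pods Ready, it is worth confirming resolution end to end. A minimal sketch using a throwaway busybox pod (the pod name dns-test is arbitrary; busybox images newer than 1.28 are known to ship a broken nslookup):

# Resolve the API server's service name from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local

The answer should come back from 10.96.0.12, the clusterIP configured above.

Note also the empty kube-dns ConfigMap in the manifest (mode EnsureExists): it is the hook for customizing resolution. As an illustration only (the corp.local domain and the resolver IPs are made up), stub domains and upstream nameservers can be supplied like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.local": ["10.150.0.1"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]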