k8s Installation Notes

Solutions to common errors

Error 1: IsPrivilegedUser

[ERROR IsPrivilegedUser]: user is not running as root
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

kubeadm init must be run as root, so execute it with sudo:

sudo kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

Error 2: FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

# You can set these values by hand, but they are lost after a reboot; a way to make them persist across reboots is shown below.
sudo bash -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables"
sudo bash -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables"

Error 3: DirAvailable--var-lib-etcd

[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

Go into the /var/lib/etcd directory and run sudo rm * -rf, i.e. delete the leftover files from a previous (failed) cluster setup.
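For example (assuming /var/lib/etcd only holds data from an earlier, abandoned kubeadm run):

sudo rm -rf /var/lib/etcd/*

Alternatively, sudo kubeadm reset tears down the state left behind by a previous kubeadm init/join, including this directory.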

Error 4: ImagePull

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: manifest for k8s.gcr.io/kube-apiserver:v1.15.1 not found

Check whether you specified the wrong version. v1.15.1 was not correct here; the successful init shown below uses v1.15.0.
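One way to avoid this mismatch (my own sketch, not part of the original post) is to check the installed kubeadm version first, pass a matching --kubernetes-version, and pre-pull the images:

kubeadm version -o short            # e.g. v1.15.0, use this value for --kubernetes-version
kubeadm config images list          # show exactly which images kubeadm will need
sudo kubeadm config images pull     # pull them up front so init does not fail halfway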

Startup log analysis

[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
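# None of these warnings is fatal. The kubelet one is fixed exactly as the message says (sudo systemctl enable kubelet.service). For the cgroup-driver warning, a common fix (my addition, not from the original post; it assumes /etc/docker/daemon.json does not already contain other settings) is to switch Docker to the systemd driver and restart it:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker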
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.11]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.11.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.11.11 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.505998 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7y3zbz.r80ie248lqrtof9g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# The following three commands should be run as the current (regular) user right after init: they copy the admin credential file into place, since kubectl depends on the credentials this file provides.
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
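# A quick sanity check once the kubeconfig is in place (my addition, not part of the original output):
kubectl get nodes                   # the master appears, usually NotReady until a pod network is installed
kubectl get pods -n kube-system     # core components plus CoreDNS (Pending until the network add-on is applied)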

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
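# The --pod-network-cidr=10.244.0.0/16 used above is flannel's default, so flannel is a natural choice here. One way to install it (the manifest URL below is the one commonly used at the time of writing and may have moved since) is:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml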

Then you can join any number of worker nodes by running the following on each as root:
# The line below is the join command: run it as root on each worker node to add that node to the cluster.
kubeadm join 192.168.11.11:6443 --token 7y3zbz.r80ie248lqrtof9g     --discovery-token-ca-cert-hash sha256:a99e0c66d5ff741421dd2b8499f663b0145ab0f7c5007c723efb4ba1991589f0
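# Bootstrap tokens expire after 24 hours by default. If the token above is no longer valid, a fresh join command can be printed on the master with (my note, not in the original output):
kubeadm token create --print-join-command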

  

