[k8s] kubeadm on the free Kubernetes lab platform labs.play-with-k8s.com

A hands-on Kubernetes lab

Features of labs.play-with-k8s.com

  • You can log in with a GitHub or Docker Hub account
  • Each session runs on a countdown: you get 4 hours of hands-on time
  • Clusters are built with kubeadm (the prompts have you install the Weave network add-on)
  • It provides 5 CentOS 7 (7.4.1708) instances with a 4.x kernel (4.4.0-101-generic) and Docker 17 (17.09.0-ce)
  • Each instance gets 4 CPU cores and 32 GB of RAM (a quick way to verify this is shown right after this list)
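
If you want to double-check those specs on a node yourself, a few standard commands are enough (a minimal sketch; the exact versions depend on the session you get):

cat /etc/redhat-release                        # CentOS release, e.g. 7.4.1708
uname -r                                       # kernel version
docker version --format '{{.Server.Version}}'  # Docker engine version
nproc && free -g                               # CPU cores and memory in GB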

Building a 5-node kubeadm cluster

Just follow the platform's prompts.

First it has you initialize the master with kubeadm:
 1. Initializes cluster master node:
 kubeadm init --apiserver-advertise-address $(hostname -i)
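
At the end, kubeadm init prints a kubeadm join command containing a token and a CA cert hash; copy that exact line and run it on the other four nodes to join them. The values below are placeholders, and the master IP will differ per session:

# run on node2..node5, using the command printed by your own kubeadm init
kubeadm join --token <token> 192.168.0.13:6443 --discovery-token-ca-cert-hash sha256:<hash>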

Then it has you deploy the Weave network add-on; without it, newly joined nodes stay NotReady:
 2. Initialize cluster networking:
 kubectl apply -n kube-system -f     "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
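
Once the Weave pods are running, the nodes flip from NotReady to Ready; a quick check (the name=weave-net label below matches common Weave manifests and may differ by version):

kubectl get nodes
kubectl -n kube-system get pods -l name=weave-net -o wide   # one weave-net pod per node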

Finally it suggests deploying an nginx to try things out:
 3. (Optional) Create an nginx deployment:
 kubectl apply -f https://k8s.io/docs/user-guide//nginx-app.yaml
This is roughly how the resources end up laid out across the nodes; you can inspect it yourself as shown below.
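
To see the layout on your own session, the usual kubectl views are enough (node1 here is simply the master's hostname in this lab; a sketch):

kubectl get nodes -o wide
kubectl -n kube-system get pods -o wide                        # which control-plane/addon pod runs where
kubectl describe node node1 | grep -A6 'Allocated resources'   # per-node requests and limits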

Startup flags of the running components

kube-apiserver
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-allowed-names=front-proxy-client
--advertise-address=192.168.0.13
--requestheader-username-headers=X-Remote-User
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
--client-ca-file=/etc/kubernetes/pki/ca.crt
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
--secure-port=6443
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--insecure-port=0
--allow-privileged=true
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--requestheader-group-headers=X-Remote-Group
--service-cluster-ip-range=10.96.0.0/12
--enable-bootstrap-token-auth=true
--authorization-mode=Node,RBAC
--etcd-servers=http://127.0.0.1:2379
--token-auth-file=/etc/pki/tokens.csv
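
If you want to confirm these flags on your own session, the running process or the static pod manifest kubeadm generated shows the same thing (a quick sketch):

ps -ef | grep [k]ube-apiserver
cat /etc/kubernetes/manifests/kube-apiserver.yaml   # the generated manifest (dumped in full below)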

The etcd static pod

[node1 kubernetes]$ cat manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=http://127.0.0.1:2379
    - --data-dir=/var/lib/etcd
    - --listen-client-urls=http://127.0.0.1:2379
    image: gcr.io/google_containers/etcd-amd64:3.0.17
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd
  hostNetwork: true
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd
status: {}
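
The liveness probe above is just an HTTP GET against etcd's health endpoint; since this etcd listens on plain HTTP on 127.0.0.1, you can hit it by hand from the master (a sketch):

curl -s http://127.0.0.1:2379/health    # expect {"health": "true"}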

YAML manifests for the three master components

[node1 kubernetes]$ cat manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-allowed-names=front-proxy-client
    - --advertise-address=192.168.0.13
    - --requestheader-username-headers=X-Remote-User
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --secure-port=6443
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --insecure-port=0
    - --allow-privileged=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-group-headers=X-Remote-Group
    - --service-cluster-ip-range=10.96.0.0/12
    - --enable-bootstrap-token-auth=true
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    - --token-auth-file=/etc/pki/tokens.csv
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.8.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  dnsPolicy: ClusterFirst
  hostNetwork: true
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: ca-certs-etc-pki
status: {}

[node1 kubernetes]$ cat manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --address=127.0.0.1
    - --leader-elect=true
    - --use-service-account-credentials=true
    - --controllers=*,bootstrapsigner,tokencleaner
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.8.6
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: ca-certs-etc-pki
status: {}

[node1 kubernetes]$ cat manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --leader-elect=true
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    image: gcr.io/google_containers/kube-scheduler-amd64:v1.8.6
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
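
The kubelet watches the directory given by --pod-manifest-path (/etc/kubernetes/manifests, see the kubelet flags further down) and runs every file in it as a static pod; in the API they appear as mirror pods suffixed with the node name. A quick way to see them (sketch):

kubectl -n kube-system get pods -o wide | grep node1
# typically: etcd-node1, kube-apiserver-node1, kube-controller-manager-node1, kube-scheduler-node1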
Besides the manifests, kubeadm also drops a set of kubeconfig files into /etc/kubernetes. The kubelet's one looks like this (certificate and key data shown as xxx):

[node1 kubernetes]$ cat kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://192.168.0.13:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:node1
  name: system:node:node1@kubernetes
current-context: system:node:node1@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:node1
  user:
    client-certificate-data: xxx
    client-key-data: xxx
[node1 kubernetes]$ ls
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
[node1 kubernetes]$ cat controller-manager.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://192.168.0.13:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: xxx
    client-key-data: xxx
[node1 kubernetes]$
[node1 kubernetes]$ cat admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://192.168.0.13:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: xxx
    client-key-data: xxx
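
admin.conf is the cluster-admin kubeconfig. The usual way to use it, which is also what kubeadm's own post-init instructions suggest, is to export it or copy it into ~/.kube/config:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
# or, persistently:
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config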

kubelet flags

/usr/bin/kubelet
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf
--pod-manifest-path=/etc/kubernetes/manifests
--allow-privileged=true
--network-plugin=cni
--cni-conf-dir=/etc/cni/net.d
--cni-bin-dir=/opt/cni/bin
--cluster-dns=10.96.0.10
--cluster-domain=cluster.local
--authorization-mode=Webhook
--client-ca-file=/etc/kubernetes/pki/ca.crt
--cadvisor-port=0
--cgroup-driver=cgroupfs
--fail-swap-on=false
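
On a kubeadm-provisioned host these kubelet flags normally come from the kubelet systemd unit plus a kubeadm drop-in file (the exact drop-in name varies by version); systemd itself can confirm what the running kubelet was started with:

systemctl cat kubelet      # unit file plus drop-ins containing the flags above
systemctl status kubelet   # flags of the currently running process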

kube-proxy

/usr/local/bin/kube-proxy
--kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
--cluster-cidr=172.17.0.1/16
--masquerade-all
--conntrack-max=0
--conntrack-max-per-core=0
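
Under kubeadm, kube-proxy runs as a DaemonSet in kube-system, and the kubeconfig.conf it points at is mounted from a ConfigMap (the names below follow kubeadm defaults and may differ):

kubectl -n kube-system get ds kube-proxy
kubectl -n kube-system get cm kube-proxy -o yaml   # contains the kubeconfig.conf mounted at /var/lib/kube-proxy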

kube-controller-manager

kube-controller-manager
--root-ca-file=/etc/kubernetes/pki/ca.crt
--kubeconfig=/etc/kubernetes/controller-manager.conf
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--address=127.0.0.1
--leader-elect=true
--use-service-account-credentials=true
--controllers=*,bootstrapsigner,tokencleaner

kube-scheduler

kube-scheduler
--address=127.0.0.1
--leader-elect=true
--kubeconfig=/etc/kubernetes/scheduler.conf
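
Both the kube-controller-manager and the kube-scheduler bind to 127.0.0.1 and expose an insecure healthz port in this version (10252 and 10251 respectively), matching the liveness probes in the manifests above, so you can poke them directly from the master:

curl -s http://127.0.0.1:10252/healthz   # kube-controller-manager, should print: ok
curl -s http://127.0.0.1:10251/healthz   # kube-scheduler, should print: ok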

Original article: https://www.cnblogs.com/iiiiher/p/8203529.html
