When Will China Draw Its Sword: Setting Up k8s

Preface

Lately the situation between China and India has been heating up. As a patriotic young man I feel some anger, yet at times I am also extremely proud. I cannot tell whether China's diplomacy is strong or weak, but shouldn't there at least be a clear stance? What do we actually do? Lodge protests, or at most hold a military exercise. What good is that?

My own guess is that the country has its own plans. It is like a lion facing a mad dog: why bother? With all the back-and-forth between China and India, I cannot tell whether we are quietly showing our strength or squandering our promise.

Is the show of strength the carrier's electromagnetic catapult, the nuclear submarines, or the drones...

Getting Started

I assume everyone knows docker, but has everyone also played with k8s?

I ran into a few problems while setting up a kubernetes cluster. There are plenty of setup guides online to reference, but a k8s cluster only counts as ready once the following network connectivity requirements are met.

The requirements are as follows:

The k8s architecture diagram is as follows (figure not reproduced here):

Versions and machine information (table not reproduced here):

Node Initialization

  • Update CentOS-Base.repo to the Aliyun yum mirror (a cache-refresh sketch follows the commands)
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bk; 
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
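After swapping the repo file, it is usually worth rebuilding the yum cache so the new mirror actually gets used; a minimal sketch:

# Rebuild the yum metadata cache against the new Aliyun mirror
sudo yum clean all
sudo yum makecache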

Set the bridge sysctl parameters (a verification sketch follows the commands)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
sudo sysctl --system
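These sysctl keys only exist once the bridge netfilter module is loaded; a quick hedged check, assuming the module is named br_netfilter as on stock CentOS 7:

# Load the bridge netfilter module if needed, then confirm the values took effect
sudo modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables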
  • Disable SELinux (do not rely on setenforce 0 alone; it does not survive the reboot below)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
  • Turn off the firewall
sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service
  • Turn off iptables (this step can be skipped)
sudo yum install -y iptables-services; iptables -F
sudo systemctl disable iptables.service
sudo systemctl stop iptables.service
  • Install related packages (an iperf usage sketch follows)
sudo yum install -y vim wget curl screen git etcd ebtables flannel
sudo yum install -y socat net-tools.x86_64 iperf bridge-utils.x86_64
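iperf is installed above precisely for the node-to-node connectivity checks the requirements call for; a minimal throughput test between two nodes (the IP is this guide's example etcd node):

# On one node, start an iperf server
iperf -s
# On another node, measure throughput to it
iperf -c 10.12.0.22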
  • Install docker (the default version is currently 1.12)
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum install -y libdevmapper* docker
  • Install kubernetes

  • For easy copy-paste:

## Set kubernetes.repo to the Aliyun mirror (suitable inside China)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

## Or set kubernetes.repo to the upstream source (for networks that can reach Google)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

## Install k8s 1.7.2 (kubernetes-cni is pulled in as a dependency; its version is not pinned here)
export K8SVERSION=1.7.2
sudo yum install -y "kubectl-${K8SVERSION}-0.x86_64" "kubelet-${K8SVERSION}-0.x86_64" "kubeadm-${K8SVERSION}-0.x86_64"

  • Reboot the machine (this step is required)
reboot


After rebooting, perform the following steps

  • Configure the docker daemon and start docker (a sanity check follows the commands)
cat <<EOF > /etc/sysconfig/docker
OPTIONS="-H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 --storage-driver=overlay --exec-opt native.cgroupdriver=cgroupfs --graph=/localdisk/docker/graph --insecure-registry=gcr.io --insecure-registry=quay.io --insecure-registry=registry.cn-hangzhou.aliyuncs.com --registry-mirror=http://138f94c6.m.daocloud.io"
EOF

systemctl start docker
systemctl status docker -l
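To confirm the daemon actually picked up these options, docker info can be grepped for the storage driver and registry settings; a quick hedged check:

docker info | grep -iE 'storage driver|registry'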
  • Pull the images needed by k8s 1.7.2 (a pull-loop sketch follows the list)
quay.io/calico/node:v1.3.0
quay.io/calico/cni:v1.9.1
quay.io/calico/kube-policy-controller:v0.6.0

gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/kube-proxy-amd64:v1.7.2
gcr.io/google_containers/kube-apiserver-amd64:v1.7.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2
gcr.io/google_containers/kube-scheduler-amd64:v1.7.2
gcr.io/google_containers/etcd-amd64:3.0.17

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
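These registries are often unreachable from inside China; the insecure-registry and registry-mirror daemon options above are meant to help. A minimal loop that pulls the whole list, assuming your network or mirror can reach them:

for IMG in \
  quay.io/calico/node:v1.3.0 \
  quay.io/calico/cni:v1.9.1 \
  quay.io/calico/kube-policy-controller:v0.6.0 \
  gcr.io/google_containers/pause-amd64:3.0 \
  gcr.io/google_containers/kube-proxy-amd64:v1.7.2 \
  gcr.io/google_containers/kube-apiserver-amd64:v1.7.2 \
  gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2 \
  gcr.io/google_containers/kube-scheduler-amd64:v1.7.2 \
  gcr.io/google_containers/etcd-amd64:3.0.17 \
  gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 \
  gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 \
  gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
do
  docker pull "$IMG"
done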
  • Start etcd on the non-master node 10.12.0.22 (this could also be built out as an etcd cluster)
screen etcd -name="EtcdServer" -initial-advertise-peer-urls=http://10.12.0.22:2380 -listen-peer-urls=http://0.0.0.0:2380 -listen-client-urls=http://10.12.0.22:2379 -advertise-client-urls http://10.12.0.22:2379 -data-dir /var/lib/etcd/default.etcd
  • On every node, check that etcd is reachable. It must be reachable; if not, check whether the firewall was really turned off (a write/read sketch follows the commands)
etcdctl --endpoint=http://10.12.0.22:2379 member list
etcdctl --endpoint=http://10.12.0.22:2379 cluster-health
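Beyond member list and cluster-health, a simple write/read round-trip rules out half-open connectivity; a hedged sketch using the etcd v2 API that this etcdctl defaults to (/k8s-conn-test is an arbitrary test key):

etcdctl --endpoint=http://10.12.0.22:2379 set /k8s-conn-test ok
etcdctl --endpoint=http://10.12.0.22:2379 get /k8s-conn-test
etcdctl --endpoint=http://10.12.0.22:2379 rm /k8s-conn-test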
  • Bootstrap with kubeadm on the k8s master node.
    The pod IP range is set to 10.68.0.0/16; the cluster IP range keeps the default 10.96.0.0/16.
  • Run the following commands on the master node
cat <<EOF > kubeadm_config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.12.0.18
  bindPort: 6443
etcd:
  endpoints:
  - http://10.12.0.22:2379
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.68.0.0/16
kubernetesVersion: v1.7.2
#token: <string>
#tokenTTL: 0
EOF

kubeadm init --config kubeadm_config.yaml
  • A few dozen seconds after kubeadm init finishes, the api-server, scheduler, and controller-manager containers will all be up on the master. Check the master with the following commands.
    Run the following commands on the master node
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get cs -o wide --show-labels
kubectl get nodes -o wide --show-labels
  • Join the worker nodes. This needs the token printed by kubeadm init. Run the following on each node
systemctl start docker
systemctl start kubelet
kubeadm join --token *{6}.*{16} 10.12.0.18:6443 --skip-preflight-checks
  • Watch the nodes join from the master. Because the pod network has not been created yet, all master and worker nodes show NotReady, and kube-dns stays Pending
kubectl get nodes -o wide
watch kubectl get all --all-namespaces -o wide
  • calico.yaml was modified as follows:
    the etcd creation section was deleted (the external etcd above is used instead),
    and CALICO_IPV4POOL_CIDR was changed to 10.68.0.0/16.
    The resulting calico.yaml:

# Calico Version v2.3.0
# http://docs.projectcalico.org/v2.3/releases#v2.3.0
# This manifest includes the following component versions:
#   calico/node:v1.3.0
#   calico/cni:v1.9.1
#   calico/kube-policy-controller:v0.6.0

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster.
  etcd_endpoints: "http://10.12.0.22:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "cniVersion": "0.1.0",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
        }
    }

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # Mark this pod as a critical add-on; when enabled, the critical add-on
        # scheduler reserves resources for critical add-on pods so that they can
        # be rescheduled after a failure.  This annotation works in tandem with
        # the toleration below.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
      # This, along with the annotation above marks this pod as a critical add-on.
      - key: CriticalAddonsOnly
        operator: Exists
      serviceAccountName: calico-cni-plugin
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each host.
        - name: calico-node
          image: quay.io/calico/node:v1.3.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Enable BGP.  Disable to enforce policy only.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.68.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info".
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.9.1
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy-controller
      annotations:
        # Mark this pod as a critical add-on; when enabled, the critical add-on
        # scheduler reserves resources for critical add-on pods so that they can
        # be rescheduled after a failure.  This annotation works in tandem with
        # the toleration below.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
      # This, along with the annotation above marks this pod as a critical add-on.
      - key: CriticalAddonsOnly
        operator: Exists
      serviceAccountName: calico-policy-controller
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.6.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The location of the Kubernetes API.  Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-cni-plugin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-cni-plugin
subjects:
- kind: ServiceAccount
  name: calico-cni-plugin
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-cni-plugin
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-cni-plugin
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-policy-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-policy-controller
subjects:
- kind: ServiceAccount
  name: calico-policy-controller
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
  namespace: kube-system
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
    verbs:
      - watch
      - list

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-policy-controller
  namespace: kube-system

  • Create the calico cross-host network. Run the following command on the master node
kubectl apply -f calico.yaml
  • Watch for a calico-node-**** pod to come up on each node; calico-policy-controller and kube-dns will come up as well. All of these pods live in the kube-system namespace
kubectl get all --all-namespaces

NAMESPACE     NAME                                                 READY     STATUS    RESTARTS   AGE
kube-system   po/calico-node-2gqf2                                 2/2       Running   0          19h
kube-system   po/calico-node-fg8gh                                 2/2       Running   0          19h
kube-system   po/calico-node-ksmrn                                 2/2       Running   0          19h
kube-system   po/calico-policy-controller-1727037546-zp4lp         1/1       Running   0          19h
kube-system   po/etcd-izuf6fb3vrfqnwbct6ivgwz                      1/1       Running   0          19h
kube-system   po/kube-apiserver-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h
kube-system   po/kube-controller-manager-izuf6fb3vrfqnwbct6ivgwz   1/1       Running   0          19h
kube-system   po/kube-dns-2425271678-3t4g6                         3/3       Running   0          19h
kube-system   po/kube-proxy-6fg1l                                  1/1       Running   0          19h
kube-system   po/kube-proxy-fdbt2                                  1/1       Running   0          19h
kube-system   po/kube-proxy-lgf3z                                  1/1       Running   0          19h
kube-system   po/kube-scheduler-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h

NAMESPACE     NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       svc/kubernetes             10.96.0.1       <none>        443/TCP         19h
kube-system   svc/kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   19h

NAMESPACE     NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/calico-policy-controller   1         1         1            1           19h
kube-system   deploy/kube-dns                   1         1         1            1           19h

NAMESPACE     NAME                                     DESIRED   CURRENT   READY     AGE
kube-system   rs/calico-policy-controller-1727037546   1         1         1         19h
kube-system   rs/kube-dns-2425271678                   1         1         1         19h
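With everything Running, it is worth confirming the cross-host pod-to-pod connectivity the requirements at the top call for. A minimal sketch (pingtest is an arbitrary name, and the busybox image must be pullable):

kubectl run pingtest --image=busybox --replicas=2 --command -- sleep 3600
kubectl get pods -o wide -l run=pingtest    # note each pod's IP and node
# from one pod, ping the other pod's IP (fill in the real name and IP)
kubectl exec -it <pingtest-pod-name> -- ping -c 3 <other-pod-ip>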
  • Deploy the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
  • Deploy heapster (a verification sketch follows)
wget https://github.com/kubernetes/heapster/archive/v1.4.0.tar.gz
tar -zxvf v1.4.0.tar.gz
cd heapster-1.4.0/deploy/kube-config/influxdb
kubectl create -f ./
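The influxdb variant of the heapster manifests should bring up heapster together with influxdb and grafana; a hedged check:

kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'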

Other Commands

  • Force-delete a pod
kubectl delete pod <podname> --namespace=<namespace> --grace-period=0 --force
  • Reset a node
kubeadm reset 
systemctl stop kubelet;
docker ps -aq | xargs docker rm -fv
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -rf /var/lib/kubelet /etc/kubernetes/ /var/lib/etcd 
systemctl start kubelet;
  • Access the dashboard (run on the master node)
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^.*'
or
kubectl proxy --port=8011 --address=192.168.61.100 --accept-hosts='^192\.168\.61\.*'

then open http://0.0.0.0:8001/ui
  • Access the API with an authentication token
APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
  • Let the master node take part in scheduling (by default the master is excluded from workload scheduling; a re-taint sketch follows)
kubectl taint nodes --all node-role.kubernetes.io/master-
or
kubectl taint nodes --all dedicated-
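To restore the default isolation later, the taint can be put back; a sketch, with the node name replaced by your master's:

kubectl taint nodes izuf6fb3vrfqnwbct6ivgwz node-role.kubernetes.io/master=:NoSchedule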
  • kubernetes master labels and annotations before removing the taint
Name:           izuf6fb3vrfqnwbct6ivgwz
Role:
Labels:         beta.kubernetes.io/arch=amd64
                beta.kubernetes.io/os=linux
                kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
                node-role.kubernetes.io/master=
Annotations:    node.alpha.kubernetes.io/ttl=0
                volumes.kubernetes.io/controller-managed-attach-detach=true
  • kubernetes master labels and annotations after removing the taint
Name:           izuf6fb3vrfqnwbct6ivgwz
Role:
Labels:         beta.kubernetes.io/arch=amd64
                beta.kubernetes.io/os=linux
                kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
                node-role.kubernetes.io/master=
Annotations:    node.alpha.kubernetes.io/ttl=0
                volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:         <none>

Summary: the setup has been tested end to end, but there is still one mistake in it. Can readers who have gone through the document guess what it is?
