Installing Helm 3.1 and Deploying Harbor on Ceph RBD

[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 3884.944 msec
            Long heartbeat ping times on front interface seen, longest is 3888.368 msec
            application not enabled on 1 pool(s)
            clock skew detected on mon.bs-hk-hk02, mon.bs-k8s-ceph

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416  objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean
[root@bs-k8s-ceph ~]# ceph osd pool application enable harbor rbd
enabled application 'rbd' on pool 'harbor'
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 3870.142 msec
            Long heartbeat ping times on front interface seen, longest is 3873.410 msec
            clock skew detected on mon.bs-hk-hk02

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416  objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean

# systemctl restart ceph.target     // give the mon clocks a moment to resync
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416  objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean
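Restarting the daemons cleared the clock-skew warning above, but a more durable fix is to point all three mon hosts at the same time source. A minimal chrony sketch (the NTP server shown is an assumption, not taken from this post):

```ini
# /etc/chrony.conf on each mon host (bs-hk-hk01, bs-hk-hk02, bs-k8s-ceph) --
# use one common source so mon clocks stay within the default
# 0.05 s mon_clock_drift_allowed window
server ntp.aliyun.com iburst   # placeholder; use your own internal NTP server
makestep 1.0 3                 # step the clock on large offsets during the first 3 updates
rtcsync                        # keep the hardware clock in sync
```

Then `systemctl enable --now chronyd` on each node.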
[root@bs-k8s-master01 ~]# kubectl get nodes
The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
[root@bs-hk-hk01 ~]# systemctl start haproxy
[root@bs-k8s-master01 k8s]# kubectl get nodes
NAME              STATUS     ROLES    AGE     VERSION
bs-k8s-master01   Ready      master   7d10h   v1.17.2
bs-k8s-master02   Ready      master   7d10h   v1.17.2
bs-k8s-master03   Ready      master   7d10h   v1.17.2
bs-k8s-node01     Ready      <none>   7d10h   v1.17.2
bs-k8s-node02     Ready      <none>   7d10h   v1.17.2
bs-k8s-node03     NotReady   <none>   7d9h    v1.17.2    // shut down deliberately to save CPU
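The refused connection to 20.0.0.250:8443 above was simply haproxy being stopped on the load-balancer node. For reference, the apiserver frontend on bs-hk-hk01 presumably looks something like this (the master IPs are placeholders, not taken from the post):

```
# /etc/haproxy/haproxy.cfg (sketch) -- the 20.0.0.250:8443 endpoint
# the kubeconfig points at, proxying to the three masters
frontend k8s-apiserver
    bind 20.0.0.250:8443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server bs-k8s-master01 <master01-ip>:6443 check
    server bs-k8s-master02 <master02-ip>:6443 check
    server bs-k8s-master03 <master03-ip>:6443 check
```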
https://github.com/helm/helm/releases
[root@bs-k8s-master01 helm3]# pwd
/data/k8s/helm3
[root@bs-k8s-master01 helm3]# ll
total 11980
-rw-r--r-- 1 root root 12267464 Feb 17 2020 helm-v3.1.0-linux-amd64.tar.gz
[root@bs-k8s-master01 helm3]# tar xf helm-v3.1.0-linux-amd64.tar.gz
[root@bs-k8s-master01 helm3]# cp linux-amd64/helm /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
[root@bs-k8s-master01 helm3]# helm --help
The Kubernetes package manager

Common actions for Helm:

- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment variables:

+------------------+-----------------------------------------------------------------------------+
| Name             | Description                                                                 |
+------------------+-----------------------------------------------------------------------------+
| $XDG_CACHE_HOME  | set an alternative location for storing cached files.                       |
| $XDG_CONFIG_HOME | set an alternative location for storing Helm configuration.                 |
| $XDG_DATA_HOME   | set an alternative location for storing Helm data.                          |
| $HELM_DRIVER     | set the backend storage driver. Values are: configmap, secret, memory       |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.                  |
| $KUBECONFIG      | set an alternative Kubernetes configuration file (default "~/.kube/config") |
+------------------+-----------------------------------------------------------------------------+

Helm stores configuration based on the XDG base directory specification, so

- cached files are stored in $XDG_CACHE_HOME/helm
- configuration is stored in $XDG_CONFIG_HOME/helm
- data is stored in $XDG_DATA_HOME/helm

By default, the default directories depend on the Operating System. The defaults are listed below:

+------------------+---------------------------+--------------------------------+-------------------------+
| Operating System | Cache Path                | Configuration Path             | Data Path               |
+------------------+---------------------------+--------------------------------+-------------------------+
| Linux            | $HOME/.cache/helm         | $HOME/.config/helm             | $HOME/.local/share/helm |
| macOS            | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm      |
| Windows          | %TEMP%\helm               | %APPDATA%\helm                 | %APPDATA%\helm          |
+------------------+---------------------------+--------------------------------+-------------------------+

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  dependency  manage a chart's dependencies
  env         Helm client environment information
  get         download extended information of a named release
  help        Help about any command
  history     fetch release history
  install     install a chart
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      install, list, or uninstall Helm plugins
  pull        download a chart from a repository and (optionally) unpack it in local directory
  repo        add, list, remove, update, and index chart repositories
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  show        show information of a chart
  status      displays the status of the named release
  template    locally render templates
  test        run tests for a release
  uninstall   uninstall a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client version information

Flags:
      --add-dir-header                   If true, adds the file directory to the header
      --alsologtostderr                  log to standard error as well as files
      --debug                            enable verbose output
  -h, --help                             help for helm
      --kube-context string              name of the kubeconfig context to use
      --kubeconfig string                path to the kubeconfig file
      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log-dir string                   If non-empty, write log files in this directory
      --log-file string                  If non-empty, use this log file
      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
  -n, --namespace string                 namespace scope for this request
      --registry-config string           path to the registry config file (default "/root/.config/helm/registry.json")
      --repository-cache string          path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
      --repository-config string         path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
      --skip-headers                     If true, avoid header prefixes in the log messages
      --skip-log-headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

Use "helm [command] --help" for more information about a command.
[root@bs-k8s-master01 helm3]# source <(helm completion bash)
[root@bs-k8s-master01 helm3]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@bs-k8s-master01 rbd]# helm repo add aliyun  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"stable" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add google  https://kubernetes-charts.storage.googleapis.com
"google" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME        URL
aliyun      https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
stable      https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
google      https://kubernetes-charts.storage.googleapis.com
jetstack    https://charts.jetstack.io 

[root@bs-k8s-master01 helm3]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6794  100  6794    0     0    434      0  0:00:15  0:00:15 --:--:--   761

[root@bs-k8s-master01 helm3]# chmod +x get_helm.sh
[root@bs-k8s-master01 helm3]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.1.0-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}

[root@bs-k8s-master01 helm3]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aliyun" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@bs-k8s-master01 helm3]# helm search repo nginx
NAME                           CHART VERSION    APP VERSION    DESCRIPTION
aliyun/nginx-ingress           0.9.5            0.10.2         An nginx Ingress controller that uses ConfigMap...
aliyun/nginx-lego              0.3.1                           Chart for nginx-ingress-controller and kube-lego
google/nginx-ingress           1.30.3           0.28.0         An nginx Ingress controller that uses ConfigMap...
google/nginx-ldapauth-proxy    0.1.3            1.13.5         nginx proxy with ldapauth
google/nginx-lego              0.3.1                           Chart for nginx-ingress-controller and kube-lego
stable/nginx-ingress           0.9.5            0.10.2         An nginx Ingress controller that uses ConfigMap...
stable/nginx-lego              0.3.1                           Chart for nginx-ingress-controller and kube-lego
aliyun/gcloud-endpoints        0.1.0                           Develop, deploy, protect and monitor your APIs ...
google/gcloud-endpoints        0.1.2            1              DEPRECATED Develop, deploy, protect and monitor...
stable/gcloud-endpoints        0.1.0                           Develop, deploy, protect and monitor your APIs ...
[root@bs-k8s-master01 helm3]# helm repo remove stable
"stable" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove google
"google" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove jetstack
"jetstack" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME      URL
aliyun    https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@bs-k8s-master01 helm3]# helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
[root@bs-k8s-master01 harbor]# pwd
/data/k8s/harbor
[root@bs-k8s-master01 harbor]# ll
total 48
-rw-r--r-- 1 root root   701 Feb 16 19:26 ceph-harbor-pvc.yaml
-rw-r--r-- 1 root root   863 Feb 16 19:18 ceph-harbor-secret.yaml
-rw-r--r-- 1 root root   994 Feb 16 19:21 ceph-harbor-storageclass.yaml
-rw-r--r-- 1 root root 35504 Feb 17 13:07 harbor-1.3.0.tgz
drwxr-xr-x 2 root root   134 Feb 16 19:13 rbd
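The three ceph-harbor-*.yaml files listed above are not reproduced in the post. Given the external rbd-provisioner pod seen running later, the StorageClass plausibly looks like the sketch below; the monitor address and the secret names are guesses, only the class name "ceph-harbor" and pool name "harbor" come from the post:

```yaml
# ceph-harbor-storageclass.yaml (sketch, not the author's actual file)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-harbor
provisioner: ceph.com/rbd          # matches the rbd-provisioner deployment seen later
parameters:
  monitors: <mon-ip>:6789          # placeholder monitor address
  pool: harbor                     # the pool enabled for rbd earlier
  adminId: admin
  adminSecretName: ceph-harbor-admin-secret      # hypothetical secret names
  adminSecretNamespace: kube-system
  userId: harbor
  userSecretName: ceph-harbor-user-secret
  imageFormat: "2"
  imageFeatures: layering
```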
[root@bs-k8s-master01 harbor]# tar xf harbor-1.3.0.tgz
[root@bs-k8s-master01 harbor]# cd harbor/
[root@bs-k8s-master01 harbor]# ls
cert  Chart.yaml  conf  LICENSE  README.md  templates  values.yaml
[root@bs-k8s-master01 harbor]# cp values.yaml{,.bak}
[root@bs-k8s-master01 harbor]# diff values.yaml{,.bak}
26c26
<     commonName: "zisefeizhu.harbor.org"
---
>     commonName: ""
29c29
<       core: zisefeizhu.harbor.org
---
>       core: core.harbor.domain
101c101
< externalURL: https://zisefeizhu.harbor.org
---
> externalURL: https://core.harbor.domain
123c123
<       storageClass: "ceph-harbor"
---
>       storageClass: ""
129c129
<       storageClass: "ceph-harbor"
---
>       storageClass: ""
135c135
<       storageClass: "ceph-harbor"
---
>       storageClass: ""
143c143
<       storageClass: "ceph-harbor"
---
>       storageClass: ""
151c151
<       storageClass: "ceph-harbor"
---
>       storageClass: ""
253c253
< harborAdminPassword: "zisefeizhu"
---
> harborAdminPassword: "Harbor12345"
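The values.yaml changes shown in the diff above can also be scripted with sed. A self-contained sketch, run here against a minimal stand-in file rather than the real chart (the real values.yaml carries the same keys):

```shell
# Demonstrate the five edits from the diff: custom domain, storage class, admin password.
workdir=$(mktemp -d)
cat > "$workdir/values.yaml" <<'EOF'
    commonName: ""
      core: core.harbor.domain
externalURL: https://core.harbor.domain
      storageClass: ""
harborAdminPassword: "Harbor12345"
EOF
sed -i \
  -e 's#commonName: ""#commonName: "zisefeizhu.harbor.org"#' \
  -e 's#core: core.harbor.domain#core: zisefeizhu.harbor.org#' \
  -e 's#externalURL: https://core.harbor.domain#externalURL: https://zisefeizhu.harbor.org#' \
  -e 's#storageClass: ""#storageClass: "ceph-harbor"#' \
  -e 's#harborAdminPassword: "Harbor12345"#harborAdminPassword: "zisefeizhu"#' \
  "$workdir/values.yaml"
grep -c 'zisefeizhu' "$workdir/values.yaml"   # 4 of the 5 lines now carry the custom values
```

In the real chart, point the same sed at harbor/values.yaml (there are five storageClass occurrences there, one per persistent component, all matched by the same pattern).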
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress
[root@bs-k8s-master01 k8s]# cd nginx-ingress/
[root@bs-k8s-master01 nginx-ingress]# helm pull aliyun/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# tar xf nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy
nginx-ingress/templates/controller-deployment.yaml
nginx-ingress/templates/default-backend-deployment.yaml
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy | xargs sed -i 's#extensions/v1beta1#apps/v1#g'
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
With the Kubernetes 1.16 API changes, Deployment.spec requires a selector field, so simply add one to each of the two deployment templates.
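Concretely, each apps/v1 Deployment needs spec.selector matching the pod template labels. A sketch of the shape to add (the label keys shown are illustrative, not copied from the chart, which must reuse its own template labels):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
```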

[root@bs-k8s-master01 nginx]# helm install nginx-ingress nginx-ingress
NAME: nginx-ingress
LAST DEPLOYED: Mon Feb 17 14:12:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
Get the application URL by running these commands:
  export HTTP_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-controller)
  export HTTPS_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-controller)
  export NODE_IP=$(kubectl --namespace default get nodes -o jsonpath="{.items[0].status.addresses[1].address}")

  echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
  echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
[root@bs-k8s-master01 nginx]# kubectl get pods
NAME                                             READY   STATUS             RESTARTS   AGE
nginx-ingress-controller-8fbb5974-l7dsx          1/1     Running            0          115s
nginx-ingress-default-backend-744fdc79c4-xcvqp   1/1     Running            0          115s
[root@bs-k8s-master01 nginx]# pwd
/data/k8s/nginx
[root@bs-k8s-master01 nginx]# ll
total 12
drwxr-xr-x 3 root root   119 Feb 17 13:32 nginx-ingress
-rw-r--r-- 1 root root 10830 Feb 17 13:25 nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 harbor]# helm install harbor -n harbor harbor
NAME: harbor
LAST DEPLOYED: Mon Feb 17 14:16:05 2020
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://zisefeizhu.harbor.org.
For more details, please visit https://github.com/goharbor/harbor.
[root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-harbor-redis-0               Bound    pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43   1Gi        RWO            ceph-harbor    66s
database-data-harbor-harbor-database-0   Bound    pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98   1Gi        RWO            ceph-harbor    66s
harbor-harbor-chartmuseum                Bound    pvc-1ec866fa-413a-463d-bb04-a0376577ae69   5Gi        RWO            ceph-harbor    6m38s
harbor-harbor-jobservice                 Bound    pvc-03dd5393-fad1-471b-8384-b0a5f5403d90   1Gi        RWO            ceph-harbor    6m38s
harbor-harbor-registry                   Bound    pvc-b7268d13-e92a-4ab3-846a-26d14672e56c   5Gi        RWO            ceph-harbor    6m38s
[root@bs-k8s-master01 harbor]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS   REASON   AGE
pvc-03dd5393-fad1-471b-8384-b0a5f5403d90   1Gi        RWO            Retain           Bound    harbor/harbor-harbor-jobservice                 ceph-harbor             <invalid>
pvc-1ec866fa-413a-463d-bb04-a0376577ae69   5Gi        RWO            Retain           Bound    harbor/harbor-harbor-chartmuseum                ceph-harbor             <invalid>
pvc-494a130d-018c-4be3-9b31-e951cc4367a5   20Gi       RWO            Retain           Bound    default/wp-pv-claim                             ceph-rbd                27h
pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43   1Gi        RWO            Retain           Bound    harbor/data-harbor-harbor-redis-0               ceph-harbor             <invalid>
pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16   1Gi        RWO            Retain           Bound    default/ceph-pvc                                ceph-rbd                29h
pvc-ac7d3a09-123e-4614-886c-cded8822a078   20Gi       RWO            Retain           Bound    default/mysql-pv-claim                          ceph-rbd                27h
pvc-b7268d13-e92a-4ab3-846a-26d14672e56c   5Gi        RWO            Retain           Bound    harbor/harbor-harbor-registry                   ceph-harbor             <invalid>
pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98   1Gi        RWO            Retain           Bound    harbor/database-data-harbor-harbor-database-0   ceph-harbor             <invalid>
[root@bs-k8s-master01 harbor]# kubectl get pods -n harbor -o wide
NAME                                          READY   STATUS             RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
harbor-harbor-chartmuseum-dcc6f779f-68tvn     1/1     Running            0          32m    10.209.208.21   bs-k8s-node03   <none>           <none>
harbor-harbor-clair-69789f6695-5zrf8          1/2     CrashLoopBackOff   9          32m    10.209.145.26   bs-k8s-node02   <none>           <none>
harbor-harbor-core-5675f84d5f-ddhj2           0/1     CrashLoopBackOff   8          32m    10.209.145.27   bs-k8s-node02   <none>           <none>
harbor-harbor-database-0                      1/1     Running            1          32m    10.209.46.93    bs-k8s-node01   <none>           <none>
harbor-harbor-jobservice-74f469588d-m6w64     0/1     Running            3          32m    10.209.46.91    bs-k8s-node01   <none>           <none>
harbor-harbor-notary-server-fcbcfdf9c-zgjk8   0/1     CrashLoopBackOff   9          32m    10.209.208.19   bs-k8s-node03   <none>           <none>
harbor-harbor-notary-signer-9789894bd-8p67d   0/1     CrashLoopBackOff   9          32m    10.209.208.20   bs-k8s-node03   <none>           <none>
harbor-harbor-portal-56456988bb-6cb9j         1/1     Running            0          32m    10.209.208.18   bs-k8s-node03   <none>           <none>
harbor-harbor-redis-0                         1/1     Running            0          32m    10.209.46.92    bs-k8s-node01   <none>           <none>
harbor-harbor-registry-6946847b6f-qdgfp       2/2     Running            0          32m    10.209.145.28   bs-k8s-node02   <none>           <none>
rbd-provisioner-75b85f85bd-d4b8d              1/1     Running            0          136m   10.209.145.25   bs-k8s-node02   <none>           <none>

The steps from here on need no further notes. Two caveats:
    1.  Once the PVCs have been created, do not run the install again. Remember this.
    2.  The IP in your local hosts file must be the IP of the node running the nginx-ingress controller.
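The second caveat amounts to a hosts entry like the one below on the workstation that will open the Harbor portal (the IP is a placeholder for whichever node hosts the nginx-ingress-controller pod):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
<controller-node-ip>  zisefeizhu.harbor.org
```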

Original article: https://www.cnblogs.com/zisefeizhu/p/12321825.html
