Ceph setup and configuration (three nodes)

Hostname   IP            Disk                           Role
ceph01     10.10.20.55   /dev/sdb, /dev/sdc, /dev/sdd   mon, mgr, osd
ceph02     10.10.20.66   /dev/sdb, /dev/sdc, /dev/sdd   mon, osd
ceph03     10.10.20.77   /dev/sdb, /dev/sdc, /dev/sdd   mon, osd
(/dev/sdb holds the journal partitions; /dev/sdc and /dev/sdd are the OSD data disks on every node.)
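
ceph-deploy reaches each node by hostname, so all three names should resolve from every node. A minimal /etc/hosts sketch based on the table above (assuming no DNS is available):

10.10.20.55  ceph01
10.10.20.66  ceph02
10.10.20.77  ceph03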

systemctl stop [email protected]
systemctl stop [email protected]
systemctl stop [email protected]

[root@ceph01 ~]# parted /dev/sdb mklabel gpt
Information: You may need to update /etc/fstab.

[root@ceph01 ~]# parted /dev/sdb mkpart primary 1M 50%
Information: You may need to update /etc/fstab.

[root@ceph01 ~]# parted /dev/sdb mkpart primary 50% 100%
Information: You may need to update /etc/fstab.

[root@ceph01 ~]# chown ceph.ceph /dev/sdb1
[root@ceph01 ~]# chown ceph.ceph /dev/sdb2
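
The same partitioning and ownership change are needed on ceph02 and ceph03 as well, since every node uses /dev/sdb1 and /dev/sdb2 as journal devices. Note that the chown does not survive a reboot; a udev rule sketch that restores the ownership at boot (the rule file name is an assumption):

[root@ceph01 ~]# vim /etc/udev/rules.d/70-sdb.rules
# assumed rule file: match the two journal partitions and reset their owner at boot
ENV{DEVNAME}=="/dev/sdb1", OWNER="ceph", GROUP="ceph"
ENV{DEVNAME}=="/dev/sdb2", OWNER="ceph", GROUP="ceph"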

Initialize and wipe the disk data (only needs to be run from ceph01)

[root@ceph01 ceph-cluster]# ceph-deploy disk zap ceph01 /dev/sd{c,d}

[root@ceph01 ceph-cluster]# ceph-deploy disk zap ceph02 /dev/sd{c,d}

[root@ceph01 ceph-cluster]# ceph-deploy disk zap ceph03 /dev/sd{c,d}
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph03 /dev/sdc /dev/sdd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1635a2fbd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph03
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f1635a7a578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdc', '/dev/sdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph03
[ceph03][DEBUG ] connected to host: ceph03
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph03][DEBUG ] zeroing last few blocks of device
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[ceph03][WARNIN] --> Zapping: /dev/sdc
[ceph03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[ceph03][WARNIN]  stderr: 10+0 records in
[ceph03][WARNIN] 10+0 records out
[ceph03][WARNIN]  stderr: 10485760 bytes (10 MB) copied, 0.0125001 s, 839 MB/s
[ceph03][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on ceph03
[ceph03][DEBUG ] connected to host: ceph03
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph03][DEBUG ] zeroing last few blocks of device
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdd
[ceph03][WARNIN] --> Zapping: /dev/sdd
[ceph03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdd bs=1M count=10 conv=fsync
[ceph03][WARNIN]  stderr: 10+0 records in
[ceph03][WARNIN] 10+0 records out
[ceph03][WARNIN]  stderr: 10485760 bytes (10 MB) copied, 0.00957528 s, 1.1 GB/s
[ceph03][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdd>
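
Optionally, before creating the OSDs, the block devices can be listed from ceph01 to confirm the zapped disks are visible on every node:

[root@ceph01 ceph-cluster]# ceph-deploy disk list ceph01 ceph02 ceph03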

Create the OSD storage (run from ceph01 only)
# Create the OSD storage devices: sdc and sdd provide storage space for the cluster, sdb1 and sdb2 provide the JOURNAL cache
# Each storage device is paired with one journal device; the journal should sit on an SSD and does not need to be large

[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph01 --data /dev/sdc --journal /dev/sdb1
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph01 --data /dev/sdd --journal /dev/sdb2
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph02 --data /dev/sdc --journal /dev/sdb1
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph02 --data /dev/sdd --journal /dev/sdb2
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph03 --data /dev/sdc --journal /dev/sdb1
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph03 --data /dev/sdd --journal /dev/sdb2

Verification: the OSD count can be seen to have gone from 0 to 6

[root@ceph01 ceph-cluster]# ceph -s
  cluster:
    id:     fbc66f50-ced8-4ad1-93f7-2453cdbf59ba
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 10m)
    mgr: no daemons active
    osd: 6 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs: 

Error reported: no active mgr
Configure the mgr
Create a mgr named mgr1 on ceph01; it can then be seen from all three nodes
[root@ceph01 ceph-cluster]# ceph-deploy mgr create ceph01:mgr1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph01:mgr1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph01', 'mgr1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2b64aedd40>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f2b65357cf8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph01:mgr1
[ceph01][DEBUG ] connected to host: ceph01
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph01
[ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph01][WARNIN] mgr keyring does not exist yet, creating one
[ceph01][DEBUG ] create a keyring file
[ceph01][DEBUG ] create path recursively if it doesn't exist
[ceph01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.mgr1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-mgr1/keyring
[ceph01][INFO  ] Running command: systemctl enable ceph-mgr@mgr1
[ceph01][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@mgr1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph01][INFO  ] Running command: systemctl start ceph-mgr@mgr1
[ceph01][INFO  ] Running command: systemctl enable ceph.target
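
Once the mgr daemon is running, re-check the cluster from ceph01: the HEALTH_WARN about the missing mgr should clear, mgr1 should show as active, and the six OSDs should now be reported up and in:

[root@ceph01 ceph-cluster]# ceph -s
[root@ceph01 ceph-cluster]# ceph mgr stat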

4.1 Create an image (on ceph01)
List the storage pools
[root@ceph01 ceph-cluster]# ceph osd lspools
Create a pool named pool-zk with 100 placement groups
[root@ceph01 ceph-cluster]# ceph osd pool create pool-zk 100
pool 'pool-zk' created
Designate the pool as an RBD (block device) pool
[root@ceph01 ceph-cluster]# ceph osd pool application enable pool-zk rbd
enabled application 'rbd' on pool 'pool-zk'
Rename the pool pool-zk to rbd
[root@ceph01 ceph-cluster]# ceph osd pool rename pool-zk rbd
pool 'pool-zk' renamed to 'rbd'
[root@ceph01 ceph-cluster]# ceph osd lspools
1 rbd
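
The heading of this step mentions creating an image, so to round it out, a minimal sketch of creating and inspecting an RBD image in the renamed rbd pool (the image name and size are assumptions):

[root@ceph01 ceph-cluster]# rbd create demo-image --size 10G
[root@ceph01 ceph-cluster]# rbd ls
[root@ceph01 ceph-cluster]# rbd info demo-image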

Original article: https://www.cnblogs.com/shuihuaboke/p/12582960.html
