Environment preparation:
CentOS release 6.7 (Final), default Desktop installation, with the development tools package group installed.
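If the development tools were not selected during the OS install, they can be added afterwards with yum's group install; the group name below is the standard one on CentOS 6 and is only given here as a sketch, it is not part of the original steps:

# pull in compilers and build tooling in one go
yum -y groupinstall "Development tools"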
*********************************************************
If the hosts are CentOS virtual machines cloned from a single image, fix the NIC information on each clone first:
[root@localhost ~]# ifconfig -a
[root@localhost ~]# vim /etc/udev/rules.d/70-persistent-net.rules    # delete the entries for NICs that no longer exist
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
[root@localhost ~]# modprobe -r e1000
[root@localhost ~]# modprobe e1000
[root@localhost ~]# service network restart
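A quick sanity check, not part of the original steps: after pruning 70-persistent-net.rules, the MAC address left in the file should match the NIC the kernel actually sees. The eth0 name below is taken from the ifcfg-eth0 file above:

# MAC recorded by udev for the remaining entry
grep -i 'ATTR{address}' /etc/udev/rules.d/70-persistent-net.rules
# MAC of the NIC that is actually present
ip link show eth0 | grep -i 'link/ether'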
************************************************************
On all nodes:
[root@localhost ~]# service NetworkManager stop
[root@localhost ~]# service iptables stop
[root@localhost ~]# chkconfig NetworkManager off
[root@localhost ~]# chkconfig iptables off
[root@localhost ~]# setenforce 0
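Note that setenforce 0 only switches SELinux to permissive for the running system and does not survive a reboot. A minimal way to make the change persistent, assuming the stock /etc/selinux/config layout:

# disable SELinux permanently (takes effect after the next reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config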
On the admin node:
[root@admin-node my-cluster]# hostname admin-node
[root@admin-node my-cluster]# vim /etc/sysconfig/network
HOSTNAME=admin-node
On node1:
[root@node1 my-cluster]# hostname node1
[root@node1 my-cluster]# vim /etc/sysconfig/network
HOSTNAME=node1
On node2:
[root@node2 my-cluster]# hostname node2
[root@node2 my-cluster]# vim /etc/sysconfig/network
HOSTNAME=node2
On node3:
[root@node3 my-cluster]# hostname node3
[root@node3 my-cluster]# vim /etc/sysconfig/network
HOSTNAME=node3
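The hostname command only renames the running session, while HOSTNAME= in /etc/sysconfig/network is what CentOS 6 applies at boot, so both steps above matter. A quick check on each node:

hostname                                   # should print admin-node / node1 / node2 / node3
grep '^HOSTNAME=' /etc/sysconfig/network   # the name that will be used after a reboot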
On all nodes, edit the hosts file:
[root@admin-node yum.repos.d]# vim /etc/hosts
192.168.81.144 admin-node
192.168.81.145 node1
192.168.81.146 node2
192.168.81.147 node3
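A simple connectivity check, not in the original, to confirm that every name in the hosts file resolves and is reachable; run it from any node:

for h in admin-node node1 node2 node3; do ping -c 1 $h; done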
On the admin node:
[root@admin-node yum.repos.d]# ssh-keygen -t rsa -P ''
[root@admin-node yum.repos.d]# ssh-copy-id admin-node
[root@admin-node yum.repos.d]# ssh-copy-id node1
[root@admin-node yum.repos.d]# ssh-copy-id node2
[root@admin-node yum.repos.d]# ssh-copy-id node3
On the admin node, test passwordless login:
[root@admin-node yum.repos.d]# ssh admin-node
[root@admin-node yum.repos.d]# ssh node1
[root@admin-node yum.repos.d]# ssh node2
[root@admin-node yum.repos.d]# ssh node3
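If you later deploy as a non-root user, the ceph-deploy documentation suggests a ~/.ssh/config on the admin node so that --username does not have to be passed on every call. A minimal sketch; the ceph-admin user name here is an assumption, not something created in this article:

# ~/.ssh/config on the admin node
Host node1
    Hostname node1
    User ceph-admin
Host node2
    Hostname node2
    User ceph-admin
Host node3
    Hostname node3
    User ceph-admin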
On the admin node, configure the yum repositories and keep the downloaded packages:
[root@admin-node ~]# cat /etc/yum.conf
keepcache=1
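keepcache=1 (set in the [main] section of /etc/yum.conf) tells yum to keep the downloaded RPMs under /var/cache/yum instead of deleting them after installation, which is what makes it possible to reuse them later, for example as a local repo for the other nodes. To see what has been cached:

find /var/cache/yum -name '*.rpm' | head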
[root@admin-node ~]# rm -rf /etc/yum.repos.d/*
[root@admin-node ~]# wget -P /etc/yum.repos.d/ http://mirrors.163.com/.help/CentOS6-Base-163.repo
[root@admin-node yum.repos.d]# vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-hammer/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
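Because yum-plugin-priorities ends up installed on these hosts (ceph-deploy pulls it in later), a priority=1 line is often added to this repo so that the Ceph packages win over EPEL. This is optional and not in the original file; appending it works here because the file has only the one section:

# give the hand-written Ceph repo a high priority (optional)
echo "priority=1" >> /etc/yum.repos.d/ceph.repo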
Packages for other releases are available at download.ceph.com.
On the admin node, install ceph-deploy:
[root@admin-node yum.repos.d]# yum update && sudo yum install ceph-deploy
[root@admin-node my-cluster]# ceph-deploy
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
                   COMMAND ...

Easy Ceph deployment

    -^-
   /   \
   |O o|  ceph-deploy v1.5.28
   ).-.(
  '/|||\`
  | '|` |
    '|`

Full documentation can be found at: http://ceph.com/ceph-deploy/docs

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         be more verbose
  -q, --quiet           be less verbose
  --version             the current installed version of ceph-deploy
  --username USERNAME   the username to connect to the remote host
  --overwrite-conf      overwrite an existing conf file on remote host (if present)
  --cluster NAME        name of the cluster
  --ceph-conf CEPH_CONF use (or reuse) a given ceph.conf file

commands:
  COMMAND               description
    new                 Start deploying a new cluster, and write a CLUSTER.conf and keyring for it.
    install             Install Ceph packages on remote hosts.
    rgw                 Ceph RGW daemon management
    mds                 Ceph MDS daemon management
    mon                 Ceph MON Daemon management
    gatherkeys          Gather authentication keys for provisioning new nodes.
    disk                Manage disks on a remote host.
    osd                 Prepare a data disk on remote host.
    admin               Push configuration and client.admin key to a remote host.
    repo                Repo definition management
    config              Copy ceph.conf to/from remote host(s)
    uninstall           Remove Ceph packages from remote hosts.
    purge               Remove Ceph packages from remote hosts and purge all data.
    purgedata           Purge (delete, destroy, discard, shred) any Ceph data from /var/lib/ceph
    forgetkeys          Remove authentication keys from the local directory.
    pkg                 Manage packages on remote hosts.
    calamari            Install and configure Calamari nodes. Assumes that a repository with
                        Calamari packages is already configured. Refer to the docs for examples
                        (http://ceph.com/ceph-deploy/docs/conf.html)
Error in sys.exitfunc:
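The trailing "Error in sys.exitfunc:" line is commonly reported with ceph-deploy 1.5.x on the Python 2.6 that ships with CentOS 6 and is generally harmless; what matters is the command's exit status, which is checked below with echo $?. A quick way to confirm the tool itself is healthy:

ceph-deploy --version
echo $?        # 0 means the command itself succeeded despite the message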
Create the cluster.
[root@admin-node ~]# mkdir /home/my-cluster
[root@admin-node ~]# cd /home/my-cluster
[root@admin-node my-cluster]# ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy new node1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ]  username : None
[ceph_deploy.cli][INFO ]  func : <function new at 0x27471b8>
[ceph_deploy.cli][INFO ]  verbose : False
[ceph_deploy.cli][INFO ]  overwrite_conf : False
[ceph_deploy.cli][INFO ]  quiet : False
[ceph_deploy.cli][INFO ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x25ad098>
[ceph_deploy.cli][INFO ]  cluster : ceph
[ceph_deploy.cli][INFO ]  ssh_copykey : True
[ceph_deploy.cli][INFO ]  mon : ['node1']
[ceph_deploy.cli][INFO ]  public_network : None
[ceph_deploy.cli][INFO ]  ceph_conf : None
[ceph_deploy.cli][INFO ]  cluster_network : None
[ceph_deploy.cli][INFO ]  default_release : False
[ceph_deploy.cli][INFO ]  fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: localhost.localdomain
[node1][INFO ] Running command: ssh -CT -o BatchMode=yes node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /sbin/ip link show
[node1][INFO ] Running command: /sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['192.168.81.145']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.81.145
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.81.145']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
Error in sys.exitfunc:
[root@admin-node my-cluster]# echo $?
0
Add the following line under the [global] section of the generated ceph.conf:
[root@admin-node my-cluster]# vim /home/my-cluster/ceph.conf
osd pool default size = 2
[root@admin-node my-cluster]# ceph-deploy install admin-node node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy install admin-node node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ]  verbose : False
[ceph_deploy.cli][INFO ]  testing : None
[ceph_deploy.cli][INFO ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x132db00>
[ceph_deploy.cli][INFO ]  cluster : ceph
[ceph_deploy.cli][INFO ]  install_mds : False
[ceph_deploy.cli][INFO ]  stable : None
[ceph_deploy.cli][INFO ]  default_release : False
[ceph_deploy.cli][INFO ]  username : None
[ceph_deploy.cli][INFO ]  adjust_repos : True
[ceph_deploy.cli][INFO ]  func : <function install at 0x12bc050>
[ceph_deploy.cli][INFO ]  install_all : False
[ceph_deploy.cli][INFO ]  repo : False
[ceph_deploy.cli][INFO ]  host : ['admin-node', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ]  install_rgw : False
[ceph_deploy.cli][INFO ]  repo_url : None
[ceph_deploy.cli][INFO ]  ceph_conf : None
[ceph_deploy.cli][INFO ]  install_osd : False
[ceph_deploy.cli][INFO ]  version_kind : stable
[ceph_deploy.cli][INFO ]  install_common : False
[ceph_deploy.cli][INFO ]  overwrite_conf : False
[ceph_deploy.cli][INFO ]  quiet : False
[ceph_deploy.cli][INFO ]  dev : master
[ceph_deploy.cli][INFO ]  local_mirror : None
[ceph_deploy.cli][INFO ]  release : None
[ceph_deploy.cli][INFO ]  install_mon : False
[ceph_deploy.cli][INFO ]  gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts admin-node node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host admin-node ...
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.7 Final
[admin-node][INFO ] installing Ceph on admin-node
[admin-node][INFO ] Running command: yum clean all
[admin-node][DEBUG ] Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
[admin-node][DEBUG ] Cleaning repos: base ceph-noarch epel extras updates
[admin-node][DEBUG ] Cleaning up Everything
[admin-node][DEBUG ] Cleaning up list of fastest mirrors
[admin-node][INFO ] Running command: yum -y install epel-release
[admin-node][DEBUG ] Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
[admin-node][DEBUG ] Setting up Install Process
[admin-node][DEBUG ] Determining fastest mirrors
[admin-node][DEBUG ]  * epel: mirrors.opencas.cn
[admin-node][DEBUG ] Package epel-release-6-8.noarch already installed and latest version
[admin-node][DEBUG ] Nothing to do
[admin-node][INFO ] Running command: yum -y install yum-plugin-priorities
[admin-node][DEBUG ] Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
[admin-node][DEBUG ] Setting up Install Process
[admin-node][DEBUG ] Loading mirror speeds from cached hostfile
[admin-node][DEBUG ]  * epel: mirrors.opencas.cn
[admin-node][DEBUG ] Package yum-plugin-priorities-1.1.30-30.el6.noarch already installed and latest version
[admin-node][DEBUG ] Nothing to do
[admin-node][DEBUG ] Configure Yum priorities to include obsoletes
[admin-node][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[admin-node][INFO ] Running command: rpm --import https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[admin-node][INFO ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[admin-node][DEBUG ] Retrieving http://ceph.com/rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[admin-node][DEBUG ] Preparing...     ##################################################
[admin-node][DEBUG ] ceph-release     ##################################################
[admin-node][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Error in sys.exitfunc:
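The run aborts with "RuntimeError: NoSectionError: No section: 'ceph'". A commonly reported cause is the hand-written /etc/yum.repos.d/ceph.repo from earlier: ceph-deploy installs its own repo file via the ceph-release package and then tries to raise the priority of a [ceph] section, but because a ceph.repo already exists, the file it finds only contains [ceph-noarch]. One commonly suggested workaround, sketched here rather than taken from the article, is to move the manual repo file out of the way on the admin node and retry the install:

# rename the hand-written repo so it no longer collides with the file ceph-deploy manages
mv /etc/yum.repos.d/ceph.repo /etc/yum.repos.d/ceph-deploy.repo
# then re-run the install from the admin node
ceph-deploy install admin-node node1 node2 node3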