OpenStack HA Environment Setup (Part 1): Building the Non-HA Environment

1 Design

Basic information for the four nodes:

10.192.44.148

10.192.44.149

10.192.44.150

10.192.44.151

Each node has one 128 GB SSD system disk and four 2 TB data disks.

user: root

password: 9b648

1.1 Network Plan

For now a single-NIC design is used, i.e. each node uses one NIC with its current IP address.

Later the management, storage, storage-management, VM, and external networks will be separated onto different networks; for now everything shares the single NIC.

IP address list:


Hostname | IP (eth0)            | IP1 (backup IP)     | Tunnel IP (eth0:1) | OpenStack role       | Ceph mon role | Ceph osd    | Spec        | VIP
node1    | 10.192.44.148 (eth0) | 172.16.2.148 (eth3) |                    | Controller1+network1 | Mon0          | osd0~osd3   | 4 cores 16G |
node2    | 10.192.44.149 (eth0) | 172.16.2.149 (eth1) |                    | Controller2+network2 | Mon1          | osd4~osd7   | 4 cores 16G |
node3    | 10.192.44.150 (eth0) | 172.16.2.150 (eth1) |                    | Compute1             | Mon2          | osd8~osd11  | 4 cores 16G |
node4    | 10.192.44.151 (eth1) | 172.16.2.151 (eth2) |                    | Compute2             | Mon3          | osd12~osd15 | 8 cores 16G |

Note: this was later adjusted.

Because libvirt was upgraded on 150 and 151, and 150 lost connectivity after every reboot,

the plan was changed to install (controller + network node: 148) + (compute node: 149) first, and add 150 and 151 later in the HA phase:


Hostname | IP (eth0)            | IP1 (backup IP)     | Tunnel IP (eth0:1) | OpenStack role       | Ceph mon role | Ceph osd   | Spec        | VIP
         | 10.192.44.148 (eth0) | 172.16.2.148 (eth3) | eth0:1             | controller1+network1 | Mon0          | osd0~osd3  | 4 cores 16G |
         | 10.192.44.149 (eth0) | 172.16.2.149 (eth1) | eth0:1             | compute1             | Mon1          | osd4~osd7  | 4 cores 16G |

IPs on the second NIC: 172.16.2.148 / 149 / 150 / 151

[[email protected] network-scripts]# cat ifcfg-eth1

DEVICE=eth1

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=172.16.2.150

NETMASK=255.255.255.0

GATEWAY=10.192.44.254

Use two nodes as combined (controller + network) nodes,

and two nodes as compute nodes.

Rationale:

The controller nodes run a large number of services, so they are not suitable to also serve as compute nodes running VMs.

The compute nodes need the most resources (CPU, memory), so the machine with the largest resources is used as a compute node.

1.2 Storage Plan

The system disk is a 128 GB SSD; the data disks are 2 TB SATA.

The system disk still has roughly 90 GB of free space, which is used as the Ceph journal space and set up as partition hda5.


Hostname | Ceph mon | Ceph journal | Ceph osd
Node1    | Mon0     | /dev/hda5    | osd.0~osd.3: sdb1/sdc1/sdd1/sde1
Node2    | Mon1     | /dev/hda5    | osd.4~osd.7: sdb1/sdc1/sdd1/sde1
Node3    | Mon2     | /dev/hda5    | osd.8~osd.11: sdb1/sdc1/sdd1/sde1
Node4    | Mon3     | /dev/hda5    | osd.12~osd.15: sdb1/sdc1/sdd1/sde1

RBD pools:

Service | RBD pool | PG num
Glance  | images   | 128
Cinder  | volumes  | 128
Nova    | vms      | 128
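A minimal sketch of creating these pools once the Ceph cluster is healthy (run on a mon node; pool names and PG counts are taken from the table above):

ceph osd pool create images 128
ceph osd pool create volumes 128
ceph osd pool create vms 128
ceph osd lspools        # verify that the three pools exist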

Note: partition the data disks yourself first (see the sketch below). Otherwise, if sdx is partitioned during installation so that each disk becomes a single sdx1, the sdx1 produced by ceph-deploy ends up only 5 GB in size.
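A hedged pre-partitioning sketch, assuming the four data disks are sdb-sde and each should carry one full-size partition (sgdisk comes from the gdisk package installed later; adjust device names to the real hardware):

for d in sdb sdc sdd sde; do
    sgdisk --zap-all /dev/$d          # wipe any old partition table
    sgdisk --new=1:0:0 /dev/$d        # one partition covering the whole disk -> /dev/${d}1
done
lsblk                                 # confirm sdb1/sdc1/sdd1/sde1 are full size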

1.3 Services per Role


OpenStack role | Services
Controller     | httpd, rabbitmq, mysql; keystone, glance, neutron-server, nova-api & scheduler, cinder-api & scheduler
Network        | Neutron agents: l3-agent, openvswitch-agent, dhcp-agent
Compute        | nova-compute, neutron-openvswitch, cinder-volume

1.4 Other Notes

(1) ceilometer is not installed for now; it is resource-hungry and is not part of the current HA scope.

(2) swift object storage is not installed for now.

(3) Resources are very limited; during verification we may only be able to run 2 VMs.

1.5 Important Reminders

1. After every step, verify that creating an image, a volume, a network, and a VM still works, so that errors do not pile up and force a reinstall (see the smoke-test sketch after this list).

2. Save every modified configuration to git:

https://git.hikvision.com.cn/projects/FSDMDEPTHCLOUD/repos/hcloud_install_centos/browse/project_beijing

3. Anything unclear must first be verified on a virtual machine.

4. For Horizon, still install 2 instances for now.
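A hypothetical smoke-test sequence for item 1, assuming admin credentials have been sourced and a small test image such as cirros.img is available locally (names like smoke-img and <NET_ID> are placeholders):

. admin_keystone
glance image-create --name smoke-img --disk-format qcow2 --container-format bare --file cirros.img
cinder create --display-name smoke-vol 1
neutron net-list
nova boot --flavor m1.tiny --image smoke-img --nic net-id=<NET_ID> smoke-vm
nova list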

2 Base Environment

2.1 yum or rpm?

Yum repositories:

Configure yum so that the kernel and the CentOS release packages are never upgraded.

/etc/yum.conf:

/etc/yum.conf:

keepcache=1

exclude=kernel*

exclude=centos-release*

Remove the old repo files:

# rm yum.repos.d/ -rf

Replace them with the new ones,

then refresh the repo metadata:

# yum clean all

# yum makecache

# yum update -y

# yum upgrade -y

Never actually run yum update or yum upgrade (the two commands above are listed only as a warning; see 2.6 for what happened).

Future improvement: turn this into an automation script.

Installation approach:

1. On a virtual machine, install all-in-one once with yum and keep the cached rpm packages (see the sketch after this list).

2. On the physical machines, install from those rpm packages. Continuous integration will need rpm-based installs anyway, so it is better to set this up and script it now.
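A rough sketch of item 1: with keepcache=1 set, collect the rpms cached by the all-in-one install into a local repository that the physical nodes can install from (the /opt/hcloud_rpms path is an assumption):

mkdir -p /opt/hcloud_rpms
find /var/cache/yum -name '*.rpm' -exec cp -n {} /opt/hcloud_rpms/ \;   # collect cached packages
yum install -y createrepo
createrepo /opt/hcloud_rpms                                             # build repo metadata
# point a .repo file on the physical nodes at this directory (or rsync it over) and install from it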

[[email protected] etc]# vi yum.conf

[main]

cachedir=/var/cache/yum

keepcache=1

(1) Installing Ceph via yum works without any problems.

(2) For OpenStack, first install an all-in-one node to check the environment.

2.2 Setting /etc/hostname

[[email protected] ~]# cat /etc/hostname

node1

[[email protected] ~]# cat /etc/hostname

node2

[[email protected] ~]# cat /etc/hostname

node3

[[email protected] ~]# cat /etc/hostname

node4

Future improvement: integrate this into the automation script.

2.3 /etc/hosts Settings

[[email protected] ~]# vi /etc/hosts

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4

::1        localhost localhost.localdomain localhost6 localhost6.localdomain6

10.1.14.235 mirrors.hikvision.com.cn

10.192.44.148 node1

10.192.44.149 node2

10.192.44.150 node3

10.192.44.151 node4

Future improvement: integrate this into the automation script (a sketch follows below).
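A minimal automation sketch for 2.2 and 2.3, assuming passwordless ssh from the deployment host to all four nodes:

for i in 1 2 3 4; do
    ip=10.192.44.$((147 + i))
    ssh root@$ip "hostnamectl set-hostname node$i"     # matches /etc/hostname above
    scp /etc/hosts root@$ip:/etc/hosts                 # push the same hosts file everywhere
done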

2.4 Disable the Firewall

systemctl stop firewalld.service

systemctl disable firewalld.service

2.5 Command Summary

systemctl stop firewalld.service

systemctl disable firewalld.service

yum install ceph -y

yum install ceph-deploy -y

yum install yum-plugin-priorities -y

yum install snappy leveldb gdisk python-argparse gperftools-libs -y

# ceph-deploy new lxp-node1 lxp-node2 lxp-node3

# ceph-deploy install lxp-node1 lxp-node2 lxp-node3

# ceph-deploy --overwrite-conf mon create-initial

ceph-deploy mon create lxp-node1 lxp-node2 lxp-node3

ceph-deploy gatherkeys lxp-node1 lxp-node2 lxp-node3

/etc/init.d/ceph -a start osd

systemctl enable haproxy

systemctl start haproxy

systemctl enable keepalived

systemctl start keepalived

# systemctl enable rabbitmq-server.service

# systemctl start rabbitmq-server.service

# rabbitmqctl  add_user guest guest

chown rabbitmq:rabbitmq .erlang.cookie

rabbitmqctl stop_app

rabbitmqctl join_cluster [email protected]

rabbitmqctl start_app

rabbitmqctl cluster_status

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

yum install MySQL-python mariadb-galera-server galera xtrabackup socat

# systemctl enable mariadb.service

# systemctl restart mariadb.service

yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached

systemctl enable memcached.service

systemctl start memcached.service

#yum install python-pip

#pip install eventlet

mkdir -p /var/www/cgi-bin/

Tar up the keystone directory from node1, copy it over, and extract it here.

chown -R keystone:keystone /var/www/cgi-bin/keystone

chmod 755 /var/www/cgi-bin/keystone/ -R

Restart httpd:

# systemctl enable httpd.service

# systemctl start httpd.service

[[email protected] ~]# export OS_TOKEN=c5a16fa8158c4208b5764c00554bde49

[[email protected] ~]# export OS_URL=http://192.168.129.130:35357/v2.0

# systemctl enable openstack-glance-api.service openstack-glance-registry.service

# systemctl start openstack-glance-api.service openstack-glance-registry.service

systemctl restart openstack-glance-api.service openstack-glance-registry.service

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '6fbbf50542084b7c';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '6fbbf50542084b7c';

MariaDB [(none)]> FLUSH PRIVILEGES;

ceph osd tree

/etc/init.d/ceph -a start osd

# ceph-deploy --overwrite-conf osd prepare lxp-node1:/data/osd4.lxp-node1:/dev/sdb2 lxp-node2:/data/osd5.lxp-node2:/dev/sdb2 lxp-node3:/data/osd6.lxp-node3:/dev/sdb2

# ceph-deploy --overwrite-conf osd activate lxp-node1:/data/osd4.lxp-node1:/dev/sdb2 lxp-node2:/data/osd5.lxp-node2:/dev/sdb2 lxp-node3:/data/osd6.lxp-node3:/dev/sdb2

# ceph osd lspools

# ceph pg stat

ceph osd pool create image  32

# ceph osd lspools

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached

# systemctl restart httpd.service

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'b7cf13724ff948d7';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'b7cf13724ff948d7';

FLUSH PRIVILEGES;

# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

yum install openstack-cinder python-cinderclient python-oslo-db

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'afdfc435eb0b4372';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'afdfc435eb0b4372';

FLUSH PRIVILEGES;

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '11be293368c044cb';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '11be293368c044cb';

MariaDB [(none)]> FLUSH PRIVILEGES;

# systemctl enable neutron-server.service

# systemctl restart neutron-server.service

# systemctl enable openvswitch.service

# systemctl restart openvswitch.service

# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

# systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

# systemctl enable openstack-cinder-volume.service target.service

# systemctl start openstack-cinder-volume.service target.service

2.6 Differences Encountered

yum -y update

upgrades all packages and changes software and system settings; both the distribution release and the kernel are upgraded.

yum -y upgrade

upgrades all packages without changing software or system settings; the distribution release is upgraded but the kernel is not.

After running these, the system apparently would no longer boot.

2.7 Important Reminders

After each configuration change, confirm that the VMs and VNC still work.

If the system cannot boot and needs to be reinstalled, communicate immediately.

While the Beijing machines are being reinstalled, do as much verification as possible yourself.

3. Ceph Installation

3.1 Disk partitioning: the SSD (/dev/hda5) as the Ceph journal device

Rationale: putting the journal on the SSD gives Ceph a performance boost.

Set /dev/hda5 to auto-mount at /data/.

(1) Format hda5 as ext4:

# mkfs.ext4 /dev/hda5

(2) Create the /data directory:

# mkdir /data

(3) Add to /etc/fstab:

/dev/hda5         /data        ext4   defaults,async,noatime,nodiratime,data=writeback,barrier=0 0 0

(4) Reboot and verify that it is mounted:

[[email protected] ~]# mount | grep hda5

/dev/hda5 on /data type ext4 (rw,noatime,nodiratime,nobarrier,data=writeback)

OK, mounted successfully.

How the SSD (/dev/hda5), auto-mounted as a local directory at boot, is used as the journal device:

[[email protected] osd.lxp-node1]# ceph-deploy osd prepare --help

usage: ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
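Following that usage, a hedged example of pointing the journal at a path under the SSD-backed /data directory (host and device names are illustrative):

# HOST:DISK[:JOURNAL] -- journal given as a file path on the SSD mounted at /data
ceph-deploy osd prepare node1:/dev/sdb1:/data/journal-sdb
ceph-deploy osd activate node1:/dev/sdb1:/data/journal-sdb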

3.2 Installing ceph and ceph-deploy

The Ceph packages currently install via rpm/yum with no problems:

yum install ceph -y

yum install ceph-deploy -y

yum install yum-plugin-priorities -y

yum install snappy leveldb gdisk python-argparse gperftools-libs -y

Reboot and check that the node still comes back up. These are the only packages Ceph needs for now.

The reboot succeeds. Leave Ceph alone for the moment, there is no risk here; deal with OpenStack first and validate the installation approach!

3.3 Ceph mon installation

Covered in Chapter 7.

3.4 Ceph osd installation

Covered in Chapter 7.

4. Installing the base OpenStack environment with packstack (4 nodes) -- [this approach was abandoned]

First install an all-in-one node to check whether the environment has any conflicts.

Use packstack on 10.192.44.148 to install an all-in-one OpenStack environment

and verify that the installation approach is feasible.

# yum install openstack-packstack

# yum install screen

# packstack --gen-answer-file=hcloud.txt

Disable the following options:

CONFIG_PROVISION_DEMO=n

CONFIG_CEILOMETER_INSTALL=n

CONFIG_SWIFT_INSTALL=n

CONFIG_NAGIOS_INSTALL=n

Run the install:

# screen packstack --answer-file=hcloud.txt

Problems encountered:

10.192.44.151_mariadb.pp:                         [ ERROR ]

Applying Puppet manifests                         [ ERROR ]

4.1 Resolving the database conflict

ERROR : Error appeared during Puppet run: 10.192.44.151_mariadb.pp

Error: Execution of '/usr/bin/rpm -e mariadb-server-5.5.35-3.el7.x86_64' returned 1: error: Failed dependencies:

You will find full trace in log /var/tmp/packstack/20160524-195517-yG5qIz/manifests/10.192.44.151_mariadb.pp.log

The database install failed with a dependency problem.

Remove the existing mysql/mariadb packages; packstack will then pull in the galera build of mariadb:

[[email protected] ~]# rpm -aq |grep maria

mariadb-devel-5.5.35-3.el7.x86_64

mariadb-5.5.35-3.el7.x86_64

mariadb-test-5.5.35-3.el7.x86_64

mariadb-libs-5.5.35-3.el7.x86_64

mariadb-embedded-5.5.35-3.el7.x86_64

mariadb-embedded-devel-5.5.35-3.el7.x86_64

mariadb-server-5.5.35-3.el7.x86_64

# rpm -e --nodeps mariadb-devel mariadb mariadb-test mariadb-libs mariadb-embedded mariadb-embedded-devel mariadb-server

Reinstall OpenStack on node1:

There are still errors after removing the packages.

Try a manual install instead.

Remove galera as well:

# rpm -e --nodeps mariadb-galera-common mariadb-galera-server galera

# rpm -e --nodeps mariadb-libs mariadb

[[email protected] ~]# rpm -aq | grep maria

[[email protected] ~]# rpm -aq | grep galera

Manual verification:

# yum install mariadb mariadb-server MySQL-python

Problem:

Error: mariadb-galera-server conflicts with 1:mariadb-server-5.5.44-2.el7.centos.x86_64

Solution:

OK, it can be installed now.

Try packstack again and see whether the error recurs; if it does, check whether disabling the mariadb install option works,

and if that fails, check whether patching packstack is feasible.

The manually installed database would not start, with the following SQL-related errors:

yum install mariadb-galera-server  galera

MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

perl-DBD-MySQL-4.023-5.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

perl-DBD-MySQL-4.023-5.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

Installing : 1:mariadb-libs-5.5.44-2.el7.centos.x86_64                              1/5

/sbin/ldconfig: /lib64/libosipparser2.so.3 is not a symbolic link

/sbin/ldconfig: /lib64/libeXosip2.so.4 is not a symbolic link

/sbin/ldconfig: /lib64/libosip2.so.3 is not a symbolic link

/sbin/ldconfig: /lib64/libosipparser2.so.3 is not a symbolic link

/sbin/ldconfig: /lib64/libeXosip2.so.4 is not a symbolic link

/sbin/ldconfig: /lib64/libosip2.so.3 is not a symbolic link

Fix for /sbin/ldconfig: /lib64/libosipparser2.so.3 is not a symbolic link:

(1)

[[email protected] lib64]# rm libosipparser2.so.3

[[email protected] lib64]# ln -s libosipparser2.so libosipparser2.so.3

[[email protected] lib64]# ls libosipparser2.* -l

-rw-r--r-- 1 root root 707666 Apr  1 13:52 libosipparser2.a

-rw-r--r-- 1 root root    857 Apr  1 13:52 libosipparser2.la

-rw-r--r-- 1 root root 380223 Apr  1 13:52 libosipparser2.so

lrwxrwxrwx 1 root root     17 May 24 21:06 libosipparser2.so.3 -> libosipparser2.so

Fix for /sbin/ldconfig: /lib64/libeXosip2.so.4 is not a symbolic link:

(2)

[[email protected] lib64]# rm libeXosip2.so.4

[[email protected] lib64]# ln -s libeXosip2.so libeXosip2.so.4

[[email protected] lib64]# ls libeXosip2.so* -l

-rw-r--r-- 1 root root 818385 Apr  1 13:52 libeXosip2.so

lrwxrwxrwx 1 root root     13 May 24 21:08 libeXosip2.so.4 -> libeXosip2.so

(3) Fix for /sbin/ldconfig: /lib64/libosip2.so.3 is not a symbolic link:

[[email protected] lib64]# rm libosip2.so.3

[[email protected] lib64]# ln -s libosip2.so libosip2.so.3

Reinstall:

yum install mariadb-galera-server  galera

These errors no longer appear!

But there are still other dependency errors:

MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

10:libcacard-1.5.3-60.el7.x86_64 has missing requires of libgfapi.so.0()(64bit)

10:libcacard-1.5.3-60.el7.x86_64 has missing requires of libgfrpc.so.0()(64bit)

10:libcacard-1.5.3-60.el7.x86_64 has missing requires of libgfxdr.so.0()(64bit)

libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64 has missing requires of libgfapi.so.0()(64bit)

libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64 has missing requires of libgfrpc.so.0()(64bit)

libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64 has missing requires of libgfxdr.so.0()(64bit)

libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64 has missing requires of libglusterfs.so.0()(64bit)

perl-DBD-MySQL-4.023-5.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

perl-DBD-MySQL-4.023-5.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

2:postfix-2.10.1-6.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

2:postfix-2.10.1-6.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

10:qemu-img-1.5.3-60.el7.x86_64 has missing requires of libgfapi.so.0()(64bit)

10:qemu-img-1.5.3-60.el7.x86_64 has missing requires of libgfrpc.so.0()(64bit)

10:qemu-img-1.5.3-60.el7.x86_64 has missing requires of libgfxdr.so.0()(64bit)

10:qemu-kvm-1.5.3-60.el7.x86_64 has missing requires of libgfapi.so.0()(64bit)

10:qemu-kvm-1.5.3-60.el7.x86_64 has missing requires of libgfrpc.so.0()(64bit)

10:qemu-kvm-1.5.3-60.el7.x86_64 has missing requires of libgfxdr.so.0()(64bit)

10:qemu-kvm-common-1.5.3-60.el7.x86_64 has missing requires of libgfapi.so.0()(64bit)

10:qemu-kvm-common-1.5.3-60.el7.x86_64 has missing requires of libgfrpc.so.0()(64bit)

10:qemu-kvm-common-1.5.3-60.el7.x86_64 has missing requires of libgfxdr.so.0()(64bit)

First, check whether mysql will start now.

It still fails to start.

Copy these libraries over from a known-good environment.

That resolves:

MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

But startup still fails.

Try starting it manually:

/usr/bin/mysqld_safe --basedir=/usr

Workaround:

Delete the ib_logfile0 and ib_logfile1 files:

# cd /var/lib/mysql/

# rm ib_logfile0  ib_logfile1

Restart the mysql service.

Still an error:

mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended

Fix:

touch /var/run/mariadb/mariadb.pid

# chown mysql:mysql /var/run/mariadb/mariadb.pid

# chmod 0660 /var/run/mariadb/mariadb.pid

Start again:

# systemctl enable mariadb.service

# systemctl restart mariadb.service

/var/log/mariadb/mariadb.log now reports the following errors:

160525  8:34:16 [Note] Plugin 'FEEDBACK' is disabled.

160525  8:34:16 [Note] Server socket created on IP: '0.0.0.0'.

160525  8:34:16 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist

160525 08:34:16 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended

Following:

http://blog.csdn.net/indexman/article/details/16980433

run mysql_install_db.

mariadb.log then reports the following errors:

160525  8:37:30 [Note] WSREP: Read nil XID from storage engines, skipping position init

160525  8:37:30 [Note] WSREP: wsrep_load(): loading provider library 'none'

160525  8:37:30 [ERROR] mysqld: Incorrect information in file: './mysql/tables_priv.frm'

ERROR: 1033  Incorrect information in file: './mysql/tables_priv.frm'

160525  8:37:30 [ERROR] Aborting

Restart; now it reports:

160525  8:39:42 [ERROR] mysqld: Can't find file: './mysql/host.frm' (errno: 13)

160525  8:39:42 [ERROR] Fatal error: Can't open and lock privilege tables: Can't find file: './mysql/host.frm' (errno: 13)

Possibly a permissions problem:

http://181054867.iteye.com/blog/614656

/var/lib/mysql

[[email protected] mysql]# pwd

/var/lib/mysql

[[email protected] mysql]# chmod 770 mysql/ -R

It still reports the files as not found.

Change to 777:

chmod 777 mysql/ -R

Restart again.

Error:

160525  8:45:20 [ERROR] mysqld: Incorrect information in file: './mysql/proxies_priv.frm'

160525  8:45:20 [ERROR] Fatal error: Can't open and lock privilege tables: Incorrect information in file: './mysql/proxies_priv.frm'

Following the approach here:

http://stackoverflow.com/questions/2314249/how-to-recover-a-mysql-database-incorrect-information-in-file-xxx-frm

[[email protected] mysql]# rm proxies*

After that, the restart succeeds.

But the startup log still contains many errors:

tail -f mariadb.log

160525 8:48:38 [ERROR] mysqld: Incorrect information in file: ‘./mysql/tables_priv.frm‘

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘THREAD_ID‘ atposition 0 to have type int(11), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘EVENT_NAME‘ atposition 2, found ‘END_EVENT_ID‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘EVENT_NAME‘ atposition 2 to have type varchar(128), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘SOURCE‘ at position3, found ‘EVENT_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘SOURCE‘ at position 3to have type varchar(64), found type varchar(128).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘TIMER_START‘ atposition 4, found ‘SOURCE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘TIMER_START‘ atposition 4 to have type bigint(20), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘TIMER_END‘ atposition 5, found ‘TIMER_START‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘TIMER_WAIT‘ atposition 6, found ‘TIMER_END‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘SPINS‘ at position 7,found ‘TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_current:expected column ‘SPINS‘ at position 7 to have type int(10), found typebigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OBJECT_SCHEMA‘ atposition 8, found ‘SPINS‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OBJECT_SCHEMA‘ atposition 8 to have type varchar(64), found type int(10) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OBJECT_NAME‘ atposition 9, found ‘OBJECT_SCHEMA‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OBJECT_NAME‘ atposition 9 to have type varchar(512), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OBJECT_TYPE‘ atposition 10, found ‘OBJECT_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OBJECT_TYPE‘ atposition 10 to have type varchar(64), found type varchar(512).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column‘OBJECT_INSTANCE_BEGIN‘ at position 11, found ‘INDEX_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column‘OBJECT_INSTANCE_BEGIN‘ at position 11 to have type bigint(20), found typevarchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘NESTING_EVENT_ID‘ atposition 12, found ‘OBJECT_TYPE‘.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_current:expected column ‘NESTING_EVENT_ID‘ at position 12 to have type bigint(20),found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OPERATION‘ at position13, found ‘OBJECT_INSTANCE_BEGIN‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘OPERATION‘ atposition 13 to have type varchar(16), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘NUMBER_OF_BYTES‘ atposition 14, found ‘NESTING_EVENT_ID‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘FLAGS‘ at position15, found ‘NESTING_EVENT_TYPE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column ‘FLAGS‘ at position 15to have type int(10), found type enum(‘STATEMENT‘,‘STAGE‘,‘WAIT‘).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘THREAD_ID‘ atposition 0 to have type int(11), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history:expected column ‘EVENT_NAME‘ at position 2, found ‘END_EVENT_ID‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘EVENT_NAME‘ atposition 2 to have type varchar(128), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘SOURCE‘ at position3, found ‘EVENT_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history:expected column ‘SOURCE‘ at position 3 to have type varchar(64), found typevarchar(128).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘TIMER_START‘ at position4, found ‘SOURCE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘TIMER_START‘ atposition 4 to have type bigint(20), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘TIMER_END‘ atposition 5, found ‘TIMER_START‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘TIMER_WAIT‘ at position6, found ‘TIMER_END‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘SPINS‘ at position 7,found ‘TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history:expected column ‘SPINS‘ at position 7 to have type int(10), found typebigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OBJECT_SCHEMA‘ atposition 8, found ‘SPINS‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OBJECT_SCHEMA‘ atposition 8 to have type varchar(64), found type int(10) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OBJECT_NAME‘ atposition 9, found ‘OBJECT_SCHEMA‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OBJECT_NAME‘ atposition 9 to have type varchar(512), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OBJECT_TYPE‘ atposition 10, found ‘OBJECT_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OBJECT_TYPE‘ atposition 10 to have type varchar(64), found type varchar(512).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column‘OBJECT_INSTANCE_BEGIN‘ at position 11, found ‘INDEX_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column‘OBJECT_INSTANCE_BEGIN‘ at position 11 to have type bigint(20), found typevarchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘NESTING_EVENT_ID‘ atposition 12, found ‘OBJECT_TYPE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘NESTING_EVENT_ID‘ atposition 12 to have type bigint(20), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OPERATION‘ atposition 13, found ‘OBJECT_INSTANCE_BEGIN‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘OPERATION‘ atposition 13 to have type varchar(16), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘NUMBER_OF_BYTES‘ atposition 14, found ‘NESTING_EVENT_ID‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘FLAGS‘ at position15, found ‘NESTING_EVENT_TYPE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column ‘FLAGS‘ at position 15to have type int(10), found type enum(‘STATEMENT‘,‘STAGE‘,‘WAIT‘).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘THREAD_ID‘ atposition 0 to have type int(11), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history_long:expected column ‘EVENT_NAME‘ at position 2, found ‘END_EVENT_ID‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘EVENT_NAME‘ atposition 2 to have type varchar(128), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘SOURCE‘ atposition 3, found ‘EVENT_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history_long:expected column ‘SOURCE‘ at position 3 to have type varchar(64), found typevarchar(128).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘TIMER_START‘ atposition 4, found ‘SOURCE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘TIMER_START‘ atposition 4 to have type bigint(20), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘TIMER_END‘ atposition 5, found ‘TIMER_START‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘TIMER_WAIT‘ atposition 6, found ‘TIMER_END‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘SPINS‘ atposition 7, found ‘TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘SPINS‘ atposition 7 to have type int(10), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_SCHEMA‘at position 8, found ‘SPINS‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_SCHEMA‘at position 8 to have type varchar(64), found type int(10) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_NAME‘ atposition 9, found ‘OBJECT_SCHEMA‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_NAME‘ atposition 9 to have type varchar(512), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_TYPE‘ atposition 10, found ‘OBJECT_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_TYPE‘ atposition 10 to have type varchar(64), found type varchar(512).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column‘OBJECT_INSTANCE_BEGIN‘ at position 11, found ‘INDEX_NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OBJECT_INSTANCE_BEGIN‘at position 11 to have type bigint(20), found type varchar(64).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column‘NESTING_EVENT_ID‘ at position 12, found ‘OBJECT_TYPE‘.

160525  8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column‘NESTING_EVENT_ID‘ at position 12 to have type bigint(20), found typevarchar(64).

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history_long:expected column ‘OPERATION‘ at position 13, found ‘OBJECT_INSTANCE_BEGIN‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘OPERATION‘ atposition 13 to have type varchar(16), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘NUMBER_OF_BYTES‘at position 14, found ‘NESTING_EVENT_ID‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘FLAGS‘ atposition 15, found ‘NESTING_EVENT_TYPE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column ‘FLAGS‘ atposition 15 to have type int(10), found type enum(‘STATEMENT‘,‘STAGE‘,‘WAIT‘).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column ‘THREAD_ID‘ at position 0 to havetype int(11), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column ‘PROCESSLIST_ID‘ at position 1,found ‘NAME‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column ‘PROCESSLIST_ID‘ at position 1 tohave type int(11), found type varchar(128).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column ‘NAME‘ at position 2, found ‘TYPE‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column ‘NAME‘ at position 2 to have typevarchar(128), found type varchar(10).

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_summary_by_thread_by_event_name: expected column‘THREAD_ID‘ at position 0 to have type int(11), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column ‘COUNT_READ‘ atposition 1, found ‘COUNT_STAR‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column ‘COUNT_WRITE‘ atposition 2, found ‘SUM_TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column‘SUM_NUMBER_OF_BYTES_READ‘ at position 3, found ‘MIN_TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column‘SUM_NUMBER_OF_BYTES_WRITE‘ at position 4, found ‘AVG_TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_instance: expected column ‘COUNT_READ‘ atposition 2, found ‘OBJECT_INSTANCE_BEGIN‘.

160525 8:48:38 [ERROR] Incorrect definition of table performance_schema.file_summary_by_instance:expected column ‘COUNT_WRITE‘ at position 3, found ‘COUNT_STAR‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_instance: expected column‘SUM_NUMBER_OF_BYTES_READ‘ at position 4, found ‘SUM_TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_instance: expected column‘SUM_NUMBER_OF_BYTES_WRITE‘ at position 5, found ‘MIN_TIMER_WAIT‘.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.mutex_instances: expected column ‘LOCKED_BY_THREAD_ID‘ atposition 2 to have type int(11), found type bigint(20) unsigned.

160525 8:48:38 [ERROR] Incorrect definition of tableperformance_schema.rwlock_instances: expected column‘WRITE_LOCKED_BY_THREAD_ID‘ at position 2 to have type int(11), found typebigint(20) unsigned.

160525 8:48:38 [ERROR] mysqld: Incorrect information in file:‘./mysql/event.frm‘

160525 8:48:38 [ERROR] Cannot open mysql.event

160525 8:48:38 [ERROR] Event Scheduler: An error occurred when initializingsystem tables. Disabling the Event Scheduler.

160525 8:48:38 [Note] WSREP: Read nil XID from storage engines, skippingposition init

Delete all of the table files.

Other attempts:

Upgrading the database tables failed:

http://www.live-in.org/archives/2019.html

# /usr/bin/mysql_upgrade -u root

Final solution:

After removing the packages with rpm, manually clean up the leftover mysql files:

[[email protected] var]# find ./ -name mysql

./lib/mysql

./lib/mysql/mysql

[[email protected] var]# rm ./lib/mysql/ -rf

[[email protected] usr]# find ./ -name mysql | xargs rm -rf

Try installing again:

yum install  mariadb-galera-server  galera

Start it:

# systemctl enable mariadb.service

# systemctl start mariadb.service

OK, finally resolved.

The old files must be cleaned up completely by hand!

Also clean out /etc/my.cnf and /etc/my.cnf.d.
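For reference, the whole manual cleanup above boils down to the following sketch (only safe when the database contents are disposable):

systemctl stop mariadb.service
rpm -e --nodeps mariadb-galera-common mariadb-galera-server galera mariadb-libs mariadb
rm -rf /var/lib/mysql                 # stale data files caused the .frm / privilege-table errors
rm -rf /etc/my.cnf /etc/my.cnf.d
yum install -y mariadb-galera-server galera
systemctl enable mariadb.service
systemctl start mariadb.service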

4.2 After fixing the database problem, rerun the automated packstack install and check for environment differences

Clean up and reinstall with packstack; otherwise the leftovers of the manual install keep causing errors.

It still fails, so fall back to installing each component manually with yum.

5. Installing OpenStack module by module: following the official documentation, build the four-node environment, starting with controller1 and compute1


Hostname    | IP            | Role
controller1 | 10.192.44.148 | Controller1 (network1)
controller2 | 10.192.44.149 | Controller2 (network2)
compute1    | 10.192.44.150 | Compute1
compute2    | 10.192.44.151 | Compute2

Install the following two nodes first:


Hostname    | IP            | Role
controller1 | 10.192.44.148 | Controller1 (network1)
compute1    | 10.192.44.150 | Compute1

5.1 Base environment installation

Set the hostname and the hosts entries:

10.1.14.235 mirrors.hikvision.com.cn

10.192.44.148 controller1

10.192.44.150 compute1

5.1.1 Database installation

# yum install mariadb mariadb-server MySQL-python

Edit /etc/my.cnf.d/server.cnf:

[mysqld]

bind-address = 10.192.44.148

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

Start the database:

# systemctl enable mariadb.service

# systemctl start mariadb.service

Set the root password:

# mysql_secure_installation

The root password is 1; answer Y to every other prompt.

Check the database:

[[email protected] my.cnf.d]# mysql -uroot -p

Enter password:

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 11

Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> Ctrl-C -- exit!

Aborted

5.1.2 Installing rabbitmq

yum install rabbitmq-server

Start rabbitmq-server:

[[email protected] 7]# systemctl enable rabbitmq-server.service

[[email protected] 7]# systemctl start rabbitmq-server.service

Add the openstack user:

# rabbitmqctl add_user openstack 1      (the password here is 1)

Set its permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

systemctl restart rabbitmq-server.service

OK
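A quick verification sketch using standard rabbitmqctl subcommands:

rabbitmqctl list_users                 # the openstack user should appear alongside guest
rabbitmqctl list_permissions -p /      # openstack should show ".*" ".*" ".*"
rabbitmqctl status                     # broker is running and listening on 5672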

5.2 Installing keystone

5.2.1 Create the database (password 1)

MariaDB [(none)]> CREATE DATABASEkeystone;

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '1';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '1';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit

Generate a random token:

[[email protected]]# openssl rand -hex 10

5a67199a1ba44a78ddcb

5.2.2 Install keystone

yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached

 

Start memcached:

[[email protected] 7]# systemctl enable memcached.service

ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'

[[email protected] 7]# systemctl start memcached.service

5.2.3 Modify the keystone configuration

Copy over the configuration produced by the packstack install and modify it.

Generate a random token:

[[email protected] 7]# openssl rand -hex 10

5a67199a1ba44a78ddcb

Modify and check the following fields:

[DEFAULT]

admin_token = 5a67199a1ba44a78ddcb

public_port=5000

admin_bind_host=0.0.0.0

public_bind_host=0.0.0.0

admin_port=35357

connection = mysql://keystone:[email protected]/keystone

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts ="10.192.44.148:5672"

Sync the database:

su -s /bin/sh -c "keystone-manage db_sync" keystone
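A hedged sanity check that the sync populated the database, using the keystone credentials created in 5.2.1 (password 1):

mysql -ukeystone -p1 -h 10.192.44.148 keystone -e "SHOW TABLES;"    # tables such as user, endpoint, token should be listed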

 

5.2.4 Configure httpd

Copy the httpd configuration from the packstack install.

Change the following:

[[email protected] httpd]# grep node ./ -r

./conf/httpd.conf:ServerName "node1"

./conf.d/15-horizon_vhost.conf:  ServerName node1

./conf.d/15-horizon_vhost.conf:  ServerAlias node1

./conf.d/10-keystone_wsgi_admin.conf:  ServerName node1

./conf.d/10-keystone_wsgi_main.conf:  ServerName node1

Change to:

 

[[email protected] httpd]# grep controller1 ./ -r

./conf/httpd.conf:ServerName "controller1"

./conf.d/15-horizon_vhost.conf:  ServerName controller1

./conf.d/15-horizon_vhost.conf:  ServerAlias controller1

./conf.d/10-keystone_wsgi_admin.conf:  ServerName controller1

./conf.d/10-keystone_wsgi_main.conf:  ServerName controller1

[[email protected] httpd]#

 

[[email protected] httpd]# grep 192 ./ -r

./conf.d/15-horizon_vhost.conf:  ServerAlias 192.168.129.131

Change to:

ServerAlias 10.192.44.148

Create the keystone site directory:

mkdir -p /var/www/cgi-bin/keystone

Copy its contents from the packstack environment:

[[email protected] keystone]# chown -R keystone:keystone /var/www/cgi-bin/keystone

[[email protected] keystone]# chmod 755 /var/www/cgi-bin/keystone/*

Start the httpd service:

# systemctl enable httpd.service

# systemctl start httpd.service

Verify:

In 15-default.conf,

change:

ServerName controller1

Restart:

The restart succeeds,

but login does not work yet.

Install horizon first, then verify and troubleshoot.

5.2.5 Create the service and endpoint

5a67199a1ba44a78ddcb

[[email protected] ~]# export OS_TOKEN=5a67199a1ba44a78ddcb

[[email protected] ~]# export OS_URL=http://10.192.44.148:35357/v2.0

[[email protected] ~]# openstack service list

Create the service:

[[email protected] ~]# openstack service create --name keystone --description "OpenStack Identity" identity

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Identity               |

| enabled     | True                             |

| id          | 69c389157be24cf6b4511d648e8412be |

| name        | keystone                         |

| type        | identity                         |

+-------------+----------------------------------+

Create the endpoint:

openstack endpoint create \

--publicurl http://controller1:5000/v2.0 \

--internalurl http://controller1:5000/v2.0 \

--adminurl http://controller1:35357/v2.0 \

--region RegionOne \

identity

# openstack endpoint create --publicurl http://controller1:5000/v2.0 --internalurl http://controller1:5000/v2.0 --adminurl http://controller1:35357/v2.0 --region RegionOne identity

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| adminurl     | http://controller1:35357/v2.0    |

| id           | 6df505c12153483a9f8dc42d64879c69 |

| internalurl  | http://controller1:5000/v2.0     |

| publicurl    | http://controller1:5000/v2.0     |

| region       | RegionOne                        |

| service_id   | 69c389157be24cf6b4511d648e8412be |

| service_name | keystone                         |

| service_type | identity                         |

+--------------+----------------------------------+

5.2.6 Create projects, users, and roles

[[email protected] ~]# openstack project create --description "Admin Project" admin

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | Admin Project                    |

| enabled     | True                             |

| id          | 617e98e151b245d081203adcbb0ce7a4 |

| name        | admin                            |

+-------------+----------------------------------+

[[email protected] ~]# openstack user create --password-prompt admin

User Password:

Repeat User Password:

+----------+----------------------------------+

| Field   | Value                           |

+----------+----------------------------------+

| email   | None                            |

| enabled | True                             |

| id      | cfca3361950644de990b52ad341a06f0 |

| name    | admin                           |

| username | admin                            |

+----------+----------------------------------+

[[email protected] ~]# openstack role create admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id   | 6c89e70e3b274c44b068dbd6aef08bb2 |

| name | admin                           |

+-------+----------------------------------+

[[email protected] ~]#

[[email protected] ~]# openstack role add --project admin --user admin admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id   | 6c89e70e3b274c44b068dbd6aef08bb2 |

| name | admin                           |

+-------+----------------------------------+

[[email protected] ~]# openstack project create --description "Service Project" service

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | Service Project                  |

| enabled     | True                             |

| id          | 165f6edf748d4bff957beada1f2a728e |

| name        | service                          |

+-------------+----------------------------------+

5.2.7 Verifying keystone

unset OS_TOKEN OS_URL

[[email protected] ~]# openstack --os-auth-url http://controller1:35357 --os-project-name admin --os-username admin --os-auth-type password token issue

Password:

+------------+----------------------------------+

| Field     | Value                           |

+------------+----------------------------------+

| expires   | 2016-05-25T03:27:46Z            |

| id        | 2b1325bdd1c643ad9b6ceed17e663913 |

| project_id |617e98e151b245d081203adcbb0ce7a4 |

| user_id   | cfca3361950644de990b52ad341a06f0 |

+------------+----------------------------------+

# openstack --os-auth-url http://controller1:35357 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue

Password:

+------------+----------------------------------+

| Field     | Value                           |

+------------+----------------------------------+

| expires   | 2016-05-25T03:30:03.368364Z     |

| id        | 5c8f0e1ac4f0457884e788dff3b232d8 |

| project_id |617e98e151b245d081203adcbb0ce7a4 |

| user_id   | cfca3361950644de990b52ad341a06f0 |

+------------+----------------------------------+

Create an environment-variable script:

[[email protected] ~(keystone_admin)]# cat admin_keystone

unset OS_SERVICE_TOKEN OS_TOKEN OS_URL

export OS_USERNAME=admin

export OS_PASSWORD=1

export OS_AUTH_URL=http://10.192.44.148:35357/v2.0

export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_TENANT_NAME=admin

export OS_REGION_NAME=RegionOne

[[email protected] ~(keystone_admin)]# openstack user list

+----------------------------------+-------+

| ID                               | Name  |

+----------------------------------+-------+

| cfca3361950644de990b52ad341a06f0 | admin|

+----------------------------------+-------+

5.3 Installing horizon

5.3.1 Install horizon

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached

5.3.2 Modify the horizon configuration

Copy /etc/openstack-dashboard over from the packstack install.

Change the following:

./local_settings:OPENSTACK_KEYSTONE_URL = http://192.168.129.131:5000/v2.0

to:

OPENSTACK_KEYSTONE_URL = "http://10.192.44.148:5000/v2.0"

Nothing else needs to change.

setsebool -P httpd_can_network_connect on

# chown -R apache:apache /usr/share/openstack-dashboard/static

Restart httpd:

# systemctl enable httpd.service memcached.service

# systemctl restart httpd.service memcached.service

5.3.3 Login verification

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to complete your request.

Please contact the server administrator at [no address given] to inform them of the time this error occurred, and the actions you performed just before this error.

More information about this error may be available in the server error log.

This problem has been seen before; see Part 3.

The owner of /var/log/horizon/horizon.log is wrong:

[[email protected](keystone_admin)]# ls -l

total 0

-rw-r--r-- 1 root root 0 May 20 23:44 horizon.log

It should be:

[[email protected](keystone_admin)]# ls -l

total 4

-rw-r-----. 1 apache apache 316 May 18 19:35 horizon.log

Fix:

# chown apache:apache horizon.log

OK, the dashboard login now works.

The other components are not installed yet,

so errors after logging in are expected.

5.4 Installing glance

5.4.1 Create the database

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '1';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '1';

 

[[email protected] ~(keystone_admin)]# openstack user create --password-prompt glance

User Password:      (all passwords are 1)

Repeat User Password:

+----------+----------------------------------+

| Field    | Value                            |

+----------+----------------------------------+

| email    | None                             |

| enabled  | True                             |

| id       | 9b9b7d340f5c47fa8ead236b55400675 |

| name     | glance                           |

| username | glance                           |

+----------+----------------------------------+

# openstack role add --project service --user glance admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id   | 6c89e70e3b274c44b068dbd6aef08bb2 |

| name | admin                           |

+-------+----------------------------------+

# openstack service create --name glance --description "OpenStack Image service" image

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Imageservice          |

| enabled     | True                             |

| id          | a0c905098446491cbb2f948285364c43 |

| name        | glance                           |

| type        | image                            |

+-------------+----------------------------------+

openstack endpoint create \

--publicurl http://10.192.44.148:9292 \

--internalurl http://10.192.44.148:9292 \

--adminurl http://10.192.44.148:9292 \

--region RegionOne \

image

# openstack endpoint create --publicurl http://10.192.44.148:9292 --internalurl http://10.192.44.148:9292 --adminurl http://10.192.44.148:9292 --region RegionOne image

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| adminurl     | http://10.192.44.148:9292        |

| id           | 49a032e19f9841b381e795f60051f131 |

| internalurl  | http://10.192.44.148:9292        |

| publicurl    | http://10.192.44.148:9292        |

| region       | RegionOne                        |

| service_id   | a0c905098446491cbb2f948285364c43 |

| service_name | glance                           |

| service_type | image                            |

+--------------+----------------------------------+

5.4.2 Install glance

yum install openstack-glance python-glance python-glanceclient

 

5.4.3 Configure glance

Copy the glance configuration from the packstack install and modify it:

[[email protected] glance(keystone_admin)]# grep 192 ./ -r

./glance-registry.conf:connection=mysql://glance:[email protected]/glance

./glance-registry.conf:auth_uri=http://192.168.129.131:5000/v2.0

./glance-registry.conf:identity_uri=http://192.168.129.131:35357

./glance-api.conf:connection=mysql://glance:[email protected]/glance

./glance-api.conf:auth_uri=http://192.168.129.131:5000/v2.0

./glance-api.conf:identity_uri=http://192.168.129.131:35357

Change to:

[[email protected] glance(keystone_admin)]# grep 192 ./ -r

./glance-registry.conf:connection=mysql://glance:[email protected]/glance

./glance-registry.conf:auth_uri=http://10.192.44.148:5000/v2.0

./glance-registry.conf:identity_uri=http://10.192.44.148:35357

./glance-api.conf:connection=mysql://glance:[email protected]/glance

./glance-api.conf:auth_uri=http://10.192.44.148:5000/v2.0

./glance-api.conf:identity_uri=http://10.192.44.148:35357

Sync the database:

su -s /bin/sh -c "glance-manage db_sync" glance

 

Restart the services:

systemctl enable openstack-glance-api.service openstack-glance-registry.service

systemctl start openstack-glance-api.service openstack-glance-registry.service

5.4.4 Verify image upload to glance

echo "export OS_IMAGE_API_VERSION=2" | tee -a ./admin_keystone

[[email protected] ~(keystone_admin)]# cat admin_keystone

unset OS_SERVICE_TOKEN OS_TOKEN OS_URL

export OS_USERNAME=admin

export OS_PASSWORD=1

export OS_AUTH_URL=http://10.192.44.148:35357/v2.0

export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_TENANT_NAME=admin

export OS_REGION_NAME=RegionOne

export OS_IMAGE_API_VERSION=2

[[email protected] ~(keystone_admin)]# . admin_keystone

The network components are not installed yet, so images cannot be uploaded for the moment.
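For later reference, once uploads are possible a typical test looks like this (the cirros image file name is an assumption):

glance image-create --name cirros-test --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img
glance image-list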

5.5 Installing nova: controller node

5.5.1 Create the database

MariaDB [(none)]> CREATE DATABASE nova;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';

Create the user (all passwords are 1):

# openstack user create --password-promptnova

User Password:

Repeat User Password:

+----------+----------------------------------+

| Field   | Value                           |

+----------+----------------------------------+

| email   | None                            |

| enabled | True                            |

| id      | 0520ac06230f4c238ef96c66dc9d7ba6 |

| name    | nova                            |

| username | nova                             |

+----------+----------------------------------+

# openstack role add --project service --user nova admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id   | 6c89e70e3b274c44b068dbd6aef08bb2 |

| name | admin                           |

+-------+----------------------------------+

# openstack service create --name nova --description "OpenStack Compute" compute

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Compute                |

| enabled    | True                             |

| id          | f82db038024746449b5b6be918b826f0 |

| name        | nova                             |

| type        | compute                          |

+-------------+----------------------------------+

Create the endpoint:

openstack endpoint create \

--publicurl http://10.192.44.148:8774/v2/%\(tenant_id\)s \

--internalurl http://10.192.44.148:8774/v2/%\(tenant_id\)s \

--adminurl http://10.192.44.148:8774/v2/%\(tenant_id\)s \

--region RegionOne \

compute

# openstack endpoint create --publicurl http://10.192.44.148:8774/v2/%\(tenant_id\)s --internalurl http://10.192.44.148:8774/v2/%\(tenant_id\)s --adminurl http://10.192.44.148:8774/v2/%\(tenant_id\)s --region RegionOne compute

+--------------+--------------------------------------------+

| Field        | Value                                      |

+--------------+--------------------------------------------+

| adminurl     |http://10.192.44.148:8774/v2/%(tenant_id)s |

| id           | c34d670ee15b47bda43830a48e9c4ef2           |

| internalurl  | http://10.192.44.148:8774/v2/%(tenant_id)s|

| publicurl    |http://10.192.44.148:8774/v2/%(tenant_id)s |

| region       | RegionOne                                  |

| service_id   | f82db038024746449b5b6be918b826f0           |

| service_name | nova                                       |

| service_type | compute                                    |

+--------------+--------------------------------------------+

5.5.2 Install the controller-node packages

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

5.5.3 Configuration: based on the Ezviz (萤石) cloud config, the official docs, and the packstack config

The nova.conf generated by packstack has far too many options; use the Ezviz cloud configuration as the base and then check the handful of items the official docs require:

[[email protected] nova(keystone_admin)]#

[[email protected] nova(keystone_admin)]#cat nova.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.192.44.148

vncserver_listen = 10.192.44.148

vncserver_proxyclient_address = 10.192.44.148

memcached_servers = controller1:11211

[database]

connection =mysql://nova:[email protected]/nova

[oslo_messaging_rabbit]

rabbit_hosts=10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = 1

host = 10.192.44.148

[oslo_concurrency]

lock_path = /var/lock/nova

[[email protected] nova(keystone_admin)]#

Sync the database:

su -s /bin/sh -c "nova-manage db sync" nova

Start the services:

# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

 

Nova-api启动失败,其他服务OK

[[email protected] nova(keystone_admin)]# systemctl restart openstack-nova-cert.service

[[email protected](keystone_admin)]# systemctl restart openstack-nova-consoleauth.service

[[email protected](keystone_admin)]# systemctl restart openstack-nova-scheduler.service

[[email protected](keystone_admin)]# systemctl restart openstack-nova-conductor.service

[[email protected](keystone_admin)]# systemctl restart openstack-nova-novncproxy.service

[[email protected](keystone_admin)]#

排查nova-api启动失败原因:

2016-05-25 13:46:00.431 21599 ERROR nova OSError: [Errno 13] Permission denied: '/var/lock/nova'

手动创建试试:

[[email protected](keystone_admin)]# mkdir nova

[[email protected](keystone_admin)]# chmod 777 nova

OK,重启成功
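注意:CentOS 7 上 /var/lock 一般是指向 /run/lock 的 tmpfs,重启后手工建的 /var/lock/nova 会丢失。可以用 systemd-tmpfiles 让它在开机时自动重建,下面是一个示例配置(假设目录属主用 nova,路径和权限按实际情况调整):

# cat /etc/tmpfiles.d/nova-lock.conf
d /var/lock/nova 0755 nova nova -

# systemd-tmpfiles --create /etc/tmpfiles.d/nova-lock.conf    # 立即生效,以后每次开机也会自动创建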

5.5.4 验证nova service-list

[[email protected] ~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1 | nova-cert        | controller1 |internal | enabled | up    |2016-05-25T05:49:02.000000 | -              |

| 2 | nova-consoleauth | controller1 | internal | enabled | up    | 2016-05-25T05:48:57.000000 | -               |

| 3 | nova-conductor   | controller1 |internal | enabled | up    |2016-05-25T05:48:59.000000 | -              |

| 4 | nova-scheduler   | controller1 |internal | enabled | up    |2016-05-25T05:49:03.000000 | -              |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

OK,nova控制节点所有服务状态正常

5.6 安装nova:计算节点(compute1)【作废:libvirtd升级会有问题】

5.6.1 安装

# yum install openstack-nova-compute sysfsutils

5.6.2 配置

[neutron]字段暂时保留,后续整理

---------------------------------------------------------------------------------------------------------------------------------------------------

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.192.44.150

vnc_enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = 10.192.44.150

novncproxy_base_url = http://10.192.44.148:6080/vnc_auto.html

memcached_servers = controller1:11211

[database]

connection = mysql://nova:1@10.192.44.148/nova

[oslo_messaging_rabbit]

rabbit_host=10.192.44.148

rabbit_hosts=10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = 1

host = 10.192.44.148

[glance]

host=10.192.44.148

[oslo_concurrency]

lock_path = /var/lock/nova

[libvirt]

virt_type=qemu

---------------------------------------------------------------------------------------------------------------------------------------------------

确认是否支持硬件虚拟化:

egrep -c '(vmx|svm)' /proc/cpuinfo    # 结果为0说明CPU不支持硬件加速,[libvirt]里需要用virt_type=qemu

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

启动出错,排查:

oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config files: /etc/nova/nova.conf

修改nova.conf的属性:

-rw-r----- 1 root root  805 May 25 15:32 nova.conf

# chown root:nova nova.conf

再次重启:

OK,启动成功

5.6.3 验证:nova service-list

为什么没有出现nova-compute?

Packstack安装完成的环境里,nova-compute是可以在这个列表里看到的

这里先记录一下,放到neutron之后再排查
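排查前可以先确认计算节点上 nova-compute 进程和日志,以及控制节点 rabbitmq 上有没有来自计算节点的连接,下面是一个简单的排查思路(命令仅作示例):

# 计算节点:确认进程和最近的报错
systemctl status openstack-nova-compute.service
tail -n 50 /var/log/nova/nova-compute.log

# 控制节点:确认rabbitmq里有来自10.192.44.150的连接
rabbitmqctl list_connections | grep 10.192.44.150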

5.7 neutron的安装(控制节点)

5.7.1 创建数据库

MariaDB [(none)]> CREATE DATABASEneutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '1';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '1';

# openstack user create --password-promptneutron

User Password:

Repeat User Password:

+----------+----------------------------------+

| Field   | Value                           |

+----------+----------------------------------+

| email   | None                            |

| enabled | True                            |

| id      |2398cfe405ac4480b27d3dfba36b64b4 |

| name    | neutron                         |

| username | neutron                          |

+----------+----------------------------------+

# openstack role add --project service--user neutron admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id   | 6c89e70e3b274c44b068dbd6aef08bb2 |

| name | admin                           |

+-------+----------------------------------+

# openstack service create --name neutron--description "OpenStack Networking" network

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Networking             |

| enabled     | True                             |

| id          | a3f4980ffb63482b905282ca7d3a2b01 |

| name        | neutron                          |

| type        | network                          |

+-------------+----------------------------------+

创建endpoint:

openstack endpoint create \

--publicurl http://10.192.44.148:9696 \

--adminurl http://10.192.44.148:9696 \

--internalurl http://10.192.44.148:9696 \

--region RegionOne \

network

# openstack endpoint create --publicurl http://10.192.44.148:9696 --adminurl http://10.192.44.148:9696 --internalurl http://10.192.44.148:9696 --region RegionOne network

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| adminurl     | http://10.192.44.148:9696        |

| id           | 63fa679e443a4249a96a86ff17387b9f |

| internalurl  | http://10.192.44.148:9696        |

| publicurl    | http://10.192.44.148:9696        |

| region       | RegionOne                        |

| service_id   | a3f4980ffb63482b905282ca7d3a2b01 |

| service_name | neutron                          |

| service_type | network                          |

+--------------+----------------------------------+

5.7.2 安装网络组件:(控制节点)

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which

5.7.3 配置neutron

主要参考萤石云的配置。packstack的配置比较多,有些用不到,后面不好整理。

neutron.conf

[[email protected] neutron(keystone_admin)]#cat neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://10.192.44.148:8774/v2

[database]

connection = mysql://neutron:1@10.192.44.148/neutron

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = 1

[nova]

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = nova

password = 1

[[email protected] neutron(keystone_admin)]#

ml2_conf.ini

/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

type_drivers = flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

创建软连接:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

nova.conf 【compute节点也要改】

[DEFAULT]

...

network_api_class =nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver =nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver =nova.virt.firewall.NoopFirewallDriver

[neutron]

url = http://10.192.44.148:9696

auth_strategy = keystone

admin_auth_url =http://10.192.44.148:35357/v2.0

admin_tenant_name = service

admin_username = neutron

admin_password = 1
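上面[DEFAULT]和[neutron]这几项在控制节点和计算节点的nova.conf里都要改,手工编辑容易漏项,可以用crudini批量设置(示例,假设已安装crudini):

crudini --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
crudini --set /etc/nova/nova.conf DEFAULT security_group_api neutron
crudini --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
crudini --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
crudini --set /etc/nova/nova.conf neutron url http://10.192.44.148:9696
crudini --set /etc/nova/nova.conf neutron auth_strategy keystone
crudini --set /etc/nova/nova.conf neutron admin_auth_url http://10.192.44.148:35357/v2.0
crudini --set /etc/nova/nova.conf neutron admin_tenant_name service
crudini --set /etc/nova/nova.conf neutron admin_username neutron
crudini --set /etc/nova/nova.conf neutron admin_password 1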

5.7.4 同步数据库&启动服务

同步数据库:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

重启nova:

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

重启nova-compute:

systemctl start libvirtd.service openstack-nova-compute.service

启动neutron-server:

# systemctl enable neutron-server.service

# systemctl start neutron-server.service

5.7.5 验证

[[email protected] ml2(keystone_admin)]#neutron ext-list

+-----------------------+-----------------------------------------------+

| alias                 | name                                          |

+-----------------------+-----------------------------------------------+

| flavors               | Neutron Service Flavors                       |

| security-group        | security-group                                |

| dns-integration       | DNS Integration                               |

| l3_agent_scheduler    | L3 Agent Scheduler                            |

| net-mtu               | Network MTU                                   |

| ext-gw-mode           | Neutron L3 Configurable externalgateway mode |

| binding               | Port Binding                                  |

| provider              | Provider Network                              |

| agent                 | agent                                         |

| quotas                | Quota management support                      |

| subnet_allocation     | Subnet Allocation                             |

| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |

| rbac-policies         | RBAC Policies                                 |

| l3-ha                 | HA Router extension                           |

| multi-provider        | Multi Provider Network                        |

| external-net          | Neutron external network                      |

| router                | Neutron L3 Router                             |

| allowed-address-pairs | Allowed AddressPairs                         |

| extraroute            | Neutron Extra Route                           |

| extra_dhcp_opt        | Neutron Extra DHCP opts                       |

| dvr                   | Distributed VirtualRouter                    |

+-----------------------+-----------------------------------------------+

5.8 neutron agent的安装(网络节点)

5.8.1 修改sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

 

[[email protected] etc(keystone_admin)]#sysctl -p

net.ipv4.icmp_echo_ignore_broadcasts = 1

net.ipv4.conf.all.rp_filter = 1

vm.max_map_count = 300000

kernel.sem = -1 -1 -1 8192

kernel.sem = -1 256000 -1 8192

kernel.sem = 1250 256000 -1 8192

kernel.sem = 1250 256000 100 8192

kernel.shmall = 1152921504606846720

kernel.shmmax = 21474836480

kernel.panic_on_io_nmi = 1

kernel.panic_on_unrecovered_nmi = 1

kernel.unknown_nmi_panic = 1

kernel.panic_on_stackoverflow = 1

net.ipv4.tcp_keepalive_intvl = 1

net.ipv4.tcp_keepalive_time = 5

net.ipv4.tcp_keepalive_probes = 5

net.ipv4.ip_forward = 1

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

5.8.2 安装配置neutron组件

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

1. 配置neutron.conf:

(1)配置rabbitmq

[DEFAULT]

...

rpc_backend = rabbit

[oslo_messaging_rabbit]

...

rabbit_host = 10.192.44.148

rabbit_userid = openstack

rabbit_password = 1

(2)配置keystone

[DEFAULT]

...

auth_strategy = keystone

[keystone_authtoken]

...

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = 1

(3)配置ml2

[DEFAULT]

...

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

2      修改ml2_conf.ini

[[email protected] neutron(keystone_admin)]#cat plugin.ini

[ml2]

type_drivers = flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ml2_type_flat]

flat_networks = external

[ml2_type_gre]

tunnel_id_ranges = 1:1000

[ovs]

# this is a tunnel ip, pay attention

local_ip = 10.192.44.152      #(eth3)

bridge_mappings = external:br-ex

[agent]

tunnel_types = vxlan

[[email protected] neutron(keystone_admin)]#

3 l3_agent.ini配置:参考packstack

[DEFAULT]

debug = False

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

handle_internal_only_routers = True

external_network_bridge = br-ex

metadata_port = 9697

send_arp_for_ha = 3

periodic_interval = 40

periodic_fuzzy_delay = 5

enable_metadata_proxy = True

router_delete_namespaces = True

agent_mode = legacy

[AGENT]

4 dhcp_agent.ini:参考Packstack

[[email protected] neutron(keystone_admin)]# cat dhcp_agent.ini | grep -v '^#' | grep -v '^$'

[DEFAULT]

debug = False

resync_interval = 30

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = False

enable_metadata_network = False

dnsmasq_config_file =/etc/neutron/dnsmasq-neutron.conf

root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf

state_path=/var/lib/neutron

5 metadata_agent.ini

[DEFAULT]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_region = RegionOne

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = 1

nova_metadata_ip = 10.192.44.148

metadata_proxy_shared_secret = 1

verbose = True

6 nova.conf

[neutron]

...

service_metadata_proxy = True

metadata_proxy_shared_secret = 1

重启nova-api:

systemctl restart openstack-nova-api.service
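metadata_proxy_shared_secret必须在metadata_agent.ini和nova.conf的[neutron]段保持一致,否则虚拟机取metadata会报401。可以用crudini简单核对一下(示例,假设已安装crudini):

crudini --get /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret
crudini --get /etc/nova/nova.conf neutron metadata_proxy_shared_secret
# 两条命令输出应该一致(这里都是1)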

 

5.8.3 配置Open vSwitch服务

# systemctl enable openvswitch.service

# systemctl start openvswitch.service

 

创建br-ex:

# ovs-vsctl add-br br-ex

绑定一个网口到br-ex:

ovs-vsctl add-port br-ex eth0

这里执行完之后管理网络就不通了(eth0上的IP没有迁到br-ex上,网口被桥接走)

 

从其他网口登录,解绑定:

[[email protected] ~]#ovs-vsctl del-port br-ex eth0

[[email protected] ~]# ovs-vsctl list-ports

ovs-vsctl: 'list-ports' command requires at least 1 arguments

[[email protected] ~]#ovs-vsctl list-ports br-ex

 

这里需要另外一个网口来作为br-ex,外网网桥

 

这里使用eth3


| Hostname | IP(eth0) | IP1(open vswitch)(br-ex) | openstack role | Ceph mon role | Ceph osd | 配置 | Vip |
| node1 | 10.192.44.148 | Eth3:10.192.44.152 | Controller1+network1 | Mon0 | Osd0~osd3 | 4Core 16G | 10.192.44.155 |
| node2 | 10.192.44.149 | - | Controller2+network2 | Mon1 | Osd4~osd7 | 4Core 16G | 10.192.44.155 |
| node3 | 10.192.44.150 | - | Compute1 | Mon2 | Osd8~osd11 | 4Core 16G | - |
| node4 | 10.192.44.151 | - | Compute2 | Mon3 | Osd12~osd15 | 8Core 16G | - |

创建br-ex:

# ovs-vsctl add-br br-ex

# ovs-vsctl add-port br-ex eth3

ethtool -K eth3 gro off
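ovs-vsctl手工add-port在重启后不会自动恢复,如果希望br-ex与eth3的绑定持久化,可以参考下面的ifcfg写法(示例,假设把eth3上的10.192.44.152迁到br-ex,网卡名和IP按实际调整,建议先在虚拟机上验证):

# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.192.44.152
NETMASK=255.255.255.0

# cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none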

5.8.4 创建软连接,启动服务

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

启动服务:

# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

 

5.8.5 验证neutron服务

[[email protected] ~(keystone_admin)]#neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id                                  |agent_type         | host        | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| 1746662a-081c-4800-b371-479e670fbb20 |Metadata agent     | controller1 |:-)   | True           | neutron-metadata-agent    |

| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3agent           | controller1 | :-)   | True           | neutron-l3-agent          |

| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b |DHCP agent         | controller1 |:-)   | True           | neutron-dhcp-agent        |

| d264e9b0-c0c1-4e13-9502-43c248127dff |Open vSwitch agent | controller1 | :-)  | True           |neutron-openvswitch-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

OK,所有服务都已经启动

5.9 neutron ovs的安装(计算节点)【作废:ovs配置错误】

5.9.1 sysctl.conf修改

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

[[email protected] etc]# sysctl -p

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

5.9.2 安装及配置neutron组件(计算节点)

1. 安装

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

2 配置neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = 1

3 配置ml2_conf.ini

[ml2]

type_drivers = flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 10.192.44.150

[agent]

tunnel_types = vxlan #注意这里要和网络节点类型一致

重启open vSwitch:

# systemctl enable openvswitch.service

# systemctl start openvswitch.service

重启网络、控制节点neutron服务:

# systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

4 修改nova.conf

[DEFAULT]

...

network_api_class =nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver =nova.virt.firewall.NoopFirewallDriver

[neutron]

url = http://10.192.44.148:9696

auth_strategy = keystone

admin_auth_url =http://10.192.44.148:35357/v2.0

admin_tenant_name = service

admin_username = neutron

admin_password = 1

5 完成安装,重启服务

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

重启nova-compute:

systemctl restart openstack-nova-compute.service

 

启动openvswitch

# systemctl enable neutron-openvswitch-agent.service

# systemctl start neutron-openvswitch-agent.service

[[email protected] ~(keystone_admin)]#neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id                                   |agent_type         | host        | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| 1746662a-081c-4800-b371-479e670fbb20 |Metadata agent     | controller1 |:-)   | True           | neutron-metadata-agent    |

| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3agent           | controller1 | :-)   | True           | neutron-l3-agent          |

| 96820906-bc31-4fcf-a473-10a6d6865b2a |Open vSwitch agent | compute1    |:-)   | True           | neutron-openvswitch-agent |

| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b |DHCP agent         | controller1 |:-)   | True           | neutron-dhcp-agent        |

| d264e9b0-c0c1-4e13-9502-43c248127dff |Open vSwitch agent | controller1 | :-)  | True           |neutron-openvswitch-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

OK!

计算节点的openvswitch启动成功

6 Legacy networking (nova-network)

nova-network是遗留的网络组件(neutron独立出来之前,网络功能是nova的一部分),只能实现比较简单的组网。

本环境使用neutron,这部分不需要安装。

7 验证安装

5.10 安装nova(compute2)【作废:libvirt升级会有问题】

因为每次compute1重启网络配置都有问题,所以在compute2上重装

5.10.1 安装与配置nova-compute

1. 安装nova-compute服务

yum installopenstack-nova-compute sysfsutils

 

2      设置子网卡给148、151

配置148的eth0:1

DEVICE=eth0:1

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.148

NETMASK=255.255.254.0

[[email protected](keystone_admin)]# ifup eth0:1

eth0:1:flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500

inet 192.168.0.148  netmask 255.255.255.0  broadcast 192.168.0.255

ether 88:00:00:01:02:14 txqueuelen 1000  (Ethernet)

device memory 0xbb000000-bb020000

配置151的eth1:1

DEVICE=eth1:1

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.151

NETMASK=255.255.255.0

千万不要在这个子接口的配置里写GATEWAY,一旦配置,默认路由会被改掉,机器的管理网就会断

eth1:1:flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet 192.168.0.151  netmask 255.255.255.0  broadcast 192.168.0.255

ether 00:25:90:01:b0:28 txqueuelen 1000  (Ethernet)

device interrupt 16  memory0xfaee0000-faf00000
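按上面的提醒,配完子接口后可以确认一下默认路由没有被改动,仍然指向原来的管理网网关(示例输出,网口名和网关以实际环境为准):

# ip route | grep default
default via 10.192.44.254 dev eth1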

3      配置nova.conf

[[email protected] nova]# cat nova.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.192.44.151

vnc_enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = 10.192.44.151

novncproxy_base_url = http://10.192.44.151:6080/vnc_auto.html

verbose = True

network_api_class =nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver =nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver =nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = 1

[glance]

host = 10.192.44.148

[oslo_concurrency]

lock_path = /var/lock/nova

[neutron]

url = http://10.192.44.148:9696

auth_strategy = keystone

admin_auth_url = http://10.192.44.148:35357/v2.0

admin_tenant_name = service

admin_username = neutron

admin_password = 1

[libvirt]

virt_type = qemu

[[email protected] nova]#

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

 

[[email protected] nova]# ps -A|grep nova

14517 ?        00:00:03 nova-compute

5.11 安装neutron ovs(compute2)【作废:libvirt升级会有问题】

[[email protected] nova]# sysctl -p

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

安装neutron-openvswitch:

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

 

1 配置neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = 1

配置ml2_conf.ini:

[ml2]

type_drivers = flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 192.168.0.151

[agent]

tunnel_types =vxlan

重启服务

# systemctl enable openvswitch.service

# systemctl start openvswitch.service

 

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

 

# systemctl enable neutron-openvswitch-agent.service

# systemctl start neutron-openvswitch-agent.service

 

验证

[[email protected](keystone_admin)]# neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id                                   |agent_type         | host        | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

|1746662a-081c-4800-b371-479e670fbb20 | Metadata agent     | controller1 | :-)   | True           | neutron-metadata-agent    |

|2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3 agent           | controller1 | :-)   | True           | neutron-l3-agent          |

|5749371b-df3e-4a51-a7f9-279ee2b8666a | Open vSwitch agent | compute2    | :-)  | True           | neutron-openvswitch-agent|

|96820906-bc31-4fcf-a473-10a6d6865b2a | Open vSwitch agent | compute1    | xxx  | True           |neutron-openvswitch-agent |

|ad55ffa2-dd19-4cee-b5fc-db4bc60b796b | DHCP agent         | controller1 | :-)   | True           | neutron-dhcp-agent        |

|d264e9b0-c0c1-4e13-9502-43c248127dff | Open vSwitch agent | controller1 |:-)   | True           | neutron-openvswitch-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

5.12 排查nova service-list失败问题

[[email protected] etc(keystone_admin)]#nova service-list

ERROR (ConnectionRefused): Unable to establish connection to http://10.192.44.148:8774/v2/

nova-api的连接被拒绝,怀疑是访问数据库有问题,逐项排查:

[[email protected] etc(keystone_admin)]#openstack user list

+----------------------------------+---------+

| ID                               | Name    |

+----------------------------------+---------+

| 0520ac06230f4c238ef96c66dc9d7ba6 |nova    |

| 2398cfe405ac4480b27d3dfba36b64b4 |neutron |

| 9b9b7d340f5c47fa8ead236b55400675 |glance  |

| cfca3361950644de990b52ad341a06f0 |admin   |

+----------------------------------+---------+

[[email protected] etc(keystone_admin)]#openstack service list

+----------------------------------+----------+----------+

| ID                               | Name     | Type    |

+----------------------------------+----------+----------+

| 69c389157be24cf6b4511d648e8412be |keystone | identity |

| a0c905098446491cbb2f948285364c43 |glance   | image    |

| a3f4980ffb63482b905282ca7d3a2b01 |neutron  | network  |

| f82db038024746449b5b6be918b826f0 |nova     | compute  |

+----------------------------------+----------+----------+

[[email protected] etc(keystone_admin)]#openstack endpoint list

+----------------------------------+-----------+--------------+--------------+

| ID                               | Region    | Service Name | Service Type |

+----------------------------------+-----------+--------------+--------------+

| 49a032e19f9841b381e795f60051f131 |RegionOne | glance       | image        |

| c34d670ee15b47bda43830a48e9c4ef2 |RegionOne | nova         | compute      |

| 63fa679e443a4249a96a86ff17387b9f |RegionOne | neutron      | network      |

| 6df505c12153483a9f8dc42d64879c69 |RegionOne | keystone     | identity     |

+----------------------------------+-----------+--------------+--------------+

[[email protected] etc(keystone_admin)]#openstack endpoint show c34d670ee15b47bda43830a48e9c4ef2

+--------------+--------------------------------------------+

| Field        | Value                                      |

+--------------+--------------------------------------------+

| adminurl     | http://10.192.44.148:8774/v2/%(tenant_id)s|

| enabled      | True                                       |

| id           |c34d670ee15b47bda43830a48e9c4ef2          |

| internalurl  | http://10.192.44.148:8774/v2/%(tenant_id)s|

| publicurl    | http://10.192.44.148:8774/v2/%(tenant_id)s|

| region       | RegionOne                                  |

| service_id   | f82db038024746449b5b6be918b826f0           |

| service_name | nova                                       |

| service_type | compute                                    |

+--------------+--------------------------------------------+

这些都没问题

是不是权限没有设置?

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';

MariaDB [(none)]> FLUSH PRIVILEGES;

数据库同步:

su -s /bin/sh -c"nova-manage db sync" nova

systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

nova-api.log中的打印:

'/var/lock/nova'

[[email protected](keystone_admin)]# mkdir nova

[[email protected](keystone_admin)]# chown nova:root nova/

[[email protected](keystone_admin)]# chmod 755 nova/

注意这里很多目录yum安装都没有创建:

[email protected]:/var/lock#ls -l

total 0

drwxr-xr-x 2 www-data   root   40 Jan 19 10:12 apache2

drwxr-xr-x 2 ceilometer root   40 Jan 19 10:12 ceilometer

drwxr-xr-x 2 cinder     root 3080 May 26 15:26 cinder

drwxr-xr-x 2 glance     root   40 Jan 19 10:12 glance

drwxr-xr-x 2 heat       root   40 Jan 19 10:12 heat

drw------- 2 root       root   60 May 17 11:45 iscsi

drwxr-xr-x 2 keystone   root   40 Jan 29 11:29 keystone

drwx------ 2 root       root   40 May 26 15:46 lvm

drwxr-xr-x 2 neutron    root   40 Jan 19 10:12 neutron

drwxr-xr-x 2 nova       root   60 Jan 19 10:12 nova

drwx------ 2 root       root   60 Jan 19 10:10 zfs
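由于yum安装不会创建这些锁目录,每装一个组件都可能踩一次同样的坑,可以用一个小循环一次性建好(示例,服务列表按实际安装的组件增减):

for svc in nova glance cinder neutron keystone; do
    install -d -o $svc -g root -m 755 /var/lock/$svc
done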

重启nova服务:

OK,服务启动正常

[[email protected](keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id |Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1  | nova-cert        | controller1 | internal | enabled |up    | 2016-05-26T07:40:20.000000 |-               |

| 2  | nova-consoleauth | controller1 | internal |enabled | up    |2016-05-26T07:40:20.000000 | -              |

| 3  | nova-conductor   | controller1 | internal | enabled | up    | 2016-05-26T07:40:16.000000 | -               |

| 4  | nova-scheduler   | controller1 | internal | enabled | up    | 2016-05-26T07:40:20.000000 | -               |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

但是这里依旧没有nova-compute:

5.13 排查libvirtd启动失败问题

在计算节点同样操作:

[[email protected]]# mkdir nova

[[email protected]]# chown nova:root nova

[[email protected]]# chmod 755 nova

[[email protected]]#

重启:

systemctl restartopenstack-nova-compute.service

为什么列表中没有nova-compute?

Nova-compute启动失败,报错:

systemctl restart openstack-nova-compute.service

ng compute node (version 12.0.1-1.el7)

r [-] Connection event '0' reason 'Failed to connect to libvirt'

iver [req-137f7a94-1409-4ce0-b1fd-b6dca8b18e1c - - - - -] Cannot update service status on host "compute2" since it is not registered.

[req-137f7a94-1409-4ce0-b1fd-b6dca8b18e1c - - - - -] Connection to libvirt failed: no connection driver available for qemu:///system

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 528, in get_connection

conn = self._get_connection()

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 515, in _get_connection

wrapped_conn = self._get_new_connection()

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 467, in _get_new_connection

wrapped_conn = self._connect(self._uri, self._read_only)

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 321, in _connect

libvirt.openAuth, uri, auth, flags)

File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call

rv = execute(f, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute

six.reraise(c, e, tb)

File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker

rv = meth(*args, **kwargs)

File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth

if ret is None: raise libvirtError('virConnectOpenAuth() failed')

libvirtError: no connection driver available for qemu:///system

没有连上qemu

1.      修改libvirtd.conf:

unix_sock_group= "libvirtd"

unix_sock_ro_perms= "0777"

unix_sock_rw_perms= "0770"

auth_unix_ro ="none"

auth_unix_rw ="none"

2.      修改qemu.conf

vnc_listen = "0.0.0.0"

user = "root"

group = "root"

验证:

重启服务:

systemctl start libvirtd.service

启动失败:

May 26 16:00:27 compute2 libvirtd[8507]: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove,

May 26 16:00:27 compute2 libvirtd[8507]: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

参考:http://blog.csdn.net/lipei1220/article/details/50629961

解决:

# yum install librbd1

# yum install librbd1-devel

没用

再次启动:

systemctl daemon-reload

systemctl restart libvirtd.service

搜一下这个符号:

这里/lib和/usr/lib等同, lib64和/usr/lib64等同

所安装的libvirt包:

[[email protected]/]# rpm -aq |grep libvirt

libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64

libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64

libvirt-glib-0.1.7-3.el7.x86_64

libvirt-daemon-1.2.17-13.el7.x86_64

libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64

libvirt-1.2.17-13.el7.x86_64

libvirt-gobject-0.1.7-3.el7.x86_64

libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64

libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64

libvirt-docs-1.2.17-13.el7.x86_64

libvirt-client-1.2.17-13.el7.x86_64

libvirt-daemon-driver-network-1.2.17-13.el7.x86_64

libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64

libvirt-gconfig-0.1.7-3.el7.x86_64

libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64

libvirt-java-devel-0.4.9-4.el7.noarch

libvirt-daemon-config-network-1.2.17-13.el7.x86_64

libvirt-python-1.2.17-2.el7.x86_64

libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64

libvirt-devel-1.2.17-13.el7.x86_64

libvirt-daemon-kvm-1.2.17-13.el7.x86_64

libvirt-java-0.4.9-4.el7.noarch

解决方法:http://www.codesec.net/view/185902.html

重启libvirtd服务,查看日志记录/var/log/messages

Aug 10 11:15:57 localhost systemd: Starting Virtualization daemon...

Aug 10 11:15:57 localhost journal: libvirt version: 1.2.8, package: 16.el7_1.3 (CentOS BuildSystem <http://bugs.centos.org>, 2015-05-12-20:12:58, worker1.bsys.centos.org)

Aug 10 11:15:57 localhost journal: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

Aug 10 11:15:57 localhost journal: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

Aug 10 11:15:57 localhost journal: Module /usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so not accessible

Aug 10 11:15:57 localhost systemd: Started Virtualization daemon.

日志记录错误关键部分:

journal: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

Aug 10 11:15:57 localhost journal: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

经过Google,解决此问题只需更新软件包

查看包版本:

yum info device-mapper-libs

更新软件包:

yum update device-mapper-libs

张工:

dev-mapper为我们的核心业务模块,升级影响较大

请先尝试使用原来的libvirt版本,不升级libvirt相关rpm

不能升级的rpm列表更新如下:

内核相关rpm

Dev-mapper

Libvirt相关rpm,不包含python相关的

--这里nova-compute安装的时候,不安装libvirt及libvirtd
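为避免后续yum操作把这些包升级掉,可以沿用前面/etc/yum.conf里exclude的做法,把libvirt、device-mapper相关的包也排除(示例,具体包名模式按rpm -qa的结果调整):

/etc/yum.conf:

exclude=kernel* centos-release* libvirt* device-mapper*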

试试149上libvirtd能否启动

5.14 安装nova(compute1:使用10.192.44.149)

因为compute1(150)每次重启后网络都不通,且libvirtd被升级过

5.14.1 安装nova-compute:以rpm方式(某些非核心包,使用yum方式安装,若依赖包有冲突,强制无依赖安装)

原来应该执行:yum install openstack-nova-compute sysfsutils

这里yum install sysfsutils

对于openstack-nova-compute,只下载不安装:

手动安装nova-compute:

只下载不安装:

yum install --downloadonly --downloaddir=/root/rpm openstack-nova-compute

手动安装nova-compute:

 

[[email protected] rpm]# rpm -ivh openstack-nova-compute-12.0.1-1.el7.noarch.rpm

warning: openstack-nova-compute-12.0.1-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 764429e6: NOKEY

error: Failed dependencies:

        openstack-nova-common = 1:12.0.1-1.el7 is needed by openstack-nova-compute-1:12.0.1-1.el7.noarch

        python-cinderclient >= 1.3.1 is needed by openstack-nova-compute-1:12.0.1-1.el7.noarch

        python-libguestfs is needed by openstack-nova-compute-1:12.0.1-1.el7.noarch

安装:

[[email protected] rpm]# rpm -ivh openstack-nova-common-12.0.1-1.el7.noarch.rpm

warning: openstack-nova-common-12.0.1-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 764429e6: NOKEY

error: Failed dependencies:

        python-nova = 1:12.0.1-1.el7 is needed by openstack-nova-common-1:12.0.1-1.el7.noarch

安装:

 

[[email protected] rpm]# rpm -ivh python-nova-12.0.1-1.el7.noarch.rpm

warning: python-nova-12.0.1-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 764429e6: NOKEY

error: Failed dependencies:

这里依赖比较多,可以采用yum的方式安装,都是python库,不会有什么冲突

安装:这里可以使用yum安装,不会导致libvirt的库被替换

# yum install python-nova

继续安装:

# rpm -ivh python-nova-12.0.1-1.el7.noarch.rpm

继续安装nova-compute:

# rpm -ivh openstack-nova-common-12.0.1-1.el7.noarch.rpm

依赖:

[[email protected] rpm]# rpm -ivh openstack-nova-compute-12.0.1-1.el7.noarch.rpm

warning: openstack-nova-compute-12.0.1-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 764429e6: NOKEY

error: Failed dependencies:

        python-cinderclient >= 1.3.1 is needed by openstack-nova-compute-1:12.0.1-1.el7.noarch

        python-libguestfs is needed by openstack-nova-compute-1:12.0.1-1.el7.noarch

检查依赖:

# yum deplist python-libguestfs

可以直接yum?

不可以,这里会安装libvirt:

# yum install python-libguestfs会依赖libvirt,怎么办?

先安装python-cinderclient:

# yum install python-cinderclient

继续安装:

[[email protected] rpm]# rpm -ivh openstack-nova-compute-12.0.1-1.el7.noarch.rpm

warning: openstack-nova-compute-12.0.1-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 764429e6: NOKEY

error: Failed dependencies:

python-libguestfs is needed by openstack-nova-compute-1:12.0.1-1.el7.noarch

这里怎么办:

如果安装python-libguestfs,则必然会安装libvirt

手动安装:

# rpm -ivh python-libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

warning: python-libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

error: Failed dependencies:

libguestfs = 1:1.28.1-1.55.el7.centos is needed by python-libguestfs-1:1.28.1-1.55.el7.centos.x86_64

# rpm -ivh libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

warning: libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

error: Failed dependencies:

supermin5 >= 5.1.8-3 is needed by libguestfs-1:1.28.1-1.55.el7.centos.x86_64

systemd >= 219 is needed by libguestfs-1:1.28.1-1.55.el7.centos.x86_64

augeas-libs >= 1.1.0-16 is needed by libguestfs-1:1.28.1-1.55.el7.centos.x86_64

libvirt-daemon-kvm >= 1.2.8-3 is needed by libguestfs-1:1.28.1-1.55.el7.centos.x86_64

强制安装呢?

# rpm -ivh --force --nodeps libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

warning: libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...

1:libguestfs-1:1.28.1-1.55.el7.cent#################################[100%]

/sbin/ldconfig: /lib64/libosipparser2.so.3is not a symbolic link

/sbin/ldconfig: /lib64/libeXosip2.so.4 is nota symbolic link

/sbin/ldconfig: /lib64/libosip2.so.3 is nota symbolic link

强制安装:

# rpm -ivh --force --nodeps python-libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

warning: python-libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

Preparing...                         ################################# [100%]

Updating / installing...

1:python-libguestfs-1:1.28.1-1.55.e#################################[100%]

安装nova-compute:

# rpm -ivh openstack-nova-compute-12.0.1-1.el7.noarch.rpm

warning: openstack-nova-compute-12.0.1-1.el7.noarch.rpm: Header V4RSA/SHA1 Signature, key ID 764429e6: NOKEY

Preparing...                         ################################# [100%]

Updating / installing...

1:openstack-nova-compute-1:12.0.1-1#################################[100%]

OK,安装成功!!!
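上面这串依赖是手工逐个试出来的,后续做自动化/持续集成脚本时可以把流程固化成下面这样的示意脚本(包名、版本以实际下载到/root/rpm的为准,仅供参考):

#!/bin/bash
# 1. 只下载不安装nova-compute及其依赖
yum install -y --downloadonly --downloaddir=/root/rpm openstack-nova-compute
# 2. 纯python依赖用yum装,不会替换libvirt
yum install -y sysfsutils python-nova python-cinderclient
# 3. libguestfs相关会拉起libvirt,这里强制忽略依赖安装
cd /root/rpm
rpm -ivh --force --nodeps libguestfs-*.el7.centos.x86_64.rpm
rpm -ivh --force --nodeps python-libguestfs-*.el7.centos.x86_64.rpm
# 4. 最后安装nova-compute本体
rpm -ivh openstack-nova-common-*.el7.noarch.rpm
rpm -ivh openstack-nova-compute-*.el7.noarch.rpm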

5.14.2 配置nova.conf

[[email protected] nova]# cat nova.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.192.44.149

vnc_enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = 10.192.44.149

novncproxy_base_url = http://10.192.44.148:6080/vnc_auto.html

verbose = True

network_api_class =nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver =nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver =nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = 1

[glance]

host = 10.192.44.148

[oslo_concurrency]

lock_path = /var/lock/nova

[neutron]

url = http://10.192.44.148:9696

auth_strategy = keystone

admin_auth_url = http://10.192.44.148:35357/v2.0

admin_tenant_name = service

admin_username = neutron

admin_password = 1

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://[email protected]%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

[[email protected] nova]#

# systemctlenable libvirtd.service openstack-nova-compute.service

# systemctl startlibvirtd.service openstack-nova-compute.service

 

[[email protected] nova]# ps -A|grep nova

14517 ?        00:00:03 nova-compute

 

5.14.3 配置libvirt【需要yum install qemu qemu-img,见5.14.4】

参考packstack自动部署的配置,现装一个packstack自动部署的环境

# cat libvirtd.conf | grep -v '^#' | grep -v '^$'

listen_tls = 0

listen_tcp = 1

auth_tcp = "none"

# cat qemu.conf | grep -v '^#' | grep -v '^$'

空的

[[email protected] libvirt]# vi qemu.conf

vnc_listen = "0.0.0.0"

user = "root"

group = "root"

再对比一下nova的配置:

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://[email protected]%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

还是报:

libvirtError: no connection driver available for qemu:///system

再次对比nova.conf其他部分:

将packstack拷贝过来修改?

[[email protected] nova]# cat nova.conf | grep 192

metadata_host=192.168.129.130 -> 10.192.44.148

sql_connection=mysql://nova:[email protected]/nova

vncserver_proxyclient_address=10.192.44.149

novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

api_servers=10.192.44.148:9292

auth_uri=http://10.192.44.148:5000/v2.0

identity_uri=http://10.192.44.148:35357

url=http://10.192.44.148:9696

admin_auth_url=http://10.192.44.148:5000/v2.0

rabbit_host=10.192.44.148

rabbit_hosts=10.192.44.148:5672

# systemctl restart openstack-nova-compute.service

同样出现问题:

iver [req-9c628ebd-a804-4e8e-9bd8-111b95575f18 - - - - -] Cannot update service status on host "compute1" since it is not registered.

[req-9c628ebd-a804-4e8e-9bd8-111b95575f18 - - - - -] Connection to libvirt failed: no connection driver available for qemu:///system

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 528, in get_connection

conn = self._get_connection()

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 515, in _get_connection

wrapped_conn = self._get_new_connection()

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 467, in _get_new_connection

wrapped_conn = self._connect(self._uri, self._read_only)

File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 321, in _connect

libvirt.openAuth, uri, auth, flags)

File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call

rv = execute(f, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute

six.reraise(c, e, tb)

File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker

rv = meth(*args, **kwargs)

File "/usr/lib64/python2.7/site-packages/libvirt.py", line 102, in openAuth

if ret is None: raise libvirtError('virConnectOpenAuth() failed')

libvirtError: no connection driver available for qemu:///system

百度:

[DEFAULT]

compute_driver=libvirt.LibvirtDriver

没用

如此验证:

[[email protected] nova]# virsh -c qemu:///system list

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

参考:http://www.cnblogs.com/CasonChan/archive/2015/08/14/4729417.html

修改:libvirt.conf:

uri_default = "qemu:///system"

[[email protected] libvirt]#  systemctl restart libvirtd

[[email protected] libvirt]# ps -A |grep lib

8024?        00:00:00 libvirtd

再次连接

[[email protected] libvirt]# virsh -c qemu:///system list

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

[[email protected] /]# virsh -c list

error: failed to connect to the hypervisor

error: no connection driver available for list

[[email protected] libvirt]# vi qemu.conf

vnc_listen = "0.0.0.0"

user = "root"

group = "root"

[[email protected] libvirt]# virsh -c list

error: failed to connect to the hypervisor

error: no connection driver available for list

5.14.4 先解决qemu问题

[[email protected] etc]# qemu-img -v

qemu-img: error while loading shared libraries: libgfapi.so.0: cannot open shared object file: No such file or directory

find ./ -name libgfapi.so.0

找不到这个库

拷贝过来

[[email protected] lib64(keystone_admin)]# ls libgfapi.so.0*

libgfapi.so.0  libgfapi.so.0.0.0

# chmod 755 libgfapi.so.0.0.0

# ln -s libgfapi.so.0.0.0  libgfapi.so.0

[[email protected] lib64]# qemu-img  -v

qemu-img: error while loading shared libraries: libgfrpc.so.0: cannot open shared object file: No such file or directory

还是找不到?

[[email protected] lib64]# cp libgfapi.so.0.0.0 /usr/lib

[[email protected] lib64]# cd /usr/lib

[[email protected] lib]# ln -s libgfapi.so.0.0.0 libgfapi.so.0

再次尝试:

安装一下qemu:

# yum install qemu

再次重启nova-compute

还是出现:

[[email protected] lib]# qemu-img  -v

qemu-img: error while loading shared libraries: libgfrpc.so.0: cannot open shared object file: No such file or directory

进一步安装:

# yum install qemu-img

继续:

[[email protected] lib]# qemu-img  -v |grep version

qemu-img version 1.5.3, Copyright (c)2004-2008 Fabrice Bellard

OK
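可以顺手用ldd确认qemu-img的共享库依赖已经全部解析,避免还残留类似libgfapi/libgfrpc的缺库问题(示例):

# ldd /usr/bin/qemu-img | grep "not found"
# 没有任何输出,说明依赖库都能找到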

重启nova-compute:

systemctl restart openstack-nova-compute.service

5.14.5 nova-compute被controller探测到


[[email protected] ~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1 | nova-cert        | controller1 |internal | enabled | up    | 2016-05-26T12:39:29.000000| -               |

| 2 | nova-consoleauth | controller1 | internal | enabled | up    | 2016-05-26T12:39:27.000000 | -               |

| 3 | nova-conductor   | controller1 |internal | enabled | up    |2016-05-26T12:39:26.000000 | -              |

| 4 | nova-scheduler   | controller1 |internal | enabled | up    |2016-05-26T12:39:27.000000 | -              |

| 5 | nova-compute     | compute1    | nova    | enabled | up    |2016-05-26T12:39:26.000000 | -              |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

5.15 安装neutron ovs (compute1: 使用10.192.44.149)

5.15.1 设置sysctl.conf

[[email protected] etc]# sysctl  -p

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

5.15.2 配置eth0:1

配置149的eth0:1

DEVICE=eth0:1

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.149

NETMASK=255.255.255.0

[[email protected] network-scripts]# ifupeth0:1

[[email protected] network-scripts]# ifconfigeth0:1

eth0:1:flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500

inet 192.168.0.149  netmask255.255.255.0  broadcast 192.168.0.255

ether 88:00:00:01:02:80 txqueuelen 1000  (Ethernet)

device memory 0xbb000000-bb020000

5.15.3 安装neutron(计算节点部分)

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

1 配置neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = 1

2 配置ml2_conf.ini

[ml2]

type_drivers = flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 192.168.0.149

[agent]

tunnel_types = vxlan

3 重启服务

# systemctl enable openvswitch.service

# systemctl start openvswitch.service

 

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

 

# systemctl enable neutron-openvswitch-agent.service

# systemctl start neutron-openvswitch-agent.service

 

4 验证

[[email protected]~(keystone_admin)]#neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id                                   |agent_type         | host        | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

|1746662a-081c-4800-b371-479e670fbb20 | Metadata agent     | controller1 | :-)   | True           | neutron-metadata-agent    |

|2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3 agent           | controller1 | :-)   | True           | neutron-l3-agent          |

|5749371b-df3e-4a51-a7f9-279ee2b8666a | Open vSwitch agent | compute2    | xxx  | True           |neutron-openvswitch-agent |

|96820906-bc31-4fcf-a473-10a6d6865b2a | Open vSwitch agent | compute1    | :-)  | True           |neutron-openvswitch-agent |

|ad55ffa2-dd19-4cee-b5fc-db4bc60b796b | DHCP agent         | controller1 | :-)   | True           | neutron-dhcp-agent        |

|d264e9b0-c0c1-4e13-9502-43c248127dff | Open vSwitch agent | controller1 |:-)   | True           | neutron-openvswitch-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

[[email protected]~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id |Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1  | nova-cert        | controller1 | internal | enabled |up    | 2016-05-26T13:09:29.000000 | -               |

| 2  | nova-consoleauth | controller1 | internal |enabled | up    |2016-05-26T13:09:28.000000 | -              |

| 3  | nova-conductor   | controller1 | internal | enabled | up    | 2016-05-26T13:09:26.000000 | -               |

| 4  | nova-scheduler   | controller1 | internal | enabled | up    | 2016-05-26T13:09:28.000000 | -               |

| 5  | nova-compute     | compute1    | nova    | enabled | up    |2016-05-26T13:09:22.000000 | -              |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

5.16 cinder安装(注意不要安装LVM,直接用ceph)

5.16.1 创建数据库

MariaDB [(none)]> CREATE DATABASEcinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '1';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '1';

MariaDB [(none)]> FLUSH PRIVILEGES;

# openstack user create --password-promptcinder

User Password:

Repeat User Password:

+----------+----------------------------------+

| Field   | Value                           |

+----------+----------------------------------+

| email   | None                            |

| enabled | True                            |

| id      | 290aac0402914399a187218ac6d351af |

| name    | cinder                          |

| username | cinder                           |

+----------+----------------------------------+

# openstack role add --project service--user cinder admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id   | 6c89e70e3b274c44b068dbd6aef08bb2 |

| name | admin                           |

+-------+----------------------------------+

# openstack service create --name cinder--description "OpenStack Block Storage" volume

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack BlockStorage          |

| enabled     | True                             |

| id          | 3bfefe0409ba4b658d14071d3dbae348 |

| name        | cinder                           |

| type        | volume                           |

+-------------+----------------------------------+

# openstack service create --name cinderv2--description "OpenStack Block Storage" volumev2

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack BlockStorage          |

| enabled     | True                             |

| id          | 4094a5b3cf5546f2b5de7ceac3229160 |

| name        | cinderv2                         |

| type        | volumev2                         |

+-------------+----------------------------------+

openstack endpoint create \

--publicurl http://10.192.44.148:8776/v2/%\(tenant_id\)s \

--internalurl http://10.192.44.148:8776/v2/%\(tenant_id\)s \

--adminurl http://10.192.44.148:8776/v2/%\(tenant_id\)s \

--region RegionOne \

volume

# openstack endpoint create --publicurl http://10.192.44.148:8776/v2/%\(tenant_id\)s --internalurl http://10.192.44.148:8776/v2/%\(tenant_id\)s --adminurl http://10.192.44.148:8776/v2/%\(tenant_id\)s --region RegionOne volume

+--------------+--------------------------------------------+

| Field        | Value                                      |

+--------------+--------------------------------------------+

| adminurl     | http://10.192.44.148:8776/v2/%(tenant_id)s|

| id           |7d19c203fdc7495fbfd0b01d9bc6203c          |

| internalurl  | http://10.192.44.148:8776/v2/%(tenant_id)s|

| publicurl    |http://10.192.44.148:8776/v2/%(tenant_id)s |

| region       | RegionOne                                  |

| service_id   | 3bfefe0409ba4b658d14071d3dbae348           |

| service_name | cinder                                     |

| service_type | volume                                     |

+--------------+--------------------------------------------+

openstack endpoint create \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region RegionOne \
volumev2

 

# openstack endpoint create --publicurl http://10.192.44.148:8776/v2/%\(tenant_id\)s --internalurl http://10.192.44.148:8776/v2/%\(tenant_id\)s --adminurl http://10.192.44.148:8776/v2/%\(tenant_id\)s --region RegionOne volumev2

+--------------+--------------------------------------------+

| Field        | Value                                      |

+--------------+--------------------------------------------+

| adminurl     |http://10.192.44.148:8776/v2/%(tenant_id)s |

| id           | 7fd7d16a27d74eeea3a9df764d3e0a74           |

| internalurl  | http://10.192.44.148:8776/v2/%(tenant_id)s|

| publicurl    |http://10.192.44.148:8776/v2/%(tenant_id)s |

| region       | RegionOne                                  |

| service_id   | 4094a5b3cf5546f2b5de7ceac3229160           |

| service_name | cinderv2                                   |

| service_type | volumev2                                   |

+--------------+--------------------------------------------+

5.11.2 Install cinder-api and cinder-scheduler (controller node)

yum install openstack-cinder python-cinderclient python-oslo-db

# cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf

# chown -R cinder:cinder /etc/cinder/cinder.conf

Edit the cinder.conf configuration:

[[email protected] cinder(keystone_admin)]#cat cinder.conf

[DEFAULT]

my_ip = 10.192.44.148

rpc_backend = rabbit

auth_strategy = keystone

glance_host = 10.192.44.148

[oslo_messaging_rabbit]

rabbit_hosts = 10.192.44.148:5672

rabbit_userid = openstack

rabbit_password = 1

[database]

connection = mysql://cinder:1@10.192.44.148/cinder

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = 1

[oslo_concurrency]

lock_path = /var/lock/cinder

Sync the database:

su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services:

#  systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

Check:

[[email protected] cinder(keystone_admin)]#ps -A |grep cinder

8650?        00:00:00 cinder-api

8651?        00:00:01 cinder-schedule

8696?        00:00:00 cinder-api

8700?        00:00:00 cinder-api

8703?        00:00:00 cinder-api

8704?        00:00:00 cinder-api

[[email protected] cinder(keystone_admin)]#cinder service-list

+------------------+-------------+------+---------+-------+------------+-----------------+

|     Binary      |     Host   | Zone |  Status | State | Updated_at| Disabled Reason |

+------------------+-------------+------+---------+-------+------------+-----------------+

| cinder-scheduler | controller1 | nova |enabled |   up  |    -      |        -       |

+------------------+-------------+------+---------+-------+------------+-----------------+

5.11.3 Install cinder-volume (compute node)

On the compute node:

yum install openstack-cinder

yum install targetcli

yum install python-oslo-db python-oslo-log MySQL-python

Configure:

scp 10.192.44.148:/etc/cinder/cinder.conf /etc/cinder/

Modify:

[DEFAULT]

my_ip = 10.192.44.149

Start the cinder-volume service:

# systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service

[[email protected] cinder]# ps -A|grep cinder

7608?        00:00:00 cinder-volume

7641?        00:00:00 cinder-volume

5.11.4 Verify the cinder service

[[email protected] cinder(keystone_admin)]#cinder service-list

+------------------+-------------+------+---------+-------+----------------------------+-----------------+

|     Binary      |     Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+-------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler | controller1 | nova |enabled |   up  | 2016-05-26T13:27:59.000000 |        -       |

| cinder-volume   |   compute1 | nova | enabled |  down |             -              |        -       |

+------------------+-------------+------+---------+-------+----------------------------+-----------------+

This is normal; cinder-volume shows down only because its backend has not been configured yet.

6 Overall verification: network, images, volumes, virtual machines

6.1 Check service status

6.1.1 neutron status: normal

[[email protected] cinder(keystone_admin)]#neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id                                   |agent_type         | host        | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| 1746662a-081c-4800-b371-479e670fbb20 |Metadata agent     | controller1 |:-)   | True           | neutron-metadata-agent    |

| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3agent           | controller1 | :-)   | True           | neutron-l3-agent          |

| 5749371b-df3e-4a51-a7f9-279ee2b8666a |Open vSwitch agent | compute2    |xxx   | True           | neutron-openvswitch-agent |

| 96820906-bc31-4fcf-a473-10a6d6865b2a |Open vSwitch agent | compute1    |:-)   | True           | neutron-openvswitch-agent |

| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b |DHCP agent         | controller1 |:-)   | True           | neutron-dhcp-agent        |

| d264e9b0-c0c1-4e13-9502-43c248127dff |Open vSwitch agent | controller1 | :-)  | True           |neutron-openvswitch-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

compute2 is not installed here; its installation was aborted earlier because of the libvirt problem.

6.1.2 cinder status: normal

[[email protected] cinder(keystone_admin)]#cinder service-list

+------------------+-------------+------+---------+-------+----------------------------+-----------------+

|     Binary      |     Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+-------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler | controller1 | nova |enabled |   up  | 2016-05-26T13:30:59.000000 |        -       |

| cinder-volume   |   compute1 | nova | enabled |  down |             -              |        -       |

+------------------+-------------+------+---------+-------+----------------------------+-----------------+

The cinder-volume backend is not configured yet, so down is expected; the backend will be switched to Ceph next.

6.1.3 nova status: normal

[[email protected] cinder(keystone_admin)]#nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1 | nova-cert        | controller1 |internal | enabled | up    |2016-05-26T13:31:50.000000 | -              |

| 2 | nova-consoleauth | controller1 | internal | enabled | up    | 2016-05-26T13:31:49.000000 | -               |

| 3 | nova-conductor   | controller1 |internal | enabled | up    |2016-05-26T13:31:46.000000 | -              |

| 4 | nova-scheduler   | controller1 |internal | enabled | up    |2016-05-26T13:31:49.000000 | -              |

| 5 | nova-compute     | compute1    | nova    | enabled | up    |2016-05-26T13:31:42.000000 | -              |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

6.1.4 glance status: normal

# glance image-list

+----+------+

| ID | Name |

+----+------+

+----+------+

6.2 Basic function verification: network, images, volumes, virtual machines

6.2.1 Image upload: now working

Uploading from the command line works:

# glance image-create --name 'cirros' --file ./cirros-0.3.4-pre1-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress

[=============================>] 100%

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         |6e496c911ee6c022501716c952fdf800     |

| container_format | bare                                 |

| created_at       | 2016-05-27T01:33:49Z                 |

| disk_format      | qcow2                                |

| id               |f04d7e2d-4cf8-4bf5-b079-0a0ca0feb21e |

| min_disk         | 0                                    |

| min_ram          | 0                                    |

| name             | cirros                               |

| owner            | 617e98e151b245d081203adcbb0ce7a4     |

| protected        | False                                |

| size             | 13224448                             |

| status           | active                               |

| tags             | []                                   |

| updated_at       | 2016-05-27T01:33:49Z                 |

| virtual_size     | None                                 |

| visibility       | public                               |

+------------------+--------------------------------------+

[[email protected] ~(keystone_admin)]#glance image-list

+--------------------------------------+--------+

| ID                                   | Name   |

+--------------------------------------+--------+

| f04d7e2d-4cf8-4bf5-b079-0a0ca0feb21e |cirros |

+--------------------------------------+--------+

But uploading through the web UI (Horizon) fails:

[keystone_authtoken]

auth_uri=http://10.192.44.148:5000/v2.0

auth_url = http://10.192.44.148:35357/v2.0

identity_uri=http://10.192.44.148:35357/v2.0

admin_user=glance

admin_password=1

admin_tenant_name=service

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = 1

Restart:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

 

Cause:

It is an authentication problem. Modify glance-api.conf, and glance-registry.conf must be modified in the same way:

auth_uri = http://10.192.44.148:5000

auth_url = http://10.192.44.148:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = 1
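After changing both glance-api.conf and glance-registry.conf, restart the glance services again so the new [keystone_authtoken] settings take effect:

systemctl restart openstack-glance-api.service openstack-glance-registry.service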

6.2.2 Volume (cloud disk) function [now working]

6.2.2.1 Troubleshooting cinder-volume stuck in the down state: NTP was not installed, so the clocks were out of sync

First do a quick test using an sdb device as the backend:

Problem:

[[email protected] ~(keystone_admin)]#cinder service-list

+------------------+--------------+------+---------+-------+----------------------------+-----------------+

|     Binary      |     Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+--------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler | controller1  | nova | enabled |   up  |2016-05-27T02:55:49.000000 |       -        |

| cinder-volume   |   compute1  | nova | enabled |  down |             -              |        -       |

| cinder-volume   | [email protected] |nova | enabled |  down |2016-05-27T04:09:46.000000 |       -        |

+------------------+--------------+------+---------+-------+----------------------------+-----------------+

cinder-volume actually starts normally and reports no errors,

but in cinder service-list the service appears to be down.

Directions to try:

(1) Copy over the entire cinder configuration directory from the packstack environment

(2) Start another cinder-volume on the controller node and see what happens

(1) Copy over all of the packstack cinder configuration

[[email protected] cinder(keystone_admin)]#grep 192 ./ -r

./cinder.conf:glance_host = 192.168.129.130

./cinder.conf:connection =mysql://cinder:[email protected]/cinder

./cinder.conf:rabbit_host = 192.168.129.130

./cinder.conf:rabbit_hosts =192.168.129.130:5672

./cinder.conf:iscsi_ip_address=192.168.129.130

./api-paste.ini:auth_uri=http://192.168.129.130:5000/v2.0

./api-paste.ini:identity_uri=http://192.168.129.130:35357

Changed the addresses accordingly:

The problem remains.

(2) Start a cinder-volume on the controller node:

On the controller it works, so the cause is most likely clock skew between the nodes; see http://www.cnblogs.com/sammyliu/p/4417091.html

Configure the NTP service:
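A minimal sketch of the NTP setup (the time source is an assumption; here the controller 10.192.44.148 is used as the reference for the other nodes):

# yum install ntp
# vi /etc/ntp.conf            # on the compute nodes: server 10.192.44.148 iburst
# systemctl enable ntpd.service
# systemctl start ntpd.service
# ntpq -p                     # verify the peer is reachable and the offset is small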

After the clocks are in sync, restart cinder-volume on the compute node:

systemctl start openstack-cinder-volume.service target.service

OK, it finally comes up:

[[email protected] ntp(keystone_admin)]#cinder service-list

+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

|     Binary      |       Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler |   controller1  | nova | enabled |   up  | 2016-05-27T03:53:41.000000 |        -       |

| cinder-volume   |     compute1   | nova | enabled |  down |             -              |        -       |

|  cinder-volume   |  [email protected]  | nova | enabled|   up | 2016-05-27T03:53:45.000000 |       -        |

| cinder-volume   | [email protected]| nova | enabled |   up  | 2016-05-27T03:53:46.000000 |        -       |

+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

[[email protected] ntp(keystone_admin)]#

Stop the cinder-volume instance on the controller node:

[[email protected] ntp(keystone_admin)]#systemctl stop openstack-cinder-volume.service target.service

[[email protected] ntp(keystone_admin)]#systemctl disable openstack-cinder-volume.service target.service

Remove the temporary LVM volume group and physical volume:

[[email protected] ntp(keystone_admin)]#vgremove cinder-volumes

Volume group "cinder-volumes" successfully removed

[[email protected] ntp(keystone_admin)]# pv

pvchange  pvck       pvcreate   pvdisplay pvmove     pvremove   pvresize  pvs        pvscan

[[email protected] ntp(keystone_admin)]#pvremove /dev/hda4

Labels on physical volume "/dev/hda4" successfully wiped

[[email protected] ntp(keystone_admin)]# vgs

Novolume groups found

[[email protected] ntp(keystone_admin)]# pvs

[[email protected] ntp(keystone_admin)]#

6.2.2.2 Volume creation verification: numpy-related packages had to be removed and reinstalled; some packages were missing

Creating a volume still fails even though all services are up; continue troubleshooting.

api.log reports an error:

# cinder create --display-name vol 1

ERROR: The server has either erred or isincapable of performing the requested operation. (HTTP 500) (Request-ID:req-0737db60-ae0f-4137-92ca-83f1d7a52d2e)

api.log

[req-5e8ff869-9d67-4e0e-bf1d-5de490f263fe -- - - -] Availability Zones retrieved successfully.

[req-5e8ff869-9d67-4e0e-bf1d-5de490f263fe - -- - -] Failed to create api volume flow.

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/cinder/volume/api.py",line 310, in create

volume_rpcapi)

File"/usr/lib/python2.7/site-packages/cinder/volume/flows/api/create_volume.py",line 801, in get_flow

return taskflow.engines.load(api_flow, store=create_what)

File"/usr/lib/python2.7/site-packages/taskflow/engines/helpers.py", line189, in load

invoke_args=(flow, flow_detail, backend, options))

File "/usr/lib/python2.7/site-packages/stevedore/driver.py",line 45, in __init__

verify_requirements=verify_requirements,

File "/usr/lib/python2.7/site-packages/stevedore/named.py",line 55, in __init__

verify_requirements)

File"/usr/lib/python2.7/site-packages/stevedore/extension.py", line 170,in _load_plugins

self._on_load_failure_callback(self, ep, err)

File"/usr/lib/python2.7/site-packages/stevedore/extension.py", line 162,in _load_plugins

verify_requirements,

File "/usr/lib/python2.7/site-packages/stevedore/named.py",line 123, in _load_one_plugin

verify_requirements,

File"/usr/lib/python2.7/site-packages/stevedore/extension.py", line 185,in _load_one_plugin

plugin = ep.load(require=verify_requirements)

File "/usr/lib/python2.7/site-packages/pkg_resources.py", line2260, in load

entry = __import__(self.module_name, globals(),globals(), [‘__name__‘])

File"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py",line 25, in <module>

import networkx as nx

File "/usr/lib/python2.7/site-packages/networkx/__init__.py",line 43, in <module>

from networkx import release

ImportError: cannot import name release

Workaround:

Delete release.pyc and release.pyo from /usr/lib/python2.7/site-packages/networkx/:

There is still an error.

[[email protected](keystone_admin)]# rm __init__.pyc

[[email protected](keystone_admin)]# rm __init__.pyo

systemctl start openstack-cinder-api.service

That error is resolved.

Continuing, there is still another error:

File"/usr/lib64/python2.7/site-packages/numpy/add_newdocs.py", line 9, in<module>

from numpy.lib import add_newdoc

File"/usr/lib64/python2.7/site-packages/numpy/lib/__init__.py", line 13,in <module>

from polynomial import *

File"/usr/lib64/python2.7/site-packages/numpy/lib/polynomial.py", line11, in <module>

import numpy.core.numeric as NX

AttributeError: ‘module‘ object has noattribute ‘core‘

Workaround: delete the following files in /usr/lib64/python2.7/site-packages/numpy/core:

[[email protected] core(keystone_admin)]# rm__init__.pyc

[[email protected] core(keystone_admin)]# rm__init__.pyo

Still not resolved:

11  import numpy.core.numeric as NX

Next attempt:

Delete polynomial.pyc and polynomial.pyo:

The same error is still reported:

AttributeError: ‘module‘ object has noattribute ‘core‘

Next attempt: reinstall the following packages:

numpy-1.7.1-10.el7.x86_64

numpy-f2py-1.7.1-10.el7.x86_64

Reinstalling still produces the same error:

2016-05-27 14:20:25.600 17550 ERROR cinder.volume.api AttributeError: 'module' object has no attribute 'core'

The error persists.

Search for all numpy-related packages

and install all of them:

numpy-f2py.x86_64 : f2py for numpy

python-numpydoc.noarch : Sphinx extensionto support docstrings in Numpy format

python34-numpy-f2py.x86_64 : f2py for numpy

netcdf4-python.x86_64 : Python/numpyinterface to netCDF

numpy.x86_64 : A fast multidimensionalarray facility for Python

python-Bottleneck.x86_64 : Collection offast NumPy array functions written in Cython

python-numdisplay.noarch : Visualize numpyarray objects in ds9

python-numexpr.x86_64 : Fast numericalarray expression evaluator for Python and NumPy

python34-numpy.x86_64 : A fastmultidimensional array facility for Python 3.4

The node running cinder-volume needs the same packages installed as well.
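A sketch of the package (re)installation based on the search results above (the exact package set is an assumption; run it on the controller and on the cinder-volume node, then restart the cinder services):

# yum install numpy numpy-f2py python-numpydoc python-numexpr python-Bottleneck
# yum reinstall numpy numpy-f2py
# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service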

6.2.2.3 Volume creation verification

[[email protected] site-packages(keystone_admin)]#cinder list

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

|                  ID                  |   Status | Migration Status | Name | Size | Volume Type | Bootable | Multiattach| Attached to |

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

| 186c46dc-5c9d-43ec-a652-7df53f701c3f |available |        -         | vol |  1   |     -      |  false  |    False    |             |

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

[[email protected] site-packages(keystone_admin)]#

Creation finally succeeds!

6.2.3 Create the external network

First check the neutron services:

[[email protected](keystone_admin)]# neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id                                  |agent_type         | host        | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| 1746662a-081c-4800-b371-479e670fbb20 |Metadata agent     | controller1 |:-)   | True           | neutron-metadata-agent    |

| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3agent           | controller1 | :-)   | True           | neutron-l3-agent          |

| 5749371b-df3e-4a51-a7f9-279ee2b8666a |Open vSwitch agent | compute2    |xxx   | True           | neutron-openvswitch-agent |

| 96820906-bc31-4fcf-a473-10a6d6865b2a |Open vSwitch agent | compute1    |:-)   | True           | neutron-openvswitch-agent |

| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b |DHCP agent         | controller1 |:-)   | True           | neutron-dhcp-agent        |

| d264e9b0-c0c1-4e13-9502-43c248127dff |Open vSwitch agent | controller1 | :-)  | True           |neutron-openvswitch-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

Create the network:

Floating IP range: 10.192.44.160 to 10.192.44.170

Creation succeeds.

Question: why was there no step for choosing flat/external?

Because the OVS bridge (br-ex) is down!
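For reference, the external network can also be created from the CLI; a sketch assuming the ml2 physical network label is "external" and the gateway is 10.192.44.254 (names and addresses here are assumptions, not taken from this run):

# neutron net-create ext-net --router:external --provider:network_type flat --provider:physical_network external
# neutron subnet-create ext-net 10.192.44.0/24 --name ext-subnet --gateway 10.192.44.254 --disable-dhcp --allocation-pool start=10.192.44.160,end=10.192.44.170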

6.2.4 Create a router

This can be created successfully.

6.2.5 br-ex is DOWN, and the flat/external option does not appear

May 27 19:33:28 controller1 dnsmasq[6339]:cannot read /etc/neutron/dnsmasq-neutron.conf: No such file or directory

May 27 19:33:28 controller1 dnsmasq[6339]:FAILED to start up

May 27 19:33:28 controller1 dnsmasq[6340]:cannot read /etc/neutron/dnsmasq-neutron.conf: No such file or directory

May 27 19:33:28 controller1 dnsmasq[6340]:FAILED to start up

Solution:

a. Edit the /etc/neutron/dhcp_agent.ini file and complete the following action:

In the [DEFAULT] section, enable the dnsmasq configuration file:

[DEFAULT]

...

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

b. Create and edit the /etc/neutron/dnsmasq-neutron.conf file and complete

the following action:

Enable the DHCP MTU option (26) and configure it to 1454 bytes:

dhcp-option-force=26,1454

c. Kill any existing dnsmasq processes:

# pkill dnsmasq
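After killing the stale processes, restart the DHCP agent so it relaunches dnsmasq with the new configuration file:

systemctl restart neutron-dhcp-agent.service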

6.2.6 Create a virtual machine: libvirtd has problems

Creating a VM fails.

Error:

Error updating resources for node compute1: internal error: Cannot find suitable emulator for x86_64

[[email protected] libexec]# ./qemu-kvm -v

Verify qemu again:

./qemu-kvm: error while loading shared libraries: libiscsi.so.2: cannot open shared object file: No such file or directory

[[email protected] libexec]#

qemu-img: error while loading shared libraries: libiscsi.so.2: cannot open shared object file: No such file or directory

Solution:

yum install libiscsi

yum install libiscsi-devel

[[email protected] usr]# find ./ -namelibiscsi.so.2

./lib64/iscsi/libiscsi.so.2

The library is in a non-default location; either move it or adjust the loader path.

Still reporting errors:

[[email protected] lib64]# ls libiscsi* -l

lrwxrwxrwx 1 root root     21 May 27 17:10 libiscsi.so ->libiscsi.so.2.0.10900

lrwxrwxrwx 1 root root     21 May 27 17:09 libiscsi.so.0 ->libiscsi.so.2.0.10900

lrwxrwxrwx 1 root root     21 May 27 17:10 libiscsi.so.2 ->libiscsi.so.2.0.10900

-rwxr-xr-x 1 root root 125312 May 27 17:06libiscsi.so.2.0.10900

OK, try restarting nova-compute:

Everything looks normal, but clicking create immediately shows an error.

Why?

Where is the error?

Troubleshooting plan:

(1) Compare with a working environment; first get virsh -c working

(2) Go through the logs

(3) Install a nova-compute on the controller node?

(4) First rule out network problems

[[email protected](keystone_admin)]# virsh -c qemu+tcp://10.192.44.148/system

error: failed to connect to the hypervisor

error: unable to connect to server at '10.192.44.148:16509': Connection refused

It is probably still a qemu or libvirt problem.

# virsh -c qemu+tcp://compute1/system

error: failed to connect to the hypervisor

error: unable to connect to server at‘compute1:16509‘: Connection refused

Also, search for errors under /var/log/nova on the controller node:

[[email protected] nova(keystone_admin)]#pwd

/var/log/nova

[[email protected] nova(keystone_admin)]#grep error ./ -r

This gives some clues.

# grep -w [Ee]rror ./ -r

The logs can then be checked one by one.

Compare with the environment installed automatically by packstack:

[[email protected] ~]# virsh -c qemu+tcp://localhost/system

Welcome to virsh, the virtualizationinteractive terminal.

Type: ‘help‘ for help with commands

‘quit‘ to quit

virsh # ^C

[[email protected] ~]# virsh -cqemu+tcp://192.168.129.130/system

Welcome to virsh, the virtualizationinteractive terminal.

Type: ‘help‘ for help with commands

‘quit‘ to quit

virsh # ^C

[[email protected] ~]#

The environment on 149:

[[email protected] ~]# virsh -cqemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: unable to connect to server at‘localhost:16509‘: Connection refused

[[email protected] ~]# virsh -cqemu+tcp://10.192.44.149/system

error: failed to connect to the hypervisor

error: unable to connect to server at‘10.192.44.149:16509‘: Connection refused

Difference: the libvirtd on 149 is the one shipped with StorOS and was never reinstalled.

First compare the configuration contents, permissions, and environment variables.

The /etc/qemu directories are identical.

1.      libvirt.conf is identical

2.      libvirtd.conf is identical

3.      qemu.conf is identical

4.      Everything else is identical

Check the libvirtd command-line arguments:

192.168.129.130:

[[email protected] libvirt]# ps -auxf |greplibvirt

root     57595  0.0  0.0 112640  960 pts/0    S+   17:50  0:00          \_ grep --color=autolibvirt

root      1267  0.0  0.4 1124500 17436 ?       Ssl May26   0:36 /usr/sbin/libvirtd–listen

10.192.44.149:

[[email protected] libvirt]# ps -auxf |greplibvirt

root     5656  0.0  0.0 112640  980 pts/0    S+   08:50  0:00          \_ grep --color=autolibvirt

nobody   3121  0.0  0.0 15524   868 ?        S   May27   0:00 /sbin/dnsmasq--conf-file=/var/lib/libvirt/dnsmasq/default.conf

root     7032  0.0  0.0 1050596 12496 ?       Ssl May27   0:32 /usr/sbin/libvirtd

Check the libvirt version:

[[email protected] libvirt]# libvirtd --version

libvirtd (libvirt) 1.2.17

[[email protected] libvirt]# libvirtd --version

libvirtd (libvirt) 1.1.1

The versions differ significantly.

6.2.7 Install and configure libvirt-1.2.17 on 149, to fix port 16509 (the libvirt tcp_port) not being listened on

Remove the old packages:

# rpm -e --nodeps libvirt-client libvirt-daemon-driver-nodedev libvirt-glib libvirt-daemon-config-network libvirt-daemon-driver-nwfilter libvirt-devel libvirt-daemon-driver-qemu libvirt-daemon-driver-interface libvirt-gobject libvirt-daemon-driver-storage libvirt-daemon-driver-network libvirt-daemon-config-nwfilter libvirt libvirt-daemon-driver-secret libvirt-gconfig libvirt-java-devel libvirt-daemon-kvm libvirt-docs libvirt-daemon-driver-lxc libvirt-python libvirt-daemon libvirt-java

[[email protected] libvirt]# rpm -aq |greplibvirt

[[email protected] libvirt]#

Clear the dependency obstacles first:

# yum install systemd ceph glusterfs glusterfs-api

Install the new packages:

warning: libvirt-1.2.17-13.el7.x86_64.rpm:Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

error: Failed dependencies:

libvirt-daemon = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-config-network = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-config-nwfilter = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-lxc = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-qemu = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nwfilter = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-interface =1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-secret = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-storage = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-network = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nodedev = 1.2.17-13.el7 is needed bylibvirt-1.2.17-13.el7.x86_64

libvirt-client = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

# rpm -ivh libvirt-client-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-1.2.17-13.el7.x86_64.rpm

Configure libvirt:

[[email protected] libvirt]# vi libvirtd.conf

listen_tls = 0

listen_tcp = 1

auth_tcp = "none"

Restart libvirtd:

[[email protected] libvirt]# service libvirtd restart

Redirecting to /bin/systemctl restart  libvirtd.service

Verify the connection:

[[email protected] libvirt]# virsh -c qemu+tcp://10.192.44.149/system

error: failed to connect to the hypervisor

error: unable to connect to server at '10.192.44.149:16509': Connection refused

Still not working.

systemctl stop firewalld.service

systemctl disable firewalld.service

Copy the packstack configuration over unchanged:

# systemctl enable libvirtd.service

# systemctl start libvirtd.service

Port 16509 is not being listened on.

On a working system:

[[email protected] etc]# ss -nalp |grep 16509

tcp   LISTEN     0      30                     *:16509                 *:*      users:(("libvirtd",1267,13))

tcp   LISTEN     0      30                    :::16509                :::*      users:(("libvirtd",1267,14))

On this node:

[[email protected] libvirt]# ss -nalp |grep16409

[[email protected] libvirt]#

The startup script might be the problem; comparing it with the packstack one shows no difference.

So why is 16509 not being listened on?

For TCP, TLS, and similar connections to work, libvirtd must be started with the --listen option (short form -l). The default "service libvirtd start" does not pass --listen, so to use TCP connections libvirtd can instead be started with "libvirtd --listen -d".
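As an alternative to editing the unit file below, the stock CentOS 7 unit already expands $LIBVIRTD_ARGS, so the same effect can be had by setting it in /etc/sysconfig/libvirtd (a sketch):

# vi /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"

# systemctl daemon-reload
# systemctl restart libvirtd.service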

Modify the startup script: change

ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS

to:

ExecStart=/usr/sbin/libvirtd --listen $LIBVIRTD_ARGS

 

[[email protected] system]#systemctl reload libvirtd.service                 

Warning: libvirtd.servicechanged on disk. Run ‘systemctl daemon-reload‘ to reload units.

[[email protected] system]#systemctl daemon-reload

systemctlenable libvirtd.service

systemctl startlibvirtd.service

[[email protected] system]# ss -nalp  |grep 16509

tcp   LISTEN     0      30                     *:16509                 *:*      users:(("libvirtd",29569,13))

tcp   LISTEN     0      30                    :::16509                :::*      users:(("libvirtd",29569,14))

But there is still a problem:

[[email protected] libvirt]# virsh -c qemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

Referring to the EZVIZ (萤石云) cloud setup:

# vi /etc/libvirt/qemu.conf

vnc_listen = "0.0.0.0"

user = "root"

group = "root"

[[email protected] libvirt]# virsh -c qemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

Still not working!

Why? The port is already being listened on.

Adding -d still does not help:

ExecStart=/usr/sbin/libvirtd -d --listen $LIBVIRTD_ARGS

[[email protected] system]# virsh -cqemu+tcp://localhost:16509/system

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

[[email protected] system]# virsh -cqemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

[[email protected] system]# virsh -cqemu+tcp://10.192.44.149/system

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

[[email protected] system]# virsh -c qemu+tcp://10.192.44.149:16509/system

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

Start it manually:

[[email protected] system]# /usr/sbin/libvirtd -l -d

[[email protected] system]# ps -A |greplibvirt

26161 ?        00:00:00 libvirtd

[[email protected] system]#

[[email protected] system]# ss -nalp |grep16509

tcp   LISTEN     0      30                     *:16509                 *:*      users:(("libvirtd",26161,14))

tcp   LISTEN     0      30                    :::16509                :::*      users:(("libvirtd",26161,15))

Still failing:

[[email protected] system]# virsh -c qemu+tcp://10.192.44.149:16509/system

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

Run it in the foreground (not as a daemon):

[[email protected] system]# /usr/sbin/libvirtd -l

2016-05-28 02:41:07.632+0000: 31004: info :libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem<http://bugs.centos.org>, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 02:41:07.632+0000: 31004: error: virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symboldm_task_get_info_with_deferred_remove,
version Base not defined in file libdevmapper.so.1.02with link time reference

2016-05-28 02:41:07.633+0000: 31004: error: virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol:virStorageFileCreate

OK, this is the real problem.

On a working system:

[[email protected] nwfilter]#/usr/sbin/libvirtd -l

2016-05-28 02:44:06.983+0000: 95407: info :libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem<http://bugs.centos.org>, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 02:44:06.983+0000: 95407:warning : virDriverLoadModule:65 : Module/usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so not accessible

Nothing complains loudly here, but the libraries still have dependency problems.

Verify the stock StorOS libvirt again (on 148 and 151):

The stock one is also broken:

[[email protected] ~]# /usr/sbin/libvirtd-l

2016-05-28 02:46:43.181+0000: 12424: info :libvirt version: 1.1.1, package: 29.el7 (CentOS BuildSystem<http://bugs.centos.org>, 2014-06-17-17:13:31, worker1.bsys.centos.org)

2016-05-28 02:46:43.181+0000: 12424: error: virDriverLoadModule:79 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so libgfapi.so.0:cannot open shared object file: No such file or directory

2016-05-28 02:46:43.183+0000: 12424: error: virDriverLoadModule:79 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol:virStorageFileStat

2016-05-28 02:46:43.184+0000: 12424: error: virNetTLSContextCheckCertFile:117 : Cannot read CA certificate‘/etc/pki/CA/cacert.pem‘: No such file or directory

Whether it is the stock StorOS libvirtd or the upgraded one, the same problem exists (see 5.13: troubleshooting libvirtd startup failures); even when it appears to start successfully, it is actually broken.

Fix:

Manually install some packages; check on the packstack machine which library provides this symbol.

Also, first install the dependency packages that were needed during the libvirt rpm installation above:

[[email protected] libvirt]# rpm -aq|greplibvirt

libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64

libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64

libvirt-daemon-1.2.17-13.el7.x86_64

libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64

libvirt-daemon-config-network-1.2.17-13.el7.x86_64

libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64

libvirt-client-1.2.17-13.el7.x86_64

libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64

libvirt-daemon-driver-network-1.2.17-13.el7.x86_64

libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64

libvirt-1.2.17-13.el7.x86_64

Continue installing:

# rpm -ivh libvirt-daemon-kvm-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-docs-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-devel-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-python-1.2.17-2.el7.x86_64.rpm

# rpm -ivh dracut-033-359.el7.x86_64.rpm

# rpm -ivh dracut-config-rescue-033-359.el7.x86_64.rpm

# rpm -ivh dracut-network-033-359.el7.x86_64.rpm

# rpm -ivh initscripts-9.49.30-1.el7.x86_64.rpm

# rpm -ivh kmod-20-5.el7.x86_64.rpm

# rpm -ivh libgudev1-219-19.el7.x86_64.rpm

# rpm -ivh libgudev1-devel-219-19.el7.x86_64.rpm

Still not working.

The symbol was not found on the packstack machine either:

[[email protected] usr]# grep‘virStorageFileStat‘ ./ -r

Binary file ./lib64/libvirt/connection-driver/libvirt_driver_qemu.somatches

Binary file./lib64/libvirt/connection-driver/libvirt_driver_storage.so matches

Further investigation:

device-mapper-libs

For this package, download it first without installing:

yum install --downloadonly --downloaddir=/root/device-mapper-libs device-mapper-libs

[[email protected] device-mapper-libs]# ls -l

total 916

-rw-r--r-- 1 root root 257444 Nov 25  2015 device-mapper-1.02.107-5.el7.x86_64.rpm

-rw-r--r-- 1 root root 170732 Nov 25  2015device-mapper-event-1.02.107-5.el7.x86_64.rpm

-rw-r--r-- 1 root root 172676 Nov 25  2015device-mapper-event-libs-1.02.107-5.el7.x86_64.rpm

-rw-r--r-- 1 root root 311392 Nov 25  2015device-mapper-libs-1.02.107-5.el7.x86_64.rpm

Extract these packages and check whether the symbol is inside:
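A sketch of that check (the exact rpm filename may differ), using rpm2cpio and nm to look for the missing symbol:

# cd /root/device-mapper-libs
# rpm2cpio device-mapper-libs-1.02.107-5.el7.x86_64.rpm | cpio -idmv
# nm -D ./usr/lib64/libdevmapper.so.1.02* | grep dm_task_get_info_with_deferred_remove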

Check this page:

http://osdir.com/ml/fedora-virt-maint/2014-11/msg00310.html

--- Comment #12from Michal Privoznik <[email protected]> ---

(In reply to Kashyap Chamarthy from comment #11)

> Okay, we found (thanks to DanPB for thehint to take a look at`journalctl`

> libvirt logs ) the root cause :device-mapper RPM version should be this:

> device-mapper-1.02.90-1.fc21.x86_64(instead of:

> device-mapper-1.02.88-2.fc21.x86_64)

> 

> From `journalctl`:

> 

> $ journalctl -u libvirtd --since=yesterday-p err

> [. . .] failed to load module

> /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so

> /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so:symbol

> dm_task_get_info_with_deferred_remove,version Base not defined in file

> libdevmapper.so.1.02 with link timereference

So libdevmapper.so (from device-mapper-libs)hasn‘t provided the symbol. Hence,

storage driver has failed to load.

Try updating on 150:

[[email protected]]# /usr/sbin/libvirtd -l

2016-05-2804:51:03.750+0000: 488: info : libvirt version: 1.2.17, package: 13.el7 (CentOSBuildSystem <http://bugs.centos.org>, 2015-11-20-16:24:10,worker1.bsys.centos.org)

2016-05-2804:51:03.750+0000: 488: error : virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symboldm_task_get_info_with_deferred_remove,
version Base not defined in filelibdevmapper.so.1.02 with link time reference

2016-05-2804:51:03.751+0000: 488: error : virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol:virStorageFileCreate

2016-05-2804:51:03.752+0000: 488: error : virNetTLSContextCheckCertFile:120 : Cannot readCA certificate ‘/etc/pki/CA/cacert.pem‘: No such file or directory

The problem still appears.

So it is not necessarily a device-mapper problem after all.

# /usr/sbin/libvirtd -l

2016-05-28 03:27:07.435+0000: 6930: info : libvirtversion: 1.2.17, package: 13.el7 (CentOS BuildSystem<http://bugs.centos.org>, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 03:27:07.435+0000: 6930: error :virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so:symbol
dm_task_get_info_with_deferred_remove, version Base not defined in filelibdevmapper.so.1.02 with link time reference

2016-05-28 03:27:07.436+0000: 6930: error :virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol:virStorageFileCreate

Set up the Fedora repository:

The device-mapper package there is 1.02.84.

Now try downloading a newer package.

This repository has 1.02.107:

http://mirrors.hikvision.com.cn/centos/7.2.1511/updates/x86_64/Packages/

[updates-7.2-device-mapper]

name=CentOS-7.2-local

baseurl=http://mirrors.hikvision.com.cn/centos/7.2.1511/updates/x86_64/

gpgcheck=0

Upgrade:

Upgrading to 1.02.107 produces the same problem, so a different approach is needed.

Try uninstalling first and then upgrading:

# yum remove device-mapper

# yum remove device-mapper-libs

升级:

Transaction check error:

file /usr/lib/systemd/system/blk-availability.service from install ofdevice-mapper-7:1.02.107-5.el7_2.2.x86_64 conflicts with file from packagelvm2-7:2.02.105-14.el7.x86_64

file /usr/sbin/blkdeactivate from install ofdevice-mapper-7:1.02.107-5.el7_2.2.x86_64 conflicts with file from packagelvm2-7:2.02.105-14.el7.x86_64

file /usr/share/man/man8/blkdeactivate.8.gz from install ofdevice-mapper-7:1.02.107-5.el7_2.2.x86_64 conflicts with file from packagelvm2-7:2.02.105-14.el7.x86_64

Install:

yum install python

yum runs into problems

yum clean all

Use rpm to remove the old device-mapper packages first

# yum install device-mapper

Verify again:

2016-05-28 05:27:15.342+0000: 31813: error: virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so:undefined symbol: virStorageFileCreate

Still failing.

So the root cause is still not entirely clear.

By comparing with the standard packstack environment, install a few more packages:

libvirt-glib-0.1.7-3.el7.x86_64

libvirt-gobject-0.1.7-3.el7.x86_64

libvirt-gconfig-0.1.7-3.el7.x86_64


[[email protected] etc]# /usr/sbin/libvirtd -l

2016-05-28 04:20:36.938+0000: 11713: info :libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem<http://bugs.centos.org>, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 04:20:36.938+0000: 11713: error: virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symboldm_task_get_info_with_deferred_remove,
version Base not defined in filelibdevmapper.so.1.02 with link time reference

2016-05-28 04:20:36.940+0000: 11713: error: virDriverLoadModule:73 : failed to load module/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so:undefined symbol: virStorageFileCreate

The errors still appear.

> >

> > Updating todevice-mapper-1.02.90-1.fc21.x86_64 solved the issue:

>

> Exactly! this is a device-mapper-libsbug where they just didn‘t export some

> symbol(s) for a several versions.

6.2.8 Current workaround for the libvirtd startup problem

For now the problem can be solved by copying a libdevmapper.so.1.02 from a standard CentOS system.
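A sketch of that workaround (the source host is a placeholder; back up the existing file first):

# cp /usr/lib64/libdevmapper.so.1.02 /usr/lib64/libdevmapper.so.1.02.bak
# scp <standard-centos-host>:/usr/lib64/libdevmapper.so.1.02 /usr/lib64/libdevmapper.so.1.02
# ldconfig
# /usr/sbin/libvirtd -l        # the dm_task_get_info_with_deferred_remove error should be gone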

6.2.9 Re-verify the virtual machine function

# systemctl restart libvirtd.service openstack-nova-compute.service

 

[[email protected] nova(keystone_admin)]#tail -f nova-conductor.log

Failed to compute_task_build_instances: Novalid host was found. There are not enough hosts available.

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",line 142, in inner

return func(*args, **kwargs)

File"/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line84, in select_destinations

filter_properties)

File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py",line 90, in select_destinations

raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. Thereare not enough hosts available.

The instance cannot be scheduled?

First of all, this is clearly a controller-side problem; execution never even reaches nova-compute.

After modifying the configuration, there is some progress:

Error: instance "cs1" failed to perform the requested operation because the instance is in an error state; please try again later [Error: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 21976cef-af5f-495c-8265-1468a52da7f9. Last exception: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1].

Execution now reaches nova-compute, but nova-compute reports an error:

[-]Instance failed network setup after 1 attempt(s)

Traceback (most recent call last):

File"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line1564, in _allocate_network_async

dhcp_options=dhcp_options)

File"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",line 727, in allocate_for_instance

self._delete_ports(neutron, instance, created_port_ids)

File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py",line 195, in __exit__

six.reraise(self.type_, self.value, self.tb)

File"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",line 719, in allocate_for_instance

security_group_ids, available_macs, dhcp_opts)

File"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",line 342, in _create_port

raise exception.PortBindingFailed(port_id=port_id)

PortBindingFailed: Binding failed for port016ad6b1-c0e2-41f3-8111-35c95acf369a, please check neutron logs for moreinformation.

This is probably an OVS problem.

On the controller node, neutron.log reports errors:

2016-05-2814:08:50.559 2498 INFO neutron.wsgi [req-bcc7fd6e-9d5f-4f49-a3c7-4c2fd8d12a59 cfca3361950644de990b52ad341a06f0617e98e151b245d081203adcbb0ce7a4 - - -] 10.192.44.149 - - [28/May/201614:08:50] "GET/v2.0/security-groups.json?tenant_id=617e98e151b245d081203adcbb0ce7a4HTTP/1.1"
200 1765 0.011616

2016-05-2814:08:50.653 2498 ERRORneutron.plugins.ml2.managers [req-e0781543-8bec-4b46-9c03-1d872b98535a2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -]
Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.653 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.653 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 2 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.656 2498 ERROR neutron.plugins.ml2.managers [req-e0781543-8bec-4b46-9c03-1d872b98535a2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] Failedto bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.657 2498 ERROR neutron.plugins.ml2.managers [req-e0781543-8bec-4b46-9c03-1d872b98535a2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] Failedto bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.658 2498 INFO neutron.plugins.ml2.plugin [req-e0781543-8bec-4b46-9c03-1d872b98535a2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -]Attempt 3 to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.663 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.663 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.663 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 4 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.667 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.667 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.667 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 5 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.671 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e- - -] Failed to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd on hostcompute1

2016-05-2814:08:50.672 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.672 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 6 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.675 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.676 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.676 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 7 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.679 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.680 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.680 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 8 to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.683 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.684 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.684 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 9 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-2814:08:50.688 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.688 2498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.688 2498 INFO neutron.plugins.ml2.plugin[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Attempt 10 to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd

2016-05-28 14:08:50.6922498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-28 14:08:50.6922498 ERROR neutron.plugins.ml2.managers[req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-2814:08:50.727 2498 INFO neutron.wsgi [req-e0781543-8bec-4b46-9c03-1d872b98535a2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -]10.192.44.149 - - [28/May/2016 14:08:50] "POST /v2.0/ports.jsonHTTP/1.1" 201 933 0.165769

2016-05-2814:08:50.831 2498 INFO neutron.wsgi [req-42438771-5f5d-484e-a13f-e7d50766cd2a2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -]10.192.44.149 - - [28/May/2016 14:08:50] "DELETE/v2.0/ports/2799fb4c-d513-425d-becb-4947a5c8bfdd.json
HTTP/1.1" 204 1730.101503

On the controller node:

# ovs-vsctl del-br br-ex

# ovs-vsctl del-br br-int

On the compute node:

# ovs-vsctl del-br br-int

Reconfigure neutron's ml2.

On the controller node:

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth3

ethtool -K eth3 gro off
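If the IP currently on eth3 should survive the interface being enslaved to br-ex, it can be moved onto the bridge; a sketch of the ifcfg pair (device names and addresses here follow the current layout but are assumptions, not copied from this run):

# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.2.148
NETMASK=255.255.255.0

# cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none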

Modify the configuration.

Restart the services.

Controller node (network node):

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

 

Both bridges are DOWN:

29: br-ex: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether 88:00:00:01:02:13 brd ff:ff:ff:ff:ff:ff

30: br-int: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether3e:92:f4:94:22:4c brd ff:ff:ff:ff:ff:ff

# ifconfig br-int up

# ifconfig br-int up

# ifconfig br-int up

Current state:

[[email protected](keystone_admin)]# ifconfig

br-ex:flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet6 fe80::8a00:ff:fe01:213 prefixlen 64  scopeid0x20<link>

ether 88:00:00:01:02:13 txqueuelen 0  (Ethernet)

RX packets 0  bytes 0 (0.0 B)

RX errors 0  dropped 0  overruns 0 frame 0

TX packets 8  bytes 648 (648.0 B)

TX errors 0  dropped 0 overruns0  carrier 0  collisions 0

br-int:flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet6 fe80::3c92:f4ff:fe94:224c prefixlen 64  scopeid0x20<link>

ether 3e:92:f4:94:22:4c txqueuelen 0  (Ethernet)

RX packets 36  bytes 3024 (2.9KiB)

RX errors 0  dropped 0  overruns 0 frame 0

TX packets 8  bytes 648 (648.0 B)

TX errors 0  dropped 0 overruns0  carrier 0  collisions 0

Create a virtual machine again:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Try creating a virtual machine again:

It is still stuck in the spawning state.

py:215

[-]Instance failed network setup after 1 attempt(s)

Traceback (most recent call last):

File"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line1564, in _allocate_network_async

dhcp_options=dhcp_options)

File"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",line 727, in allocate_for_instance

self._delete_ports(neutron, instance, created_port_ids)

File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py",line 195, in __exit__

six.reraise(self.type_, self.value, self.tb)

File"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",line 719, in allocate_for_instance

security_group_ids, available_macs, dhcp_opts)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",line 342, in _create_port

raise exception.PortBindingFailed(port_id=port_id)

PortBindingFailed: Binding failed for port29b7a877-a2c5-4418-b28b-9f4bcf10661e, please check neutron logs for moreinformation.

There is still an error:

# tail -f ovs-vswitchd.log

2016-05-28T06:52:26.272Z|00059|connmgr|INFO|br-int<->unix:1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:52:40.259Z|00060|connmgr|INFO|br-int<->unix:1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:52:40.262Z|00061|ofp_util|INFO|normalizationchanged ofp_match, details:

2016-05-28T06:52:40.262Z|00062|ofp_util|INFO|pre: in_port=2,nw_proto=58,tp_src=136

2016-05-28T06:52:40.262Z|00063|ofp_util|INFO|post:in_port=2

2016-05-28T06:52:40.262Z|00064|connmgr|INFO|br-int<->unix:1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:52:40.265Z|00065|connmgr|INFO|br-int<->unix:1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:55:54.779Z|00066|bridge|INFO|bridgebr-int: added interface tapee0d7e7c-7e on port 5

2016-05-28T06:55:54.880Z|00067|netdev_linux|INFO|ioctl(SIOCGIFHWADDR)on tapee0d7e7c-7e device failed: No such device

2016-05-28T06:55:54.883Z|00068|netdev_linux|WARN|ioctl(SIOCGIFINDEX)on tapee0d7e7c-7e device failed: No such device

# tail -f openvswitch-agent.log

neutron.agent.common.ovs_lib[req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] Port7cbb1fd7-f6cb-4ba2-be89-1313637afa91 not present in bridge br-int

neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent[req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] port_unbound(): net_uuidNone not in local_vlan_map

neutron.agent.common.ovs_lib[req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] Port31566604-5199-4b35-978d-d57cb9458236 not present in bridge br-int

neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent[req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] port_unbound(): net_uuidNone not in local_vlan_map

It seems br-int needs to be created and then bound to eth0:1, which has to be done manually:

ovs-vsctl add-port br-int eth0:1

 

[[email protected] neutron(keystone_admin)]#ovs-vsctl add-port br-int eth0\:1

ovs-vsctl: cannot create a port namedeth0:1 because a port named eth0:1 already exists on bridge br-int

So the port is already bound.

Then what is causing the errors above?

[[email protected] ml2]# ovs-vsctl add-portbr-int eth0:1

ovs-vsctl: cannot create a port namedeth0:1 because a port named eth0:1 already exists on bridge br-int

[[email protected] ml2]# ovs-vsctl  show

08399ed1-bb6a-4841-aca5-12a202ebd473

Bridge br-int

fail_mode: secure

Port "eth0:1"

Interface "eth0:1"

Port br-int

Interface br-int

type: internal

ovs_version:"2.4.0"

[[email protected](keystone_admin)]# ovs-vsctl  show

ba32f48c-535c-4edd-b366-9c3ca159d756

Bridge br-ex

Port br-ex

Interface br-ex

type: internal

Port "eth3"

Interface "eth3"

Bridge br-int

fail_mode: secure

Port br-int

Interface br-int

type: internal

Port "eth0:1"

Interface "eth0:1"

Port "tapee0d7e7c-7e"

Interface"tapee0d7e7c-7e"

type: internal

ovs_version: "2.4.0"

Current error: openvswitch-agent.log keeps scrolling the following on both the network node and the compute node (# tail -f openvswitch-agent.log):

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Found failed openvswitch port: eth0:1

Change eth0:1 to eth2 here; none of the eth2 ports are in use at the moment anyway.

Delete br-int:

# ovs-vsctl del-br br-int

Then the controller node uses eth3: 172.16.2.148,

and the compute node uses eth1: 172.16.2.149.

Update ml2_conf.ini:

Compute node:

[ovs]

local_ip = 172.16.2.149

tunnel_type = vxlan

enable_tunneling = True

Controller node (network node):

[ovs]

local_ip = 172.16.2.148

bridge_mappings = external:br-ex

tunnel_type = vxlan

enable_tunneling = True

Restart the services:

Currently the controller node's br-ex is also bound to eth3, so both would go through eth3 — could that be a problem?

If there are still problems, try binding br-int to a physical port with no cable attached.

Verify first:

Restart the services:

There are errors in the background.

Move the sub-interface to eth2; eth2 has no cable attached, but nobody is using it either.

Compute node:

[[email protected] network-scripts]# ifdown eth2

[[email protected] network-scripts]# ifup eth2

[[email protected] network-scripts]# cat ifcfg-eth2

DEVICE=eth2

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.149

NETMASK=255.255.255.0

Network node:

DEVICE=eth2

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.148

NETMASK=255.255.255.0

Bring eth2 up:

Then modify ml2_conf.ini:

[ovs]

local_ip = 192.168.0.148

bridge_mappings = external:br-ex

tunnel_type = vxlan

enable_tunneling = True

[ovs]

local_ip = 192.168.0.149

tunnel_type = vxlan

enable_tunneling = True

Delete br-int:

ovs-vsctl del-br br-int

Restart the services:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Still failing.

Bring br-int up manually and restart the services:

The following still appears:

Port 3f304402-80a8-4849-a09f-195546c3e1b8 not present in bridge br-int

Could it be because no cable is plugged in?

Try a bolder experiment:

Controller node:

Use eth0 as the external port for br-ex.

Use eth3 for br-int.

Make the following changes:

[[email protected] ~]# ovs-vsctl list-br

br-ex

br-int

[[email protected] ~]# ovs-vsctl del-br br-ex

[[email protected] ~]# ovs-vsctl del-br br-int

[[email protected] ~]# ovs-vsctl add-br br-ex

[[email protected] ~]# ovs-vsctl add-port br-ex eth0

Then modify the configuration:

[ovs]

local_ip = 172.16.2.148

bridge_mappings = external:br-ex

tunnel_type = vxlan

enable_tunneling = True

Compute node:

Use eth1 for br-int.

[ovs]

local_ip = 172.16.2.149

tunnel_type = vxlan

enable_tunneling = True

Restart the services:

This makes Horizon unreachable.

Bind br-ex to eth1 first, otherwise Horizon cannot be accessed at all.

[[email protected] neutron]# ovs-vsctl del-br br-ex

[[email protected] neutron]# ovs-vsctl add-br br-ex

[[email protected] neutron]# ovs-vsctl add-port br-ex eth1

Worst case there is simply no external network access.

Restart the services:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Same error.

openvswitch-agent.log still prints: not present in bridge br-int

Check the configuration again.

What does a normal Open vSwitch startup actually look like? It should not print "not present in bridge br-int".

6.2.10 Install a two-node environment for comparison: sync the neutron and nova configuration fully with packstack

Use packstack to install a two-node environment for comparison; in theory this should now work, since the layout has been worked out clearly.

For networking, only a few pieces need to be understood:

br-ex, br-int, br-tun

Controller node:

4: br-ex:<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether 7a:c8:ee:30:23:4c brd ff:ff:ff:ff:ff:ff

inet6 fe80::78c8:eeff:fe30:234c/64 scope link

valid_lft forever preferred_lft forever

5: br-int: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether 72:6a:10:e8:24:47 brd ff:ff:ff:ff:ff:ff

6: br-tun: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether9a:2f:55:03:db:40 brd ff:ff:ff:ff:ff:ff

计算节点:

4: br-ex:<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether 7a:c8:ee:30:23:4c brd ff:ff:ff:ff:ff:ff

inet6 fe80::78c8:eeff:fe30:234c/64 scope link

valid_lft forever preferred_lft forever

5: br-int: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether 72:6a:10:e8:24:47 brd ff:ff:ff:ff:ff:ff

6: br-tun: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether9a:2f:55:03:db:40 brd ff:ff:ff:ff:ff:ff

Note that even when br-int and br-tun are in state DOWN here, VM creation still works.

The key is how br-ex, br-int and br-tun are created and wired together.

For br-ex:

Create it:

ovs-vsctl add-br br-ex

Bind it to the physical NIC:

ovs-vsctl add-port br-ex eth3

ethtool -K eth3 gro off
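For reference, the same br-ex setup as an idempotent sketch (assuming eth3 is the external uplink; --may-exist just makes re-runs harmless):

ovs-vsctl --may-exist add-br br-ex

ovs-vsctl --may-exist add-port br-ex eth3

ip link set br-ex up

ethtool -K eth3 gro off        # disable GRO on the uplink, as above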

 

For creating br-int:

A known-good reference configuration:

Controller (network) node:

[[email protected] ml2(keystone_admin)]# pwd

/etc/neutron/plugins/ml2

[[email protected] ml2(keystone_admin)]# ls

ml2_conf_brocade_fi_ni.ini ml2_conf_brocade.ini ml2_conf_fslsdn.ini ml2_conf.ini  ml2_conf_ofa.ini  ml2_conf_sriov.ini  openvswitch_agent.ini  restproxy.ini sriov_agent.ini

All the usual config files are present here:

# cat ml2_conf.ini |grep -v '^#' |grep -v '^$'

[ml2]

type_drivers = vxlan

tenant_network_types = vxlan

mechanism_drivers =openvswitch

path_mtu = 0

[ml2_type_flat]

[ml2_type_vlan]

[ml2_type_gre]

[ml2_type_vxlan]

vni_ranges =10:100

vxlan_group =224.0.0.1

[ml2_type_geneve]

[securitygroup]

enable_security_group = True

# cat openvswitch_agent.ini |grep -v '^#' |grep -v '^$'

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip =192.168.129.130

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types =vxlan

vxlan_udp_port =4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Compute node:

[[email protected] ml2]# pwd

/etc/neutron/plugins/ml2

[[email protected] ml2]# ls

openvswitch_agent.ini    (only this single openvswitch_agent.ini is present here)

# cat openvswitch_agent.ini |grep -v '^#' |grep -v '^$'

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip =192.168.129.131

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types =vxlan

vxlan_udp_port =4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

This is how the configuration is distributed between the compute node and the network node.

Copy these configurations verbatim onto StorOS and verify.

6.2.10.1 Syncing the neutron configuration on the controller node

1. Copy the (controller + network) node configuration onto the controller node, then make the following changes:

(1) Update the IP addresses

[[email protected]]# grep 192 ./ -r

./api-paste.ini:identity_uri=http://192.168.129.130:35357

./api-paste.ini:admin_password=db12219fd1924853

./api-paste.ini:auth_uri=http://192.168.129.130:5000/v2.0

./neutron.conf:#l3_ha_net_cidr = 169.254.192.0/18

./neutron.conf:nova_url= http://192.168.129.130:8774/v2

./neutron.conf:nova_admin_auth_url=http://192.168.129.130:5000/v2.0

./neutron.conf:auth_uri= http://192.168.129.130:5000/v2.0

./neutron.conf:identity_uri= http://192.168.129.130:35357

./neutron.conf:admin_password= db12219fd1924853

./neutron.conf:connection= mysql://neutron:[email protected]/neutron

./neutron.conf:rabbit_host= 192.168.129.130

./neutron.conf:rabbit_hosts= 192.168.129.130:5672

./metadata_agent.ini:auth_url= http://192.168.129.130:5000/v2.0

./metadata_agent.ini:admin_password= db12219fd1924853

./metadata_agent.ini:nova_metadata_ip= 192.168.129.130

./plugins/ml2/openvswitch_agent.ini:local_ip=192.168.129.130

(2) Change api-paste.ini to:

[filter:authtoken]

identity_uri=http://10.192.44.148:35357

admin_user=neutron

admin_password=1

auth_uri=http://10.192.44.148:5000/v2.0

admin_tenant_name=service

(3) Change neutron.conf to:

[[email protected] neutron]# cat neutron.conf

[DEFAULT]

verbose = True

router_distributed = False

debug = False

state_path = /var/lib/neutron

use_syslog = False

use_stderr = True

log_dir =/var/log/neutron

bind_host = 0.0.0.0

bind_port = 9696

core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin

service_plugins =router

auth_strategy = keystone

mac_generation_retries = 16

dhcp_lease_duration = 86400

dhcp_agent_notification = True

allow_bulk = True

allow_pagination = False

allow_sorting = False

allow_overlapping_ips = True

advertise_mtu = False

agent_down_time = 75

router_scheduler_driver =neutron.scheduler.l3_agent_scheduler.ChanceScheduler

allow_automatic_l3agent_failover = False

dhcp_agents_per_network = 1

l3_ha = False

api_workers = 4

rpc_workers = 4

use_ssl = False

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://10.192.44.148:8774/v2

nova_region_name =RegionOne

nova_admin_username =nova

nova_admin_tenant_name =service

nova_admin_password =1

nova_admin_auth_url =http://10.192.44.148:5000/v2.0

send_events_interval = 2

rpc_response_timeout=60

rpc_backend=rabbit

control_exchange=neutron

lock_path=/var/lib/neutron/lock

[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]

root_helper = sudo neutron-rootwrap/etc/neutron/rootwrap.conf

report_interval = 30

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000/v2.0

identity_uri = http://10.192.44.148:35357

admin_tenant_name = service

admin_user = neutron

admin_password = 1

[database]

connection = mysql://neutron:[email protected]/neutron

max_retries = 10

retry_interval = 10

min_pool_size = 1

max_pool_size = 10

idle_timeout = 3600

max_overflow = 20

[nova]

[oslo_concurrency]

[oslo_policy]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay = 1.0

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts = 10.192.44.148:5672

rabbit_use_ssl = False

rabbit_userid = openstack

rabbit_password = 1

rabbit_virtual_host = /

rabbit_ha_queues = False

heartbeat_rate=2

heartbeat_timeout_threshold=0

[qos]

(4) Change metadata_agent.ini to:

[[email protected]]# cat metadata_agent.ini

[DEFAULT]

debug =False

auth_url= http://10.192.44.148:5000/v2.0

auth_region= RegionOne

auth_insecure= False

admin_tenant_name= service

admin_user= neutron

admin_password= 1

nova_metadata_ip= 10.192.44.148

nova_metadata_port= 8775

nova_metadata_protocol= http

metadata_proxy_shared_secret=1

metadata_workers=4

metadata_backlog= 4096

cache_url= memory://?default_ttl=5

[AGENT]

(5) Change ./plugins/ml2/openvswitch_agent.ini to:

[ovs]

integration_bridge= br-int

tunnel_bridge= br-tun

local_ip= 192.168.0.148

enable_tunneling=True

[agent]

polling_interval= 2

tunnel_types=vxlan

vxlan_udp_port=4789

l2_population= False

arp_responder= False

prevent_arp_spoofing= True

enable_distributed_routing= False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Save.

6.2.10.2 Syncing the neutron configuration on the compute node

[[email protected] neutron]# grep 192 ./ -r

./neutron.conf:# l3_ha_net_cidr =169.254.192.0/18

./neutron.conf:rabbit_host =192.168.129.130

./neutron.conf:rabbit_hosts =192.168.129.130:5672

./plugins/ml2/openvswitch_agent.ini:local_ip=192.168.129.131

Modify the following configuration:

(1)      neutron.conf

[[email protected]]# cat neutron.conf

[DEFAULT]

verbose =True

debug =False

state_path= /var/lib/neutron

use_syslog= False

use_stderr= True

log_dir=/var/log/neutron

bind_host= 0.0.0.0

bind_port= 9696

core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin

service_plugins=router

auth_strategy= keystone

mac_generation_retries= 16

dhcp_lease_duration= 86400

dhcp_agent_notification= True

allow_bulk= True

allow_pagination= False

allow_sorting= False

allow_overlapping_ips= True

advertise_mtu= False

dhcp_agents_per_network= 1

use_ssl =False

rpc_response_timeout=60

rpc_backend=rabbit

control_exchange=neutron

lock_path=/var/lib/neutron/lock

[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]

root_helper= sudo neutron-rootwrap /etc/neutron/rootwrap.conf

report_interval= 30

[keystone_authtoken]

auth_uri = http://10.192.44.148:5000/v2.0

identity_uri = http://10.192.44.148:35357

admin_tenant_name = service

admin_user = neutron

admin_password = 1

[database]

[nova]

[oslo_concurrency]

[oslo_policy]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay= 1.0

rabbit_host = 10.192.44.148

rabbit_port= 5672

rabbit_hosts =10.192.44.148:5672

rabbit_use_ssl= False

rabbit_userid = openstack

rabbit_password = 1

rabbit_virtual_host= /

rabbit_ha_queues= False

heartbeat_rate=2

heartbeat_timeout_threshold=0

[qos]

(2)./plugins/ml2/openvswitch_agent.ini

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip =192.168.0.149

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types =vxlan

vxlan_udp_port =4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

6.2.10.3 Delete and recreate the bridges, restart the services, and verify

Delete all bridges:

[[email protected] ml2]# ovs-vsctl del-br br-int

[[email protected] ml2]# ovs-vsctl del-br br-ex

[[email protected] ml2]# ovs-vsctl list-br

[[email protected] ml2]# ovs-vsctl show

08399ed1-bb6a-4841-aca5-12a202ebd473

ovs_version: "2.4.0"

Set the controller node's eth1 to 10.192.44.152:

DEVICE=eth3

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=10.192.44.152

Network layout:

Node       | Eth0          | Eth1                              | Eth2
-----------|---------------|-----------------------------------|--------------
Controller | 10.192.44.148 | 10.192.44.152 (was 172.16.2.148)  | 192.168.0.148
Compute    | 10.192.44.149 | 172.16.2.149                      | 192.168.0.149
Note       |               | br-ex                             |

Create br-ex:

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth1

# ifconfig br-exup

Restart the openvswitch and neutron services:

Controller node (network node):

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Create the vxlan network (a CLI sketch follows below).
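A minimal sketch of the tenant network creation used for testing (network/subnet/router names and the CIDR are placeholders; assumes the Liberty-era neutron CLI and the vxlan tenant_network_types configured above):

neutron net-create int                                   # tenant network, vxlan by default

neutron subnet-create --name int_subnet int 192.168.0.0/24

neutron router-create router1

neutron router-interface-add router1 int_subnet

# optional external network on br-ex plus the router gateway:

# neutron net-create ext --router:external --provider:network_type flat --provider:physical_network external

# neutron router-gateway-set router1 ext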

6.2.10.4 Syncing the nova configuration on the controller node

Before:

./nova.conf:metadata_host=192.168.129.130

./nova.conf:sql_connection=mysql://nova:[email protected]/nova

./nova.conf:api_servers=192.168.129.130:9292

./nova.conf:auth_uri=http://192.168.129.130:5000/v2.0

./nova.conf:identity_uri=http://192.168.129.130:35357

./nova.conf:url=http://192.168.129.130:9696

./nova.conf:admin_password=db12219fd1924853

./nova.conf:admin_auth_url=http://192.168.129.130:5000/v2.0

./nova.conf:rabbit_host=192.168.129.130

./nova.conf:rabbit_hosts=192.168.129.130:5672

After:

[[email protected] nova]# cat nova.conf

[DEFAULT]

novncproxy_host=0.0.0.0

novncproxy_port=6080

notify_api_faults=False

state_path=/var/lib/nova

report_interval=10

enabled_apis=ec2,osapi_compute,metadata

ec2_listen=0.0.0.0

ec2_listen_port=8773

ec2_workers=4

osapi_compute_listen=0.0.0.0

osapi_compute_listen_port=8774

osapi_compute_workers=4

metadata_listen=0.0.0.0

metadata_listen_port=8775

metadata_workers=4

service_down_time=60

rootwrap_config=/etc/nova/rootwrap.conf

volume_api_class=nova.volume.cinder.API

auth_strategy=keystone

use_forwarded_for=False

cpu_allocation_ratio=16.0

ram_allocation_ratio=1.5

network_api_class=nova.network.neutronv2.api.API

default_floating_pool=public

force_snat_range =0.0.0.0/0

metadata_host=10.192.44.148

dhcp_domain=novalocal

security_group_api=neutron

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

vif_plugging_is_fatal=True

vif_plugging_timeout=300

firewall_driver=nova.virt.firewall.NoopFirewallDriver

debug=False

verbose=True

log_dir=/var/log/nova

use_syslog=False

syslog_log_facility=LOG_USER

use_stderr=True

notification_topics=notifications

rpc_backend=rabbit

amqp_durable_queues=False

sql_connection=mysql://nova:[email protected]/nova

image_service=nova.image.glance.GlanceImageService

lock_path=/var/lib/nova/tmp

osapi_volume_listen=0.0.0.0

novncproxy_base_url=http://0.0.0.0:6080/vnc_auto.html

[api_database]

[barbican]

[cells]

[cinder]

catalog_info=volumev2:cinderv2:publicURL

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers=10.192.44.148:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri=http://10.192.44.148:5000/v2.0

identity_uri=http://10.192.44.148:35357

admin_user=nova

admin_password=1

admin_tenant_name=service

[libvirt]

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

[matchmaker_redis]

[matchmaker_ring]

[metrics]

[neutron]

service_metadata_proxy=True

metadata_proxy_shared_secret=1

url=http://10.192.44.148:9696

admin_username=neutron

admin_password=1

admin_tenant_name=service

region_name=RegionOne

admin_auth_url=http://10.192.44.148:5000/v2.0

auth_strategy=keystone

ovs_bridge=br-int

extension_sync_interval=600

timeout=30

default_tenant_id=default

[osapi_v21]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay=1.0

rabbit_host=10.192.44.148

rabbit_port=5672

rabbit_hosts=10.192.44.148:5672

rabbit_use_ssl=False

rabbit_userid=openstack

rabbit_password=1

rabbit_virtual_host=/

rabbit_ha_queues=False

heartbeat_timeout_threshold=0

heartbeat_rate=2

[oslo_middleware]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

[workarounds]

[xenserver]

[zookeeper]

[osapi_v3]

enabled=False

6.2.10.5 Syncing the nova configuration on the compute node

Before:

./nova.conf:metadata_host=192.168.129.130

./nova.conf:sql_connection=mysql://[email protected]/nova

./nova.conf:novncproxy_base_url=http://192.168.129.130:6080/vnc_auto.html

./nova.conf:api_servers=192.168.129.130:9292

./nova.conf:url=http://192.168.129.130:9696

./nova.conf:admin_password=db12219fd1924853

./nova.conf:admin_auth_url=http://192.168.129.130:5000/v2.0

./nova.conf:rabbit_host=192.168.129.130

./nova.conf:rabbit_hosts=192.168.129.130:5672

[[email protected] nova]#

After:

[[email protected] nova]# cat nova.conf

[DEFAULT]

internal_service_availability_zone=internal

default_availability_zone=nova

notify_api_faults=False

state_path=/var/lib/nova

report_interval=10

compute_manager=nova.compute.manager.ComputeManager

service_down_time=60

rootwrap_config=/etc/nova/rootwrap.conf

volume_api_class=nova.volume.cinder.API

auth_strategy=keystone

heal_instance_info_cache_interval=60

reserved_host_memory_mb=512

network_api_class=nova.network.neutronv2.api.API

force_snat_range =0.0.0.0/0

metadata_host=10.192.44.148

dhcp_domain=novalocal

security_group_api=neutron

compute_driver=libvirt.LibvirtDriver

vif_plugging_is_fatal=True

vif_plugging_timeout=300

firewall_driver=nova.virt.firewall.NoopFirewallDriver

force_raw_images=True

debug=False

verbose=True

log_dir=/var/log/nova

use_syslog=False

syslog_log_facility=LOG_USER

use_stderr=True

notification_topics=notifications

rpc_backend=rabbit

amqp_durable_queues=False

vncserver_proxyclient_address=compute

vnc_keymap=en-us

sql_connection=mysql://[email protected]/nova

vnc_enabled=True

image_service=nova.image.glance.GlanceImageService

lock_path=/var/lib/nova/tmp

vncserver_listen=0.0.0.0

novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

[api_database]

[barbican]

[cells]

[cinder]

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers=10.192.44.148:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri=http://192.168.129.130:5000/v2.0

identity_uri=http://192.168.129.130:35357

admin_user=nova

admin_password=1

admin_tenant_name=service

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://[email protected]%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

[matchmaker_redis]

[matchmaker_ring]

[metrics]

[neutron]

url=http://10.192.44.148:9696

admin_username=neutron

admin_password=1

admin_tenant_name=service

region_name=RegionOne

admin_auth_url=http://10.192.44.148:5000/v2.0

auth_strategy=keystone

ovs_bridge=br-int

extension_sync_interval=600

timeout=30

default_tenant_id=default

[osapi_v21]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay=1.0

rabbit_host=10.192.44.148

rabbit_port=5672

rabbit_hosts=10.192.44.148:5672

rabbit_use_ssl=False

rabbit_userid=openstack

rabbit_password=1

rabbit_virtual_host=/

rabbit_ha_queues=False

heartbeat_timeout_threshold=0

heartbeat_rate=2

[oslo_middleware]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

[workarounds]

[xenserver]

[zookeeper]

6.2.10.6 Restart the nova services and create a VM

Controller node:

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

Compute node:

systemctl restart libvirtd.service openstack-nova-compute.service

 

 

[[email protected]~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1  | nova-cert        | controller1 | internal | enabled |up    | 2016-05-28T12:45:59.000000 |-               |

| 2  | nova-consoleauth | controller1 | internal |enabled | up    |2016-05-28T12:45:59.000000 | -              |

| 3  | nova-conductor   | controller1 | internal | enabled | up    | 2016-05-28T12:45:59.000000 | -               |

| 4  | nova-scheduler   | controller1 | internal | enabled | up    | 2016-05-28T12:46:00.000000 | -               |

| 5  | nova-compute     | compute1    | nova    | enabled | up    |2016-05-28T12:45:57.000000 | -              |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

[[email protected]~(keystone_admin)]#

6.2.11 VM creation succeeded; save the configuration

7 Ceph installation

Plan for today:

(1) Install nova-compute on controller1

(2) Create the Ceph cluster and configure Ceph as the backend

(3) Verify VM migration

(4) High-availability configuration

Note: remember to fix the network configuration files, otherwise routing is broken after the nodes come back up.

Keep the gateway setting on the primary NIC only; remove it from all the others.

Also dd hda4 clean (or delete and re-partition it), remove the old cinder-volumes VG, and restore lvm.conf.

[[email protected] ntp(keystone_admin)]#vgremove cinder-volumes

Volume group "cinder-volumes" successfully removed

[[email protected] ntp(keystone_admin)]# pv

pvchange  pvck       pvcreate   pvdisplay pvmove     pvremove   pvresize  pvs        pvscan

[[email protected] ntp(keystone_admin)]#pvremove /dev/hda4

Labels on physical volume "/dev/hda4" successfully wiped

[[email protected] ntp(keystone_admin)]# vgs

No volume groups found

[[email protected] ntp(keystone_admin)]# pvs

[[email protected] ntp(keystone_admin)]#

7.1 Creating the Ceph cluster: disk preparation

Delete the existing volumes, remove the cinder-volumes VG, then wipe the information on hda4.

[[email protected] network-scripts]# vgremove cinder-volumes

Volume group "cinder-volumes" successfully removed

[[email protected] network-scripts]# pvremove /dev/hda4

Labels on physical volume "/dev/hda4" successfully wiped

Create hda4 on 150 and 151 as well.

Make the kernel pick up the new partitions without a reboot:

# partprobe

[[email protected] scsi_host]# lsblk

NAME  MAJ:MIN RM   SIZE RO TYPEMOUNTPOINT

hda     8:0    0 119.2G  0 disk

├─hda1   8:1    0 19.5G  0 part /

├─hda2   8:2    0  5.9G  0 part/dom/storoswd/b_iscsi/config

└─hda3   8:3    0  3.9G  0 part/dom/storoswd/b_iscsi/log

sdb     8:16   0   1.8T 0 disk

sdc     8:32   0   1.8T 0 disk

sdd     8:48   0   1.8T 0 disk

sde      8:64  0   1.8T  0 disk

[[email protected] scsi_host]# partprobe

[[email protected] scsi_host]# lsblk

NAME  MAJ:MIN RM   SIZE RO TYPEMOUNTPOINT

hda     8:0    0 119.2G  0 disk

├─hda1   8:1    0 19.5G  0 part /

├─hda2   8:2    0  5.9G  0 part /dom/storoswd/b_iscsi/config

├─hda3   8:3    0  3.9G  0 part/dom/storoswd/b_iscsi/log

└─hda4   8:4   0    90G  0 part

sdb     8:16   0   1.8T 0 disk

sdc     8:32   0   1.8T 0 disk

sdd     8:48   0   1.8T 0 disk

sde     8:64   0   1.8T 0 disk

7.2 Creating the Ceph cluster: downloading and installing Ceph

7.2.1 Download and install

yum install ceph -y

yum install ceph-deploy -y

yum install yum-plugin-priorities -y

yum install snappy leveldb gdisk python-argparse gperftools-libs -y

7.2.2 Installing the monitors

Disable the firewall:

systemctl stop firewalld.service; systemctl disable firewalld.service

ceph-deploy new controller1 compute1 controller2 compute2

[[email protected] ceph]# cat ceph.conf

[global]

fsid = d62855a0-c03c-448d-b3c5-7518640060c9

mon_initial_members = controller1,compute1, controller2, compute2

mon_host = 10.192.44.148,10.192.44.149,10.192.44.150,10.192.44.151

auth_cluster_required = none

auth_service_required = none

auth_client_required = none

filestore_xattr_use_omap = true

osd_pool_default_size = 4

public network = 10.192.44.0/23

cluster network = 10.192.44.0/23

ceph-deploy install controller1 compute1 controller2 compute2

ceph-deploy --overwrite-conf mon create-initial

ceph-deploy mon create controller1 compute1 controller2 compute2

ceph-deploy gatherkeys controller1 compute1 controller2 compute2

[[email protected]]# scp * 10.192.44.149:/etc/ceph

[[email protected]]# scp * 10.192.44.150:/etc/ceph

[[email protected]]# scp * 10.192.44.151:/etc/ceph

7.2.3 Installing the OSDs

Partitioning:

Split hda into four more partitions (hda5–hda8), roughly 20 G each.

First create a 90 G extended partition hda4,

then carve four logical partitions out of it (a parted sketch follows below).
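A sketch only — offsets are illustrative and must match the free space actually left after hda1–hda3 on each node:

parted -s /dev/hda mkpart extended 30GiB 100%

parted -s /dev/hda mkpart logical 30GiB 50GiB      # hda5, journal for the osd on sdb1

parted -s /dev/hda mkpart logical 50GiB 70GiB      # hda6

parted -s /dev/hda mkpart logical 70GiB 90GiB      # hda7

parted -s /dev/hda mkpart logical 90GiB 100%       # hda8

partprobe /dev/hda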

[[email protected]controller1 ceph]# lsblk

NAME  MAJ:MIN RM   SIZE RO TYPEMOUNTPOINT

hda     8:0    0 119.2G  0 disk

├─hda1   8:1    0 19.5G  0 part /

├─hda2   8:2    0  5.9G  0 part/dom/storoswd/b_iscsi/config

├─hda3   8:3    0  3.9G  0 part/dom/storoswd/b_iscsi/log

├─hda4   8:4   0     1K  0 part

├─hda5   8:5    0 27.9G  0 part

├─hda6   8:6    0 28.6G  0 part

├─hda7   8:7    0 19.1G  0 part

└─hda8   8:8    0 14.3G  0 part

sdb     8:16   0   1.8T 0 disk

└─sdb1   8:17   0  1.8T  0 part/var/lib/ceph/osd/ceph-0

sdc     8:32   0   1.8T 0 disk

└─sdc1   8:33   0  1.8T  0 part/var/lib/ceph/osd/ceph-4

sdd     8:48   0   1.8T 0 disk

└─sdd1   8:49   0  1.8T  0 part/var/lib/ceph/osd/ceph-8

sde     8:64   0   1.8T 0 disk

└─sde1   8:65   0  1.8T  0 part /var/lib/ceph/osd/ceph-12

[[email protected] ceph]#

[[email protected]compute1 ntp]# lsblk

NAME  MAJ:MIN RM   SIZE RO TYPEMOUNTPOINT

hda     8:0    0 119.2G  0 disk

├─hda1   8:1    0 19.5G  0 part /

├─hda2   8:2    0  5.9G  0 part/dom/storoswd/b_iscsi/config

├─hda3   8:3   0   3.9G  0 part /dom/storoswd/b_iscsi/log

├─hda4   8:4    0    1K  0 part

├─hda5   8:5    0 23.2G  0 part

├─hda6   8:6    0 28.6G  0 part

├─hda7   8:7    0 14.3G  0 part

└─hda8   8:8    0 23.9G  0 part

sdb     8:16   0   1.8T 0 disk

└─sdb1   8:17   0  1.8T  0 part/var/lib/ceph/osd/ceph-2

sdc     8:32   0   1.8T 0 disk

└─sdc1   8:33   0  1.8T  0 part/var/lib/ceph/osd/ceph-6

sdd     8:48   0   1.8T 0 disk

└─sdd1   8:49   0  1.8T  0 part/var/lib/ceph/osd/ceph-10

sde     8:64   0   1.8T 0 disk

└─sde1   8:65   0  1.8T  0 part/var/lib/ceph/osd/ceph-14

[[email protected]controller2 ntp]# lsblk

NAME  MAJ:MIN RM   SIZE RO TYPEMOUNTPOINT

hda     8:0    0 119.2G  0 disk

├─hda1   8:1    0 19.5G  0 part /

├─hda2   8:2    0   5.9G  0part /dom/storoswd/b_iscsi/config

├─hda3   8:3    0  3.9G  0 part/dom/storoswd/b_iscsi/log

├─hda4   8:4    0    1K  0 part

├─hda5   8:5    0 23.2G  0 part

├─hda6   8:6    0 19.1G  0 part

├─hda7   8:7    0 19.1G  0 part

└─hda8   8:8    0 28.7G  0 part

sdb     8:16   0   1.8T 0 disk

└─sdb1   8:17   0  1.8T  0 part/var/lib/ceph/osd/ceph-1

sdc     8:32   0   1.8T 0 disk

└─sdc1   8:33   0  1.8T  0 part/var/lib/ceph/osd/ceph-5

sdd     8:48   0   1.8T 0 disk

└─sdd1   8:49   0  1.8T  0 part/var/lib/ceph/osd/ceph-9

sde     8:64   0   1.8T 0 disk

└─sde1   8:65   0  1.8T  0 part/var/lib/ceph/osd/ceph-13

[[email protected] ~]# partprobe

[[email protected]compute2 ~]# lsblk

NAME  MAJ:MIN RM   SIZE RO TYPEMOUNTPOINT

hda     8:0    0 111.8G  0 disk

├─hda1   8:1    0 19.5G  0 part /

├─hda2   8:2    0  5.9G  0 part/dom/storoswd/b_iscsi/config

├─hda3   8:3    0  3.9G  0 part/dom/storoswd/b_iscsi/log

├─hda4   8:4    0    1K  0 part

├─hda5   8:5    0 23.2G  0 part

├─hda6   8:6   0  19.1G  0 part

├─hda7   8:7    0 19.1G  0 part

└─hda8   8:8    0 21.2G  0 part

sdb     8:16   0   1.8T 0 disk

└─sdb1   8:17   0  1.8T  0 part

sdc     8:32   0   1.8T 0 disk

└─sdc1   8:33   0  1.8T  0 part

sdd     8:48   0   1.8T 0 disk

└─sdd1   8:49   0  1.8T  0 part

sde     8:64   0   1.8T 0 disk

└─sde1   8:65   0  1.8T  0 part

# ceph-deploy osd prepare controller1:/dev/sdb1:/dev/hda5 controller2:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5 compute2:/dev/sdb1:/dev/hda5

ceph-deploy osd activate controller1:/dev/sdb1:/dev/hda5 controller2:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5 compute2:/dev/sdb1:/dev/hda5

For sdc, sdd and sde, hold off for now; verify nova live migration first and add those OSDs later.

# ceph-deploy osd prepare controller1:/dev/sdc1:/dev/hda6 controller2:/dev/sdc1:/dev/hda6 compute1:/dev/sdc1:/dev/hda6 compute2:/dev/sdc1:/dev/hda6

# ceph-deploy osd activate controller1:/dev/sdc1:/dev/hda6 controller2:/dev/sdc1:/dev/hda6 compute1:/dev/sdc1:/dev/hda6 compute2:/dev/sdc1:/dev/hda6

# ceph-deploy osd prepare controller1:/dev/sdd1:/dev/hda7 controller2:/dev/sdd1:/dev/hda7 compute1:/dev/sdd1:/dev/hda7 compute2:/dev/sdd1:/dev/hda7

ceph-deploy osd activate controller1:/dev/sdd1:/dev/hda7 controller2:/dev/sdd1:/dev/hda7 compute1:/dev/sdd1:/dev/hda7 compute2:/dev/sdd1:/dev/hda7

# ceph-deploy osd prepare controller1:/dev/sde1:/dev/hda8 controller2:/dev/sde1:/dev/hda8 compute1:/dev/sde1:/dev/hda8 compute2:/dev/sde1:/dev/hda8

ceph-deploy osd activate controller1:/dev/sde1:/dev/hda8 controller2:/dev/sde1:/dev/hda8 compute1:/dev/sde1:/dev/hda8 compute2:/dev/sde1:/dev/hda8

Current status:

[[email protected] ceph]# ceph -s

cluster d62855a0-c03c-448d-b3c5-7518640060c9

health HEALTH_WARN

clock skew detected on mon.compute1, mon.controller2, mon.compute2

Monitor clock skew detected

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 6, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e17: 4 osds: 4 up, 4 in

pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects

134 MB used, 7448 GB / 7448 GB avail

64 active+clean

Fix the clock skew issue:

mon clock drift allowed = 20

mon clock drift warn backoff = 30

Restart:

/etc/init.d/ceph -a restart

[[email protected] ceph]# ceph -s

cluster d62855a0-c03c-448d-b3c5-7518640060c9

health HEALTH_OK

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 14, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e30: 4 osds: 4 up, 4 in

pgmap v50: 64 pgs, 1 pools, 0 bytes data, 0 objects

137 MB used, 7448 GB / 7448 GB avail

64 active+clean
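Note that the two drift settings above only widen the warning threshold; the cleaner fix is to actually synchronize the clocks first, e.g. (the NTP server address is a placeholder):

systemctl stop ntpd

ntpdate 10.192.44.148        # one-shot sync against the controller/mon node

systemctl start ntpd

ntpq -p                      # verify peers and offsets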

7.3 Create the storage pools: image, volumes, vms

First delete the existing images, VMs, and volumes.

Create the pools:

[[email protected] ceph]#  ceph pg stat

v53: 64 pgs: 64 active+clean; 0 bytes data,136 MB used, 7448 GB / 7448 GB avail

[[email protected] ceph]# ceph osd pool create image 64

pool 'image' created

[[email protected] ceph]# ceph osd pool create volumes 64

pool 'volumes' created

[[email protected] ceph]# ceph osd pool create vms 64

pool 'vms' created
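A quick sanity check on the pools with the standard ceph CLI (adjust the replica count only if it should differ from osd_pool_default_size):

ceph osd lspools

ceph osd pool get volumes pg_num

ceph osd pool get volumes size

# e.g. with only two OSD hosts: ceph osd pool set volumes size 2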

8. Configure the glance, nova and cinder backends to use Ceph

8.1 Configure the glance backend as Ceph RBD

Configure the glance backend to use Ceph:

[glance_store]

default_store = rbd

stores = rbd

rbd_store_pool = image

rbd_store_user = glance

rbd_store_ceph_conf=/etc/ceph/ceph.conf

rbd_store_chunk_size = 8

Restart the glance services:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Upload an image:

[[email protected] ~(keystone_admin)]#glance image-list

+--------------------------------------+------+

| ID                                   | Name |

+--------------------------------------+------+

| 1685bd32-5eb3-45d7-b9b4-8bcd9d03bf37 |cs   |

+--------------------------------------+------+

[[email protected] ~(keystone_admin)]#glance image-show 1685bd32-5eb3-45d7-b9b4-8bcd9d03bf37

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         |6e496c911ee6c022501716c952fdf800     |

| container_format | bare                                 |

| created_at       | 2016-05-31T07:46:04Z                 |

| description      | cs                                   |

| disk_format      | qcow2                                |

| id               |1685bd32-5eb3-45d7-b9b4-8bcd9d03bf37 |

| min_disk         | 1                                    |

| min_ram          | 256                                  |

| name             | cs                                   |

| owner            | 617e98e151b245d081203adcbb0ce7a4     |

| protected        | False                                |

| size             | 13224448                             |

| status           | active                               |

| tags             | []                                   |

| updated_at       | 2016-05-31T07:46:07Z                 |

| virtual_size     | None                                 |

| visibility       | public                               |

+------------------+--------------------------------------+

Success.
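To double-check that the image really landed in Ceph rather than on the local filesystem, the RBD pool can be listed; glance stores the image under its ID in the pool named "image" configured above:

rbd -p image ls

rbd -p image info 1685bd32-5eb3-45d7-b9b4-8bcd9d03bf37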

8.2 Configure the cinder backend as Ceph RBD

Configure the cinder-volume backend to use Ceph RBD:

[DEFAULT]

enabled_backends = ceph

[ceph]

volume_driver =cinder.volume.drivers.rbd.RBDDriver

rbd_pool = volumes

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

glance_api_version = 2

[[email protected] ~(keystone_admin)]#cinder service-list

+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

|     Binary      |       Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler |   controller1  | nova | enabled |   up  | 2016-05-31T07:50:01.000000 |        -       |

| cinder-volume   |     compute1   | nova | enabled |  down |             -              |        -       |

| cinder-volume   |  [email protected] | nova | enabled |   up  | 2016-05-31T07:50:10.000000 |        -       |

Create a volume:

[[email protected] ~(keystone_admin)]# cinder list

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

|                 ID                  |   Status | Migration Status | Name | Size | Volume Type | Bootable | Multiattach| Attached to |

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

| 6bff06df-554b-417c-bbae-0a82776b43e1 |available |        -         | vol |  1   |     -      |  false  |    False    |             |

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

[[email protected] ~(keystone_admin)]# cinder show  6bff06df-554b-417c-bbae-0a82776b43e1

+---------------------------------------+--------------------------------------+

|                Property               |                Value                 |

+---------------------------------------+--------------------------------------+

|              attachments              |                  []                  |

|          availability_zone           |                 nova                 |

|                bootable               |                false                 |

|         consistencygroup_id         |                 None                 |

|               created_at              |      2016-05-31T07:50:35.000000      |

|              description              |                 None                 |

|               encrypted               |                False                 |

|                   id                  |6bff06df-554b-417c-bbae-0a82776b43e1 |

|               metadata               |                  {}                  |

|           migration_status           |                 None                 |

|              multiattach              |                False                 |

|                  name                 |                 vol                  |

|         os-vol-host-attr:host         |        [email protected]#RBD         |
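The volume can be checked directly in the volumes pool the same way; cinder's RBD driver names the images volume-&lt;id&gt;:

rbd -p volumes ls

rbd -p volumes info volume-6bff06df-554b-417c-bbae-0a82776b43e1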

8.3 Configure the nova backend as Ceph RBD

Configure the nova backend to use Ceph RBD.

Before:

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://[email protected]%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

After:

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://[email protected]%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

Restart nova:

systemctl restart libvirtd.service openstack-nova-compute.service

Boot a VM:

Boot fails; verify virsh:

virsh -c qemu+tcp://10.192.44.149:16509/system

[[email protected] nova]# virsh -c qemu+tcp://10.192.44.149:16509/system

Welcome to virsh, the virtualization interactive terminal.

Type: ‘help‘ for help with commands

‘quit‘ to quit

virsh # ^C

The connection works.

So where does it fail?

Change the controller node configuration as well:

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://[email protected]%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

Restart all the services:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# ifconfig br-ex up

Restart openvswitch and neutron:

Controller node (network node):

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

VM launch still fails.

Change the backend back:

Launch succeeds!

So the problem is in the nova configuration.

Troubleshooting continues in 9.1.

9. Verify VM migration

9.1 Troubleshoot why VM launch fails after switching the nova backend to RBD

Refer to the official documentation: http://docs.ceph.com/docs/master/rbd/rbd-openstack/

(1) Modify the Ceph configuration:

Now on every compute node edit your Ceph configuration file:

[client]

rbd cache = true

rbd cache writethrough until flush = true

admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok

log file = /var/log/qemu/qemu-guest-$pid.log

rbd concurrent management ops = 20

Configure the permissions of these paths:

mkdir -p /var/run/ceph/guests/ /var/log/qemu/

chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/

[[email protected] ceph]# mkdir -p/var/run/ceph/guests/ /var/log/qemu/

[[email protected] ceph]# chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/

chown: invalid group: ‘qemu:libvirtd’

[[email protected] ceph]# chmod 777  /var/run/ceph/guests -R

[[email protected] ceph]# chmod 777 /var/log/qemu/ -R
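Note that /var/run is tmpfs-backed on CentOS 7, so this directory disappears again on reboot; a sketch of making it persistent with tmpfiles.d (ownership left wide open here because the qemu:libvirtd group does not exist on this system):

cat > /etc/tmpfiles.d/ceph-guests.conf <<'EOF'
d /var/run/ceph/guests 0777 root root -
d /var/log/qemu        0777 root root -
EOF

systemd-tmpfiles --create /etc/tmpfiles.d/ceph-guests.conf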

Restart Ceph:

/etc/init.d/ceph -a restart

Switch nova.conf back to the RBD settings again:

systemctl restart openstack-nova-compute.service

 

Create a VM:

Still fails.

Modify the configuration:

Refer to 10.33.41.22:

 

[libvirt]

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

disk_cachemodes ="network=writeback"

inject_password = false

inject_key = false

inject_partition = -2

live_migration_flag ="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

Restart the nova services:

OK, the VM can now boot.

Save the current configuration.

[[email protected] nova]# vi nova-compute.conf

[DEFAULT]

compute_driver=libvirt.LibvirtDriver

[libvirt]

virt_type=kvm

live_migration_uri=qemu+tcp://[email protected]%s/system

9.2 Install nova-compute on controller1 and update libvirtd

9.2.1 Manually install nova-compute

Manually install nova-compute (see 5.14):

yum install sysfsutils

yum install --downloadonly --downloaddir=/root/rpm_nova_compute openstack-nova-compute

yum install python-nova

yum install python-cinderclient

rpm -ivh --force --nodeps libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

rpm -ivh --force --nodeps python-libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

rpm -ivh openstack-nova-compute-12.0.1-1.el7.noarch.rpm

9.2.2 Manually install libvirtd

Manually install libvirtd (see 6.2.7); note that the startup script needs the -l (--listen) flag.

Manually install libvirt:

Remove the old packages:

rpm -e --nodeps libvirt-client libvirt-daemon-driver-nodedev libvirt-glib libvirt-daemon-config-network libvirt-daemon-driver-nwfilter libvirt-devel libvirt-daemon-driver-qemu libvirt-daemon-driver-interface libvirt-gobject libvirt-daemon-driver-storage libvirt-daemon-driver-network libvirt-daemon-config-nwfilter libvirt libvirt-daemon-driver-secret libvirt-gconfig libvirt-java-devel libvirt-daemon-kvm libvirt-docs libvirt-daemon-driver-lxc libvirt-python libvirt-daemon libvirt-java

Check that nothing is left behind:

rpm -aq |grep libvirt

# rpm -ivh libvirt-client-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-kvm-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-docs-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-devel-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-python-1.2.17-2.el7.x86_64.rpm

# rpm -ivh dracut-033-359.el7.x86_64.rpm

# rpm -ivh dracut-config-rescue-033-359.el7.x86_64.rpm

# rpm -ivh dracut-network-033-359.el7.x86_64.rpm

# rpm -ivh initscripts-9.49.30-1.el7.x86_64.rpm

# rpm -ivh kmod-20-5.el7.x86_64.rpm

# rpm -ivh libgudev1-219-19.el7.x86_64.rpm

# rpm -ivh libgudev1-devel-219-19.el7.x86_64.rpm

9.2.3 Configure and start libvirtd

Modify the configuration:

libvirtd.conf:

# cat libvirtd.conf |grep -v '^#' |grep -v '^$'

listen_tls = 0

listen_tcp = 1

tls_port = "16514"

tcp_port = "16509"

auth_tcp = "none"

# cat qemu.conf

vnc_listen = "0.0.0.0"

user = "root"

group = "root"

Modify the startup script:

ExecStart=/usr/sbin/libvirtd -d --listen $LIBVIRTD_ARGS

Start libvirtd:

# systemctl enable libvirtd.service

# systemctl start libvirtd.service

Verify libvirtd:

[[email protected] system]# ss -nalp |grep16509

tcp   LISTEN     0      30                     *:16509                 *:*      users:(("libvirtd",22242,14))

tcp   LISTEN     0      30                    :::16509                :::*      users:(("libvirtd",22242,15))

[[email protected] system]# virsh -c qemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available forqemu:///system

Replace libdevmapper.so.1.02.

Restart libvirtd:

[[email protected] lib64]# virsh -c qemu+tcp://localhost/system

Welcome to virsh, the virtualization interactive terminal.

Type: ‘help‘ for help with commands

‘quit‘ to quit

virsh # ^C

[[email protected] lib64]#

Libvirt OK

9.2.4 Configure and start nova-compute

Modify the configuration:

nova.conf:

[libvirt]

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

disk_cachemodes ="network=writeback"

inject_password = false

inject_key = false

inject_partition = -2

live_migration_flag ="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

nova-compute.conf:

Restart all the nova services:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl enable openstack-nova-compute.service

systemctl restart openstack-nova-compute.service

 

[[email protected] ~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1 | nova-cert        | controller1 |internal | enabled | up    |2016-05-31T09:14:30.000000 | -              |

| 2 | nova-consoleauth | controller1 | internal | enabled | up    | 2016-05-31T09:14:30.000000 | -               |

| 3 | nova-conductor   | controller1 |internal | enabled | up    |2016-05-31T09:14:29.000000 | -              |

| 4 | nova-scheduler   | controller1 |internal | enabled | up    |2016-05-31T09:14:30.000000 | -              |

| 5 | nova-compute     | compute1    | nova    | enabled | up    |2016-05-31T09:14:27.000000 | -              |

| 6  | nova-compute     | controller1 | nova     | enabled | up    | 2016-05-31T09:14:33.000000 | -               |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

9.2.5 Verify VM creation on controller1, with nova-compute on compute1 stopped

Stop nova-compute on compute1:

[[email protected] nova]# systemctl stop openstack-nova-compute.service

[[email protected] nova]# ps -A |grep nova

[[email protected] ~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1 | nova-cert        | controller1 |internal | enabled | up    |2016-05-31T09:16:40.000000 | -              |

| 2 | nova-consoleauth | controller1 | internal | enabled | up    | 2016-05-31T09:16:40.000000 | -               |

| 3  |nova-conductor   | controller1 | internal| enabled | up    |2016-05-31T09:16:39.000000 | -              |

| 4 | nova-scheduler   | controller1 |internal | enabled | up    |2016-05-31T09:16:40.000000 | -              |

| 5 | nova-compute     | compute1    | nova    | enabled | down  |2016-05-31T09:15:42.000000 | -              |

| 6 | nova-compute     | controller1 |nova     | enabled | up    | 2016-05-31T09:16:33.000000 | -               |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Create a VM:

Surprisingly, it fails to boot!

qemu also needs to be upgraded:

yum install qemu qemu-img

Verify again:

OK, the VM boots successfully.

Save the configuration.

9.3 VM migration

The VM currently lives on controller1:

[[email protected] etc(keystone_admin)]#nova list

+--------------------------------------+------+--------+------------+-------------+------------------+

| ID                                   | Name |Status | Task State | Power State | Networks         |

+--------------------------------------+------+--------+------------+-------------+------------------+

| 662358c0-74c7-4825-9099-0b7f8cb58869 |cs   | ACTIVE | -          | Running     | int=192.168.0.12 |

+--------------------------------------+------+--------+------------+-------------+------------------+

[[email protected] etc(keystone_admin)]#nova show 662358c0-74c7-4825-9099-0b7f8cb58869

+--------------------------------------+----------------------------------------------------------+

| Property                             | Value                                                    |

+--------------------------------------+----------------------------------------------------------+

| OS-DCF:diskConfig                    | AUTO                                                    |

| OS-EXT-AZ:availability_zone          | nova                                                    |

| OS-EXT-SRV-ATTR:host                | controller1                                              |

| OS-EXT-SRV-ATTR:hypervisor_hostname | controller1                                             |

Start nova-compute on compute1 as well.

Next, verify live migration.

Before migration:

Migration operation:

Error:

2016-05-31 18:38:35.448 20652 DEBUGnova.compute.resource_tracker [req-df78dec8-8491-4a29-8c6a-3871ba8fb94b - - - --] Migration instance not found: Instance 662358c0-74c7-4825-9099-0b7f8cb58869could not be found.

Traceback (most recent call last):

File"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line447, in _object_dispatch

return getattr(target, method)(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py",line 171, in wrapper

result = fn(cls, context, *args, **kwargs)

File"/usr/lib/python2.7/site-packages/nova/objects/instance.py", line372, in get_by_uuid

use_slave=use_slave)

File "/usr/lib/python2.7/site-packages/nova/db/api.py", line645, in instance_get_by_uuid

columns_to_join, use_slave=use_slave)

File"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line216, in wrapper

return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py",line 1713, in instance_get_by_uuid

columns_to_join=columns_to_join, use_slave=use_slave)

File"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line1725, in _instance_get_by_uuid

raiseexception.InstanceNotFound(instance_id=uuid)

Two of the four Ceph OSDs are also down.

Reboot all four nodes.

They do not come back cleanly.

First remove the two failed OSDs:

ceph osd out 0

ceph osd crush reweight osd.0 0

ceph osd crush remove osd.0

ceph auth del osd.0

ceph osd rm 0
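The same sequence, as a small sketch for an arbitrary OSD id (the daemon should be stopped and its data dir unmounted first; the init integration shown here is an assumption for this setup):

id=0

systemctl stop ceph-osd@${id} 2>/dev/null || /etc/init.d/ceph stop osd.${id}

umount /var/lib/ceph/osd/ceph-${id} 2>/dev/null

ceph osd out ${id}

ceph osd crush reweight osd.${id} 0

ceph osd crush remove osd.${id}

ceph auth del osd.${id}

ceph osd rm ${id}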

Then re-add the OSDs:

# ceph-deploy --overwrite-conf osd prepare controller1:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5

# ceph-deploy --overwrite-conf osd activate controller1:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5

In the end, purge everything first: ceph-deploy purge controller1 compute1 controller2 compute2

ceph-deploy purge

Then clean out all the Ceph directories,

dd-zero all the partitions, and re-partition.

Start with just the two nodes controller1 and compute1; add the others later.

 

9.4 Redo Ceph and VM migration

9.4.1 Ceph: start with two nodes

ceph-deploy new controller1 compute1

[[email protected] ceph]# cat ceph.conf

[global]

fsid = 8c0b942d-12da-4555-b6e3-6e4e426638be

mon_initial_members = controller1, compute1

mon_host = 10.192.44.148,10.192.44.149

auth_cluster_required = none

auth_service_required = none

auth_client_required = none

filestore_xattr_use_omap = true

osd_pool_default_size = 2

public network = 10.192.44.0/23

cluster network = 10.192.44.0/23

mon clock drift allowed = 20

mon clock drift warn backoff = 30

ceph-deploy install controller1 compute1

ceph-deploy --overwrite-conf   mon create-initial

ceph-deploy mon create controller1 compute1

ceph-deploy gatherkeys controller1 compute1

scp * 10.192.44.149:/etc/ceph

ceph-deploy osd prepare controller1:/dev/sdb1:/dev/hda4 compute1:/dev/sdb1:/dev/hda4

ceph-deploy osd activate controller1:/dev/sdb1:/dev/hda4 compute1:/dev/sdb1:/dev/hda4

[[email protected] ceph]# ceph -s

cluster 8c0b942d-12da-4555-b6e3-6e4e426638be

health HEALTH_OK

monmap e1: 2 mons at{compute1=10.192.44.149:6789/0,controller1=10.192.44.148:6789/0}

election epoch 4, quorum 0,1 controller1,compute1

osdmap e9: 2 osds: 2 up, 2 in

pgmap v14: 64 pgs, 1 pools, 0 bytes data, 0 objects

67920 kB used, 3724 GB / 3724 GB avail

64 active+clean

9.4.2 Create the pools

ceph osd pool create image 64

ceph osd pool create volumes 64

ceph osd pool create vms 64

9.4.3 Restart the nova, cinder and glance services

cinder-volume:

cinder-volume must be enabled on both the compute node and the controller node:

# systemctl enable openstack-cinder-volume.service target.service

# systemctl restart openstack-cinder-volume.service target.service

Glance services:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Nova services:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl restart openstack-nova-compute.service

 

9.4.4 Verify each nova-compute

Stop nova-compute on the controller node: VM launch OK.

Stop nova-compute on the compute node: VM launch OK.

9.4.5 Verify live migration

Investigating.

Log output:

2016-05-31 21:09:30.440 29567 ERRORnova.virt.libvirt.driver [req-a72377d6-3b59-4616-8f92-b1d449946939cfca3361950644de990b52ad341a06f0 617e98e151b245d081203adcbb0ce7a4 - - -][instance: b4a2b79c-0a1e-4b69-a2d9-77d44c9318e9] Live Migration failure:internal
error: Attempt to migrate guest to the same host 78563412-3412-7856-90ab-cddeefaabbcc

2016-05-31 21:09:30.602 29567 ERRORnova.virt.libvirt.driver [req-a72377d6-3b59-4616-8f92-b1d449946939cfca3361950644de990b52ad341a06f0 617e98e151b245d081203adcbb0ce7a4 - - -][instance: b4a2b79c-0a1e-4b69-a2d9-77d44c9318e9] Migration operation hasaborted

Solution:

host_uuid must be set in libvirtd.conf:

host_uuid="184932ba-df0c-4822-b106-7c704f3acd25"

listen_tls = 0

listen_tcp = 1

tls_port = "16514"

tcp_port = "16509"

auth_tcp = "none"

It can be generated with uuidgen:

hanwei6 05-31 21:10:48

Problem:

Live Migration failure: internal error: Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

Fix:

# uuidgen                               // generate a uuid

# vi /etc/libvirt/libvirtd.conf         // edit the config file

host_uuid = "00000000-0000-0000-0000-000000000000"   // put the uuid generated above here

# service libvirt-bin restart           // restart the service (systemctl restart libvirtd.service on CentOS); live migration should then work
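A sketch for setting a distinct host_uuid on each node in one step (assumes no host_uuid line exists yet in libvirtd.conf):

uuid=$(uuidgen)

echo "host_uuid = \"${uuid}\"" >> /etc/libvirt/libvirtd.conf

systemctl restart libvirtd.service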

10. Configure high availability

Appendix: assorted notes

1. glance image upload could not be verified at first because networking was not yet installed: now verified and passing.

2. nova-compute missing from nova service-list: resolved, see 5.14.

Installing nova-compute pulls in libvirtd as a dependency, which in turn installs the device-mapper libraries.

Cause: installing libvirtd replaces the device-mapper libraries.

When installing nova-compute manually one library has to be force-installed, so part of qemu never gets installed and libvirt then fails when invoking qemu.

The fix is: yum install qemu qemu-img.

3. Configurations were switched from the Yingshi-cloud reference to the packstack-generated ones, which are more complete.

4 About the OVS configuration

(1) On the network node, OVS must use a separate NIC for br-ex; br-ex is what reaches the external network.

The compute node does not need two NICs, but the network node must have two; otherwise, once br-ex is bound to eth0, eth0 stops working.

br-ex acts as the external NIC.

(2) On the compute node, the OVS local_ip must not be set to the eth0 management IP, otherwise the compute node loses network connectivity.

(3) A more standard layout (ignoring a dedicated storage network for now):

Network layout:

Host             | Management (eth0) | VM network (OVS tunnel) | External (eth1)
-----------------|-------------------|-------------------------|----------------
Node1 (network2) | 10.192.44.148     | 10.192.45.148           | 10.xx.xx.xx
Node2 (network2) | 10.192.44.149     | 10.192.45.149           | 10.xx.xx.xx
Node3            | 10.192.44.150     | 10.192.45.150           |
Node4            | 10.192.44.151     | 10.192.45.151           |

[[email protected](keystone_admin)]# ovs-vsctl list-br

br-ex

br-int

# ovs-vsctl list-ports br-ex

eth1

eth1:

[[email protected](keystone_admin)]# ifconfig eth3

eth1:flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet 10.192.44.152  netmask255.255.254.0  broadcast 10.192.45.255

Here eth0 carries the management network, the common network for the OpenStack components.

eth1 connects to the external network and carries the external addressing.

Open vSwitch is configured with the internal/tunnel network; it can be any range such as 192.168.xx.xx and has nothing to do with the 10.192.xx.xx networks.

The Open vSwitch network reaches the outside world through br-ex.

Now, assuming both 10.192.44.xx and 10.192.45.xx can reach the external network (i.e. the management network also has external access), could it be configured like this instead:

Host             | Management (eth0) | VM network (OVS tunnel) | External (ethx)
-----------------|-------------------|-------------------------|----------------
Node1 (network2) | 10.192.44.148     | 192.168.0.148           | 10.192.45.148
Node2 (network2) | 10.192.44.149     | 192.168.0.149           | 10.192.45.149
Node3            | 10.192.44.150     | 192.168.0.150           |
Node4            | 10.192.44.151     | 192.168.0.151           |

Here 192.168.0.xxx serves as the tunnel network (the network OVS is configured with), i.e. the fixed IPs of the VMs.

If that works, then the concrete settings would be:

[ovs]

local_ip = 192.168.0.148

bridge_mappings = external:br-ex

Is that the correct way to set it?

Another question:

If a VM needs external access, does it still need a floating IP?

Why would a VM still need a floating IP — can't it already reach the outside through br-ex?

Finally: why is br-ex in state DOWN? Is OVS not running?

10: ovs-system: <BROADCAST,MULTICAST>mtu 1500 qdisc noop state DOWN

link/ether 26:54:45:3f:0a:fa brd ff:ff:ff:ff:ff:ff

11: br-int: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether 0e:03:65:4a:e8:4d brd ff:ff:ff:ff:ff:ff

12: br-ex: <BROADCAST,MULTICAST> mtu1500 qdisc noop state DOWN

link/ether 88:00:00:01:02:13 brd ff:ff:ff:ff:ff:ff

13: virbr0:<NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

link/ether 02:fb:12:71:ea:e7 brd ff:ff:ff:ff:ff:ff

inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

valid_lft forever preferred_lft forever

[[email protected] ml2(keystone_admin)]#ovs-vsctl show

ba32f48c-535c-4edd-b366-9c3ca159d756

Bridge br-int

fail_mode: secure

Port br-int

Interface br-int

type: internal

Bridge br-ex

Port "eth3"

Interface "eth3"

Port br-ex

Interface br-ex

type: internal

ovs_version: "2.4.0"

5 Never install libvirtd via yum: install the nova-compute rpm packages manually, see 5.14.

Reason: installing libvirtd replaces the device-mapper libraries.

Installing nova-compute pulls in libvirtd as a dependency, which then installs the device-mapper libraries.

6 qemu and qemu-img must be installed: yum install qemu qemu-img

Reason:

The stock qemu-img -v errors out, which in turn makes nova-compute fail with:

libvirtError: no connection driver available for qemu:///system

7 MySQL: remove the original one and reinstall, see 4.1.

Reason for reinstalling: the one bundled with StorOS would not start; this may not recur later.

For high availability, however, the galera-enabled build is required; this needs to be confirmed with Beijing:

[[email protected] ~(keystone_admin)]# mysql -uroot -pf478ed694b4d4c45

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 23

Server version: 5.5.40-MariaDB-wsrepMariaDB Server,
wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDBCorporation Ab and others.

Type ‘help;‘ or ‘\h‘ for help. Type ‘\c‘ toclear the current input statement.

MariaDB [(none)]>

做高可用的版本必须这里有wsrep_xxx
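Besides reading the login banner, a Galera-enabled build can be recognized by its wsrep status variables (an illustrative check, assuming the same root credentials as above):

# a Galera (wsrep) build returns rows here; a plain MariaDB build returns an empty set
mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'wsrep%';"
mysql -uroot -p -e "SELECT VERSION();"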

8. Save the configurations and documentation to git:

9. RPM package counts

Controller + network node:

[[email protected](keystone_admin)]# tree | grep rpm | wc -l

342

[[email protected](keystone_admin)]#

Compute node:

# tree | grep rpm | wc -l

178

Some packages are present on both sides: 520 in total, so the union is roughly <= 500 distinct packages.
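A rough way to compute the exact union (illustrative commands; assumes the cached RPMs are kept under /var/cache/yum):

# on the controller+network node
find /var/cache/yum -name '*.rpm' -printf '%f\n' | sort -u > /tmp/ctrl-rpms.txt
# on the compute node
find /var/cache/yum -name '*.rpm' -printf '%f\n' | sort -u > /tmp/comp-rpms.txt
# union of the two package lists
sort -u /tmp/ctrl-rpms.txt /tmp/comp-rpms.txt | wc -l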

 

10. Validating with LVM requires changing lvm.conf; this is not needed once we switch to Ceph: revert this change after moving to Ceph.

filter = [ "a/hda4/", "r/.*/" ]

[[email protected]]# pvcreate /dev/hda4

Physical volume "/dev/hda4" successfully created

[[email protected]]# vgcreate cinder-volumes /dev/hda4

Volume group "cinder-volumes" successfully created

[[email protected]]# vgs

VG            #PV #LV #SN Attr   VSize  VFree

cinder-volumes   1  0   0 wz--n- 89.94g 89.94g

Point the Cinder backend at the volume group:

[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

iscsi_protocol = iscsi

Restart the cinder-volume service:

systemctl restart openstack-cinder-volume.service target.service

The cinder-volume service shows as down:

[[email protected] ~(keystone_admin)]# cinder service-list

+------------------+--------------+------+---------+-------+----------------------------+-----------------+

|     Binary      |     Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+--------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler | controller1  | nova | enabled |   up  |2016-05-27T02:19:59.000000 |        -        |

| cinder-volume   |   compute1  | nova | enabled |  down |             -              |       -        |

| cinder-volume   | [email protected] |nova | enabled |  down |2016-05-27T03:33:58.000000 |        -        |

+------------------+--------------+------+---------+-------+----------------------------+-----------------+

Troubleshooting: compare against the cinder.conf produced by a packstack install, then restart:

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
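For reference, a minimal sketch of the cinder.conf pieces that go with this LVM backend (the enabled_backends, iscsi_helper and backend-name lines are assumptions based on the standard multi-backend layout, not values copied from this deployment):

# /etc/cinder/cinder.conf (hypothetical excerpt)
[DEFAULT]
enabled_backends = lvm            # activate the [lvm] section below

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes     # the VG created on /dev/hda4 above
iscsi_protocol = iscsi
iscsi_helper = lioadm             # LIO target, the default on CentOS 7
volume_backend_name = lvm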

11. Clock skew can leave a service running normally while the service list shows it as down: see 6.2.2.1.

12. The numpy-related packages (yum search numpy) need to be reinstalled, otherwise creating a Cinder volume fails; see 6.2.2.2 for details.

Both the cinder-api node and the cinder-volume node need the reinstall.

# yum install -y numpy-f2py python-numpydoc python34-numpy-f2py netcdf4-python numpy python-Bottleneck python-numdisplay python-numexpr python34-numpy

13. Nova services misbehaving: the /var/lock/nova directory had disappeared, breaking the services.

# mkdir /var/lock/nova

# chmod 777 /var/lock/nova

# chown nova:root /var/lock/nova

Then restart:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
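/var/lock is tmpfs-backed on CentOS 7, so a manually created directory disappears on reboot; a systemd tmpfiles.d entry (hypothetical file shown below) recreates it at every boot. The manual fix above used chmod 777, but with the directory owned by nova, 0755 is normally sufficient.

# /etc/tmpfiles.d/nova-lock.conf (hypothetical)
# type  path            mode  user  group  age
d       /var/lock/nova  0755  nova  root   -

# apply it immediately without rebooting
systemd-tmpfiles --create /etc/tmpfiles.d/nova-lock.conf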

14. Networks must be created as the admin user for the "flat" type to be selectable.

We use vxlan here, not flat external.

15. Routing may be broken after boot/reboot.

Note: adjust the network configuration files, otherwise the routes are wrong after the node comes up.

Keep the gateway setting only on the primary NIC and remove it from all the others, as in the sketch below.
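A sketch of what a secondary NIC's config would then look like (hypothetical ifcfg file; the address follows the 172.16.2.x scheme used for the second port):

# /etc/sysconfig/network-scripts/ifcfg-eth1 (hypothetical)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.2.148
NETMASK=255.255.255.0
# no GATEWAY line here - only the primary NIC (eth0) keeps the default route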

16. Live migration requires host_uuid to be set in libvirtd.conf.

# uuidgen    // generate a UUID

184932ba-df0c-4822-b106-7c704f3acd25

[[email protected] ~]# cat /etc/libvirt/libvirtd.conf

host_uuid="184932ba-df0c-4822-b106-7c704f3acd25"

listen_tls = 0

listen_tcp = 1

tls_port = "16514"

tcp_port = "16509"

auth_tcp = "none"

If host_uuid is not configured, the following error occurs:

Live Migration failure: internal error: Attempt to migrate guest to the same host 78563412-3412-7856-90ab-cddeefaabbcc
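A small helper for giving each node a unique host_uuid (a sketch under assumptions: CentOS 7, libvirtd managed by systemd, and host_uuid appearing at most once in the file):

#!/bin/bash
# Set a unique libvirt host UUID so live migration does not think
# source and destination are the same host.
uuid=$(uuidgen)
if grep -q '^host_uuid' /etc/libvirt/libvirtd.conf; then
    sed -i "s/^host_uuid.*/host_uuid = \"${uuid}\"/" /etc/libvirt/libvirtd.conf
else
    echo "host_uuid = \"${uuid}\"" >> /etc/libvirt/libvirtd.conf
fi
systemctl restart libvirtd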

17. Host migration of VMs after a single-point failure

17.1 Live migration vs. failure migration

Live migration can only be performed when nova-compute is working normally on both nodes.

But if a host suffers a single-point failure, e.g. a power loss, how can its VMs be moved to another host?

The dashboard does not offer this feature yet, but with shared storage there are CLI commands that can move the VMs.

For now this can only be done from the command line; usage:

host-evacuate-live         Live migrate all instances of the specified host to other available hosts.

host-evacuate              Evacuate all instances from failed host.

host-evacuate is the command used to evacuate from a failed host to a healthy one.

The dashboard does not expose this operation yet; it should be added later.

Reference:

http://blog.csdn.net/tantexian/article/details/44960671

evacuate: when the host a VM runs on goes down, evacuate can bring the VM back up on another host; combined with a host-monitoring tool, this interface can be used to build VM HA.

How it works today:

OpenStack VM HA

There is no complete usage guide so far, but judging from what has been implemented, OpenStack itself already provides some HA capabilities:

1. Nova provides the Evacuate command, which rebuilds a VM from a failed compute node on a target node. This relies on shared storage between the source and target nodes.

2. For VM HA, deciding whether a compute node has actually failed must be done very carefully. In OpenStack, every nova-compute service starts a timer when it launches and periodically writes a heartbeat into the database, so the compute node's state can easily be read from the control node.

However, these features alone are not enough for OpenStack to support VM HA properly:

1. Determining a compute node's state purely from the nova-compute service is unreliable: if only the nova-compute service dies, or the network blips, the heartbeat also expires, so whether to trigger HA cannot be decided accurately. Some other mechanism is needed to obtain the node state reliably.

2. OpenStack does not lock VMs, so running Evacuate can cause split-brain (the same disk booting more than one VM).

3. A list of protected VMs is needed to indicate which VMs should be covered. The current Evacuate command indiscriminately rebuilds every VM on the failed host, which is also not very reasonable.
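For reference, host-evacuate is roughly a loop of nova evacuate over every instance on the failed host; a minimal sketch (hypothetical helper script: assumes shared storage, that admin credentials are already sourced, and that parsing the CLI table output is acceptable):

#!/bin/bash
# Evacuate every instance from a failed compute host onto a chosen target host.
# Usage: ./evacuate-host.sh <failed-host> <target-host>
FAILED_HOST=$1
TARGET_HOST=$2

# list instance IDs on the failed host (UUIDs are 36 characters long),
# then rebuild each one on the target via nova evacuate
nova list --host "$FAILED_HOST" --all-tenants --minimal \
  | awk -F'|' '{gsub(/ /, "", $2); if (length($2) == 36) print $2}' \
  | while read -r vm_id; do
      nova evacuate "$vm_id" "$TARGET_HOST" --on-shared-storage
    done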

17.2 An example procedure found online

1- Shutdown compute2

2- From controller node list VMs on compute2

nova list --host compute2 --all-tenants

+--------------------------------------+-----------+--------+-----------------------------+

| ID | Name | Status | Networks |

+--------------------------------------+-----------+--------+-----------------------------+

| 17f1e573-ce5d-4a4a-90cc-1a9cd13d1d3c | vm-01| ACTIVE | NET-111=10.225.33.72 |

+--------------------------------------+-----------+--------+-----------------------------+

3- From controller node move vm-01 from dead compute2 to compute1

isc-ctl-01 14:16:06 ~ # nova evacuate vm-01 compute1 --on-shared-storage

ERROR: No server with a name or ID of 'pts-vm-01' exists

isc-ctl-01 14:16:43 ~ # nova evacuate 17f1e573-ce5d-4a4a-90cc-1a9cd13d1d3c compute1 --on-shared-storage

4- Verify if vm-01 is now running on compute1

isc-ctl-01 14:17:02 ~ # nova list --host compute2 --all-tenants

isc-ctl-01 14:17:08 ~ # nova list --host compute1 --all-tenants

+--------------------------------------+---------------+--------+---------------------------------+

| ID | Name | Status | Networks |

+--------------------------------------+---------------+--------+---------------------------------+

| 79b599e9-85cd-45a0-a7ef-e3e2551f9077 | vm-00| ACTIVE | NET-112=10.225.33.99 |

| 6db67ab6-04ac-4d8c-bc26-c43af1ab6d4d | vm-02| ACTIVE | NET-111=10.225.33.71 |

| 17f1e573-ce5d-4a4a-90cc-1a9cd13d1d3c | vm-03| ACTIVE | NET-111=10.225.33.72 |

+--------------------------------------+---------------+--------+---------------------------------+

17.3 Hands-on verification of manual host-evacuate

(1) nova host-list

[[email protected] ~(keystone_admin)]# nova host-list

+-------------+-------------+----------+

| host_name   | service    | zone     |

+-------------+-------------+----------+

| controller1 | cert        | internal |

| controller1 | consoleauth | internal |

| controller1 | conductor   | internal |

| controller1 | scheduler   | internal |

| compute1    | compute     | nova    |

| controller1 | compute     | nova    |

+-------------+-------------+----------+

(2) List the VMs on compute1

[[email protected] ~(keystone_admin)]# nova list --host compute1

+--------------------------------------+------+--------+------------+-------------+------------------+

| ID                                   | Name |Status | Task State | Power State | Networks         |

+--------------------------------------+------+--------+------------+-------------+------------------+

| 5bdfe1b2-1345-4f70-bdd4-de1bb08304ac |cs   | ACTIVE | -          | Running     | int=192.168.0.20 |

+--------------------------------------+------+--------+------------+-------------+------------------+

(3) Stop nova-compute on compute1

[[email protected] nova]# systemctl  stop openstack-nova-compute.service

[[email protected] nova]# ps -A |grep nova

[[email protected] nova]#

(4)      Run nova service-list to check that nova-compute on compute1 is down

[[email protected] ~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id |Binary           | Host        | Zone     | Status | State | Updated_at                | Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1  | nova-cert        | controller1 | internal | enabled |up    | 2016-06-01T02:35:46.000000 |-               |

| 2  | nova-consoleauth | controller1 | internal |enabled | up    |2016-06-01T02:35:45.000000 | -              |

| 3  | nova-conductor   | controller1 | internal | enabled | up    | 2016-06-01T02:35:45.000000 | -               |

| 4  | nova-scheduler   | controller1 | internal | enabled | up    | 2016-06-01T02:35:46.000000 | -               |

| 5  | nova-compute     | compute1    | nova    | enabled | down  |2016-06-01T02:34:29.000000 | -              |

| 6  | nova-compute     | controller1 | nova     | enabled | up    | 2016-06-01T02:35:52.000000 | -               |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

(5)      Run nova evacuate to recover the VM from the failed host

[[email protected] ~(keystone_admin)]# nova evacuate 5bdfe1b2-1345-4f70-bdd4-de1bb08304ac controller1 --on-shared-storage

+-----------+-------+

| Property | Value |

+-----------+-------+

| adminPass | -     |

+-----------+-------+

[[email protected] ~(keystone_admin)]# nova list --host controller1

+--------------------------------------+------+--------+------------+-------------+------------------+

| ID                                   | Name |Status | Task State | Power State | Networks         |

+--------------------------------------+------+--------+------------+-------------+------------------+

| 5bdfe1b2-1345-4f70-bdd4-de1bb08304ac |cs   | ACTIVE | -          | Running     | int=192.168.0.20 |

+--------------------------------------+------+--------+------------+-------------+------------------+

[[email protected] ~(keystone_admin)]# nova list --host compute1

+----+------+--------+------------+-------------+----------+

| ID | Name | Status | Task State | PowerState | Networks |

+----+------+--------+------------+-------------+----------+

+----+------+--------+------------+-------------+----------+

As shown above, the VM has been moved from the compute1 node to the controller1 node.
