OpenStack Compute Node Installation and Configuration Guide, Part 2

1. Environment Configuration

Hosts configuration
  Edit the /etc/hosts file and add entries for wtcontroller, wtcompute1 and wtcompute2:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.10.100 wtcontroller
172.16.10.101 wtcompute1
172.16.10.102 wtcompute2

  Set the local hostname (using compute node wtcompute1 as an example):

echo "wtcompute1" > /etc/hostname
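Once the hosts file and hostname are in place on every node, it is worth confirming that all three names actually resolve. The sketch below is illustrative: `check_cluster_hosts` is a helper name introduced here, the hostnames are the ones used in this guide, and the function takes the file path as an argument so it can be tried against any hosts-style file:

```shell
# check_cluster_hosts FILE: report whether each of the three cluster
# hostnames from this guide appears in FILE (an /etc/hosts-style file).
# Returns non-zero if any name is missing.
check_cluster_hosts() {
    f=$1
    rc=0
    for h in wtcontroller wtcompute1 wtcompute2; do
        if grep -qw "$h" "$f"; then
            echo "$h: ok"
        else
            echo "$h: missing"
            rc=1
        fi
    done
    return $rc
}

# On a configured node:
#   check_cluster_hosts /etc/hosts
```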

1.1 Update the yum Repository

This example uses the NetEase (163) yum repository:

CentOS7-Base-163.repo
Copy the file above into the /etc/yum.repos.d directory.
Back up the CentOS-Base.repo file in that directory.
Rename CentOS7-Base-163.repo to CentOS-Base.repo.
Run the following commands:
yum clean all         # clear the cache
yum makecache         # rebuild the metadata cache
yum list              # list all installed and installable packages

  Stop the following service first; otherwise yum clean will hang (a known system bug):

systemctl stop initial-setup-text 
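The backup-and-rename steps above can also be scripted. The sketch below wraps them in a `swap_repo` helper (a hypothetical name introduced here for illustration); on a real node you would call it with /etc/yum.repos.d and the downloaded 163 repo file, then run `yum clean all && yum makecache` as shown earlier:

```shell
# swap_repo REPODIR NEWREPO: back up CentOS-Base.repo inside REPODIR and
# replace it with NEWREPO (e.g. the downloaded CentOS7-Base-163.repo).
swap_repo() {
    dir=$1
    new=$2
    cp "$dir/CentOS-Base.repo" "$dir/CentOS-Base.repo.bak" &&
    cp "$new" "$dir/CentOS-Base.repo"
}

# Typical use (as root):
#   swap_repo /etc/yum.repos.d ./CentOS7-Base-163.repo
#   yum clean all && yum makecache
```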

1.2 Disable the Firewall Service

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

1.3 Disable the SELinux Security Service

setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux

1.4 Install the NTP Time Synchronization Service

yum install chrony -y
vim /etc/chrony.conf
--Referring to your network configuration, make sure the following lines are enabled:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
Then modify the configuration below to allow nodes in the following subnet to synchronize time with the controller node:
allow 172.16.10.0/24
Restart the service and enable it to start at boot:
systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service

  Set the time zone:

timedatectl set-timezone Asia/Shanghai
chronyc sources

1.5 Install the OpenStack Repository and Update yum

yum install centos-release-openstack-rocky -y
yum clean all
yum makecache

1.6 Install the Client Software

yum install python-openstackclient openstack-selinux -y

2. Installation Procedure

2.1 Install nova

yum install openstack-nova-compute python-openstackclient openstack-utils -y
Quickly modify the configuration file (/etc/nova/nova.conf):
openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 172.16.10.101
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:[email protected]@wtcontroller
openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url http://wtcontroller:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers wtcontroller:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password [email protected]
openstack-config --set  /etc/nova/nova.conf vnc enabled True
openstack-config --set  /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc novncproxy_base_url  http://wtcontroller:6080/vnc_auto.html
openstack-config --set  /etc/nova/nova.conf glance api_servers http://wtcontroller:9292
openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set  /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement project_name service
openstack-config --set  /etc/nova/nova.conf placement auth_type password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement auth_url http://wtcontroller:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username placement
openstack-config --set  /etc/nova/nova.conf placement password [email protected]

  Verify that the changes took effect:

egrep -v "^#|^$" /etc/nova/nova.conf

  The configuration file should then look as follows (using node IP 172.16.10.101 as an example):

[DEFAULT]
my_ip = 172.16.10.101
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]@wtcontroller
instances_path=$state_path/instances
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://wtcontroller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://wtcontroller:5000/v3
memcached_servers = wtcontroller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = [email protected]
[libvirt]
inject_password = true
inject_partition = -1
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://wtcontroller:9696
auth_url = http://wtcontroller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = [email protected]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://wtcontroller:5000/v3
username = placement
password = [email protected]
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://wtcontroller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

  Configure hardware acceleration for virtual machines

  # First, determine whether your compute node supports hardware acceleration for virtual machines.

egrep -c '(vmx|svm)' /proc/cpuinfo

  # If this returns 0, the compute node does not support hardware acceleration and libvirt must be configured to manage virtual machines with QEMU; use the following commands:

openstack-config --set  /etc/nova/nova.conf libvirt virt_type  qemu
egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'

  # If it returns any other value, the compute node supports hardware acceleration and no extra configuration is needed; use the following command:

openstack-config --set  /etc/nova/nova.conf libvirt virt_type  kvm

  If creating instances still fails later even though the compute node supports hardware acceleration, further confirm that hardware acceleration is actually enabled:

dmesg | grep kvm
If the output shows [    3.692481] kvm: disabled by bios,
enable the virtualization option in the BIOS.
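The whole decision above can be collapsed into one small helper. `choose_virt_type` is an illustrative name introduced here; it reads a cpuinfo-style file so the logic can be exercised offline, and on a real node you would feed it /proc/cpuinfo and pass the result to openstack-config:

```shell
# choose_virt_type CPUINFO: print "kvm" when the flags in CPUINFO advertise
# Intel VT-x (vmx) or AMD-V (svm), otherwise print "qemu".
choose_virt_type() {
    if [ "$(grep -Ec '(vmx|svm)' "$1")" -eq 0 ]; then
        echo qemu
    else
        echo kvm
    fi
}

# On a real compute node:
#   openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(choose_virt_type /proc/cpuinfo)"
```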

  Start the nova-related services and enable them at boot
  # Two services need to be started

systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl list-unit-files |grep libvirtd.service
systemctl list-unit-files |grep openstack-nova-compute.service

  Log in to the controller node for configuration
  # Run the following commands on the controller node:

. admin-openrc 

  # Check that the database now contains the new compute node:

openstack compute service list --service nova-compute

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

# Set up the periodic task that automatically registers newly created nodes (already added to the configuration file):

[scheduler]
discover_hosts_in_cells_interval = 300
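If you prefer to apply this setting the same way as the rest of this guide, the edit can be made with openstack-config on the controller. The tiny `ini_has` helper below (a name made up here for illustration) gives a scriptable way to confirm that a `key = value` pair landed under the right section:

```shell
# On the controller (sketch):
#   openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300

# ini_has FILE SECTION KEY VALUE: succeed if FILE contains the exact line
# "KEY = VALUE" under "[SECTION]" (the format openstack-config writes).
ini_has() {
    awk -v sec="[$2]" -v kv="$3 = $4" '
        $0 == sec { insec = 1; next }
        /^\[/     { insec = 0 }
        insec && $0 == kv { found = 1 }
        END { exit !found }
    ' "$1"
}

# e.g. ini_has /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
```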

  Verify on the controller node

1) Source the administrator environment script
. admin-openrc
2) List the installed nova service components
# Verify that every process registered and started successfully
openstack compute service list
3) List the API endpoints in the Identity service to verify connectivity
openstack catalog list
4) List existing images in the Image service to check its connectivity
openstack image list
5) Check the status of the nova components
# Verify that the placement API and cell services are working correctly
nova-status upgrade check

2.2 Install Neutron

yum install openstack-neutron-openvswitch ebtables ipset -y    # on the compute node
Quickly configure /etc/neutron/neutron.conf:
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url  rabbit://openstack:[email protected]@wtcontroller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://wtcontroller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://wtcontroller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers wtcontroller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password [email protected]
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

  View the effective configuration:

egrep -v '(^$|^#)' /etc/neutron/neutron.conf
Quickly configure /etc/neutron/plugins/ml2/openvswitch_agent.ini:
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types  vxlan
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population  True
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip  172.16.20.81
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge  br-tun
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True 

  View the effective configuration:

egrep -v "^#|^$" /etc/neutron/plugins/ml2/openvswitch_agent.ini

  Quickly configure /etc/nova/nova.conf:

openstack-config --set /etc/nova/nova.conf neutron url http://wtcontroller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://wtcontroller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password [email protected] 

  # View the effective configuration:

egrep -v '(^$|^#)' /etc/nova/nova.conf

  Restart the compute service:

systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service

  Start the neutron networking component and enable it at boot
# One service needs to be started: the Open vSwitch bridge agent

systemctl restart neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service
systemctl enable neutron-openvswitch-agent.service
systemctl list-unit-files |grep neutron |grep enabled

Verify on the controller node that the neutron service was installed successfully
Obtain admin credentials:

source admin-openrc

List the loaded network extensions:

openstack extension list --network

Or use an alternative command that shows a condensed view:

neutron ext-list

List the network agents:

openstack network agent list

# Normally the controller node runs 3 agents and each compute node runs 1. If not, check the compute node configuration: NIC names, IP addresses, ports, passwords, and so on.
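That head-count can be scripted: `openstack network agent list` accepts the `-f csv` output formatter, and the small helper below (an illustrative name introduced here) counts the data rows of such CSV output so a deployment script can assert the expected total:

```shell
# count_csv_rows FILE: print the number of data rows in FILE, a CSV file
# as produced by `openstack ... list -f csv` (the first line is the header).
count_csv_rows() {
    tail -n +2 "$1" | grep -c .
}

# Example (on the controller, expecting 3 controller agents + 1 per compute node):
#   openstack network agent list -f csv > /tmp/agents.csv
#   count_csv_rows /tmp/agents.csv
```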

Original article: https://blog.51cto.com/1969518/2485161

Posted: 2024-08-21 23:17:58
