OpenStack Component Deployment — Nova: Install and Configure a Compute Node

Contents

  • Contents
  • Previous Articles
  • Prerequisites
  • Install and configure a compute node
    • Install the packages
    • Edit the /etc/nova/nova.conf file
  • Finalize installation

Previous Articles

OpenStack Component Deployment — Overview and Environment Preparation

OpenStack Component Deployment — Environment of Controller Node

OpenStack Component Deployment — Keystone Features and the Authentication Flow

OpenStack Component Deployment — Keystone Install & Create service entity and API endpoints

OpenStack Component Deployment — Keystone (domains, projects, users, and roles)

OpenStack Component Internals — Keystone Authentication

OpenStack Component Deployment — Glance Install

OpenStack Component Internals — Glance Architecture (V1/V2)

OpenStack Component Deployment — Nova overview

OpenStack Component Deployment — Nova: Install and Configure the Controller Node

Prerequisites

Starting with this post, the OpenStack component deployment series enters the multi-node stage. Let us first revisit the network topology we drew up at the beginning.

IP Address Config:

A multi-node deployment first requires that the nodes can communicate with each other and resolve each other's hostnames. It is also recommended to revisit OpenStack Component Deployment — Overview and Environment Preparation and run the environment initialization steps on each node.

Step 1. Disable the firewall

systemctl mask iptables.service
systemctl mask ip6tables.service
systemctl mask ebtables.service
systemctl mask firewalld.service 

Step 2. Set the hostname

hostnamectl set-hostname compute1.jmilk.com

Step 3. Disable SELinux
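The original post gives no commands for this step. A common way to disable SELinux on CentOS 7 (the platform this series targets) is the following sketch; the sed pattern assumes the stock /etc/selinux/config shipped with CentOS:

```shell
# Switch SELinux to permissive mode for the current session
setenforce 0

# Persist the change across reboots by editing /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Verify the current mode
getenforce
```

Note that openstack-selinux (installed in Step 5) manages the SELinux policies OpenStack needs, so fully disabling SELinux is a simplification rather than a requirement.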

Step 4. Configure a static IP according to the IP Address Config table

nmcli connection modify eth0 ipv4.addresses "192.168.1.10/24" ipv4.gateway "192.168.1.1" ipv4.dns "192.168.1.5" ipv4.method manual

Note: when the node needs Internet access to download the RDO packages, point DNS at an external DNS server. Example:

vim /etc/resolv.conf

search jmilk.com
nameserver 202.106.195.68
nameserver 202.106.46.151

Step 5. Install the OpenStack preparation packages

#1. Install the yum-plugin-priorities package to keep higher-priority packages from being overridden by lower-priority ones
yum install yum-plugin-priorities

#2. Install the EPEL repository, a high-quality add-on package source for RHEL-family systems (the version number in the URL may change)
yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-7.noarch.rpm

#3. Install the extras repository and the RDO repository
yum install centos-release-openstack-mitaka
yum install https://rdoproject.org/repos/rdo-release.rpm

#4. Update the system
yum update -y

#5. Reboot the system
reboot

#6. Install openstack-selinux to manage the SELinux policies for OpenStack automatically
yum install openstack-selinux

#7. Install the OpenStack client
yum install python-openstackclient -y

Step 6. Configure a DNS service, or edit the hosts file, to add name resolution for every node in the network topology.
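If no DNS service is used, the hosts file on every node can carry the mappings instead. The compute node IP below comes from this post; the controller IP is an assumption based on the 192.168.1.0/24 management network used in this series and should be replaced with the real addresses from the IP Address Config table:

```shell
# Append name resolution for all topology nodes to /etc/hosts
# (controller IP is assumed; adjust to your IP Address Config table)
cat >> /etc/hosts << 'EOF'
192.168.1.5    controller.jmilk.com    controller
192.168.1.10   compute1.jmilk.com      compute1
EOF

# Verify resolution works
ping -c 1 controller.jmilk.com
```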

Install and configure a compute node

From the official documentation: This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.


Note:This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.


Install the packages

yum install openstack-nova-compute

Edit the /etc/nova/nova.conf file

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

vim /etc/nova/nova.conf

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller.jmilk.com
rabbit_userid = openstack
rabbit_password = fanguiju

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller.jmilk.com:5000
auth_url = http://controller.jmilk.com:35357
memcached_servers = controller.jmilk.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = fanguiju

Note: Comment out or remove any other options in the [keystone_authtoken] section.


In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

Note:Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

Example:

[DEFAULT]
my_ip = 192.168.1.10

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Note:By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.


In the [vnc] section, enable and configure remote console access:

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller.jmilk.com:6080/vnc_auto.html

The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.


Note:If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.


In the [glance] section, configure the location of the Image service API:

[glance]
api_servers = http://controller.jmilk.com:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
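All of the nova.conf edits above can also be applied non-interactively, which is convenient when preparing additional compute nodes. This sketch uses crudini (available from EPEL) and repeats a representative subset of the values from this post; the passwords and hostnames are the ones assumed throughout this series:

```shell
yum install -y crudini

# Apply the same settings shown above without opening an editor
crudini --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
crudini --set /etc/nova/nova.conf DEFAULT my_ip 192.168.1.10
crudini --set /etc/nova/nova.conf DEFAULT use_neutron True
crudini --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller.jmilk.com
crudini --set /etc/nova/nova.conf vnc enabled True
crudini --set /etc/nova/nova.conf glance api_servers http://controller.jmilk.com:9292
crudini --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
```

Remember to change my_ip for each additional compute node, since every node requires a unique IP address.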

Finalize installation

Determine whether your compute node supports hardware acceleration for virtual machines:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, your compute node supports hardware acceleration which typically requires no additional configuration.


If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.


Example:

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
  • Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:

vim /etc/nova/nova.conf

[libvirt]
virt_type = qemu

Start the Compute service including its dependencies and configure them to start automatically when the system boots:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
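To confirm the services actually came up, and that the new compute node registered with the cluster, the official guide verifies from the controller node. The openstack command below assumes admin credentials are loaded (for example via the admin-openrc file set up in the earlier Keystone posts of this series):

```shell
# On the compute node: check both services are active
systemctl status libvirtd.service openstack-nova-compute.service

# On the controller node, with admin credentials sourced:
# the list should now include a nova-compute service with state "up"
openstack compute service list
```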

Posted: 2024-10-27 08:14:40
