OpenStack Installation and Configuration: Compute Node

    What mainly needs configuring on a compute node are the compute-side nova and neutron services; the controller node depends on the compute nodes' cooperation to carry out resource scheduling and provisioning. There is comparatively little to configure here, but in a real production environment the number of compute nodes can be enormous, at which point you would lean on an automation tool such as Ansible or Puppet. Without further ado, let's get to the configuration.
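
As a very rough sketch of that kind of automation, the Ansible ad-hoc commands below would push this article's package installs to every host in a hypothetical compute inventory group; the group name and inventory path are illustrative assumptions, not part of this deployment:

ansible -i /etc/ansible/hosts compute -m yum -a "name=chrony,python-openstackclient,openstack-nova-compute state=present"

ansible -i /etc/ansible/hosts compute -m yum -a "name=openstack-neutron-linuxbridge,ebtables,ipset state=present"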


Compute node base configuration

[root@compute1 ~]# lscpu

Architecture:          x86_64

CPU op-mode(s):        32-bit, 64-bit

Byte Order:            Little Endian

CPU(s):                8

On-line CPU(s) list:   0-7

Thread(s) per core:    1

Core(s) per socket:    1

Socket(s):             8

NUMA node(s):          1

Vendor ID:             GenuineIntel

CPU family:            6

Model:                 44

Model name:            Westmere E56xx/L56xx/X56xx (Nehalem-C)

Stepping:              1

CPU MHz:               2400.084

BogoMIPS:              4800.16

Virtualization:        VT-x

Hypervisor vendor:     KVM

Virtualization type:   full

L1d cache:             32K

L1i cache:             32K

L2 cache:              4096K

NUMA node0 CPU(s):     0-7

[root@compute1 ~]# free -h

              total        used        free      shared  buff/cache   available

Mem:            15G        142M         15G        8.3M        172M         15G

Swap:            0B          0B          0B

[root@compute1 ~]# lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT

sr0              11:0    1  1024M  0 rom

vda             252:0    0   400G  0 disk

├─vda1          252:1    0   500M  0 part /boot

└─vda2          252:2    0 399.5G  0 part

  ├─centos-root 253:0    0    50G  0 lvm  /

  ├─centos-swap 253:1    0   3.9G  0 lvm

  └─centos-data 253:2    0 345.6G  0 lvm  /data

[root@compute1 ~]# ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet 192.168.10.31  netmask 255.255.255.0  broadcast 192.168.10.255

inet6 fe80::5054:ff:fe18:bb1b  prefixlen 64  scopeid 0x20<link>

ether 52:54:00:18:bb:1b  txqueuelen 1000  (Ethernet)

RX packets 16842  bytes 1460696 (1.3 MiB)

RX errors 0  dropped 1416  overruns 0  frame 0

TX packets 747  bytes 199340 (194.6 KiB)

TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet 10.0.0.31  netmask 255.255.0.0  broadcast 10.0.255.255

inet6 fe80::5054:ff:fe28:e0a7  prefixlen 64  scopeid 0x20<link>

ether 52:54:00:28:e0:a7  txqueuelen 1000  (Ethernet)

RX packets 16213  bytes 1360633 (1.2 MiB)

RX errors 0  dropped 1402  overruns 0  frame 0

TX packets 23  bytes 1562 (1.5 KiB)

TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

inet 111.40.215.9  netmask 255.255.255.240  broadcast 111.40.215.15

inet6 fe80::5054:ff:fe28:e07a  prefixlen 64  scopeid 0x20<link>

ether 52:54:00:28:e0:7a  txqueuelen 1000  (Ethernet)

RX packets 40  bytes 2895 (2.8 KiB)

RX errors 0  dropped 0  overruns 0  frame 0

TX packets 24  bytes 1900 (1.8 KiB)

TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

inet 127.0.0.1  netmask 255.0.0.0

inet6 ::1  prefixlen 128  scopeid 0x10<host>

loop  txqueuelen 0  (Local Loopback)

RX packets 841  bytes 44167 (43.1 KiB)

RX errors 0  dropped 0  overruns 0  frame 0

TX packets 841  bytes 44167 (43.1 KiB)

TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@compute1 ~]# getenforce

Disabled

[root@compute1 ~]# iptables -vnL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)

pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)

pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)

pkts bytes target     prot opt in     out     source               destination

[root@compute1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.10 controller

192.168.10.20 block

192.168.10.31 compute1

192.168.10.32 compute2

[root@compute1 ~]#

Configure the time synchronization service

[root@compute1 ~]# yum install -y chrony

[root@compute1 ~]# vim /etc/chrony.conf

[root@compute1 ~]# grep -v ^# /etc/chrony.conf | tr -s [[:space:]]

server controller iburst

stratumweight 0

driftfile /var/lib/chrony/drift

rtcsync

makestep 10 3

bindcmdaddress 127.0.0.1

bindcmdaddress ::1

keyfile /etc/chrony.keys

commandkey 1

generatecommandkey

noclientlog

logchange 0.5

logdir /var/log/chrony

[root@compute1 ~]# systemctl enable chronyd.service

[root@compute1 ~]# systemctl start chronyd.service

[root@compute1 ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller                    3   6    17    52    -15us[ -126us] +/-  138ms

[root@compute1 ~]#
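
If the offset ever looks suspicious, chronyc tracking gives more detail than chronyc sources; with a healthy sync, the Reference ID line should name the controller and the System time line should report only a small offset:

[root@compute1 ~]# chronyc tracking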

Install the OpenStack client

[root@compute1 ~]# yum install -y python-openstackclient

Install and configure the Nova compute service

[root@compute1 ~]# yum install -y openstack-nova-compute

[root@compute1 ~]# cp /etc/nova/nova.conf{,.bak}

[root@compute1 ~]# vim /etc/nova/nova.conf

[root@compute1 ~]# grep -v ^# /etc/nova/nova.conf | tr -s [[:space:]]

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 192.168.10.31

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers = http://controller:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[libvirt]

[matchmaker_redis]

[metrics]

[neutron]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

[oslo_middleware]

[oslo_policy]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[workarounds]

[xenserver]

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo  # check whether the CPU supports hardware acceleration for virtual machines

8

[root@compute1 ~]#

If the check returns 0, see the section on enabling nested virtualization for KVM guests in the OpenStack environment preparation article.
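
For reference, a minimal sketch of that check-and-enable procedure, run on the physical KVM host (not inside this VM), assuming an Intel CPU and no guests currently running:

cat /sys/module/kvm_intel/parameters/nested   # N or 0 means nested virtualization is off

modprobe -r kvm_intel                         # unload; only possible with no running guests

modprobe kvm_intel nested=1                   # reload with nesting enabled

echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf   # persist across reboots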

[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service  # these services open no listening ports on the compute node, so check them via service status

[root@compute1 ~]# systemctl status libvirtd.service openstack-nova-compute.service

● libvirtd.service - Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)

Active: active (running) since Sun 2017-07-16 19:10:26 CST; 12min ago

Docs: man:libvirtd(8)

http://libvirt.org

Main PID: 1002 (libvirtd)

CGroup: /system.slice/libvirtd.service

└─1002 /usr/sbin/libvirtd

Jul 16 19:10:26 compute1 systemd[1]: Starting Virtualization daemon...

Jul 16 19:10:26 compute1 systemd[1]: Started Virtualization daemon.

Jul 16 19:21:06 compute1 systemd[1]: Started Virtualization daemon.

● openstack-nova-compute.service - OpenStack Nova Compute Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-07-16 19:21:11 CST; 1min 21s ago

Main PID: 1269 (nova-compute)

CGroup: /system.slice/openstack-nova-compute.service

└─1269 /usr/bin/python2 /usr/bin/nova-compute

Jul 16 19:21:06 compute1 systemd[1]: Starting OpenStack Nova Compute Server...

Jul 16 19:21:11 compute1 nova-compute[1269]: /usr/lib/python2.7/site-packages/pkg_resources/__init__.py:187: RuntimeWarning: You have...

Jul 16 19:21:11 compute1 nova-compute[1269]: stacklevel=1,

Jul 16 19:21:11 compute1 systemd[1]: Started OpenStack Nova Compute Server.

Hint: Some lines were ellipsized, use -l to show in full.

[root@compute1 ~]#

Go to the controller node to verify the compute service configuration.
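
For example, with the admin credentials sourced on the controller, a listing along these lines should now show compute1 with status enabled and state up (admin-openrc is the credentials file created in the controller article):

[root@controller ~]# source admin-openrc

[root@controller ~]# openstack compute service list --service nova-compute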

Install and configure the Neutron network agent

Proceed with the following steps only after the network configuration on the controller node is complete.

[root@compute1 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

[root@compute1 ~]# cp /etc/neutron/neutron.conf{,.bak}

[root@compute1 ~]# vim /etc/neutron/neutron.conf

[root@compute1 ~]# grep -v ^# /etc/neutron/neutron.conf | tr -s [[:space:]]

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[agent]

[cors]

[cors.subdomain]

[database]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

[oslo_policy]

[qos]

[quotas]

[ssl]

[root@compute1 ~]#
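
As an aside, when many compute nodes need these same edits, hand-editing with vim does not scale; the identical changes can be scripted with openstack-config, a crudini wrapper shipped in the openstack-utils package (an extra package, not installed above). A sketch for two of the settings:

yum install -y openstack-utils

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS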

Configure the Linux bridge agent

[root@compute1 ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}

[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@compute1 ~]# grep -v ^# /etc/neutron/plugins/ml2/linuxbridge_agent.ini | tr -s [[:space:]]

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:eth1

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = True

local_ip = 192.168.10.31

l2_population = True

[root@compute1 ~]#
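
Before continuing, it is worth confirming that the interface named in physical_interface_mappings actually exists and is up on this node (eth1 here, carrying 10.0.0.31 in the base configuration above):

[root@compute1 ~]# ip -o link show eth1

[root@compute1 ~]# ip -o addr show eth1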

Edit the nova configuration file again and append the following network settings under the [neutron] section:

[root@compute1 ~]# vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

Restart the compute service, then enable and start the Linux bridge agent

[root@compute1 ~]# systemctl restart openstack-nova-compute.service

[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service

Go to the controller node to verify the network service configuration.
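
On the controller, the agent list should now include a Linux bridge agent on compute1 with alive shown as :-) (again assuming the admin-openrc credentials file from the controller article):

[root@controller ~]# source admin-openrc

[root@controller ~]# neutron agent-list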
