OpenStack Mitaka Deployment

Posted in a rush before leaving work: this is a first draft; I will tidy it up later.

This article was put together step by step from a deployment in a real environment. If anything is unclear, you can reach me by email: [email protected]

I deployed from a custom yum repository, which makes installation very fast; if you need it, email me and I can send you the latest Mitaka packages.

Operating system: CentOS Linux release 7.2.1511 (Core)

Kernel: 3.10.0-327.el7.x86_64

Result screenshot: OpenStack Mitaka deployment (image not included).

Conventions:

1. When editing a configuration file, never append a comment after a setting on the same line; put comments on the line above or below instead.

2. Always append new settings right after the section header; do not modify the existing commented-out template lines.

PART 1: Environment preparation

I: Per-node basic setup

1. Give every machine a static IP, add hosts-file name resolution on every machine, set each machine's hostname, and disable firewalld and SELinux (a command sketch follows the hosts file below)

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.209.115 controller01

172.16.209.117 compute01

172.16.209.119 network02
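A sketch of the remaining per-node preparation (the hostname shown is for the controller; run the matching name on each node):

hostnamectl set-hostname controller01

systemctl stop firewalld && systemctl disable firewalld

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config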

2. Configure the yum repository on every machine

[mitaka]

name=mitaka repo

baseurl=http://172.16.209.19/mitaka/mitaka-rpms/

enabled=1

gpgcheck=0

3. On every machine:

yum makecache && yum install vim -y && yum update -y

4. Deploy the NTP service

All nodes:

yum install chrony ntpdate -y

Controller node:

Edit the configuration:

/etc/chrony.conf

server NTP_SERVER iburst

allow <management-network-subnet>/24

Synchronize the time:

ntpdate 0.centos.pool.ntp.org

Start the service:

systemctl enable chronyd.service

systemctl start chronyd.service

All other nodes:

Edit the configuration:

/etc/chrony.conf

server <controller-ip> iburst

Synchronize the time:

ntpdate <controller-ip>

Start the service:

systemctl enable chronyd.service

systemctl start chronyd.service

If the time zone is not Asia/Shanghai, change it:

# timedatectl set-local-rtc 1   # keep the hardware clock on local time; 0 sets it to UTC

# timedatectl set-timezone Asia/Shanghai   # set the system time zone to Shanghai

In fact, setting aside per-distribution differences and working at a lower level, changing the time zone is simpler than it looks:

# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Verify:

Run on every machine:

chronyc sources

A * in the S column means synchronization succeeded. It can take a few minutes; do not continue until the clocks are in sync.
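For reference, a healthy node prints something like the following (the server name, reach, and offsets below are purely illustrative):

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller01                  3   6   377    41   +20us[ +35us] +/-  45ms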

II: Obtain the packages

If you are using the custom repository, the CentOS and Red Hat steps below can be skipped.

CentOS:

yum install yum-plugin-priorities -y

yum install centos-release-openstack-mitaka -y

Red Hat:

yum install yum-plugin-priorities -y

yum install https://rdoproject.org/repos/rdo-release.rpm -y

On Red Hat systems, remove the EPEL repository first.

yum upgrade

yum install python-openstackclient -y

yum install openstack-selinux -y

III: Deploy the MariaDB database

Controller node:

yum install mariadb mariadb-server python2-PyMySQL -y

Edit:

/etc/my.cnf.d/openstack.cnf

[mysqld]

bind-address = <controller-management-ip>

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

Start the service:

systemctl enable mariadb.service

systemctl start mariadb.service

mysql_secure_installation

IV: Deploy MongoDB for the Telemetry service

Controller node:

yum install mongodb-server mongodb -y

Edit /etc/mongod.conf:

bind_ip = <controller-management-ip>

smallfiles = true

Start the service:

systemctl enable mongod.service

systemctl start mongod.service

V: Deploy the RabbitMQ message queue

Controller node:

yum install rabbitmq-server -y

Start the service:

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

Create a RabbitMQ user and password:

rabbitmqctl add_user openstack che001

Grant the new openstack user full permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
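If you want to double-check, rabbitmqctl can list the user and its permissions (standard subcommands, not part of the original steps):

rabbitmqctl list_users

rabbitmqctl list_permissions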

VI: Deploy the memcached cache (caches tokens for the keystone service)

Controller node:

yum install memcached python-memcached -y

Start the service:

systemctl enable memcached.service

systemctl start memcached.service

PART 2: Deploy the Keystone identity service

I: Install and configure the service

1. Create the database and user

mysql -u root -p

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'che001';

flush privileges;
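As a quick sanity check, you can log in with the new credentials (a hypothetical verification, not part of the original steps):

mysql -h localhost -u keystone -pche001 -e 'SHOW DATABASES;'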

2.yum install openstack-keystone httpd mod_wsgi -y

3. Edit /etc/keystone/keystone.conf

[DEFAULT]

admin_token = che001

[database]

connection = mysql+pymysql://keystone:[email protected]/keystone

[token]

provider = fernet

4. Sync the changes into the database

su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the fernet keys

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
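fernet_setup populates the key repository; you can sanity-check it (the path below is keystone's default):

ls /etc/keystone/fernet-keys/

The directory should contain keys named 0 and 1, owned by keystone:keystone.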

6. Configure the Apache service

Edit /etc/httpd/conf/httpd.conf:

ServerName controller01

Edit /etc/httpd/conf.d/wsgi-keystone.conf:

Add the following configuration:

Listen 5000

Listen 35357

<VirtualHost *:5000>

WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-public

WSGIScriptAlias / /usr/bin/keystone-wsgi-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

Require all granted

</Directory>

</VirtualHost>

<VirtualHost *:35357>

WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-admin

WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

Require all granted

</Directory>

</VirtualHost>

7. Start the service:

systemctl enable httpd.service

systemctl start httpd.service

II: Create the service entity and API endpoints

1. Set the administrator environment variables; they grant the permissions used by the creation steps below

export OS_TOKEN=che001

export OS_URL=http://controller01:35357/v3

export OS_IDENTITY_API_VERSION=3

2. With the permissions from the previous step, create the identity service entity (the service catalog)

openstack service create \

--name keystone --description "OpenStack Identity" identity

3. For the service entity just created, create its three API endpoints

openstack endpoint create --region RegionOne \

identity public http://controller01:5000/v3

openstack endpoint create --region RegionOne \

identity internal http://controller01:5000/v3

openstack endpoint create --region RegionOne \

identity admin http://controller01:35357/v3

III: Create a domain, tenant, user, and role, and tie the four together

Create a common domain:

openstack domain create --description "Default Domain" default

Administrator: admin

openstack project create --domain default \

--description "Admin Project" admin

openstack user create --domain default \

--password-prompt admin

openstack role create admin

openstack role add --project admin --user admin admin

Regular user: demo

openstack project create --domain default \

--description "Demo Project" demo

openstack user create --domain default \

--password-prompt demo

openstack role create user

openstack role add --project demo --user demo user

Create the shared service tenant for the services that follow

Explanation: every new service deployed later needs four operations in keystone: 1. create a tenant, 2. create a user, 3. create a role, 4. associate them. All later services share the single service tenant and reuse the admin role, so in practice only steps 2 and 4 remain per service (a generic sketch follows the command below).

openstack project create --domain default \

--description "Service Project" service
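A minimal sketch of that recurring pattern, assuming admin-openrc has been sourced; SERVICE and SERVICE_PASS are placeholders you substitute for each service:

SERVICE=glance        # e.g. glance, nova, neutron

SERVICE_PASS=che001   # password chosen for the service user

openstack user create --domain default --password "$SERVICE_PASS" "$SERVICE"

openstack role add --project service --user "$SERVICE" admin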

IV: Verification:

Edit /etc/keystone/keystone-paste.ini:

Remove admin_token_auth from the three sections [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3].

unset OS_TOKEN OS_URL

openstack --os-auth-url http://controller01:35357/v3 \

--os-project-domain-name default --os-user-domain-name default \

--os-project-name admin --os-username admin token issue

Password:

+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

| Field      | Value                                                                                                                                                                                   |

+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

| expires    | 2016-08-17T08:29:18.528637Z                                                                                                                                                             |

| id         | gAAAAABXtBJO-mItMcPR15TSELJVB2iwelryjAGGpaCaWTW3YuEnPpUeg799klo0DaTfhFBq69AiFB2CbFF4CE6qgIKnTauOXhkUkoQBL6iwJkpmwneMo5csTBRLAieomo4z2vvvoXfuxg2FhPUTDEbw-DPgponQO-9FY1IAEJv_QV1qRaCRAY0 |

| project_id | 9783750c34914c04900b606ddaa62920                                                                                                                                                        |

| user_id    | 8bc9b323a3b948758697cb17da304035                                                                                                                                                        |

+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

V: Create the client environment scripts

Administrator: admin-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=che001

export OS_AUTH_URL=http://controller01:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

Regular user demo: demo-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=che001

export OS_AUTH_URL=http://controller01:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

Result:

source admin-openrc

[[email protected] ~]# openstack token issue

PART 3: Deploy the image service

I: Install and configure the service

1. Create the database and user

mysql -u root -p

CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'che001';

flush privileges;

2. Keystone authentication steps:

As noted above, every subsequent project goes into the single service tenant; for each project you create a user, grant it the admin role, and associate the two.

. admin-openrc

openstack user create --domain default --password-prompt glance

openstack role add --project service --user glance admin

Create the service entity

openstack service create --name glance \

--description "OpenStack Image" image

Create the endpoints

openstack endpoint create --region RegionOne \

image public http://controller01:9292

openstack endpoint create --region RegionOne \

image internal http://controller01:9292

openstack endpoint create --region RegionOne \

image admin http://controller01:9292

3. Install the software

yum install openstack-glance -y

4. Edit the configuration:

Edit /etc/glance/glance-api.conf:

[database]

connection = mysql+pymysql://glance:[email protected]/glance

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = che001

[paste_deploy]

flavor = keystone

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

Edit /etc/glance/glance-registry.conf:

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

[database]

connection = mysql+pymysql://glance:[email protected]/glance

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = che001

[paste_deploy]

flavor = keystone

Create the image directory:

mkdir /var/lib/glance/images/

chown glance. /var/lib/glance/images/

Sync the database (this prints some warnings about "future"; ignore them):

su -s /bin/sh -c "glance-manage db_sync" glance

Start the services:

systemctl enable openstack-glance-api.service \

openstack-glance-registry.service

systemctl start openstack-glance-api.service \

openstack-glance-registry.service

II: Verification:

. admin-openrc

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

openstack image create "cirros" \

--file cirros-0.3.4-x86_64-disk.img \

--disk-format qcow2 --container-format bare \

--public

openstack image list

PART 4: Deploy the Compute service

I: Controller node configuration

1. Create the databases and user

CREATE DATABASE nova_api;

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'che001';

flush privileges;

2. Keystone steps

. admin-openrc

openstack user create --domain default \

--password-prompt nova

openstack role add --project service --user nova admin

openstack service create --name nova \

--description "OpenStack Compute" compute

openstack endpoint create --region RegionOne \

compute public http://controller01:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \

compute internal http://controller01:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \

compute admin http://controller01:8774/v2.1/%\(tenant_id\)s

3. Install the packages:

yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

openstack-nova-scheduler -y

4. Edit the configuration:

Edit /etc/nova/nova.conf:

[DEFAULT]

enabled_apis = osapi_compute,metadata

rpc_backend = rabbit

auth_strategy = keystone

# management-network IP of the controller

my_ip = 172.16.209.115

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

connection = mysql+pymysql://nova:[email protected]/nova_api

[database]

connection = mysql+pymysql://nova:[email protected]/nova

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = che001

[vnc]

# management-network IP of the controller

vncserver_listen = 172.16.209.115

# management-network IP of the controller

vncserver_proxyclient_address = 172.16.209.115

[glance]

api_servers = http://controller01:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

5. Sync the databases (ignore the "future"-related warnings):

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

6. Start the services

systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

II: Compute node configuration

1. Install the packages:

yum install openstack-nova-compute -y

2. Edit the configuration:

Edit /etc/nova/nova.conf:

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

# management-network IP of the compute node

my_ip = 172.16.209.117

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = che001

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

# management-network IP of the compute node

vncserver_proxyclient_address = 172.16.209.117

# management-network IP of the controller node

novncproxy_base_url = http://172.16.209.115:6080/vnc_auto.html

[glance]

api_servers = http://controller01:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

3. If deploying nova on a machine without hardware virtualization support, check:

egrep -c '(vmx|svm)' /proc/cpuinfo

If the result is 0, edit /etc/nova/nova.conf:

[libvirt]

virt_type = qemu
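A small sketch that automates this check, assuming the openstack-utils package (which provides the openstack-config helper) is installed:

# fall back to pure QEMU emulation when the CPU exposes no VT-x/AMD-V flags
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
fi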

4. Start the services

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

III: Verification

Controller node

[[email protected] ~]# source admin-openrc

[[email protected] ~]# openstack compute service list

+----+------------------+--------------+----------+---------+-------+----------------------------+

| Id | Binary           | Host         | Zone     | Status  | State | Updated At                 |

+----+------------------+--------------+----------+---------+-------+----------------------------+

|  1 | nova-consoleauth | controller01 | internal | enabled | up    | 2016-08-17T08:51:37.000000 |

|  2 | nova-conductor   | controller01 | internal | enabled | up    | 2016-08-17T08:51:29.000000 |

|  8 | nova-scheduler   | controller01 | internal | enabled | up    | 2016-08-17T08:51:38.000000 |

| 12 | nova-compute     | compute01    | nova     | enabled | up    | 2016-08-17T08:51:30.000000 |

+----+------------------+--------------+----------+---------+-------+----------------------------+

PART 5: Deploy the Networking service

I: Controller node configuration

1. Create the database and user

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'che001';

flush privileges;

2. Keystone steps

. admin-openrc

openstack user create --domain default --password-prompt neutron

openstack role add --project service --user neutron admin

openstack service create --name neutron \

--description "OpenStack Networking" network

openstack endpoint create --region RegionOne \

network public http://controller01:9696

openstack endpoint create --region RegionOne \

network internal http://controller01:9696

openstack endpoint create --region RegionOne \

network admin http://controller01:9696

3. Install the packages

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

4. Configure the server component

Edit /etc/neutron/neutron.conf and add the following settings in the corresponding sections:

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

[database]

connection = mysql+pymysql://neutron:[email protected]/neutron

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = che001

[nova]

auth_url = http://controller01:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = che001

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

Edit /etc/nova/nova.conf:

[neutron]

url = http://controller01:9696

auth_url = http://controller01:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = che001

service_metadata_proxy = True

5. Create the symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

6. Sync the database (ignore the "future"-related warnings):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

7. Restart the nova service

systemctl restart openstack-nova-api.service

8. Start the neutron service

systemctl enable neutron-server.service

systemctl start neutron-server.service

II: Network node configuration

1. Edit /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

2. Run the following to apply the changes immediately

sysctl -p

3. Install the packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

4. Configure the components

Edit /etc/neutron/neutron.conf:

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = che001

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

5. Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

6. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]

# the IP below is the network node's data-network IP

local_ip=1.1.1.119

bridge_mappings=external:br-ex

[agent]

tunnel_types=gre,vxlan

l2_population=True

prevent_arp_spoofing=True

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group=True

7. Configure the L3 agent. Edit /etc/neutron/l3_agent.ini:

[DEFAULT]

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

external_network_bridge=br-ex

8. Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata=True

9. Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]

nova_metadata_ip=controller01

metadata_proxy_shared_secret=che001

10. Create the symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

11. Start the services

Controller node:

systemctl restart openstack-nova-api.service

Network node:

systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service

12. Create the bridge

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth2

Note: if NICs are scarce and you want the network node's management NIC to serve as the physical interface bound to br-ex, remove the IP from that NIC, create a config file for br-ex, and give the bridge the original management IP:

ovs-vsctl add-br br-ex

[[email protected] ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

TYPE=Ethernet

ONBOOT="yes"

BOOTPROTO="none"

[[email protected] ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex

TYPE=Ethernet

ONBOOT="yes"

BOOTPROTO="none"

HWADDR=bc:ee:7b:78:7b:a7

IPADDR=172.16.209.10

GATEWAY=172.16.209.1

NETMASK=255.255.255.0

DNS1=202.106.0.20

DNS2=8.8.8.8

NM_CONTROLLED=no

systemctl restart network

ovs-vsctl add-port br-ex eth0
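Afterwards, you can confirm the bridge layout with Open vSwitch's standard status command:

ovs-vsctl show

br-ex should list eth0 (or eth2, in the two-NIC layout above) among its ports.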

III: Compute node configuration

1. Edit /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

2.sysctl -p

3.yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

4. Edit /etc/neutron/neutron.conf:

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

[keystone_authtoken]

auth_uri = http://controller01:5000

auth_url = http://controller01:35357

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = che001

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]

# the IP below is the compute node's data-network IP

local_ip = 1.1.1.117

#bridge_mappings = vlan:br-vlan

[agent]

tunnel_types = gre,vxlan

l2_population = True

prevent_arp_spoofing = True

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True

6. Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

7. Edit /etc/nova/nova.conf:

[neutron]

url = http://controller01:9696

auth_url = http://controller01:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = che001

8. Start the services

systemctl enable neutron-openvswitch-agent.service

systemctl start neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

PART 6: Deploy the Dashboard

On the controller node

1. Install the package

yum install openstack-dashboard -y

2. Configure /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller01"

ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {

'default': {

'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

'LOCATION': 'controller01:11211',

}

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

"identity": 3,

"image": 2,

"volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {

'enable_router': False,

'enable_quotas': False,

'enable_distributed_router': False,

'enable_ha_router': False,

'enable_lb': False,

'enable_firewall': False,

'enable_vpn': False,

'enable_fip_topology_check': False,

}

TIME_ZONE = "UTC"

3. Start the services

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service

4. Verify:

http://172.16.209.115/dashboard
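A quick command-line smoke test (hypothetical; any HTTP client works):

curl -sI http://172.16.209.115/dashboard | head -n 1

Expect an HTTP 200, or a 302 redirect to the login page.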

Network troubleshooting:

Network node:

[[email protected] ~]# ip netns show

qdhcp-e63ab886-0835-450f-9d88-7ea781636eb8

qdhcp-b25baebb-0a54-4f59-82f3-88374387b1ec

qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83

[[email protected] ~]# ip netns exec qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83 bash

[[email protected] ~]# ping -c2 www.baidu.com

PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.

64 bytes from 61.135.169.125: icmp_seq=1 ttl=52 time=33.5 ms

64 bytes from 61.135.169.125: icmp_seq=2 ttl=52 time=25.9 ms

If the ping fails, exit the namespace and rebuild the bridges:

ovs-vsctl del-br br-ex

ovs-vsctl del-br br-int

ovs-vsctl del-br br-tun

ovs-vsctl add-br br-int

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth0

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service
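Once the agents are back up, you can confirm from the controller that they all registered with the server (the neutron CLI ships with Mitaka):

neutron agent-list

All four agent types (Open vSwitch, L3, DHCP, metadata) should report :-) in the alive column.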
