OpenStack Private Cloud Deployment for Small and Medium-Sized Enterprises [12.2 Networking: Neutron Controller Node Configuration (Office Network Environment)]

For networking we recommend Neutron with the LinuxBridge agent in a highly available (HA) configuration. The HA scheme matters little for the public network, which only uses DHCP, but the private networks use the L3 agent, where HA is genuinely useful. The trade-off is that we must disable the l2_population feature, which suppresses ARP broadcast flooding; the reason is explained alongside the configuration files below. Since we are deploying a private cloud rather than a public multi-tenant cloud with a huge number of private networks, giving up this feature is acceptable for a small or medium-sized private cloud.

1. First, log in to controller1, create the neutron database, and grant it local and remote access privileges.

mysql -u root -p

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'venic8888';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'venic8888';

flush PRIVILEGES;
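
As a quick optional check, you can confirm the grants work by connecting as the neutron user (password taken from the GRANT statements above) and listing databases:

mysql -u neutron -pvenic8888 -e "SHOW DATABASES;"

The neutron database should appear in the output.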

2. Register Neutron with the Identity service

On one of the controllers, create the Neutron identity entities:

source admin-openrc.sh

openstack user create --domain default --password-prompt neutron

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | b20a6692f77b4258926881bf831eb683 |

| name      | neutron                          |

+-----------+----------------------------------+

openstack role add --project service --user neutron admin

openstack service create --name neutron --description "OpenStack Networking" network

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Networking             |

| enabled     | True                             |

| id          | f71529314dab4a4d8eca427e701d209e |

| name        | neutron                          |

| type        | network                          |

+-------------+----------------------------------+

openstack endpoint create --region RegionOne network public http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 85d80a6d02fc4b7683f611d7fc1493a3 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | f71529314dab4a4d8eca427e701d209e |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

openstack endpoint create --region RegionOne network internal http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 09753b537ac74422a68d2d791cf3714f |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | f71529314dab4a4d8eca427e701d209e |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

openstack endpoint create --region RegionOne network admin http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 1ee14289c9374dffb5db92a5c112fc4e |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | f71529314dab4a4d8eca427e701d209e |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+
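
To confirm the registration, you can list the network endpoints; all three interfaces (public, internal, and admin) should point at http://controller:9696:

openstack endpoint list --service network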

3. Download and install the Neutron components

Public (provider) network + private (self-service) network configuration.

Install the packages on both controllers:

# yum install openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y

Add the following kernel parameters:

vi  /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.default.rp_filter=0

net.ipv4.conf.all.rp_filter=0

sysctl -p
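
You can confirm the values took effect (the first should print 1, the other two 0):

sysctl net.ipv4.ip_forward net.ipv4.conf.default.rp_filter net.ipv4.conf.all.rp_filter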

Configure the neutron service (neutron.conf).

On controller1:

vi /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

bind_host = 10.40.42.1

bind_port = 9696

l3_ha = True

max_l3_agents_per_router = 3

min_l3_agents_per_router = 2

allow_automatic_l3agent_failover = True

dhcp_agents_per_network = 2

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

verbose = True

[database]

connection = mysql://neutron:venic8888@controller/neutron

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron

[nova]

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = nova

password = nova

[oslo_messaging_rabbit]

rabbit_host=controller

rabbit_userid = openstack

rabbit_password = openstack

rabbit_retry_interval=1

rabbit_retry_backoff=2

rabbit_max_retries=0

rabbit_durable_queues=true

rabbit_ha_queues=true

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[quotas]

quota_port = 10000

On controller2:

vi /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

bind_host = 10.40.42.2

bind_port = 9696

l3_ha = True

max_l3_agents_per_router = 3

min_l3_agents_per_router = 2

allow_automatic_l3agent_failover = True

dhcp_agents_per_network = 2

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

verbose = True

[database]

connection = mysql://neutron:venic8888@controller/neutron

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron

[nova]

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = nova

password = nova

[oslo_messaging_rabbit]

rabbit_host=controller

rabbit_userid = openstack

rabbit_password = openstack

rabbit_retry_interval=1

rabbit_retry_backoff=2

rabbit_max_retries=0

rabbit_durable_queues=true

rabbit_ha_queues=true

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[quotas]

quota_port = 10000
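
If you prefer to script these edits instead of editing the files by hand, a minimal sketch using openstack-config (a crudini wrapper shipped in the openstack-utils package; this assumes that package is available in your repositories) would look like the following for a few representative keys. Use bind_host 10.40.42.2 on controller2:

# yum install -y openstack-utils

# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2

# openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True

# openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 10.40.42.1

The same --set pattern applies to every [section]/key pair shown above.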

The following configuration is identical on both controllers:

vi  /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

extension_drivers = port_security

mechanism_drivers = linuxbridge

[ml2_type_flat]

flat_networks = public

[ml2_type_vxlan]

vni_ranges = 1:10000

vxlan_group = 239.2.1.1

[securitygroup]

enable_ipset = True
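
With this ML2 configuration, any tenant network created later (see section 15) becomes a VXLAN network with a VNI allocated from the 1:10000 range. As an illustration, once Neutron is up you could create and inspect a throwaway network as admin (the network name here is hypothetical):

neutron net-create test-private

neutron net-show test-private

In the net-show output, provider:network_type should read vxlan and provider:segmentation_id should be a VNI between 1 and 10000.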

Next, configure the ML2 LinuxBridge agent. This differs from the standalone (single-server) setup: because we use L3 HA here, the l2_population mechanism must not be enabled. It has a still-unfixed bug with L3 HA: on failover, VRRP on the public network switches over successfully, but the private segment's gateway is never updated, and VMs cannot ping the private gateway.

On controller1:

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = public:bond0

[vxlan]

enable_vxlan = True

local_ip = 10.40.42.1

l2_population = False

[agent]

prevent_arp_spoofing = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

On controller2:

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = public:bond0

[vxlan]

enable_vxlan = True

local_ip = 10.40.42.2

l2_population = False

[agent]

prevent_arp_spoofing = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
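
Before starting the agents, it is worth confirming on each node that the mapped physical interface and the VTEP address really exist (shown here for controller1; expect 10.40.42.2 on controller2):

ip addr show bond0

ip -4 addr | grep 10.40.42.1

The first command should show the interface mapped to the public flat network; the second should show the local_ip used as the VXLAN tunnel endpoint.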

Configure the L3 agent service.

The configuration is identical on both controllers:

vi /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

external_network_bridge =

verbose = True

router_delete_namespaces = True

agent_mode = legacy
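
Once everything is running and a router exists (routers are created in section 15), you can see which L3 agent hosts the active instance of a given router (the router name below is hypothetical). With l3_ha = True each router runs as a VRRP pair, one active and one standby:

neutron l3-agent-list-hosting-router demo-router

The ha_state column should show one agent as active and the other as standby.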

Configure the DHCP agent.

The configuration is identical on both controllers:

vi /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = True

verbose = True

use_namespaces = True

dhcp_delete_namespaces = True

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Additionally, configure the MTU that DHCP hands out. VXLAN encapsulation adds roughly 50 bytes of overhead, so with a standard 1500-byte physical MTU the instances must use 1500 - 50 = 1450.

The configuration is identical on both controllers:

vi /etc/neutron/dnsmasq-neutron.conf

dhcp-option-force=26,1450
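
DHCP option 26 pushes this MTU to the instances. Inside a VM booted on a VXLAN network you should be able to confirm it (the interface name may differ per image):

ip link show eth0

The output should include mtu 1450.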

Configure the metadata agent.

The configuration is identical on both controllers:

vi /etc/neutron/metadata_agent.ini

[DEFAULT]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_region = RegionOne

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron

nova_metadata_ip = controller

metadata_proxy_shared_secret = venicchina

Now add the configuration that ties Nova to Neutron. This is the block I already highlighted in purple back in section 11; it is repeated here. Note that metadata_proxy_shared_secret must match the value set in metadata_agent.ini above.

vi /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = True

metadata_proxy_shared_secret = venicchina

Final steps and verification:

1. Create the plugin configuration symlink on the controllers:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2. On one of the controllers, populate the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
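
As an optional check that the schema migration completed, neutron-db-manage can report the current revision:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron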

3. Restart the Nova API service on both controllers:

# systemctl restart openstack-nova-api.service

4. On both controllers, start the networking services and enable them at boot:

# systemctl enable neutron-server.service  neutron-linuxbridge-agent.service neutron-dhcp-agent.service  neutron-metadata-agent.service neutron-l3-agent.service

# systemctl start neutron-server.service  neutron-linuxbridge-agent.service neutron-dhcp-agent.service  neutron-metadata-agent.service neutron-l3-agent.service
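
Finally, verify that all agents registered and report as alive. You should see a Linux bridge agent, DHCP agent, L3 agent, and metadata agent for each of the two controllers (eight agents in total), all with a :-) in the alive column:

# neutron agent-list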
