openstack-ansible -- 3 Target hosts

Installing the operating system

Install the Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit operating system

At least one network interface must have access to the Internet.

Set the locale to en_US.UTF-8.

Configuring the operating system

Enable passwordless SSH login from the deployment host to the target hosts:

Copy the deployment host's public key to /root/.ssh/authorized_keys on each target host:

ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]

The kernel version must be 3.13.0-34-generic or later:

$ uname -a
Linux rpc-3 3.13.0-46-generic #79-Ubuntu SMP Tue Mar 10 20:06:50 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
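The kernel check above can be scripted instead of eyeballed. A minimal sketch, assuming GNU coreutils `sort -V` is available (the `version_ge` helper is illustrative, not part of OpenStack-Ansible):

```shell
# version_ge VER MIN: succeed when VER >= MIN, using version-aware sort.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the running kernel against the documented minimum.
if version_ge "$(uname -r)" "3.13.0-34"; then
    echo "kernel OK"
else
    echo "kernel too old; upgrade before continuing"
fi
```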
Install the required packages:

# apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan

Add the following kernel modules to /etc/modules to enable VLAN and bonding interfaces:

# echo 'bonding' >> /etc/modules
# echo '8021q' >> /etc/modules
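Bare `echo >>` appends a duplicate entry every time it runs. A hedged, idempotent variant of the same step (writing to a local example file here rather than the real /etc/modules):

```shell
# Append each module to the modules file only if it is not already listed,
# so repeated runs stay idempotent. Hypothetical path for illustration;
# on a real target host this would be /etc/modules.
modules_file="./modules.example"
touch "$modules_file"
for mod in bonding 8021q; do
    grep -qx "$mod" "$modules_file" || echo "$mod" >> "$modules_file"
done
cat "$modules_file"
```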

Configure NTP
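A minimal sketch of the NTP step, as a configuration fragment; the server names are placeholders, not values from this document:

```shell
# Hypothetical /etc/ntp.conf fragment: point the target host at two
# time sources (ntp1/ntp2.example.com are placeholder names).
server ntp1.example.com iburst
server ntp2.example.com iburst
```

After editing the file, restart the service (`service ntp restart` on Ubuntu 14.04) so the change takes effect.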

Reboot the host to activate the changes

Configuring LVM

OpenStack-Ansible configures LVM automatically; to configure it manually:

# pvcreate --metadatasize 2048 physical_volume_device_path
# vgcreate cinder-volumes physical_volume_device_path

Designing the network

The following describes how container networks connect to the host bridges and physical network interfaces:

Target hosts contain the following network bridges:

  • LXC internal lxcbr0:
    • Mandatory; created automatically. Provides external (outbound) connectivity for containers. It does not attach to any physical or logical interface on the host; iptables rules provide the connectivity instead. It attaches to eth0 inside each container.
      Configurable in openstack_user_config.yml in the provider_networks dictionary.
  • Container management br-mgmt:
    • Mandatory.
    • Provides management of and communication among infrastructure and OpenStack services.
    • Created manually. Attaches to a physical or logical interface, typically a bond0 VLAN subinterface, and to eth1 in each container.
    • The container network interface is configurable in openstack_user_config.yml.
  • Storage br-storage:
    • Optional, but recommended.
    • Provides segregated access to block storage devices between Compute and Block Storage hosts.
    • Created manually. Attaches to a physical or logical interface, typically a bond0 VLAN subinterface, and to eth2 in each container.
  • OpenStack Networking tunnel/overlay br-vxlan:
    • Mandatory.
    • Provides infrastructure for VXLAN tunnel/overlay networks.
    • Created manually. Attaches to a physical or logical interface, typically a bond1 VLAN subinterface, and to eth10 in each container.
  • OpenStack Networking provider br-vlan:
    • Mandatory.
    • Provides infrastructure for VLAN and flat networks.
    • Created manually. Attaches to a physical or logical interface, typically a bond1 VLAN subinterface, and to eth11 in each container. Does not contain an IP address because it only handles layer 2 connectivity.
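The bridge list above can be verified on a configured host. A small sketch, assuming the iproute2 `ip` command is available (the `bridge_status` helper is illustrative):

```shell
# Report whether each expected bridge device exists on this host.
bridge_status() {
    if ip link show "$1" >/dev/null 2>&1; then
        echo "$1 present"
    else
        echo "$1 missing"
    fi
}

for br in br-mgmt br-storage br-vxlan br-vlan; do
    bridge_status "$br"
done
```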

The Compute service is deployed directly on the physical host (on metal) rather than in a container.

How to use bridges for network connectivity

On compute nodes, br-vxlan and br-vlan provide VXLAN and VLAN connectivity respectively; br-vxlan attaches to a VLAN subinterface of the physical interface, while br-vlan does not need one.

On network nodes, the DHCP agent, L3 agent, and Linux Bridge agent are all deployed in the networking-agents container.

The following shows how VMs connect on a compute node:

Reference architecture

Bridge name   Best configured on       With a static IP
br-mgmt       On every node            Always
br-storage    On every storage node    When component is deployed on metal
              On every compute node    Always
br-vxlan      On every network node    When component is deployed on metal
              On every compute node    Always
br-vlan       On every network node    Never
              On every compute node    Never

Network configuration file (/etc/network/interfaces) for an infrastructure (management) host:

Physical interfaces:

# Physical interface 1
auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0

# Physical interface 2
auto eth1
iface eth1 inet manual
    bond-master bond1
    bond-primary eth1

# Physical interface 3
auto eth2
iface eth2 inet manual
    bond-master bond0

# Physical interface 4
auto eth3
iface eth3 inet manual
    bond-master bond1

Bonding interfaces:

# Bond interface 0 (physical interfaces 1 and 3)
auto bond0
iface bond0 inet static
    bond-slaves eth0 eth2
    bond-mode active-backup
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    address HOST_IP_ADDRESS
    netmask HOST_NETMASK
    gateway HOST_GATEWAY
    dns-nameservers HOST_DNS_SERVERS

# Bond interface 1 (physical interfaces 2 and 4)
auto bond1
iface bond1 inet manual
    bond-slaves eth1 eth3
    bond-mode active-backup
    bond-miimon 100
    bond-downdelay 250
    bond-updelay 250

Logical (VLAN) interfaces:

# Container management VLAN interface
iface bond0.CONTAINER_MGMT_VLAN_ID inet manual
    vlan-raw-device bond0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
iface bond1.TUNNEL_VLAN_ID inet manual
    vlan-raw-device bond1

# Storage network VLAN interface (optional)
iface bond0.STORAGE_VLAN_ID inet manual
    vlan-raw-device bond0

Bridge devices:

# Container management bridge
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports bond0.CONTAINER_MGMT_VLAN_ID
    address CONTAINER_MGMT_BRIDGE_IP_ADDRESS
    netmask CONTAINER_MGMT_BRIDGE_NETMASK
    dns-nameservers CONTAINER_MGMT_BRIDGE_DNS_SERVERS

# OpenStack Networking VXLAN (tunnel/overlay) bridge
auto br-vxlan
iface br-vxlan inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports bond1.TUNNEL_VLAN_ID
    address TUNNEL_BRIDGE_IP_ADDRESS
    netmask TUNNEL_BRIDGE_NETMASK

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references untagged interface
    bridge_ports bond1

# Storage bridge (optional)
auto br-storage
iface br-storage inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port reference tagged interface
    bridge_ports bond0.STORAGE_VLAN_ID
    address STORAGE_BRIDGE_IP_ADDRESS
    netmask STORAGE_BRIDGE_NETMASK
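Once /etc/network/interfaces contains the stanzas above, the devices can be brought up without a full reboot. A sketch of the provisioning commands (run as root, preferably from a console since restarting networking can drop an SSH session); the VLAN-ID placeholders mirror the configuration above:

```shell
# Bring up the bonds, then the tagged VLAN subinterfaces, then the bridges.
ifup bond0 bond1
ifup bond0.CONTAINER_MGMT_VLAN_ID bond1.TUNNEL_VLAN_ID bond0.STORAGE_VLAN_ID
ifup br-mgmt br-vxlan br-vlan br-storage

# Verify bridge membership and addressing.
brctl show
ip addr show br-mgmt
```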

Example for 3 controller nodes and 2 compute nodes

  • VLANs:

    • Host management: Untagged/Native
    • Container management: 10
    • Tunnels: 30
    • Storage: 20
  • Networks:
    • Host management: 10.240.0.0/22
    • Container management: 172.29.236.0/22
    • Tunnel: 172.29.240.0/22
    • Storage: 172.29.244.0/22
  • Addresses for the controller nodes:
    • Host management: 10.240.0.11 - 10.240.0.13
    • Host management gateway: 10.240.0.1
    • DNS servers: 69.20.0.164 69.20.0.196
    • Container management: 172.29.236.11 - 172.29.236.13
    • Tunnel: no IP (the IPs exist in the containers when the components aren't deployed directly on metal)
    • Storage: no IP (the IPs exist in the containers when the components aren't deployed directly on metal)
  • Addresses for the compute nodes:
    • Host management: 10.240.0.21 - 10.240.0.22
    • Host management gateway: 10.240.0.1
    • DNS servers: 69.20.0.164 69.20.0.196
    • Container management: 172.29.236.21 - 172.29.236.22
    • Tunnel: 172.29.240.21 - 172.29.240.22
    • Storage: 172.29.244.21 - 172.29.244.22
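The address plan above can be sanity-checked mechanically. A sketch in pure bash arithmetic (the `ip_to_int`/`in_subnet` helpers are illustrative, not part of any tool mentioned here):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_subnet ADDRESS NETWORK PREFIXLEN: succeed when ADDRESS is inside NETWORK/PREFIXLEN.
in_subnet() {
    local addr net mask
    addr=$(ip_to_int "$1")
    net=$(ip_to_int "$2")
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( addr & mask )) -eq $(( net & mask )) ]
}

# Spot-check two addresses from the example plan against their /22 networks.
in_subnet 172.29.236.11 172.29.236.0 22 && echo "container mgmt address in range"
in_subnet 172.29.240.21 172.29.240.0 22 && echo "tunnel address in range"
```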

Simple architecture: A single target host

Date: 2024-08-01 10:00:28
