Deploying OpenStack Icehouse in GRE mode across three nodes

I. Environment Preparation

1. Architecture

Create three virtual machines to serve as the controller node, the network node, and the compute1 node.

Controller node: 1 processor, 2 GB memory, 5 GB storage.

Network node: 1 processor, 2 GB memory, 5 GB storage.

Compute1 node: 1 processor, 2 GB memory, 5 GB storage.

Architecture diagram:

External network: provides Internet access and lets outside users reach OpenStack (the blue blocks in the diagram above).

Management network: communication among the three nodes, e.g. Keystone authentication and the RabbitMQ message queue (the red blocks in the diagram above).

Data (tenant) network: VM traffic between the network node and the compute node, e.g. DHCP, L2, and L3 services (the green blocks in the diagram above).

2. NIC configuration on the three nodes

Controller node: one NIC; configure eth0 for the management network.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.1.101.11
netmask 255.255.255.0
gateway 10.1.101.254
dns-nameservers 10.1.101.51

Configure /etc/hosts as follows:

root@ubuntu:~# cat /etc/hosts
127.0.0.1       localhost
#127.0.1.1      ubuntu
#controller
10.1.101.11    controller

#network
10.1.101.21    network

#compute1
10.1.101.31    compute1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Network node: three NICs; configure eth0 for the management network, eth1 for the data (tunnel) network, and eth2 for the external network, which needs special configuration.

root@ubuntu:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.1.101.21
netmask 255.255.255.0
gateway 10.1.101.254
dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
address 10.0.1.21
netmask 255.255.255.0

# The external network interface
auto eth2
iface eth2 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

Configure /etc/hosts as follows:

root@ubuntu:~# cat /etc/hosts
127.0.0.1       localhost
#127.0.1.1      ubuntu
#network
10.1.101.21    network

#controller
10.1.101.11    controller

#compute1
10.1.101.31    compute1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Compute node: two NICs; configure eth0 for the management network and eth1 for the data (tunnel) network.

root@ubuntu:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.1.101.31
netmask 255.255.255.0
gateway 10.1.101.254
#dns-nameservers 192.168.1.3
dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
address 10.0.1.31
netmask 255.255.255.0

Configure /etc/hosts as follows:

root@ubuntu:~# cat /etc/hosts
127.0.0.1       localhost
#127.0.1.1      ubuntu

#compute1
10.1.101.31    compute1

#controller
10.1.101.11    controller

#network
10.1.101.21    network

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

3. Verify the network configuration

Controller node:

# ping -c 4 openstack.org    [verify Internet connectivity]

# ping -c 4 network    [verify the network node's management network is reachable]

# ping -c 4 compute1    [verify the compute node's management network is reachable]

Network node:

# ping -c 4 openstack.org    [verify Internet connectivity]

# ping -c 4 controller    [verify the controller node's management network is reachable]

# ping -c 4 10.0.1.31    [verify the compute node's tunnel network is reachable]

Compute node:

# ping -c 4 openstack.org    [verify Internet connectivity]

# ping -c 4 controller    [verify the controller node's management network is reachable]

# ping -c 4 10.0.1.21    [verify the network node's tunnel network is reachable]

II. Base Environment Configuration

1. Set global environment variables

To simplify the configuration steps that follow, first set global environment variables.

On the controller node:

cat > /root/novarc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
export SERVICE_ENDPOINT="http://controller:35357/v2.0"
export SERVICE_TOKEN=servicetoken
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export RABBIT_PASSWORD=password
export MASTER="10.1.101.11"
EOF
cat /root/novarc >> /etc/profile
source /etc/profile

On the compute node:

# Create the environment variables
cat > /root/novarc << EOF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export RABBIT_PASSWORD=password
export SERVICE_TOKEN=stackinsider
export CONTROLLER_IP=controller
export MASTER=compute
export LOCAL_IP="$(/sbin/ifconfig eth1 | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
EOF

# Update the global environment variables.
cat /root/novarc >> /etc/profile
source /etc/profile

2. Update the system

Run the following on all three nodes.

Step 1: Install the Ubuntu Cloud Archive

# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse

The Ubuntu Cloud Archive is a special repository that lets you install newer, stable OpenStack releases that are supported on Ubuntu.

Step 2: Update the system

# apt-get update
# apt-get dist-upgrade    [this takes around ten minutes; be patient]

Step 3: Install the Ubuntu 13.10 backported kernel

Ubuntu 12.04 needs this kernel to improve system stability.

# apt-get install linux-image-generic-lts-saucy  

Step 4: Reboot for the changes to take effect

# reboot

3. Install NTP (Network Time Protocol)

To keep the nodes' clocks synchronized, install ntp on every node, then edit /etc/ntp.conf on the other nodes to add controller as the time source.

On the controller node:

Step 1: Install

# apt-get install ntp

Step 2: Configure /etc/ntp.conf

# Use Ubuntu's ntp server as a fallback.
server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10

This uses ntp.ubuntu.com as the time source and adds a local clock as a backup in case the network time service is interrupted; server 127.127.1.0 means this host acts as an NTP server itself.

Alternatively, run:

sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf

Step 3: Restart the ntp service.

# service ntp restart

On the nodes other than the controller:

Step 1: Install

# apt-get install ntp

Step 2: Configure /etc/ntp.conf to use controller as the time source.

# Use Ubuntu's ntp server as a fallback.
server controller

Alternatively, run:

sed -i -e "s/server ntp.ubuntu.com/server controller/g" /etc/ntp.conf

Step 3: Restart the NTP service.
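
As on the controller node, restart the service so the new time source takes effect:

# service ntp restart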

4. Install the database

Every node needs the python-mysqldb package for database connectivity; only the controller needs mysql-server.

Controller node:

Step 1: Install:

# apt-get install python-mysqldb mysql-server

Note: During installation you are prompted for the MySQL root password; here it is set to password.

Step 2: Configure /etc/mysql/my.cnf

In the [mysqld] section, set bind-address to the IP of the controller's management network so the other nodes can reach the MySQL service over the management network. It can also be set to 0.0.0.0 to bind the MySQL service to all interfaces.

[mysqld]
...
bind-address = 10.1.101.11

After bind-address in the [mysqld] section, add the following to use the UTF-8 character set and InnoDB.

[mysqld]
...
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Step 3: Restart MySQL for the settings to take effect

# service mysql restart

Step 4: Delete the anonymous users

When the database is started for the first time it creates some anonymous users. They must be deleted, or database connections will fail later.

# mysql_secure_installation

Note:

1. This command presents a series of choices to improve the security of the MySQL database; answer yes to everything except changing the password, unless you have your own reason not to.

2. If mysql_secure_installation fails, run

# mysql_install_db

# mysql_secure_installation

Step 5: Create the OpenStack databases, users, and privileges

mysql -uroot -p$MYSQL_PASS << EOF
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY '$MYSQL_PASS';
FLUSH PRIVILEGES;
EOF
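
A quick way to confirm the databases were created is to list them (a minimal check; the password is the MySQL root password set above):

mysql -uroot -p$MYSQL_PASS -e "SHOW DATABASES;"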

On the nodes other than the controller, install python-mysqldb:

# apt-get install python-mysqldb

5. Install the RabbitMQ message broker

Step 1: Install

# apt-get -y install rabbitmq-server

Step 2: Change the password

RabbitMQ creates a default user whose username and password are both guest. Run the following to change the guest user's password to password:

# rabbitmqctl change_password guest $RABBIT_PASSWORD

Then set rabbit_password accordingly in the configuration file of every OpenStack service that uses RabbitMQ.
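
To confirm RabbitMQ is running and the guest user exists, you can list the users as a quick sanity check:

# rabbitmqctl list_users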

III. Installing the OpenStack Services

1. Install Keystone

Install the OpenStack Identity service on the controller node.

Step 1: Install keystone

# apt-get install keystone

Step 2: Configure /etc/keystone/keystone.conf

sed -i -e " s/#admin_token=ADMIN/admin_token=$SERVICE_TOKEN/g; s/#public_bind_host=0.0.0.0/public_bind_host=0.0.0.0/g; s/#admin_bind_host=0.0.0.0/admin_bind_host=0.0.0.0/g; s/#public_port=5000/public_port=5000/g; s/#admin_port=35357/admin_port=35357/g; s/#compute_port=8774/compute_port=8774/g; s/#verbose=false/verbose=True/g; s/#idle_timeout=3600/idle_timeout=3600/g" /etc/keystone/keystone.conf

Update the MySQL connection in keystone.conf:

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

Alternatively, run:

sed -i '/connection = .*/{s|sqlite:///.*|mysql://'"keystone"':'"$MYSQL_PASS"'@'"$MASTER"'/keystone|g}' /etc/keystone/keystone.conf

Step 3: Delete keystone.db

By default the Ubuntu packages create a SQLite database. Delete the keystone.db file under /var/lib/keystone/ to avoid problems later.

# rm /var/lib/keystone/keystone.db

Step 4: Restart keystone and sync the database

# service keystone restart
# keystone-manage db_sync

Step 5: Create the OpenStack users, tenants, and services

First create the Keystone data-import script Ksdata.sh with the following content:

vi Ksdata.sh 

#!/bin/sh
#
# Keystone Datas
#
# Description: Fill Keystone with datas.

# Mainly inspired by http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin
# Written by Martin Gerhard Loschwitz / Hastexo
# Modified by Emilien Macchi / StackOps
#
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#

#ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
ADMIN_PASSWORD=${ADMIN_PASSWORD:-$OS_PASSWORD}
#SERVICE_PASSWORD=${SERVICE_PASSWORD:-$ADMIN_PASSWORD}
#export SERVICE_TOKEN="password"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}

get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}

# Tenants
ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone tenant-create --name=invisible_to_admin)

# Users
ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)
DEMO_USER=$(get_id keystone user-create --name=demo --pass="$ADMIN_PASSWORD" --email=demo@domain.com)

# Roles
ADMIN_ROLE=$(get_id keystone role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)

# Add Roles to Users in Tenants
keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $DEMO_TENANT
keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT

# The Member role is used by Horizon and Swift
MEMBER_ROLE=$(get_id keystone role-create --name=Member)
keystone user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $DEMO_TENANT
keystone user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $INVIS_TENANT

# Configure service users/roles
NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE

GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE

SWIFT_USER=$(get_id keystone user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=swift@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE

RESELLER_ROLE=$(get_id keystone role-create --name=ResellerAdmin)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $RESELLER_ROLE

NEUTRON_USER=$(get_id keystone user-create --name=neutron --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=neutron@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NEUTRON_USER --role-id $ADMIN_ROLE

CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE

Run the script:

# bash Ksdata.sh

Step 6: Create the endpoints

First create the script Ksendpoints.sh:

# vi Ksendpoints.sh
#!/bin/sh
#
# Keystone Endpoints
#
# Description: Create Services Endpoints

# Mainly inspired by http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin
# Written by Martin Gerhard Loschwitz / Hastexo
# Modified by Emilien Macchi / StackOps
#
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#

# MySQL definitions
MYSQL_USER=keystone
MYSQL_DATABASE=keystone
MYSQL_HOST=$MASTER
MYSQL_PASSWORD=$MYSQL_PASS

# Keystone definitions
KEYSTONE_REGION=RegionOne
#SERVICE_TOKEN=password
SERVICE_ENDPOINT="http://localhost:35357/v2.0"

# other definitions
#MASTER="192.168.0.1"

while getopts "u:D:p:m:K:R:E:S:T:vh" opt; do
  case $opt in
    u)
      MYSQL_USER=$OPTARG
      ;;
    D)
      MYSQL_DATABASE=$OPTARG
      ;;
    p)
      MYSQL_PASSWORD=$OPTARG
      ;;
    m)
      MYSQL_HOST=$OPTARG
      ;;
    K)
      MASTER=$OPTARG
      ;;
    R)
      KEYSTONE_REGION=$OPTARG
      ;;
    E)
      export SERVICE_ENDPOINT=$OPTARG
      ;;
    S)
      SWIFT_MASTER=$OPTARG
      ;;
    T)
      export SERVICE_TOKEN=$OPTARG
      ;;
    v)
      set -x
      ;;
    h)
      cat <<EOF
Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]
       [-K keystone_master ] [ -R keystone_region ] [ -E keystone_endpoint_url ]
       [ -S swift_master ] [ -T keystone_token ]

Add -v for verbose mode, -h to display this message.
EOF
"Ksendpoints_havana.sh" 149L, 5243C                                                                                                                                                                                        1,1           Top

if [ -z "$KEYSTONE_REGION" ]; then
  echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2
  missing_args="true"
fi

if [ -z "$SERVICE_TOKEN" ]; then
  echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2
  missing_args="true"
fi

if [ -z "$SERVICE_ENDPOINT" ]; then
  echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2
  missing_args="true"
fi

if [ -z "$MYSQL_PASSWORD" ]; then
  echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2
  missing_args="true"
fi

if [ -n "$missing_args" ]; then
  exit 1
fi

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone service-create --name swift --type object-store --description 'OpenStack Storage Service'
keystone service-create --name keystone --type identity --description 'OpenStack Identity'
keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'
keystone service-create --name neutron --type network --description 'OpenStack Networking service'

create_endpoint () {
  case $1 in
    compute)
    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$MASTER"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$MASTER"':8774/v2/$(tenant_id)s'
    ;;
    volume)
    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$MASTER"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$MASTER"':8776/v1/$(tenant_id)s'
    ;;
    image)
    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':9292/v2' --adminurl 'http://'"$MASTER"':9292/v2' --internalurl 'http://'"$MASTER"':9292/v2'
    ;;
    object-store)
    if [ $SWIFT_MASTER ]; then
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$SWIFT_MASTER"':8080/v1/AUTH_$(tenant_id)s' --adminurl 'http://'"$SWIFT_MASTER"':8080/v1' --internalurl 'http://'"$SWIFT_MASTER"':8080/v1/AUTH_$(tenant_id)s'
    else
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':8080/v1/AUTH_$(tenant_id)s' --adminurl 'http://'"$MASTER"':8080/v1' --internalurl 'http://'"$MASTER"':8080/v1/AUTH_$(tenant_id)s'
    fi
    ;;
    identity)
    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':5000/v2.0' --adminurl 'http://'"$MASTER"':35357/v2.0' --internalurl 'http://'"$MASTER"':5000/v2.0'
    ;;
    ec2)
    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':8773/services/Cloud' --adminurl 'http://'"$MASTER"':8773/services/Admin' --internalurl 'http://'"$MASTER"':8773/services/Cloud'
    ;;
    network)
    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$MASTER"':9696/' --adminurl 'http://'"$MASTER"':9696/' --internalurl 'http://'"$MASTER"':9696/'
    ;;
  esac
}

for i in compute volume image object-store identity ec2 network; do
  id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1
  create_endpoint $i $id
done

Run the script:

# bash Ksendpoints.sh

Step 7: Verify

Keystone is now installed; verify that the Identity service works correctly.

# keystone user-list
# keystone user-role-list --user admin --tenant admin
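
You can also confirm that the services and endpoints created by the two scripts were registered:

# keystone service-list
# keystone endpoint-list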

2. Install the OpenStack clients

After installation you can call the APIs of the individual OpenStack services from the command line.

# apt-get install python-pip
# pip install python-keystoneclient
# pip install python-cinderclient
# pip install python-novaclient
# pip install python-glanceclient
# pip install python-neutronclient
# The following can also be installed later, when needed
# pip install python-swiftclient
# pip install python-heatclient
# pip install python-ceilometerclient
# pip install python-troveclient

3. Install Glance (the Image service)

Install the Image service on the controller node.

Step 1: Install glance.

# apt-get install glance

Step 2: Configure

Because glance consists of two services, edit both configuration files, /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:

sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/glance/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " /etc/glance/glance-api.conf  /etc/glance/glance-registry.conf

In both files, set the database connection in the [database] section:

connection = mysql://glance:password@controller/glance

Or run directly:

sed -i '/#connection = <None>/i\connection = mysql://'glance':'"$MYSQL_PASS"'@'"$MASTER"'/glance' /etc/glance/glance-registry.conf /etc/glance/glance-api.conf

Add the following in [DEFAULT]:

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

Set flavor to keystone:

sed -i 's/#flavor=/flavor=keystone/g' /etc/glance/glance-api.conf /etc/glance/glance-registry.conf

Step 3: Delete glance.sqlite

# rm /var/lib/glance/glance.sqlite

Step 4: Check the configuration

[keystone_authtoken]
#auth_host = 127.0.0.1
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password

Step 5: Restart the glance services and sync the database

# service glance-api restart
# service glance-registry restart
# glance-manage db_sync

Step 6: Download images to test the glance service

# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Add the cirros image:

# glance add name=cirros-0.3.0-x86_64 is_public=true container_format=bare \
       disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img

List the images:

# glance index
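
glance add and glance index are the legacy v1-style subcommands; if your python-glanceclient no longer accepts them, the image-create and image-list equivalents should work for the same test (a hedged alternative, same image file as above):

# glance image-create --name cirros-0.3.0-x86_64 --is-public True \
    --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img
# glance image-list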

4. Install Cinder (Block Storage)

Cinder provides block storage for virtual machines and manages volumes, volume snapshots, and volume types. It consists of cinder-api, cinder-volume, the cinder-scheduler daemon, and the messaging queue. Install cinder on the controller node.

Step 1: Install the cinder packages

# apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget \
    open-iscsi iscsitarget-dkms python-cinderclient linux-headers-`uname -r`

Step 2: Edit the iscsitarget configuration and restart the services

# sed -i 's/false/true/g' /etc/default/iscsitarget
# service iscsitarget start
# service open-iscsi start

Step 3: Configure cinder

# cat >/etc/cinder/cinder.conf <<EOF
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:$MYSQL_PASS@$MASTER:3306/cinder
iscsi_helper = ietadm
volume_group = cinder-volumes
rabbit_password= $RABBIT_PASSWORD
logdir=/var/log/cinder
verbose=true
auth_strategy = keystone
EOF

# sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; \
     s/%SERVICE_USER%/cinder/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; "      /etc/cinder/api-paste.ini
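
The cinder.conf above points volume_group at cinder-volumes, but nothing in this guide creates that LVM volume group, so cinder-volume will fail to start without it. A minimal sketch, assuming a spare disk /dev/sdb is dedicated to Cinder (the device name is an assumption; adjust to your system):

# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
# vgs cinder-volumes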

Step 4: Sync the cinder database and restart the services

# cinder-manage db sync
# service cinder-api restart
# service cinder-scheduler restart
# service cinder-volume restart 

5. Install Nova (the Compute service)

Controller node:

Step 1: Install the nova packages

# apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient

Step 2: Edit /etc/nova/nova.conf

cat >/etc/nova/nova.conf <<EOF
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = $MASTER
rabbit_userid = guest
rabbit_password = $RABBIT_PASSWORD
my_ip = $MASTER
vncserver_listen = $MASTER
vncserver_proxyclient_address = $MASTER
auth_strategy = keystone
novncproxy_base_url = http://$MASTER:6080/vnc_auto.html
glance_host = $MASTER
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://$MASTER:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $SERVICE_PASSWORD
neutron_admin_auth_url = http://$MASTER:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = $SERVICE_TOKEN

[database]
connection = mysql://nova:$MYSQL_PASS@$MASTER/nova

[keystone_authtoken]
auth_uri = http://$MASTER:5000
auth_host = $MASTER
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $SERVICE_PASSWORD
EOF

Step 3: Delete the nova.sqlite database

# rm /var/lib/nova/nova.sqlite  

Step 4: Sync the database and restart the services

# nova-manage db sync
# service nova-conductor restart
# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-novncproxy restart

Step 5: Check that the nova services are running (make sure nova-cert, nova-consoleauth, nova-scheduler, and nova-conductor are all enabled)

# nova-manage service list

Compute node:

Step 1: Install the nova packages

# apt-get install nova-compute-kvm python-guestfs

Step 2: Make the current kernel readable for qemu and libguestfs

# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)

Step 3: Enable this override for all future kernel updates

cat > /etc/kernel/postinst.d/statoverride <<EOF
#!/bin/sh
version="\$1"
# passing the kernel version is required
[ -z "\${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-\${version}
EOF

# Make the file executable
chmod +x /etc/kernel/postinst.d/statoverride

Step 4: Configure /etc/nova/nova.conf

cat >/etc/nova/nova.conf <<EOF
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = $CONTROLLER_IP
rabbit_userid = guest
rabbit_password = $RABBIT_PASSWORD
my_ip = $MASTER
vncserver_listen = $MASTER
vncserver_proxyclient_address = $MASTER
auth_strategy = keystone
novncproxy_base_url = http://$CONTROLLER_IP:6080/vnc_auto.html
glance_host = $CONTROLLER_IP
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://$CONTROLLER_IP:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $SERVICE_PASSWORD
neutron_admin_auth_url = http://$CONTROLLER_IP:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = $SERVICE_TOKEN

[database]
connection = mysql://nova:$MYSQL_PASS@$CONTROLLER_IP/nova

[keystone_authtoken]
auth_uri = http://$CONTROLLER_IP:5000
auth_host = $CONTROLLER_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $SERVICE_PASSWORD
EOF

Step 5: Delete the nova.sqlite database

# rm /var/lib/nova/nova.sqlite

Step 6: Configure /etc/nova/nova-compute.conf to use qemu rather than kvm.

# vi /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=qemu

Step 7: Restart the service

# service nova-compute restart

Step 8: Check that the nova services are running (make sure nova-cert, nova-consoleauth, nova-scheduler, nova-conductor, and nova-compute are all enabled)

# nova-manage service list

6. Install Neutron (the Networking service)

Controller node:

Step 1: Install the neutron packages

# apt-get install neutron-server neutron-plugin-ml2

Step 2: Configure /etc/neutron/neutron.conf

This requires configuring the database, authentication, the message broker, topology change notifications, and the plug-in.

Database connection:

sed -i '/connection = .*/{s|sqlite:///.*|mysql://'"neutron"':'"$MYSQL_PASS"'@'"$CONTROLLER_IP"'/neutron|g}' /etc/neutron/neutron.conf

Authentication:

sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' /etc/neutron/neutron.conf

sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/neutron/g;           s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g;           s/auth_host = 127.0.0.1/auth_host = $CONTROLLER_IP/g" /etc/neutron/neutron.conf

Configure the message broker:

sed -i -e " s/# rpc_backend = neutron.openstack.common.rpc.impl_kombu/rpc_backend = neutron.openstack.common.rpc.impl_kombu/g;           s/# rabbit_host = localhost/rabbit_host = $CONTROLLER_IP/g;           s/# rabbit_password = guest/rabbit_password = $SERVICE_PASSWORD/g;           s/# rabbit_userid = guest/rabbit_userid = guest/g"           /etc/neutron/neutron.conf

Configure notifications of network topology changes to Compute:

service_id=`keystone tenant-get service | awk '$2~/^id/{print $4}'`

sed -i -e " s/# notify_nova_on_port_status_changes = True/notify_nova_on_port_status_changes = True/g; s/# notify_nova_on_port_data_changes = True/notify_nova_on_port_data_changes = True/g; s/# nova_url = http:\/\/127.0.0.1:8774\/v2/nova_url = http:\/\/$MASTER:8774\/v2/g; s/# nova_admin_username =/nova_admin_username = nova/g; s/# nova_admin_tenant_id =/nova_admin_tenant_id = $service_id/g; s/# nova_admin_password =/nova_admin_password = $SERVICE_PASSWORD/g; s/# nova_admin_auth_url =/nova_admin_auth_url = http:\/\/$MASTER:35357\/v2.0/g" /etc/neutron/neutron.conf

Here, keystone tenant-get service is used to obtain the id of the service tenant.

Configure the ML2 plug-in:

sed -i -e 's/core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin/core_plugin = ml2/g' /etc/neutron/neutron.conf
sed -i -e 's/# service_plugins =/service_plugins = router/g' /etc/neutron/neutron.conf
sed -i -e 's/# allow_overlapping_ips = False/allow_overlapping_ips = True/g' /etc/neutron/neutron.conf

Step 3: Configure /etc/neutron/plugins/ml2/ml2_conf.ini

The ML2 plug-in uses the OVS agent to build the virtual networking framework. However, the controller node does not need the OVS agent or service, because the controller does not handle VM network traffic.

Add the following in the [ml2] and [ml2_type_gre] sections, and add a new [securitygroup] section.

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Step 4: Configure /etc/nova/nova.conf

By default Compute uses legacy networking, so nova.conf must be configured to use Neutron. Make sure the settings below are present.

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://10.1.101.11:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = password
neutron_admin_auth_url = http://10.1.101.11:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

Because Compute uses its internal firewall service by default, and Networking now provides the firewall, set firewall_driver = nova.virt.firewall.NoopFirewallDriver.

Step 5: Finish the installation

1. Restart the Compute services:
# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
2. Restart the Networking service:
# service neutron-server restart

Network node:

Step 1: Before installing the OpenStack components, enable some core kernel networking functions, such as IP forwarding.

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
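
To confirm the settings took effect, the current values can be read back:

sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter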

Step 2: Install the neutron packages

# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
openvswitch-datapath-dkms neutron-l3-agent neutron-dhcp-agent

Tip:

[Check the Ubuntu version:
root@ubuntu:~# cat /etc/issue
Ubuntu 12.04.2 LTS \n \l
root@ubuntu:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.2 LTS
Release:        12.04
Codename:       precise
root@ubuntu:~# uname -a
Linux ubuntu 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
If the Linux kernel used by Ubuntu is 3.11 or later, the openvswitch-datapath-dkms package does not need to be installed.]

Step 3: Configure /etc/neutron/neutron.conf

In the [DEFAULT] and [keystone_authtoken] sections:

[DEFAULT]
...
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Comment out all lines in the [service_providers] section.

Step 4: Configure the L3 agent, /etc/neutron/l3_agent.ini

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

debug = True

Step 5: Configure the DHCP agent, /etc/neutron/dhcp_agent.ini

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
debug = True

Step 6: Configure the metadata agent, /etc/neutron/metadata_agent.ini

The metadata agent provides configuration information, such as credentials, for remote access to instances.

[DEFAULT]
...
auth_url = http://controller:5000/v2.0    [make sure this is configured correctly]
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

The following two steps are done on the controller node.

1. Edit /etc/nova/nova.conf and add the following in [DEFAULT]; METADATA_SECRET is the shared secret, which I set to password.

[DEFAULT]
...
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET

2. On the controller node, restart the Compute API service.

# service nova-api restart

Step 7: Configure the ML2 plug-in, /etc/neutron/plugins/ml2/ml2_conf.ini

Add in the [ml2] section:

[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

Add in the [ml2_type_gre] section:

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

Add a new [ovs] section with the following, where INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS is replaced by the IP address of the network node's tunnel-network interface, here 10.0.1.21.

[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True

Add a new [securitygroup] section with:

[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Step 8: Configure the OVS service

OVS provides the underlying virtual networking framework. The br-int bridge handles internal VM traffic; the br-ex bridge handles external VM traffic. br-ex needs a port on the physical external-network NIC to give instances external connectivity; this port bridges the virtual networks to the physical external network.

Restart the OVS service.

# service openvswitch-switch restart

Add the integration bridge:

# ovs-vsctl add-br br-int

Add the external bridge:

# ovs-vsctl add-br br-ex

Add the external NIC as a port so the bridge can reach the external network.

Replace INTERFACE_NAME with the name of the actual interface, for example eth2 or ens256; mine is eth2.

# ovs-vsctl add-port br-ex INTERFACE_NAME
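
To confirm that the bridges and the external port were created, you can inspect the current OVS configuration:

# ovs-vsctl show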

Then configure the br-ex interface in /etc/network/interfaces on the network node; the complete file looks like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.1.101.21
netmask 255.255.255.0
gateway 10.1.101.254
dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
address 10.0.1.21
netmask 255.255.255.0

# The external network interface
auto eth2
iface eth2 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

auto br-ex
iface br-ex inet static
    address 192.168.100.21
    netmask 255.255.255.0
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off

Restart networking:

/etc/init.d/networking restart

Step 9: Finish the installation

# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
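
Once the agents are up, a quick sanity check from the controller node (where the admin credentials are set) should list the L3, DHCP, metadata, and Open vSwitch agents as alive:

# neutron agent-list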

Compute node:

Step 1: Enable IP forwarding

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p

Step 2: Install the neutron packages

# apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent

With Ubuntu kernels newer than 3.11, openvswitch-datapath-dkms does not need to be installed; since my kernel is 3.12, it is skipped here.

Step 3: Configure /etc/neutron/neutron.conf

Configure authentication, the message broker, and the plug-in.

Authentication:

[DEFAULT]
...
auth_strategy = keystone

Add the following in the [keystone_authtoken] section; replace NEUTRON_PASS with the actual password, mine is password.

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Message broker: add the following in the [DEFAULT] section; replace RABBIT_PASS with the RabbitMQ password.

[DEFAULT]
...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS

Configure ML2 by adding the following in the [DEFAULT] section:

[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True

Comment out all lines in the [service_providers] section.

Step 4: Configure the ML2 plug-in, /etc/neutron/plugins/ml2/ml2_conf.ini

The ML2 plug-in uses OVS to build the virtual machine networks.

Add in the [ml2] section:

[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

Add in the [ml2_type_gre] section:

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

Add an [ovs] section with the following; replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP of the compute node's tunnel-network interface, here 10.0.1.31.

[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True

Add a [securitygroup] section with the following.

[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Step 5: Configure the OVS service

OVS provides the underlying virtual networking framework. The br-int bridge handles internal VM traffic.

Restart the OVS service:

# service openvswitch-switch restart

Add the integration bridge:

# ovs-vsctl add-br br-int

Step 6: Configure the Compute service (nova) to use Neutron, /etc/nova/nova.conf

By default Compute uses legacy networking, so it must be configured to use Neutron instead.

Add the following in the [DEFAULT] section; replace NEUTRON_PASS with the real password, mine is password.

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

By default Compute uses its internal firewall; to make it use Neutron's firewall instead, set firewall_driver = nova.virt.firewall.NoopFirewallDriver.

Step 7: Finish the configuration

Restart the Compute service:

# service nova-compute restart

Restart the OVS agent:

# service neutron-plugin-openvswitch-agent restart
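
Once the OVS agent starts with GRE tunneling enabled, it should create a br-tun bridge on the compute node; inspecting the OVS configuration is a quick, hedged way to confirm the agent came up:

# ovs-vsctl show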

7. Install the Dashboard

Install the dashboard on the controller node.

Step 1: Install

# apt-get -y install apache2 libapache2-mod-wsgi openstack-dashboard memcached python-memcache

Remove the openstack-dashboard-ubuntu-theme package, because it interferes with some functionality:

# apt-get remove --purge openstack-dashboard-ubuntu-theme

Step 2: Configure /etc/openstack-dashboard/local_settings.py

Edit CACHES so that ['default']['LOCATION'] matches the contents of /etc/memcached.conf.

CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}

Set the OPENSTACK_HOST option to the host running the Identity service.

OPENSTACK_HOST = "controller"

Step 3: Restart the Apache web server and memcached

# service apache2 restart
# service memcached restart

Step 4: Restart the keystone service and sync the database

# service keystone restart
# keystone-manage db_sync

The basic configuration is now complete and OpenStack is ready to use.
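
If everything started correctly you should be able to reach the dashboard at http://controller/horizon (the default path used by the Ubuntu packaging) and log in as admin with the password set earlier. A quick hedged check from the command line, expecting an HTTP 200:

# curl -s -o /dev/null -w "%{http_code}\n" http://controller/horizon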

These are my personal configuration notes, provided for reference only; see the official documentation for more detail.
