OpenStack Controller HA

The /etc/haproxy/haproxy.cfg below load-balances the OpenStack controller services across two controller nodes (10.203.3.10 and 10.203.3.11) behind the VIP 10.203.3.9:

global
    log 127.0.0.1 local3
    maxconn 4000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    nbproc 16
    #ulimit-n 231097
    #tune.ssl.default-dh-param 1024
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    option  forwardfor
    retries 3
    option  redispatch
    maxconn 8000
    timeout connect 5s
    timeout client 5m
    timeout server 5m
    timeout check 1s
    timeout http-request 10s
    timeout http-keep-alive 10s
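The defaults above mix time units: HAProxy reads a bare number as milliseconds and accepts the suffixes us, ms, s, m, h and d. A minimal sketch of that parsing rule (an illustrative helper, not HAProxy's own code):

```python
# Sketch: convert HAProxy-style durations ("5s", "5m", bare ms) to
# milliseconds. Illustrative only; HAProxy's real parser is in C.
import re

_UNITS_MS = {"us": 0.001, "ms": 1, "s": 1000, "m": 60_000,
             "h": 3_600_000, "d": 86_400_000}

def to_ms(value: str):
    m = re.fullmatch(r"(\d+)(us|ms|s|m|h|d)?", value.strip())
    if not m:
        raise ValueError(f"bad duration: {value!r}")
    number, unit = m.groups()
    return int(number) * _UNITS_MS[unit or "ms"]  # no suffix -> milliseconds
```

So `timeout client 5m` and `timeout client 300s` configure the same 300 000 ms idle limit.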

listen Stats *:10000
    mode http
    stats enable
    stats uri /
    stats refresh 15s
    stats show-node
    stats show-legends
    stats hide-version

listen dashboard
    bind 10.203.3.9:80
    balance source
    capture cookie vgnvisitor= len 32
    cookie SERVERID insert indirect nocache
    mode http
    option forwardfor
    option httpchk
    option httpclose
    option httplog
    rspidel ^Set-cookie:\ IP=
    timeout client 3h
    timeout server 3h
    server controller1 10.203.3.10:90 cookie controller1 check inter 2000 rise 2 fall 3
    server controller2 10.203.3.11:90 cookie controller2 check inter 2000 rise 2 fall 3
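`cookie SERVERID insert indirect nocache` makes HAProxy insert a SERVERID cookie naming the backend it picked, so later requests from the same browser stick to that controller. A rough sketch of the decision, using the server labels from this pool (the cookieless fallback shown here is an assumption standing in for the configured `balance source`):

```python
# Sketch of cookie-based persistence as configured with
# "cookie SERVERID insert indirect nocache": honour the SERVERID
# cookie when present, otherwise fall back to source-address hashing.
import hashlib

SERVERS = ["controller1", "controller2"]

def pick_server(cookies: dict, client_ip: str) -> str:
    sticky = cookies.get("SERVERID")
    if sticky in SERVERS:            # returning client: honour the cookie
        return sticky
    # first request: hash the source address onto a server
    digest = hashlib.sha256(client_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]
```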

listen keystone_common
    bind 10.203.3.9:5000
    balance source
    option tcpka
    option httpchk
    server keystone1 10.203.3.10:6000 check inter 2000 rise 2 fall 3
    server keystone2 10.203.3.11:6000 check inter 2000 rise 2 fall 3

listen keystone_admin
    bind 10.203.3.9:35357
    balance source
    option tcpka
    option httpchk
    server keystone1 10.203.3.10:45357 check inter 2000 rise 2 fall 3
    server keystone2 10.203.3.11:45357 check inter 2000 rise 2 fall 3
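`option httpchk` makes HAProxy probe each server with an HTTP request (an `OPTIONS /` by default) and treat 2xx/3xx responses as healthy; combined with `check inter 2000 rise 2 fall 3`, a server needs 2 consecutive passing checks to come up and 3 consecutive failures to go down. A sketch of that accounting (illustrative, not HAProxy's implementation):

```python
# Sketch of HAProxy rise/fall health accounting for
# "check inter 2000 rise 2 fall 3": 2 consecutive good checks bring a
# server UP, 3 consecutive bad ones take it DOWN.
class HealthCheck:
    def __init__(self, rise=2, fall=3):
        self.rise, self.fall = rise, fall
        self.up = True
        self.streak = 0  # consecutive results opposing the current state

    def report(self, status_code: int) -> bool:
        ok = 200 <= status_code < 400  # httpchk: 2xx/3xx count as healthy
        if ok == self.up:
            self.streak = 0            # state confirmed; reset the counter
            return self.up
        self.streak += 1
        if (ok and self.streak >= self.rise) or (not ok and self.streak >= self.fall):
            self.up, self.streak = ok, 0
        return self.up
```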

listen nova_compute_api
    bind 10.203.3.9:8774
    balance source
    option tcpka
    option httpchk
    server nova1 10.203.3.10:9774 check inter 2000 rise 2 fall 3
    server nova2 10.203.3.11:9774 check inter 2000 rise 2 fall 3

listen novncproxy
    bind 10.203.3.9:6080
    balance source
    option tcpka
    server nova1 10.203.3.10:7080 check inter 2000 rise 2 fall 3
    server nova2 10.203.3.11:7080 check inter 2000 rise 2 fall 3 backup
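In the novncproxy pool, nova2 is marked `backup`: it receives traffic only while every non-backup server is down. A minimal sketch of that selection rule, with hypothetical up/down flags:

```python
# Sketch of HAProxy "backup" semantics: backup servers only serve
# traffic when all primary (non-backup) servers are down.
def eligible_servers(servers):
    """servers: list of (name, is_up, is_backup) tuples."""
    primaries = [name for name, up, backup in servers if up and not backup]
    if primaries:
        return primaries
    return [name for name, up, backup in servers if up and backup]
```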

listen nova_metadata_api
    bind 10.203.3.9:8775
    balance source
    option tcpka
    server nova1 10.203.3.10:9775 check inter 2000 rise 2 fall 3
    server nova2 10.203.3.11:9775 check inter 2000 rise 2 fall 3

listen cinder_api
    bind 10.203.3.9:8776
    balance source
    option tcpka
    server cinder1 10.203.3.10:9776 check inter 2000 rise 2 fall 3
    server cinder2 10.203.3.11:9776 check inter 2000 rise 2 fall 3
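Every pool here uses `balance source`, which hashes the client's source address over the live servers, so a given client keeps hitting the same backend as long as the server set is stable. An illustrative sketch of the idea (HAProxy's actual hash function differs; the stickiness property is what matters):

```python
# Sketch of "balance source": hash the client IP over the currently
# live servers so the same client lands on the same backend.
import hashlib

def server_for(client_ip: str, live_servers: list) -> str:
    h = int.from_bytes(hashlib.sha256(client_ip.encode()).digest()[:4], "big")
    return live_servers[h % len(live_servers)]
```

Note that if a server is added or removed, the modulo changes and most clients are re-hashed onto different backends.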

listen glance_api
    bind 10.203.3.9:9292
    balance source
    option tcpka
    option httpchk
    server glance1 10.203.3.10:10292 check inter 2000 rise 2 fall 3
    server glance2 10.203.3.11:10292 check inter 2000 rise 2 fall 3

listen glance_registry
    bind 10.203.3.9:9191
    balance source
    option tcpka
    server glance1 10.203.3.10:10191 check inter 2000 rise 2 fall 3
    server glance2 10.203.3.11:10191 check inter 2000 rise 2 fall 3

listen neutron_api
    bind 10.203.3.9:9696
    balance source
    option tcpka
    option httpchk
    server neutron1 10.203.3.10:10696 check inter 2000 rise 2 fall 3
    server neutron2 10.203.3.11:10696 check inter 2000 rise 2 fall 3

listen rdb_mysql
    bind 10.203.3.9:3306
    balance source
    # MySQL traffic is not HTTP, so proxy it at the TCP layer
    mode tcp
    option mysql-check
    server mysql1 10.203.3.10:4306 check inter 2000 rise 2 fall 3
    server mysql2 10.203.3.11:4306 check inter 2000 rise 2 fall 3 backup

Finally, on both controller nodes allow HAProxy to bind the VIP 10.203.3.9 even when the address is not locally assigned (required on the standby node), e.g. in /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Date: 2024-12-13 05:55:14
