OpenStack Newton + Ceph Integration Deployment Notes

In February 2017 the OpenStack Ocata release officially shipped; this post records a deployment of the previous release, Newton, together with Ceph Jewel. The host OS is CentOS 7.2.

Basic version:

192.168.0.0/24 and 192.168.1.0/24 are used by Ceph: the north-south network (public_network) and the east-west network (cluster_network), respectively.

10.0.0.0/24 is the OpenStack management network.

172.16.0.0/24 is the provider/external network on which OpenStack Neutron builds an OVS bridge for tenant traffic.

Crudely merging them all into a single network would work, but it is not recommended in production.

Deploy the core IaaS services: Keystone (identity), Glance (image), Nova (compute), Neutron (networking), Cinder (block storage) and the Horizon dashboard. Deploy a Ceph cluster with ceph-deploy as the backend for images, compute and block storage.

Sample Ceph configuration

[global]
mon_initial_members = controller, network, storage
mon_host = 192.168.0.11,192.168.0.12,192.168.0.13
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
filestore_xattr_use_omap = true
osd_pool_default_size = 2
mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
mon_pg_warn_max_per_osd = 1000
public_network = 192.168.0.0/24
cluster_network = 192.168.1.0/24

3 MONs + one OSD per data disk, deployed quickly with ceph-deploy and with cephx disabled (no keyrings). Create three pools: glance, nova, cinder.
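The bootstrap can be sketched roughly as follows. The hostnames match mon_initial_members above; the OSD device names are illustrative assumptions, not from the original notes.

```shell
# Run from the ceph-deploy admin node; assumes passwordless SSH to all hosts.
ceph-deploy new controller network storage      # writes the initial ceph.conf
# (merge the settings above into the generated ceph.conf before continuing)
ceph-deploy install controller network storage
ceph-deploy mon create-initial
# one OSD per data disk -- sdb is a placeholder for your actual disks
ceph-deploy osd create controller:sdb network:sdb storage:sdb
# the three pools used by Glance, Nova and Cinder (pg_num per the formula below)
ceph osd pool create glance 128
ceph osd pool create nova 128
ceph osd pool create cinder 128
```

Since auth_*_required is set to none, no client keyrings need to be distributed; only /etc/ceph/ceph.conf has to be present on the OpenStack nodes.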

Adjust pg_num and pgp_num to suit your cluster.

“Formula:

Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count

Round the result up to the nearest power of two. For example, with 160 OSDs in total, a replica count of 3 and 3 pools, the formula yields 1777.7; the nearest power of two above that is 2048, so each pool is assigned 2048 PGs.”
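The arithmetic in the quoted example can be checked with a few lines of shell (pure integer math, using the numbers from the example):

```shell
#!/bin/sh
osds=160; replicas=3; pools=3
raw=$(( osds * 100 / replicas / pools ))   # 16000 / 3 / 3 = 1777 (integer division)
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done   # round up to a power of two
echo "pg_num per pool: $pg"                # prints "pg_num per pool: 2048"
```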

Sample OpenStack component configuration

  • controller
keystone.conf:
[database]
connection = mysql+pymysql://keystone:[email protected]/keystone
glance-api.conf:
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = yourpasswd
[paste_deploy]
flavor = keystone
[glance_store]
stores = rbd
default_store = rbd
show_image_direct_url = True
rbd_store_pool = glance
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
glance-registry.conf:
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = yourpasswd
[paste_deploy]
flavor = keystone
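After restarting the Glance services, a quick sanity check that images really land in the glance pool. The image file name is just an example; converting to raw first is a common practice because Ceph cannot COW-clone qcow2 images.

```shell
# Ceph RBD works best with raw images (qcow2 cannot be COW-cloned by the cluster)
qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros.raw
openstack image create "cirros" --file cirros.raw \
  --disk-format raw --container-format bare --public
rbd ls glance    # the new image's UUID should be listed here
```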
nova.conf:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
metadata_listen = 10.0.0.11
metadata_listen_port = 8775
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api
[database]
connection = mysql+pymysql://nova:[email protected]/nova
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = yourpasswd
[vnc]
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.11
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = yourpasswd
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
[cinder]
os_region_name = RegionOne
neutron.conf:
[database]
connection = mysql+pymysql://neutron:[email protected]/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:[email protected]
rpc_response_timeout = 180
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = yourpasswd
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = yourpasswd
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
neutron/plugin.ini:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider  # use this if the provider network type is flat
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = provider:1:1000  # use this if the provider network type is vlan
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
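With ML2 configured and the agents running, a flat provider network on 172.16.0.0/24 could be created roughly as follows. The network/subnet names, allocation pool and gateway are assumptions, not from the original notes.

```shell
openstack network create provider --share --external \
  --provider-physical-network provider \
  --provider-network-type flat
openstack subnet create provider-subnet --network provider \
  --allocation-pool start=172.16.0.100,end=172.16.0.200 \
  --gateway 172.16.0.1 --subnet-range 172.16.0.0/24
```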
cinder.conf:
[DEFAULT]
enable_v1_api = True
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
my_ip = 10.0.0.11
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[key_manager]
[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_url = http://10.0.0.11:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = yourpasswd
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
  • network
neutron.conf:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:[email protected]
rpc_response_timeout = 180
auth_strategy = keystone
[agent]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = yourpasswd
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
neutron/metadata_agent.ini:
[DEFAULT]
nova_metadata_ip = 10.0.0.11
nova_metadata_port = 8775
metadata_proxy_shared_secret = METADATA_SECRET
neutron/l3_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge=
metadata_port = 9697
openvswitch_agent.ini:
[ovs]
local_ip = 10.0.0.12
bridge_mappings = provider:br-provider
[agent]
tunnel_types = vxlan
l2_population = True
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
shell command:
ovs-vsctl add-br br-provider

ovs-vsctl add-port br-provider [NIC on 172.16.0.12]
  • storage
cinder.conf:
[DEFAULT]
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
my_ip = 10.0.0.13
enabled_backends = ceph
glance_api_servers = http://controller:9292
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_pool = cinder
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = yourpasswd
[ceph]
volume_backend_name = ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
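Note that backup_ceph_chunk_size is in bytes (134217728 = 128 MiB) while rbd_store_chunk_size is in MB. After restarting cinder-volume, a quick check that volumes really land in the cinder pool (volume name and size are illustrative):

```shell
openstack volume create --size 1 testvol
openstack volume list     # testvol should reach status "available"
rbd ls cinder             # shows volume-<uuid> for the new volume
```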
  • compute01-03
nova.conf:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
# my_ip is 10.0.0.15 / 10.0.0.16 on compute02 / compute03
my_ip = 10.0.0.14
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = yourpasswd
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://10.0.0.11:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = yourpasswd
metadata_proxy_shared_secret = METADATA_SECRET
[libvirt]
images_type = rbd
images_rbd_pool = nova
images_rbd_ceph_conf = /etc/ceph/ceph.conf
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
inject_password = false
inject_key = false
inject_partition = -2
shell command:
ovs-vsctl add-br br-provider

ovs-vsctl add-port br-provider [NIC on 172.16.0.14/15/16]
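Once a compute node is up, an instance booted from a Glance image should place its disk in the nova pool. A rough check (the flavor, image and network names are illustrative):

```shell
openstack server create testvm --flavor m1.tiny --image cirros \
  --nic net-id=$(openstack network show selfservice -f value -c id)
rbd ls nova    # shows <instance-uuid>_disk once the instance is ACTIVE
```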

Enhanced version:

An architecture with HA capability.

References:

OpenStack Docs: Newton

Networking configuration options

