Deploying CloudFoundry v183 on OpenStack (IceHouse + Neutron)

I had long been limited to toying with CF inside virtual machines for lack of physical servers. Now that physical servers are available, and having worked through the individual OpenStack features, it is finally time for a real deployment. This article deploys cf-183 on OpenStack IceHouse with Neutron networking; very little material on Neutron-based deployments can be found online. The setup runs two HM instances, which removes the HealthManager single point of failure. NATS is still a single instance, but according to the official documentation NATS is very stable, and if the NATS VM dies it can be restored by BOSH.

Environment preparation

1. A finished OpenStack IceHouse deployment. Network mode: Neutron + OVS + VLAN; volume storage: Ceph. To deploy CloudFoundry on OpenStack, OpenStack must provide volume storage.

2. After OpenStack is installed, carry out the following preparation:

1. Configure the default security group rules.

2. Create a key pair

Create a key pair named cfkey and download it for later use. The name is arbitrary, but it is referenced in the configuration later on (a CLI sketch follows).
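With the nova CLI this is a one-liner; cfkey is the name used here, and the downloaded private key should be kept readable only by you:

nova keypair-add cfkey > cfkey.pem
chmod 600 cfkey.pem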

3. Add or modify flavors

Add three flavors, or modify the existing ones, to the sizes the deployment needs; this walkthrough later references cf.small, cf.medium and cf.big in its manifests (a CLI sketch follows).
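A hedged sketch of creating the three flavors with the nova CLI; the names match the manifests used later in this article, but the RAM/disk/vCPU values below are placeholders, not the author's actual sizing:

nova flavor-create cf.small  auto 2048 20 1    # placeholder sizing
nova flavor-create cf.medium auto 4096 40 2    # placeholder sizing
nova flavor-create cf.big    auto 8192 80 4    # placeholder sizing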

4. Adjust the quota limits of the OpenStack project.

5. Create the internal network used by CloudFoundry

Note: CloudFoundry needs a DNS server, so configure the internal DNS server address on this network when you create it. The internal DNS server used in this setup is 10.106.1.36 (see the DNS configuration section below).

6. Create a router and use it to connect the external network with net04 (a neutron CLI sketch follows).
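A sketch of the equivalent neutron CLI calls, assuming the subnet range used in the BOSH manifest below and the internal DNS server mentioned above; net04, net04-subnet, router04 and ext-net are placeholder names:

neutron net-create net04
neutron subnet-create net04 171.71.71.0/24 --name net04-subnet --dns-nameserver 10.106.1.36
neutron router-create router04
neutron router-gateway-set router04 ext-net
neutron router-interface-add router04 net04-subnet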

7. Adjust the Cinder quota from the command line

cinder quota-update <tenant_id> --volumes 500   # tenant_id is the ID of the admin tenant (get it with keystone tenant-list)

Restart the Cinder services:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

Deploying the BOSH CLI

Create a virtual machine on OpenStack; this article uses Ubuntu 12.04 64-bit.

1. Install a Ruby runtime

root@host:~# curl -L https://get.rvm.io | bash -s stable

After RVM is installed, open a new shell so the RVM environment is loaded, then install Ruby. Version 1.9.3 or later is required; this environment uses ruby-1.9.3-p484.

root@host:~# rvm install 1.9.3

To shorten installation time, switch the gem source to the Taobao mirror:

root@host:~# gem sources --remove https://rubygems.org/

root@host:~# gem sources -a https://ruby.taobao.org/

root@host:~# gem sources -l

2. Install the required packages

root@host:~# apt-get install git libxslt-dev libxml2-dev libmysql-ruby libmysqlclient-dev libpq-dev

3. Install the BOSH CLI

gem install bosh_cli_plugin_micro --pre

This downloads a series of gems and takes a while. When it completes, verify the BOSH CLI version:

root@host:~# bosh --version

BOSH 1.2710.0

4. Install fog and verify the OpenStack environment

4.1 Create a .fog file in root's home directory with the following content:

:openstack:
  :openstack_auth_url:  http://10.110.13.32:5000/v2.0/tokens
  :openstack_api_key:   123456a?   # password of the OpenStack admin user
  :openstack_username:  admin      # OpenStack admin user
  :openstack_tenant:    admin      # tenant name

4.2 Install fog

root@host:~# gem install fog

4.3 Start the fog console with the openstack profile

root@host:~# fog openstack

Test that the connection works (run inside the fog console):

>> Compute[:openstack].servers   # should return the list of servers

Deploying Micro BOSH

1. Download the Micro BOSH stemcell

root@host:~# mkdir -p ~/bosh-workspace/stemcells

root@host:~# cd ~/bosh-workspace/stemcells

root@host:~# wget http://bosh-jenkins-artifacts.s3.amazonaws.com/bosh-stemcell/openstack/bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz

(You can also download bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz with another tool and place it in this directory.)

2. Create the manifest file for deploying Micro BOSH

root@host:~# mkdir -p ~/bosh-workspace/deployments/microbosh-openstack

root@host:~# cd ~/bosh-workspace/deployments/microbosh-openstack

root@host:~# vi micro_bosh.yml

Contents:

---
name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: dynamic
  vip: 10.110.13.32 # floating IP
  cloud_properties:
    net_id: 0bb4ff64-4413-41a9-9c3b-b93d7b6f6db1

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: cf.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://10.110.13.2:5000/v2.0
      username: cloudfoundry                # OpenStack username
      api_key: 123456a?                     # OpenStack password
      tenant: cloudfoundry                  # OpenStack tenant
      default_security_groups: ["default"]  # use the default security group
      default_key_name: vkey                # name of the key pair created earlier
      private_key: ~/vkey.pem               # private key file uploaded by yourself

apply_spec:
  properties:
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.north-america.pool.ntp.org
      - 1.north-america.pool.ntp.org

3. Deploy Micro BOSH

Point the BOSH CLI at the Micro BOSH deployment:

root@host:~# cd ~/bosh-workspace/deployments

root@host:~/bosh-workspace/deployments# bosh micro deployment microbosh-openstack

Deploy Micro BOSH with the stemcell downloaded above:

root@host:~/bosh-workspace/deployments# bosh micro deploy ~/bosh-workspace/stemcells/bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz

On success the CLI prints the bosh target switch information.

4. Log in to the Micro BOSH director and create an account

Target the Micro BOSH director:

root@host:~/bosh-workspace/deployments# bosh target https://10.110.13.32:25555

The initial account is admin/admin:

root@host:~/bosh-workspace/deployments# bosh login

Your username: admin

Enter password: *****

Logged in as 'admin'

Check the BOSH status:

root@host:~# bosh status

Config

/root/.bosh_config

Director

Name       microbosh-openstack

URL        https://10.110.13.32:25555

Version    1.2719.1.0 (00000000)

User       admin

UUID       b9c17bd2-2e53-452f-a8e2-a6bfe391aca5

CPI        openstack

dns        enabled (domain_name: microbosh)

compiled_package_cache disabled

snapshots  disabled

Deployment

Manifest   /root/bosh-workspace/deployments/cf-183.yml

Deploying BOSH with Micro BOSH

Resource requirements: deploying BOSH needs eight VMs, one per BOSH component (during deployment each component turned out to need its own VM), so prepare eight internal IPs and two floating IPs.

1. Upload the BOSH stemcell to Micro BOSH

root@host:~/bosh-workspace/stemcells# bosh target https://10.110.13.32:25555

root@host:~/bosh-workspace/stemcells# bosh login

root@host:~/bosh-workspace/stemcells# bosh upload stemcell bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz

2. Fetch the BOSH release source and build it

root@host:~# cd ~/bosh-workspace

root@host:~/bosh-workspace# git clone https://github.com/cloudfoundry/bosh.git

root@host:~/bosh-workspace# cd bosh

root@host:~/bosh-workspace/bosh# bundle install --local   # if this fails, run plain "bundle install"; skip this step if you use an existing bosh release

root@host:~/bosh-workspace/bosh# bundle exec rake release:create_dev_release

3. Upload the BOSH release to Micro BOSH

root@host:~/bosh-workspace# bosh upload release ~/bosh-workspace/bosh/release/dev_releases/bosh/bosh-105+dev.1.yml

(If you use an existing bosh release, run instead: root@host:~# bosh upload release ~/bosh-workspace/bosh/release/releases/bosh-99.yml)

4. Confirm the uploaded stemcell and release

root@host:~# bosh stemcells

root@host:~# bosh releases

5. Create the manifest file for deploying BOSH

root@host:~# mkdir -p ~/bosh-workspace/deployments/bosh-openstack

root@host:~# cd ~/bosh-workspace/deployments/bosh-openstack

root@host:~# cp ~/bosh-workspace/bosh/release/examples/bosh-openstack-manual.yml bosh-openstack.yml

6. Edit bosh-openstack.yml

---
name: bosh-openstack
director_uuid: b9c17bd2-2e53-452f-a8e2-a6bfe391aca5 # CHANGE

release:
  name: bosh
  version: 99

compilation:
  workers: 2
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: cf.small # CHANGE

update:
  canaries: 1
  canary_watch_time: 3000-120000
  update_watch_time: 3000-120000
  max_in_flight: 4

networks:
- name: floating
  type: vip
  cloud_properties: {}
- name: default
  type: manual
  subnets:
  - name: private
    range: 171.71.71.0/24 # CHANGE
    gateway: 171.71.71.1 # CHANGE
    reserved:
    - 171.71.71.2 - 171.71.71.60 # CHANGE
    static:
    - 171.71.71.61 - 171.71.71.100 # CHANGE
    cloud_properties:
      net_id: 0bb4ff64-4413-41a9-9c3b-b93d7b6f6db1 # CHANGE

resource_pools:
- name: common
  network: default
  size: 8
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: 2719.2
  cloud_properties:
    instance_type: cf.small # CHANGE

jobs:
- name: nats
  template: nats
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.62 # CHANGE

- name: redis
  template: redis
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.63 # CHANGE

- name: postgres
  template: postgres
  instances: 1
  resource_pool: common
  persistent_disk: 16384
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.68 # CHANGE

- name: powerdns
  template: powerdns
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.64 # CHANGE

- name: blobstore
  template: blobstore
  instances: 1
  resource_pool: common
  persistent_disk: 51200
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.65 # CHANGE

- name: director
  template: director
  instances: 1
  resource_pool: common
  persistent_disk: 16384
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.69 # CHANGE
  - name: floating
    static_ips:
    - 10.110.13.35 # CHANGE

- name: registry
  template: registry
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.66 # CHANGE

- name: health_monitor
  template: health_monitor
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
    static_ips:
    - 171.71.71.67 # CHANGE

properties:
  nats:
    address: 171.71.71.62 # CHANGE
    user: nats
    password: c1oudc0w

  redis:
    address: 171.71.71.63 # CHANGE
    password: redis

  postgres: &bosh_db
    host: 171.71.71.68 # CHANGE
    user: postgres
    password: postgres
    database: bosh

  dns:
    address: 171.71.71.64 # CHANGE
    db: *bosh_db
    recursor: 10.110.13.36 # CHANGE

  blobstore:
    address: 171.71.71.65 # CHANGE
    agent:
      user: agent
      password: agent
    director:
      user: director
      password: director

  director:
    name: bosh
    address: 171.71.71.69 # CHANGE
    db: *bosh_db

  registry:
    address: 171.71.71.66 # CHANGE
    db: *bosh_db
    http:
      user: registry
      password: registry

  hm:
    http:
      user: hm
      password: hm
    director_account:
      user: admin
      password: admin
    resurrector_enabled: true

  ntp:
  - 0.north-america.pool.ntp.org
  - 1.north-america.pool.ntp.org

  openstack:
    auth_url: http://10.110.13.2:5000/v2.0 # CHANGE
    username: cloudfoundry # CHANGE
    api_key: 123456a? # CHANGE
    tenant: cloudfoundry # CHANGE
    default_security_groups: ["default"] # CHANGE
    default_key_name: vkey # CHANGE

7. Deploy BOSH

root@host:~/bosh-workspace/deployments# bosh deployment ~/bosh-workspace/deployments/bosh-openstack/bosh-openstack.yml

root@host:~/bosh-workspace/deployments# bosh deploy

Deploying CloudFoundry with BOSH

1. Fetch and update the code from GitHub:

root@host:~# mkdir -p ~/src/cloudfoundry

root@host:~# cd ~/src/cloudfoundry

root@host:~/src/cloudfoundry# git clone -b release-candidate https://github.com/cloudfoundry/cf-release.git

root@host:~/src/cloudfoundry# cd cf-release

root@host:~/src/cloudfoundry/cf-release# ./update

2. Build the CloudFoundry release package.

To build a release from the latest CloudFoundry source:

root@host:~/src/cloudfoundry/cf-release# bosh create release --force

To use an already tested, published release instead:

root@host:~/src/cloudfoundry/cf-release# bosh create release releases/cf-183.yml

Either way, a tgz archive such as cf-183.tgz is generated under the releases directory.

Note: this step is very time-consuming and prone to timeouts. Alternatively, use the release package shared by the community; the cf-183 download address is:

https://community-shared-boshreleases.s3.amazonaws.com/boshrelease-cf-183.tgz

3. Target the director and upload the CloudFoundry BOSH release and the stemcell

root@host:~/src/cloudfoundry/cf-release# bosh target https://10.110.13.32:25555

root@host:~/src/cloudfoundry/cf-release# bosh login

root@host:~/src/cloudfoundry/cf-release# bosh upload stemcell ~/bosh-workspace/stemcells/bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz

root@host:~/src/cloudfoundry/cf-release# bosh upload release releases/cf-183.tgz

Check that the uploads succeeded:

root@host:~# bosh releases

root@host:~# bosh stemcells

The login for VMs built from this stemcell is root/c1oudc0w.

4. Create the manifest file needed to deploy CloudFoundry

root@host:~/src/cloudfoundry/cf-release# cd ~/bosh-workspace/deployments/

root@host:~/bosh-workspace/deployments# vi cf-183.yml

5. Deploy:

root@host:~/bosh-workspace/deployments# bosh deployment cf-183.yml

root@host:~/bosh-workspace/deployments# bosh deploy

6. Deployment succeeds.

The cf-183.yml used in this environment:

<%
director_uuid = 'b9c17bd2-2e53-452f-a8e2-a6bfe391aca5'
root_domain = "inspurapp.com"
deployment_name = 'cf'
cf_release = '183'
protocol = 'http'
common_password = 'c1oudc0w'
%>
---
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
- name: cf
  version: <%= cf_release %>

compilation:
  workers: 2
  network: shared
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: cf.small

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
- name: shared
  type: dynamic
  cloud_properties:
    net_id: 0bb4ff64-4413-41a9-9c3b-b93d7b6f6db1
    security_groups:
    - default
- name: floating
  type: vip
  cloud_properties: {}

resource_pools:
- name: common
  network: shared
  size: 13
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: 2719.2
  cloud_properties:
    instance_type: cf.small
- name: meidium
  network: shared
  size: 2
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: 2719.2
  cloud_properties:
    instance_type: cf.medium
- name: large
  network: shared
  size: 2
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: 2719.2
  cloud_properties:
    instance_type: cf.big

jobs:
- name: nats
  templates:
  - name: nats
  - name: nats_stream_forwarder
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]

- name: health_manager
  templates:
  - name: hm9000
  instances: 2
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]

- name: etcd
  templates:
  - name: etcd
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]

- name: syslog_aggregator
  templates:
  - name: syslog_aggregator
  instances: 1
  resource_pool: common
  persistent_disk: 40960
  networks:
  - name: shared
    shared: [dns, gateway]

- name: nfs_server
  templates:
  - name: debian_nfs_server
  instances: 1
  resource_pool: common
  persistent_disk: 40960
  networks:
  - name: shared
    shared: [dns, gateway]

- name: postgres
  templates:
  - name: postgres
  instances: 1
  resource_pool: common
  persistent_disk: 40960
  networks:
  - name: shared
    shared: [dns, gateway]
  properties:
    db: databases

- name: loggregator
  templates:
  - name: loggregator
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]

- name: trafficcontroller
  templates:
  - name: loggregator_trafficcontroller
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]

- name: cloud_controller
  templates:
  - name: cloud_controller_ng
  instances: 2
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]
  properties:
    db: ccdb

- name: uaa
  templates:
  - name: uaa
  instances: 2
  resource_pool: common
  networks:
  - name: shared
    shared: [dns, gateway]

- name: dea
  templates:
  - name: dea_logging_agent
  - name: dea_next
  instances: 2
  resource_pool: large
  networks:
  - name: shared
    shared: [dns, gateway]

- name: router
  templates:
  - name: gorouter
  instances: 2
  resource_pool: meidium
  networks:
  - name: shared
    shared: [dns, gateway]
  properties:
    metron_agent:
      zone: nova

properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
  - <%= root_domain %>

  haproxy: {}

  networks:
    apps: shared

  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.shared.<%= deployment_name %>.microbosh
    port: 4222
    machines:
    - 0.nats.shared.<%= deployment_name %>.microbosh

  syslog_aggregator:
    address: 0.syslog-aggregator.shared.<%= deployment_name %>.microbosh
    port: 54321

  nfs_server:
    address: 0.nfs-server.shared.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    allow_from_entries:
    - 171.71.71.0/24

  debian_nfs_server:
    no_root_squash: true

  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.shared.<%= deployment_name %>.microbosh

  loggregator:
    zone: nova
    servers:
      zone:
      - 0.loggregator.shared.<%= deployment_name %>.microbosh

  traffic_controller:
    zone: 'nova'

  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80

  ssl:
    skip_cert_verify: true

  router:
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
      - 0.router.shared.<%= deployment_name %>.microbosh
      z2:
      - 1.router.shared.<%= deployment_name %>.microbosh

  etcd:
    machines:
    - 0.etcd.shared.<%= deployment_name %>.microbosh

  dea: &dea
    disk_mb: 40960
    disk_overcommit_factor: 2
    memory_mb: 8192
    memory_overcommit_factor: 1
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
    - 169.254.0.0/16 # Google Metadata endpoint

  dea_next: *dea

  metron_agent:
    zone: nova

  metron_endpoint:
    zone: nova
    shared_secret: <%= common_password %>

  disk_quota_enabled: true

  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>

  databases: &databases
    db_scheme: postgres
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true
    - tag: uaa
      name: uaadb
      citext: true

  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true

  ccdb_ng: *ccdb

  uaadb:
    db_scheme: postgresql
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: uaa
      name: uaadb
      citext: true

  cc: &cc
    security_group_definitions: []
    default_running_security_groups: []
    default_staging_security_groups: []
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
    - name: java_buildpack
      package: buildpack_java_offline
    - name: ruby_buildpack
      package: buildpack_ruby
    - name: nodejs_buildpack
      package: buildpack_nodejs
    - name: go_buildpack
      package: buildpack_go
    - name: php_buildpack
      package: buildpack_php
    - name: buildpack_python
      package: buildpack_python
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego: false
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>

  ccng: *cc

  login:
    enabled: false

  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
    scim:
      users:
      - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
      - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----
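After bosh deploy finishes, listing the deployment's VMs is a quick sanity check that every job reports running (a standard BOSH CLI command):

root@host:~/bosh-workspace/deployments# bosh vms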

Configuring DNS

Create a new virtual machine to act as the DNS server (this article reuses an existing DNS server at 10.106.1.36).

1. Install the BIND9 package

root@host:~# sudo apt-get install bind9

2. Two files need to be edited and two new files created, as follows.

Edit /etc/bind/named.conf.options and uncomment the forwarders block. The IPs there are the DNS servers provided by your ISP; here we use Google's DNS:

forwarders {
    8.8.8.8;
    8.8.4.4;
};

Edit /etc/bind/named.conf.local and append the forward and reverse zone definitions:

zone "iae.me" {
    type master;
    file "/etc/bind/db.iae.me";
};

zone "26.68.10.in-addr.arpa" {
    type master;
    file "/etc/bind/db.26.68.10";
};

Note: 26.68.10 is the first three octets of the target IP 10.68.26.91 (the HAProxy address) written in reverse order; it denotes the 10.68.26.0/24 address range.

Create the forward zone file /etc/bind/db.iae.me for the iae.me domain with the following content:

;

; BIND data file for dev sites

;

$TTL    604800

@       IN      SOA     mycloud.com. root.mycloud.com. (

1         ; Serial

604800         ; Refresh

86400         ; Retry

2419200         ; Expire

604800 )       ; Negative Cache TTL

;

@       IN      NS      mycloud.com.

@       IN      A       10.68.26.91

*.iae.me.  14400   IN      A       10.68.26.91

Create the reverse zone file /etc/bind/db.26.68.10 (the file name referenced in named.conf.local above) with the following content:

;

; BIND reverse data file for dev domains

;

$TTL    604800

@       IN      SOA     dev. root.dev. (

1         ; Serial

604800         ; Refresh

86400         ; Retry

2419200         ; Expire

604800 )       ; Negative Cache TTL

;

@        IN      NS      iae.me.

91      IN      PTR     iae.me.
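Before restarting BIND, the configuration and zone files can be syntax-checked with BIND's bundled tools (file names as above):

named-checkconf /etc/bind/named.conf.local
named-checkzone iae.me /etc/bind/db.iae.me
named-checkzone 26.68.10.in-addr.arpa /etc/bind/db.26.68.10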

3. Restart the BIND9 service

root@host:~# service bind9 restart
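A quick way to verify resolution against the new server; dig queries it directly, and api.iae.me is just an arbitrary name under the wildcard record:

dig @10.106.1.36 api.iae.me +short        # should return 10.68.26.91
dig @10.106.1.36 -x 10.68.26.91 +short    # should return iae.me.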

Note: the DNS server should be set up before the overall system is planned, and its address configured on the OpenStack network at creation time, so that every VM gets this address in its default DNS list.

Configuring HAProxy

This article builds and installs HAProxy from source.

1. root@host:~# tar xvf haproxy-1.5.0.tar.gz

2. root@host:~# cd haproxy-1.5.0

3. root@host:~# make TARGET=ubuntu34    (linux2628 is the usual TARGET for Linux kernels >= 2.6.28)

4. root@host:~# make install PREFIX=/usr/local/haproxy

5. root@host:~# cd /etc/

mkdir haproxy

cd haproxy/

vi haproxy.cfg and add the following configuration:

global
    daemon
    maxconn 300000
    spread-checks 4
    nbproc 8
    log 127.0.0.1 local0 info

defaults
    log global
    #log 10.41.2.86:5140 syslog
    #log 10.106.1.34:5140 syslog
    option httplog
    mode http
    # log  127.0.0.1   local0 info
    timeout connect 30000ms
    timeout client 300000ms
    timeout server 300000ms
    # maxconn 320000
    # option http-pretend-keepalive
    option dontlognull
    option forwardfor
    option redispatch
    option abortonclose

listen admin_stats
    bind 0.0.0.0:1080               # listening port
    mode http                       # HTTP (layer 7) mode
    option httplog                  # use the HTTP log format
    maxconn 10
    stats refresh 30s               # auto-refresh interval of the stats page
    stats uri /stats                # stats page URL
    stats realm XingCloud\ Haproxy  # prompt text of the stats page password dialog
    stats auth admin:admin          # stats page username and password
    stats hide-version              # hide the HAProxy version on the stats page

frontend http-in
    mode http
    bind *:80
    log-format ^%ci:%cp^[%t]^%ft^%b/%s^%hr^%r^%ST^%B^%Tr^%Ts
    capture request header Host len 32
    reqadd X-Forwarded-Proto:\ http
    default_backend http-routers

backend tcp-routers
    mode tcp
    balance source
    #   server node1 10.106.1.46:80 weight 10
    #   server node2 10.106.1.57:80 weight 10
    server node1 192.168.136.148:80  weight 10 cookie app1inst1 check inter 2000 rise 2 fall 5 maxconn 10000
    server node2 192.168.136.155:80  weight 10 cookie app1inst2 check inter 2000 rise 2 fall 5 maxconn 10000
    #    server node3 10.106.1.27:80  weight 10 cookie app1inst2 check inter 2000 rise 2 fall 5 maxconn 10000

backend http-routers
    mode http
    balance source
    #server node1 10.106.1.46:80  weight 50 cookie app1inst1 check inter 2000 rise 2 fall 5
    # server node2 10.106.1.57:80  weight 3
    server node1 192.168.136.148:80  weight 50 cookie app1inst1 check inter 2000 rise 2 fall 5 maxconn 10000
    server node2 192.168.136.155:80  weight 50 cookie app1inst2 check inter 2000 rise 2 fall 5 maxconn 10000
    #server node3 10.106.1.27:80  weight 50 cookie app1inst3 check inter 2000 rise 2 fall 5 maxconn 10000

6. Start HAProxy:

/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg

To stop HAProxy, kill the process with kill -9.
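Instead of killing the process after a configuration change, HAProxy's own soft-reload switch can be used; a sketch assuming the install prefix and config path above:

/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)   # start new workers, let the old ones finish their connections and exit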

7. To get the best performance out of HAProxy, some tuning is applied on the HAProxy server.

Load the connection-tracking module: modprobe ip_conntrack

Edit /etc/sysctl.conf with the settings below (apply them with the command shown after the list):

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

net.ipv4.tcp_max_tw_buckets = 6000

net.ipv4.tcp_sack = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_rmem = 4096 87380 4194304

net.ipv4.tcp_wmem = 4096 16384 4194304

net.core.wmem_default = 8388608

net.core.rmem_default = 8388608

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

net.core.netdev_max_backlog = 262144

net.core.somaxconn = 262144

net.ipv4.tcp_max_orphans = 3276800

net.ipv4.tcp_max_syn_backlog = 262144

net.ipv4.tcp_timestamps = 0

net.ipv4.tcp_synack_retries = 1

net.ipv4.tcp_syn_retries = 1

net.ipv4.tcp_tw_recycle = 1

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_mem = 94500000 915000000 927000000

net.ipv4.tcp_fin_timeout = 1

net.ipv4.tcp_keepalive_time = 30

net.ipv4.ip_local_port_range = 1024 65000

net.nf_conntrack_max = 1024000
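Apply the settings without a reboot:

sysctl -p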

Testing the deployment

Install the cf command-line client (cli), download address: https://github.com/cloudfoundry/cli . A minimal smoke test follows.
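A sketch of logging in and pushing an app with the cf CLI, assuming the admin user/password and the inspurapp.com domain from the cf-183.yml above; the org and space names are placeholders you would create first:

cf api http://api.inspurapp.com --skip-ssl-validation
cf login -u admin -p c1oudc0w -o admin -s dev
cf push myapp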

Scaling CloudFoundry out and in

Scaling out or in is done by editing the deployment manifest, for example by changing a job's instances count (a sketch follows below).
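For instance, to grow from 2 to 4 DEAs you would edit the dea job in cf-183.yml; the numbers here are only an example, and the size of the large resource pool has to grow by the same amount:

jobs:
- name: dea
  templates:
  - name: dea_logging_agent
  - name: dea_next
  instances: 4        # was 2
  resource_pool: large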

Then run the following commands to apply the change:

root@host:~/bosh-workspace/deployments# bosh deployment cf-183.yml

root@host:~/bosh-workspace/deployments# bosh deploy

Rolling upgrade of CloudFoundry

Not verified yet.

Building an offline buildpack

1. Download the official java-buildpack:

git clone https://github.com/cloudfoundry/java-buildpack.git

2. Install the project dependencies:

cd java-buildpack

bundle install

3. Build the offline buildpack package:

bundle exec rake package OFFLINE=true

4. Upload the buildpack:

cf create-buildpack java-buildpack-offline build/java-buildpack-offline-c19642c.zip 5 --enable

Environment summary

10.68.26.91 haproxy

10.68.26.87 BOSH CLI client

10.68.26.92 cf command-line client

BOSH director: 10.68.26.

Common problems:

1. The OpenStack tenant runs out of volume quota; increase the tenant's volume quota from the command line:

cinder quota-update <tenant_id> --volumes 500   # tenant_id is the ID of the admin tenant (get it with keystone tenant-list)

Then restart the Cinder services:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

2. Micro BOSH's DNS does not take effect.

Solution: re-plan the network and configure the internal DNS inside the CloudFoundry subnet so that the whole subnet uses it; concretely, create a subnet for CF with its DNS set to 10.106.1.36.

Changing the DNS of an OpenStack network requires restarting the VMs.

3. A dedicated cloudfoundry tenant is used: cloudfoundry / 123456a?

4. CF's default Java buildpack is online-only; an offline buildpack has to be built (see the steps above).

5. Adjust the Tomcat settings of the Java buildpack (with the default settings, application start-up takes very long).

6. How to run multiple HM instances and multiple NATS instances.

7. If OpenStack is configured for hostname-based access, configure hosts entries on the BOSH CLI machine and also on the BOSH director VM, adding a line of the form *.*.*.*  controller.
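For item 7, the hosts entry would look like the following; the IP is an assumption taken from the Keystone endpoint used in the manifests above and must be replaced with your actual controller address:

echo "10.110.13.2  controller" >> /etc/hosts    # placeholder controller IP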
