Deploying Cloud Foundry on OpenStack Juno and XenServer (Part II)

Link: http://rabbitstack.github.io/deploying-cloud-foundry-on-openstack-juno-and-xenserver-part-ii/

Let's move on. We should now have our OpenStack instance prepared for Cloud Foundry. The most common way to deploy Cloud Foundry is through BOSH. For those who haven't heard of it, BOSH is a platform for the automation and lifecycle management of software and distributed services. It is also capable of monitoring processes and virtual machines and recovering them from failures. There are already a few IT automation platforms on the market, such as Chef or Puppet, so why learn and use BOSH?

One notable difference is that BOSH is able to perform the deployment from a sterile environment: it packages source code and dependencies, creates the virtual machines (jobs in BOSH terminology) from a so-called stemcell template (a VM image with the BOSH agent installed, used to generate the jobs), and finally installs, starts, and monitors the required services and VMs. See the official BOSH documentation to learn more.

Deploying MicroBOSH

MicroBOSH is a single VM which contains all the components necessary to boot BOSH, including the blobstore, NATS, the Director, the Health Monitor, etc. Once you have a MicroBOSH instance running, you can use it to deploy a full BOSH if you wish. Install the BOSH CLI gems (Ruby >= 1.9.3 is required).

$ gem install bosh_cli bosh_cli_plugin_micro

You will need to create a keypair in OpenStack and configure a bosh security group with the rules shown in the table below. You can do this from the Horizon dashboard or with the nova CLI.

Direction   IP Protocol   Port Range   Remote
Ingress     TCP           1-65535      bosh
Ingress     TCP           22 (SSH)     0.0.0.0/0 (CIDR)
Ingress     TCP           53 (DNS)     0.0.0.0/0 (CIDR)
Ingress     TCP           4222         0.0.0.0/0 (CIDR)
Ingress     TCP           6868         0.0.0.0/0 (CIDR)
Ingress     TCP           25250        0.0.0.0/0 (CIDR)
Ingress     TCP           25555        0.0.0.0/0 (CIDR)
Ingress     TCP           25777        0.0.0.0/0 (CIDR)
Ingress     UDP           53           0.0.0.0/0 (CIDR)
Ingress     UDP           68           0.0.0.0/0 (CIDR)
$ nova keypair-add microbosh > microbosh.pem
$ chmod 600 microbosh.pem
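If you prefer the command line over Horizon, the rules can be scripted. The sketch below is an assumption on my part (it is not from the original post): it prints the nova CLI calls for the security group, including TCP 22 for SSH access, so you can review them and then pipe the output to sh to apply them.

```shell
# Emit the nova CLI calls for the bosh security group described above.
# Review the output, then run: bosh_secgroup_cmds | sh
bosh_secgroup_cmds() {
  echo 'nova secgroup-create bosh "BOSH security group"'
  # TCP ports open to any address (SSH, DNS, NATS, agent, registry, director, ...)
  for port in 22 53 4222 6868 25250 25555 25777; do
    echo "nova secgroup-add-rule bosh tcp $port $port 0.0.0.0/0"
  done
  # UDP ports for DNS and DHCP
  for port in 53 68; do
    echo "nova secgroup-add-rule bosh udp $port $port 0.0.0.0/0"
  done
  # let members of the bosh group reach each other on any TCP port
  echo 'nova secgroup-add-group-rule bosh bosh tcp 1 65535'
}

bosh_secgroup_cmds
```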

BOSH uses a variety of artifacts to complete the deployment life cycle; the basic ones are the stemcell, the release, and the deployment. To deploy MicroBOSH we will only need a stemcell, which can be downloaded using the bosh CLI. First get a list of available stemcells and download bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz.

$ bosh public stemcells
+-----------------------------------------------------------------+
| Name                                                            |
+-----------------------------------------------------------------+
| bosh-stemcell-2427-aws-xen-ubuntu.tgz                           |
| bosh-stemcell-2652-aws-xen-centos.tgz                           |
| bosh-stemcell-2839-aws-xen-centos-go_agent.tgz                  |
| bosh-stemcell-2427-aws-xen-ubuntu-go_agent.tgz                  |
| bosh-stemcell-2710-aws-xen-ubuntu-lucid-go_agent.tgz            |
| bosh-stemcell-2652-aws-xen-ubuntu-lucid.tgz                     |
| bosh-stemcell-2839-aws-xen-ubuntu-trusty-go_agent.tgz           |
| bosh-stemcell-2690.6-aws-xen-ubuntu-trusty-go_agent.tgz         |
| bosh-stemcell-2719.1-aws-xen-centos-go_agent.tgz                |
| bosh-stemcell-2719.1-aws-xen-ubuntu-trusty-go_agent.tgz         |
| bosh-stemcell-2719.2-aws-xen-centos-go_agent.tgz                |
| bosh-stemcell-2719.2-aws-xen-ubuntu-trusty-go_agent.tgz         |
| bosh-stemcell-2719.3-aws-xen-ubuntu-trusty-go_agent.tgz         |
| light-bosh-stemcell-2427-aws-xen-ubuntu.tgz                     |
| light-bosh-stemcell-2652-aws-xen-centos.tgz                     |
| light-bosh-stemcell-2839-aws-xen-centos-go_agent.tgz            |
| light-bosh-stemcell-2427-aws-xen-ubuntu-go_agent.tgz            |
| light-bosh-stemcell-2710-aws-xen-ubuntu-lucid-go_agent.tgz      |
| light-bosh-stemcell-2652-aws-xen-ubuntu-lucid.tgz               |
| light-bosh-stemcell-2839-aws-xen-ubuntu-trusty-go_agent.tgz     |
| light-bosh-stemcell-2690.6-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2719.1-aws-xen-centos-go_agent.tgz          |
| light-bosh-stemcell-2719.1-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2719.2-aws-xen-centos-go_agent.tgz          |
| light-bosh-stemcell-2719.2-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2719.3-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2839-aws-xen-hvm-centos-go_agent.tgz        |
| light-bosh-stemcell-2839-aws-xen-hvm-ubuntu-trusty-go_agent.tgz |
| bosh-stemcell-2427-openstack-kvm-ubuntu.tgz                     |
| bosh-stemcell-2624-openstack-kvm-centos.tgz                     |
| bosh-stemcell-2624-openstack-kvm-ubuntu-lucid.tgz               |
| bosh-stemcell-2839-openstack-kvm-centos-go_agent.tgz            |
| bosh-stemcell-2839-openstack-kvm-ubuntu-trusty-go_agent.tgz     |
| bosh-stemcell-2652-openstack-kvm-ubuntu-lucid-go_agent.tgz      |
| bosh-stemcell-2719.1-openstack-kvm-centos-go_agent.tgz          |
| bosh-stemcell-2719.1-openstack-kvm-ubuntu-trusty-go_agent.tgz   |
| bosh-stemcell-2719.2-openstack-kvm-centos-go_agent.tgz          |
| bosh-stemcell-2719.2-openstack-kvm-ubuntu-trusty-go_agent.tgz   |
| bosh-stemcell-2719.3-openstack-kvm-ubuntu-trusty-go_agent.tgz   |
| bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz        |
| bosh-stemcell-2839-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz |
| bosh-stemcell-2427-vcloud-esxi-ubuntu.tgz                       |
| bosh-stemcell-2652-vcloud-esxi-ubuntu-lucid.tgz                 |
| bosh-stemcell-2839-vcloud-esxi-ubuntu-trusty-go_agent.tgz       |
| bosh-stemcell-2690.5-vcloud-esxi-ubuntu-trusty-go_agent.tgz     |
| bosh-stemcell-2690.6-vcloud-esxi-ubuntu-trusty-go_agent.tgz     |
| bosh-stemcell-2710-vcloud-esxi-ubuntu-lucid-go_agent.tgz        |
| bosh-stemcell-2427-vsphere-esxi-ubuntu.tgz                      |
| bosh-stemcell-2624-vsphere-esxi-centos.tgz                      |
| bosh-stemcell-2839-vsphere-esxi-centos-go_agent.tgz             |
| bosh-stemcell-2427-vsphere-esxi-ubuntu-go_agent.tgz             |
| bosh-stemcell-2710-vsphere-esxi-ubuntu-lucid-go_agent.tgz       |
| bosh-stemcell-2624-vsphere-esxi-ubuntu-lucid.tgz                |
| bosh-stemcell-2839-vsphere-esxi-ubuntu-trusty-go_agent.tgz      |
| bosh-stemcell-2719.1-vsphere-esxi-centos-go_agent.tgz           |
| bosh-stemcell-2719.1-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-2719.2-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-2719.2-vsphere-esxi-centos-go_agent.tgz           |
| bosh-stemcell-2719.3-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-2690.6-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-389-warden-boshlite-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-53-warden-boshlite-ubuntu.tgz                     |
| bosh-stemcell-389-warden-boshlite-centos-go_agent.tgz           |
| bosh-stemcell-64-warden-boshlite-ubuntu-lucid-go_agent.tgz      |
+-----------------------------------------------------------------+
$ bosh download public stemcell bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz
bosh-stemcell:   4% |ooo                              |  24.4MB 753.0KB/s ETA:  00:11:43

Now we are ready to create the MicroBOSH deployment manifest, microbosh-openstack.yml. You will need to replace net_id with your OpenStack network identifier and ip with an IP address from the network pool. You can find this information by executing the following commands.

$ nova network-list
+--------------------------------------+----------+----------------+
| ID                                   | Label    | Cidr           |
+--------------------------------------+----------+----------------+
| 3f36d40e-1097-49a0-a023-4606dbf3a1f5 | yuna-net | 192.168.1.0/24 |
+--------------------------------------+----------+----------------+

$ nova network-show 3f36d40e-1097-49a0-a023-4606dbf3a1f5
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| bridge              | xenbr0                               |
| bridge_interface    | eth0                                 |
| broadcast           | 192.168.1.255                        |
| cidr                | 192.168.1.0/24                       |
| cidr_v6             | -                                    |
| created_at          | 2014-12-28T17:18:14.000000           |
| deleted             | False                                |
| deleted_at          | -                                    |
| dhcp_server         | 192.168.1.50                         |
| dhcp_start          | 192.168.1.51                         |
| dns1                | 8.8.4.4                              |
| dns2                | -                                    |
| enable_dhcp         | True                                 |
| gateway             | 192.168.1.50                         |
| gateway_v6          | -                                    |
| host                | -                                    |
| id                  | 3f36d40e-1097-49a0-a023-4606dbf3a1f5 |
| injected            | False                                |
| label               | yuna-net                             |
| mtu                 | -                                    |
| multi_host          | True                                 |
| netmask             | 255.255.255.0                        |
| netmask_v6          | -                                    |
| priority            | -                                    |
| project_id          | -                                    |
| rxtx_base           | -                                    |
| share_address       | True                                 |
| updated_at          | -                                    |
| vlan                | -                                    |
| vpn_private_address | -                                    |
| vpn_public_address  | -                                    |
| vpn_public_port     | -                                    |
+---------------------+--------------------------------------+

Under the openstack section, change the Identity service endpoint (auth_url), the OpenStack credentials, and the private key location; optionally set the timeout for OpenStack resources.

---
name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: manual
  ip: 192.168.1.55
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://controller:5000/v2.0
      username: admin
      api_key: admin
      tenant: admin
      default_security_groups: ["bosh"]
      default_key_name: microbosh
      private_key: /root/microbosh.pem
      state_timeout: 900

apply_spec:
  properties:
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.europe.pool.ntp.org
      - 1.europe.pool.ntp.org

Finally, set the current deployment manifest file and deploy MicroBOSH.

$ bosh micro deployment microbosh-openstack.yml
$ bosh micro deploy bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz

If everything goes well you should be able to log in to the MicroBOSH instance (use admin for both the username and password).

$ bosh target 192.168.1.55
Target set to 'microbosh-openstack'
Your username: admin
Enter password: *****
Logged in as 'admin'

Deploying Cloud Foundry

Start by cloning the Cloud Foundry repository. Enter the newly created cf-release directory and execute the update script to update all submodules.

$ git clone https://github.com/cloudfoundry/cf-release.git
$ cd cf-release
$ ./update

Upload the stemcell to the BOSH Director.

$ bosh upload stemcell bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz

In BOSH terminology, a release is a collection of packages, source code, dependencies, configuration properties, and any other components required to perform a deployment. To create a Cloud Foundry release, run this command from the cf-release directory.

$ bosh create release

This will download the required blobs from the S3 storage service and generate a release tarball. You should end up with directory structures similar to these.

$ ls blobs
buildpack_cache    git           haproxy         mysql             php-buildpack     rootfs      ruby-buildpack
cli                go-buildpack  java-buildpack  nginx             postgres          ruby        sqlite
debian_nfs_server  golang        libyaml         nodejs-buildpack  python-buildpack  ruby-2.1.4  uaa

$ ls packages
acceptance-tests        buildpack_python     dea_next             golang     loggregator_trafficcontroller  postgres        warden
buildpack_cache         buildpack_ruby       debian_nfs_server    golang1.3  login                          rootfs_lucid64
buildpack_go            cli                  doppler              gorouter   metron_agent                   ruby
buildpack_java          cloud_controller_ng  etcd                 haproxy    mysqlclient                    ruby-2.1.4
buildpack_java_offline  collector            etcd_metrics_server  hm9000     nats                           smoke-tests
buildpack_nodejs        common               git                  libpq      nginx                          sqlite
buildpack_php           dea_logging_agent    gnatsd               libyaml    nginx_newrelic_plugin          uaa

Now you can upload the release to the BOSH Director.

$ bosh upload release

The most complex part of a Cloud Foundry BOSH deployment is the manifest file, where all the components are tied together - compute resource specifications, VMs, software releases, and configuration properties. You can use the deployment manifest below, which worked well in my environment. Don’t forget to create the cf.small and cf.medium flavors in OpenStack.
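The flavors can be created with the nova CLI. The sizes below are only a guess on my part (roughly m1.small/m1.medium equivalents) - tune RAM, disk, and vCPUs to the capacity of your compute nodes.

```shell
# Hypothetical sizes -- adjust to your hardware.
# Usage: nova flavor-create <name> <id> <ram_mb> <disk_gb> <vcpus>
nova flavor-create cf.small  auto 2048 20 1
nova flavor-create cf.medium auto 4096 40 2
```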

<%
director_uuid = 'YOUR_DIRECTOR_UUID'
static_ip = 'YOUR_FLOATING_IP'
root_domain = "#{static_ip}.xip.io"
deployment_name = 'cf'
cf_release = '194+dev.2'
protocol = 'http'
common_password = 'YOUR_PASSWORD'
%>
---
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
- name: cf
  version: <%= cf_release %>

compilation:
  workers: 2
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: cf.medium

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
- name: default
  type: dynamic
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5
    security_groups:
    - default
    - bosh
    - cf-private
- name: external
  type: dynamic
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5
    security_groups:
    - default
    - bosh
    - cf-private
    - cf-public
- name: float
  type: vip
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5

resource_pools:
- name: common
  network: default
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent-ft
    version: latest
  cloud_properties:
    instance_type: cf.small
- name: large
  network: default
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent-ft
    version: latest
  cloud_properties:
    instance_type: cf.medium

jobs:
- name: nats
  templates:
  - name: nats
  - name: nats_stream_forwarder
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: nfs_server
  templates:
  - name: debian_nfs_server
  instances: 1
  resource_pool: common
  persistent_disk: 65535
  networks:
  - name: default
    default: [dns, gateway]

- name: postgres
  templates:
  - name: postgres
  instances: 1
  resource_pool: common
  persistent_disk: 65536
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    db: databases

- name: uaa
  templates:
  - name: uaa
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: trafficcontroller
  templates:
  - name: loggregator_trafficcontroller
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: cloud_controller
  templates:
  - name: nfs_mounter
  - name: cloud_controller_ng
  instances: 1
  resource_pool: large
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    db: ccdb

- name: health_manager
  templates:
  - name: hm9000
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: dea
  templates:
  - name: dea_logging_agent
  - name: dea_next
  instances: 2
  resource_pool: large
  networks:
  - name: default
    default: [dns, gateway]

- name: router
  templates:
  - name: gorouter
  instances: 1
  resource_pool: common
  networks:
  - name: external
    default: [dns, gateway]
  - name: float
    static_ips:
    - <%= static_ip %>
  properties:
    networks:
      apps: external

properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
  - <%= root_domain %>

  haproxy: {}

  networks:
    apps: default

  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.default.<%= deployment_name %>.microbosh
    port: 4222
    machines:
    - 0.nats.default.<%= deployment_name %>.microbosh

  nfs_server:
    address: 0.nfs-server.default.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    allow_from_entries:
    - 192.168.1.0/24 # change according to your subnet

  debian_nfs_server:
    no_root_squash: true

  metron_agent:
    zone: z1
  metron_endpoint:
    zone: z1
    shared_secret: <%= common_password %>

  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.default.<%= deployment_name %>.microbosh

  loggregator:
    zone: z1
    servers:
      zone:
      - 0.loggregator.default.<%= deployment_name %>.microbosh

  traffic_controller:
    zone: 'zone'

  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80

  ssl:
    skip_cert_verify: true

  router:
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
      - 0.router.default.<%= deployment_name %>.microbosh
      z2: []

  etcd:
    machines:
    - 0.etcd.default.<%= deployment_name %>.microbosh

  dea: &dea
    disk_mb: 102400
    disk_overcommit_factor: 2
    memory_mb: 15000
    memory_overcommit_factor: 3
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
    - 169.254.0.0/16 # Google Metadata endpoint
    advertise_interval_in_seconds: 10
    heartbeat_interval_in_seconds: 10

  dea_next: *dea

  disk_quota_enabled: false

  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>

  databases: &databases
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true
    - tag: uaa
      name: uaadb
      citext: true

  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true

  ccdb_ng: *ccdb

  uaadb:
    db_scheme: postgresql
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: uaa
      name: uaadb
      citext: true

  cc: &cc
    internal_api_password: <%= common_password %>
    security_group_definitions:
    - name: public_networks
      rules:
      - protocol: all
        destination: 0.0.0.0-9.255.255.255
      - protocol: all
        destination: 11.0.0.0-169.253.255.255
      - protocol: all
        destination: 169.255.0.0-172.15.255.255
      - protocol: all
        destination: 172.32.0.0-192.167.255.255
      - protocol: all
        destination: 192.169.0.0-255.255.255.255
    - name: internal_network
      rules:
      - protocol: all
        destination: 10.0.0.0-10.255.255.255
    - name: dns
      rules:
      - destination: 0.0.0.0/0
        ports: '53'
        protocol: tcp
      - destination: 0.0.0.0/0
        ports: '53'
        protocol: udp
    default_running_security_groups:
    - public_networks
    - internal_network
    - dns
    default_staging_security_groups:
    - public_networks
    - internal_network
    - dns
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
    - name: java_buildpack
      package: buildpack_java
    - name: ruby_buildpack
      package: buildpack_ruby
    - name: nodejs_buildpack
      package: buildpack_nodejs
    - name: go_buildpack
      package: buildpack_go
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego:
      staging: disabled
      running: disabled
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>

  ccng: *cc

  login:
    enabled: false

  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    login:
      client_secret: <%= common_password %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
      doppler:
        secret: <%= common_password %>
    scim:
      users:
      - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
      - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

Set the deployment manifest and initiate the deploy. This process can take a few hours. Relax.

$ bosh deployment cf-deployment.yml
$ bosh deploy

Pushing an application

Download the cf CLI from https://github.com/cloudfoundry/cli/releases. Make sure you can access the API endpoint of the Cloud Foundry instance. If so, use cf login with your username, organization and space.

$ curl http://api.192.168.1.249.xip.io/info
$ cf login -a api.192.168.1.249.xip.io -u user -o rabbitstack -s qa
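On a freshly deployed instance the organization and space will not exist yet. The bootstrap below is a sketch of my own, not part of the original post: it assumes the admin user and password defined in the scim section of the manifest, and the rabbitstack/qa names simply mirror the login example above.

```shell
cf api http://api.192.168.1.249.xip.io
cf auth admin YOUR_PASSWORD          # admin password from scim.users in the manifest
cf create-org rabbitstack
cf create-space qa -o rabbitstack
cf target -o rabbitstack -s qa
```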

To test our instance we are going to push a very simple Node.js app. Create a new directory and place server.js and the application manifest.yml file in it.

// server.js
var http = require("http");

http.createServer(function (req, res) {
    res.writeHead(200, {
        "Content-Type": "text/html"
    });
    res.end("Bunnies on Cloud Foundry. Port is " + process.env.VCAP_APP_PORT);
}).listen(process.env.VCAP_APP_PORT);

# manifest.yml
---
applications:
- name: rabbitstack
  path: .
  memory: 256M
  instances: 1

From within the directory run cf push and access http://rabbitstack.192.168.1.249.xip.io from the browser. Play with cf scale and see how the port number changes on every request.
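For example, scaling out to three instances and watching the router round-robin between them (the app name and URL follow the example above):

```shell
cf push                         # reads manifest.yml in the current directory
cf scale rabbitstack -i 3      # run three instances of the app

# each request may land on a different instance, hence a different port
for i in 1 2 3 4; do
  curl -s http://rabbitstack.192.168.1.249.xip.io
  echo
done
```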

Congratulations! You now have a fully functional private Cloud Foundry.
