Virtual Router in the Cloud

VyOS and ESXi: VyOS Configuration

The next step is to configure both VyOS routers. Before we do, we should ensure that we have a good high-level understanding of what should be happening.

The ultimate goal of this three-router setup is to have our own VyOS router act as the gateway to the Internet, while still allowing the Verizon router to provide network access for value-added services such as:

  • Video-on-Demand to set-top boxes
  • On-screen caller ID
  • Remote DVR access
  • Etc.

The Verizon router does this by setting up its own NAT’d network on the 192.168.1.0/24 range; the STBs in the house sit on this network and use it to communicate with Verizon’s servers. The VZ router expects, and requires, the IP assigned to its WAN port to be publicly routable on the FiOS ISP network. If it is not, some services may fail outright and others may behave unpredictably.

The entire point of the secondary router is to provide 1:1 NAT between the home network and the VZ router, so that the VZ router is assigned the same IP as the primary router that is actually talking to the FiOS ISP network.

With three different Layer 2 domains and some creative port forwarding, the Verizon router won’t even know the difference.

This network configuration, combined with some port forwarding rules on the primary and secondary routers (discussed later), allows traffic between the Verizon router and the Verizon servers to flow normally, without the VZ router ever being aware that it is not directly connected to the FiOS ISP network.
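
To make the mechanics concrete, here is a minimal sketch of what 1:1 NAT looks like in VyOS syntax. Everything below is illustrative only; the interface names and addresses are assumptions (eth0 facing the home network as 10.0.0.2/24, eth1 facing the VZ router, and 108.0.0.123 standing in for the public IP), and the real rules for this setup are discussed later:

# On the secondary router: rewrite the VZ router's "public" source address
# to this router's home-network address on the way out...
set nat source rule 10 outbound-interface eth0
set nat source rule 10 source address 108.0.0.123
set nat source rule 10 translation address 10.0.0.2
# ...and translate return traffic back to the address the VZ router expects.
set nat destination rule 10 inbound-interface eth0
set nat destination rule 10 destination address 10.0.0.2
set nat destination rule 10 translation address 108.0.0.123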

Let’s start by configuring the primary router. This router will actually receive the public-facing IP from the FiOS ISP network, and thus will ultimately be responsible for all Internet traffic. Log into your primary router and run the show interfaces command.

vyos@vyos:~$ show interfaces
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface        IP Address                        S/L  Description
---------        ----------                        ---  -----------
eth0             108.0.0.123/24                    u/u  FiOS Public Internet
eth1             10.0.0.1/24                       u/u  Home Network
lo               127.0.0.1/8                       u/u
                 ::1/128

We see two Ethernet interfaces, eth0 and eth1. These are the two vNICs provisioned to this VM, corresponding to the FiOS Public Network and Home Network port groups, respectively.

Let’s configure the eth0 interface first.

vyos@vyos:~$ configure
vyos@vyos:~# set interfaces ethernet eth0 address dhcp
vyos@vyos:~# set interfaces ethernet eth0 description FiOS_ISP_Net
vyos@vyos:~# set interfaces ethernet eth0 duplex auto
vyos@vyos:~# set interfaces ethernet eth0 speed auto

This will set this interface up to use a dynamically assigned address (from Verizon), set a description to make it easy to remember what it connects to, and auto negotiate speed and duplex settings.

There is one more required step: we must configure this interface to impersonate our Verizon hardware router’s WAN interface by giving it the same MAC address (Verizon filters MACs that are not on its whitelist). The WAN MAC you need is printed on the bottom of your Verizon router. Replace 0a:1b:2c:3d:4e:5f below as appropriate:

vyos@vyos:~# set interfaces ethernet eth0 mac 0a:1b:2c:3d:4e:5f

Let’s take a look at the changes we are making.

vyos@vyos:~# compare

When you are satisfied, commit the changes to the running configuration and save the running config to disk. If you commit but do not save, the changes will not persist after a reboot of the router.

vyos@vyos:~# commit
vyos@vyos:~# save

Let’s take another look at the interface configuration now. We’re still in configuration mode (note the # symbol at the end of the command prompt), so we need to prepend run to the command we used before.

vyos@vyos:~# run show interfaces

Hopefully, your eth0 interface has a public address assigned from the Verizon DHCP server. If not, check your connections and configurations.
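
If the lease did not come through, a couple of operational commands can help narrow it down (a hedged aside; exact op-mode commands vary slightly between VyOS versions):

vyos@vyos:~# run show interfaces ethernet eth0    # link up? address assigned?
vyos@vyos:~# run renew dhcp interface eth0        # request a fresh lease from Verizon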

Assuming all is well, you should now be able to ping addresses to confirm that you have connectivity out to the Internet.

vyos@vyos:~# run ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=251 time=21.1 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=251 time=22.0 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=251 time=20.9 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=251 time=22.3 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 20.948/21.610/22.307/0.605 ms

If this works, then congratulations! The good news is that your first router is working. The bad news is that nothing else can use your Internet connection yet.
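
The fix is source NAT on the primary router. As a minimal sketch of where this is headed (assuming the home network keeps the 10.0.0.0/24 range shown on eth1 above), a single masquerade rule is enough to let LAN hosts out:

# Masquerade home-network traffic behind eth0's public address
vyos@vyos:~# set nat source rule 100 outbound-interface eth0
vyos@vyos:~# set nat source rule 100 source address 10.0.0.0/24
vyos@vyos:~# set nat source rule 100 translation address masquerade
vyos@vyos:~# commit
vyos@vyos:~# save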

VyOS and OpenStack Configuration Drives

VyOS is an open source fork of the last open source release of Vyatta, which turned proprietary a few years ago. We are currently using VyOS at work to set up OSPF routers in an OpenStack environment, and will soon have to spawn a very large number of these in a proof-of-concept deployment.

This describes how we add support for OpenStack’s Configuration Drive to VyOS.

VyOS has something of an unhealthy relationship with Debian Squeeze (it is currently incompatible with newer Debian releases), and requires a Debian Squeeze installation in order to create the VyOS ISO used for deployments.

Below we will patch a post-installation script and add our own (very simple) Python script that parses the Configuration Drive information and implements a very small subset of the features that packages like cloud-init provide. Unfortunately, cloud-init is not available for Debian Squeeze, which is the whole reason we are doing this in the first place.

Steps:

  1. Install Squeeze
  2. Create Python script
  3. Run setup-script
  4. Import ISO into Glance
  5. Spawn OpenStack instance
  6. Verify that it works

Prepare a little Python script for parsing the OpenStack Configuration Drive metadata:

#!/usr/bin/python
import json
import shutil

# Read the metadata that Nova wrote to the config drive
meta_data_file = open('/config-drive/openstack/latest/meta_data.json')
json_input = meta_data_file.read()
meta_data_file.close()

try:
  decoded = json.loads(json_input)

  # Copy each injected file from the config drive to its intended path
  for file in decoded['files']:
    print file['content_path'], file['path']
    shutil.copy2('/config-drive/openstack' + file['content_path'], file['path'])

except (ValueError, KeyError, TypeError):
  print "JSON format or content error"

Save this as process-openstack-metadata.py. The script below bakes it into the ISO as /root/vyos-init.py.
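
For reference, the part of meta_data.json that the script consumes looks roughly like this when an instance is booted with --file (abridged; other keys are omitted, and the exact content_path values are generated by Nova):

{
  "files": [
    {
      "content_path": "/content/0000",
      "path": "/root/configuration"
    }
  ]
}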

Below is a script to generate a VyOS ISO with a few modifications. Most of it comes straight from the VyOS wiki page How to build an ISO image. Read through it so you see what it does, save it as build-vyos-iso.sh, make it executable (chmod +x build-vyos-iso.sh), and run it.

#!/bin/bash -xe

apt-get install debian-archive-keyring

cat >> /etc/apt/sources.list <<EOF
deb http://backports.debian.org/debian-backports squeeze-backports main
EOF

apt-get update

# Get backported version of squashfs
apt-get -t squeeze-backports install squashfs-tools

apt-get install git autoconf automake dpkg-dev live-helper syslinux genisoimage

branch=hydrogen # hydrogen = stable, helium = dev

if ! test -d build-iso
then
  git clone https://github.com/vyos/build-iso.git

  cd build-iso

  git branch $branch --track origin/$branch
  git checkout $branch
else
  cd build-iso
fi

if ! test -d pkgs/vyatta-cfg-system/debian
then
  git submodule update --init pkgs/vyatta-cfg-system

  cd pkgs/vyatta-cfg-system/

  git branch $branch --track origin/$branch
  git checkout $branch
else
  cd pkgs/vyatta-cfg-system/
fi

# Reset debian/vyatta-cfg-system.postinst.in so we can patch it again
git checkout debian/vyatta-cfg-system.postinst.in

# Patch debian/vyatta-cfg-system.postinst.in
patch -p0 <<"HEREDOC"
--- debian/vyatta-cfg-system.postinst.in    2015-01-17 15:09:53.000000000 +0100
+++ debian/vyatta-cfg-system.postinst.in.patched    2015-01-17 15:11:19.000000000 +0100
@@ -143,6 +143,19 @@
 # configuration is fully applied. Any modifications done to work around
 # unfixed bugs and implement enhancements which are not complete in the Vyatta
 # system can be placed here.
+
+mkdir /config-drive
+
+mount -o ro -t iso9660 /dev/disk/by-label/config-2 /config-drive
+
+/root/vyos-init.py
+
+configure
+load /root/configuration
+commit
+save
+
+umount /config-drive
 EOF
 fi

HEREDOC

cd -

# -p so a re-run of the script does not abort here under set -e
mkdir -p livecd/config.vyatta/chroot_local-includes/root

cp ../process-openstack-metadata.py livecd/config.vyatta/chroot_local-includes/root/vyos-init.py

chmod +x livecd/config.vyatta/chroot_local-includes/root/vyos-init.py

aptitude install pdebuild-cross
make vyatta-cfg-system

# find exits 0 even with no matches, so test its output explicitly
find pkgs -name 'vyatta-cfg-system*.deb' | grep -q . || (echo "pkgs/vyatta-cfg-system*.deb not found, exiting..."; exit 1)

echo python-simplejson >> livecd/config.vyatta/chroot_local-packageslists/vyatta-extra.list

export PATH=/sbin:/usr/sbin:$PATH
autoreconf -i
./configure
make iso

ls -l livecd/binary.iso

echo Done!

If everything went well you will have an ISO at ./build-iso/livecd/binary.iso.

Upload this file into OpenStack with Glance and name it “VyOS Router”:

glance image-create --name "VyOS Router" --is-public True --disk-format iso --container-format bare < ./build-iso/livecd/binary.iso
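
You can confirm the upload with the same client; the new image should appear as active once Glance finishes storing it:

glance image-list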

Create your own config.boot (or whatever else you want on the deployed machine):

cat > config.boot <<"EOF"
interfaces {
    ethernet eth0 {
        address dhcp
    }
    loopback lo {
    }
}
service {
    ssh {
        port 22
    }
}
system {
    login {
        user vyos {
            authentication {
                plaintext-password "demo"
            }
            level admin
        }
    }
}
EOF

Spawn an instance with a predefined flavor and our new configuration file to be included on the configuration drive:

nova boot --config-drive true --image "VyOS Router" --flavor <flavor> --file /root/configuration=config.boot --meta essential=false --nic net-id=<net-id> vyos
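
Before logging in, the instance’s console log is a quick way to see whether the config drive was mounted and vyos-init.py ran (assuming the standard nova CLI is available):

nova console-log vyos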

Verify that it works by logging into VyOS and checking that the running configuration is the one you expect. You can start by checking whether the file /root/configuration exists and whether its content is what you intended.
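
For example (a hedged pair of checks; show configuration is standard VyOS operational mode):

vyos@vyos:~$ ls -l /root/configuration
vyos@vyos:~$ show configuration | head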

How to debug: run tcpdump on each interface while the instance generates traffic, and confirm that packets show up where you expect (the private address on the inside of the NAT, the translated public address on the outside).

vyos@vyos:~$ /usr/sbin/tcpdump -f "icmp" -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
18:10:13.234909 IP 10.168.11.102 > 8.8.8.8: ICMP echo request, id 40962, seq 49, length 64
18:10:13.261277 IP 8.8.8.8 > 10.168.11.102: ICMP echo reply, id 40962, seq 49, length 64
18:10:14.235045 IP 10.168.11.102 > 8.8.8.8: ICMP echo request, id 40962, seq 50, length 64
18:10:14.261379 IP 8.8.8.8 > 10.168.11.102: ICMP echo reply, id 40962, seq 50, length 64
18:10:15.235249 IP 10.168.11.102 > 8.8.8.8: ICMP echo request, id 40962, seq 51, length 64
18:10:15.261549 IP 8.8.8.8 > 10.168.11.102: ICMP echo reply, id 40962, seq 51, length 64
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel

vyos@vyos:~$ /usr/sbin/tcpdump -f "icmp" -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
18:10:18.235887 IP XXX.XXX.187.78 > 8.8.8.8: ICMP echo request, id 40962, seq 54, length 64
18:10:18.262249 IP 8.8.8.8 > XXX.XXX.187.78: ICMP echo reply, id 40962, seq 54, length 64
18:10:19.236110 IP XXX.XXX.187.78 > 8.8.8.8: ICMP echo request, id 40962, seq 55, length 64
18:10:19.262477 IP 8.8.8.8 > XXX.XXX.187.78: ICMP echo reply, id 40962, seq 55, length 64
18:10:20.236345 IP XXX.XXX.187.78 > 8.8.8.8: ICMP echo request, id 40962, seq 56, length 64
18:10:20.262652 IP 8.8.8.8 > XXX.XXX.187.78: ICMP echo reply, id 40962, seq 56, length 64
18:10:21.236527 IP XXX.XXX.187.78 > 8.8.8.8: ICMP echo request, id 40962, seq 57, length 64
18:10:21.262927 IP 8.8.8.8 > XXX.XXX.187.78: ICMP echo reply, id 40962, seq 57, length 64
18:10:22.237082 IP XXX.XXX.187.78 > 8.8.8.8: ICMP echo request, id 40962, seq 58, length 64
18:10:22.263398 IP 8.8.8.8 > XXX.XXX.187.78: ICMP echo reply, id 40962, seq 58, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

vyos@vyos:~$ ping 10.168.11.102
PING 10.168.11.102 (10.168.11.102) 56(84) bytes of data.
64 bytes from 10.168.11.102: icmp_req=1 ttl=64 time=0.481 ms
64 bytes from 10.168.11.102: icmp_req=2 ttl=64 time=0.559 ms
