OpenFlow Lab

GETTING STARTED OPENFLOW OPENVSWITCH TUTORIAL LAB : SETUP


For a more up-to-date tutorial (anything more than six months old is outdated
in the world of SDN), please see:
OpenDaylight OpenStack Integration with DevStack on Fedora 20

I wrote a Python installation app to automate an OpenFlow, KVM, and
Open vSwitch setup, found at:
OpenFlow, OpenvSwitch and KVM SDN Lab Installation App →


GETTING STARTED OPENFLOW OPENVSWITCH TUTORIAL LAB – SETUP:


Getting Started OpenFlow OpenvSwitch Tutorial Lab : This is an OpenFlow
tutorial using Open vSwitch and the Floodlight controller,
but any other controller or switch can be used. I have had some requests for
scenarios, so I put this together and added a few more flexible components.
Getting to know packages like KVM and Open vSwitch is going to be a big part
of future ecosystem orchestration.

The video doesn't have any sound; I am tight on time, sorry. I think it
should be pretty straightforward, and the video may help if you get stuck.
There are probably a couple of typos here and there that I will try to catch
over the weekend. We are lacking good lab material on these topics right now,
so maybe this will save a few folks some time.


PREREQUISITES

KVM requires an x86 machine with either Intel VT or AMD-V support.
Anything fairly new will have that support in the processor. A few older
hardware builds support hardware-assisted virtualization only after enabling
it in the BIOS; a quick search for your machine plus "hardware virtualization"
will let you know. QEMU can run on non-VT hardware, but the machine will
probably get brutalized by even a few guest VMs. When you are setting up the
vSwitch, either have an out-of-band connection or be on the box physically.
Be careful when you are adding multiple interfaces to bridges, since you can
spin up a bridging loop pretty quickly unless you have STP on. I recommend a
test/dev network or mom's basement network. If not, BPDU guard is your friend.
This is done on a fresh install of 64-bit Ubuntu 12.04 (Precise).

Quick screencast. I highly recommend using the small Linux test image named
linux-0.2.img.bz2 from the QEMU site if you are running on a laptop or a
nested hypervisor.



Shell

$nano /etc/network/interfaces

Add your configuration for the physical interface to the file and save it.
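A minimal static stanza as an example only, using the 192.168.1.0/24 addressing this lab uses later; adjust the interface name and addresses to your own network.

Shell

# /etc/network/interfaces - example static configuration for eth0
auto eth0
iface eth0 inet static
    address 192.168.1.208
    netmask 255.255.255.0
    gateway 192.168.1.1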

Restart networking. If the configuration is off this will cut you off.

$/etc/init.d/networking restart

$route -n will display your default route if you are having connectivity
issues.

$apt-get update

$apt-get dist-upgrade

INSTALL OPENVSWITCH

Shell

$ apt-get install openvswitch-datapath-source bridge-utils

$ module-assistant auto-install openvswitch-datapath

$ apt-get install openvswitch-common

Verify install

$ovs-vsctl show

ovs_version: "1.4.0+build0"

Processes should look something like this

$ps -ea | grep ovs

26464 ? 00:00:00 ovsdb-server

26465 ? 00:00:00 ovsdb-server

26473 ? 00:00:00 ovs-vswitchd

26474 ? 00:00:00 ovs-vswitchd

26637 ? 00:00:00 ovs-controller

$ /etc/init.d/openvswitch-switch restart


Add your bridge; think of it as a subnet if you aren't familiar with the
term.

Add a physical interface to the virtual bridge for connectivity off the box.
If you don't script this part you will probably clip your connection as you
zero out eth0 and move its address to br-int. You can pop the commands into a
text file and make it executable with
chmod +x script.sh (a sketch of such a script follows the commands below).

Shell

$ ovs-vsctl add-br br-int

$ ovs-vsctl add-port br-int eth0

$ ifconfig eth0 0

#Zero out your eth0 interface and slap it on the bridge interface

#(warning will clip you unless you script it)

$ifconfig br-int 192.168.1.208 netmask 255.255.255.0

#Change your default route

$ route add default gw 192.168.1.1 br-int

$ route del default gw 192.168.1.1 eth0
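If you prefer to script the cutover as suggested above, here is a minimal sketch using the same interface names and example addressing as this lab (eth0, br-int, 192.168.1.208/24, gateway 192.168.1.1); run it from the console or an out-of-band connection so you do not lose access mid-way.

Shell

#!/bin/sh
# Move eth0's address onto the OVS bridge in one pass so the box is never
# left without an IP between steps. Addresses below are this lab's examples.
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int eth0
ifconfig eth0 0
ifconfig br-int 192.168.1.208 netmask 255.255.255.0
route add default gw 192.168.1.1 br-int

Save it as, say, bridge-cutover.sh and make it executable with chmod +x bridge-cutover.sh before running it.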


INSTALL FLOODLIGHT OPENFLOW CONTROLLER AND ATTACH OPENVSWITCH

Install dependencies (apt-get for Ubuntu, yum for Red Hat):

Shell

apt-get install build-essential default-jdk ant python-dev eclipse git

Clone the GitHub project, build the jar, and start the controller:

Shell

$git clone git://github.com/floodlight/floodlight.git

cd into the floodlight directory created.

$cd floodlight

Run ant to build a jar. It will be in the ~/floodlight/target directory.

$ant

Run the controller:

$java -jar target/floodlight.jar

By default it binds to port 6633 on all interfaces, e.g.
0.0.0.0/0.0.0.0:6633
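From another terminal, a quick way to confirm the controller is up (a minimal check; netstat ships with the net-tools package on Ubuntu 12.04, so it should already be present):

Shell

# the OpenFlow channel (6633) and the REST/WebUI port (8080) should both be LISTENing
$ netstat -lnt | grep -E '6633|8080'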


ATTACH OPENVSWITCH TO THE CONTROLLER

Shell

$ovs-vsctl set-controller br-int tcp:192.168.1.208:6633
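To confirm the controller target was stored on the bridge (a quick check; the is_connected flag shows up in the full ovs-vsctl show output a little further down):

Shell

$ ovs-vsctl get-controller br-int

tcp:192.168.1.208:6633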


In the Floodlight console you will see something like this:

Shell

[New I/O server worker #1-1] INFO n.f.core.internal.Controller - Switch
handshake successful: OFSwitchImpl [/192.168.1.208:49519
DPID[00:00:ba:66:35:e8:38:48]]


The output of OVS ‘ovs-vsctl show’ looks something like this:


Shell

# ovs-vsctl show
70a40219-8725-46a8-b808-af75c642cac8
    Bridge "br-int"
        Controller "tcp:192.168.1.208:6633"
            is_connected: true
        Port "eth0"
            Interface "eth0"
        Port "br-int"
            Interface "br-int"
                type: internal
    ovs_version: "1.4.0+build0"


INSTALL KVM AND INTEGRATE INTO OVS

Shell

$apt-get install kvm uml-utilities

These two scripts bring the KVM tap interfaces up and add them to your bridge.
If you copy and paste from below, make sure the single quote (') does not get
reformatted into a smart quote; the quoted string should show up yellow in
nano. In "switch=br-int", br-int is the name of your bridge in OVS.
$nano /etc/ovs-ifup  (open and paste what is below)

Shell

#!/bin/sh

switch='br-int'

/sbin/ifconfig $1 0.0.0.0 up

ovs-vsctl add-port ${switch} $1

$nano /etc/ovs-ifdown (open and paste what is below)

Shell

#!/bin/sh

switch='br-int'

/sbin/ifconfig $1 0.0.0.0 down

ovs-vsctl del-port ${switch} $1

Make both files executable
chmod +x /etc/ovs-ifup /etc/ovs-ifdown
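If you want to sanity-check the scripts before booting a guest, you can run them against a throwaway tap device. This is just a sketch; tap0 is an arbitrary name, and tunctl comes from the uml-utilities package installed above.

Shell

# create a test tap, run the up script, and confirm the port is in the bridge
$ tunctl -t tap0

$ /etc/ovs-ifup tap0

$ ovs-vsctl list-ports br-int

# clean up

$ /etc/ovs-ifdown tap0

$ tunctl -d tap0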


BOOT THE GUEST VIRTUAL MACHINES


  • Host1

Shell

kvm -m 512 -net nic,macaddr=00:00:00:00:cc:10 -net
tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom
ubuntu-12.04-desktop-amd64.iso

  • Host2

Shell

kvm -m 512 -net nic,macaddr=00:11:22:CC:CC:10 -net
tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom
ubuntu-12.04-desktop-amd64.iso

  • Host3

Shell

kvm -m 512 -net nic,macaddr=22:22:22:00:cc:10 -net
tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom
ubuntu-12.04-desktop-amd64.iso

Each one of those will begin loading from the ISO. I just click "Try Ubuntu"
when they are booting and run them live, since all we really need are nodes
that can test connectivity as we push static flows. If this is a more
permanent test lab it would make sense to install them to disk.

While those are spinning up let’s install curl.

Shell

$apt-get install curl


Figure 1. OVS Taps


Once they are up, assign IP addresses to them by clicking the top left of the
Ubuntu window, typing 'terminal', and then statically assigning an address
with ifconfig.

Shell

sudo ifconfig eth0 192.168.1.x netmask 255.255.255.0
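Inside each guest it also helps to set the default route and confirm the gateway answers (192.168.1.1 is this lab's example gateway; substitute your own):

Shell

$ sudo route add default gw 192.168.1.1

$ ping -c 3 192.168.1.1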

OPENFLOW STARTER TUTORIAL LAB #1



  1. Lab 1: Add a static flow for each node's destination MAC address.
    Match: DstMac, Action: DstPortX.

  2. Lab 2: Add a static flow with a source MAC address match and an associated
    output-port action, e.g. Match: SrcMac, Action: DstPortY.

  3. Lab 3: Add a bad static flow for one of the hosts and watch ICMP replies
    from the gateway on the wrong port come back through tcpdump. Match: DstMac,
    Action: PortZ.



Figure 1. The topology for the
lab simulates in software the same capabilities you can get in hardware,
thanks to OpenvSwitch.
This setup allows you to add and remove as many
matches as you like in the API calls and tinker with them to get a feel once
you nail down the basics. Then you can write the next "killer app", get rich,
and make it rain, but first let's figure out what is going on here.

RESTFUL/JSON API

The API is documented very well (which is huge and differentiating, IMO) at:

Shell

http://www.openflowhub.org/display/floodlightcontroller/Proposed+New+API

RESTful APIs are very important, in my opinion, if there is going to be any
kind of transition: they keep things human readable for at least
troubleshooting, and make field parsing easy for those of us who are only
willing to muck our way through interpreted languages. I am a huge fan of what
they have done with their API here, and I expect the industry to follow it.

FORWARDING TABLE IN OPENVSWITCH

Based on 'ovs-appctl fdb/show br-int', build your cheat sheet to see what port
your host VMs are on inside of OpenvSwitch. If you do not see an entry, it has
likely timed out (roughly 300 seconds); refresh it by simply pinging the host
VM from the vSwitch. These tables are the same as the CAM tables in today's
network systems, doing key/value exact matches for L2 MAC address lookups (and
LPM, Longest Prefix Match, for prefixes), only in software.

Shell

$ovs-appctl fdb/show br-int

 port  VLAN  MAC                Age
    1     0  00:23:69:62:26:09   58
    6     0  00:11:22:cc:cc:10    7
    5     0  00:00:00:00:cc:10    4
    0     0  5c:26:0a:5a:c8:b2    3
    7     0  22:22:22:00:cc:10    3

The MAC tables for this lab are as follows; yours will likely be different
based on the assignment by the vSwitch. The MAC addresses are specified at KVM
boot, but anything can be used as long as they are unique.

The DPID (datapath ID) is required to send the API calls, and you need to find
the one on your vSwitch. There are lots of ways to find it: through the
Floodlight console, the APIs, or from 'ovs-ofctl show <bridge name>' as listed
below. It is basically a few bytes prepended to your NIC's MAC address.

Shell

$ ovs-ofctl show br-int

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:00005c260a5ac8b2 (that is your
DPID)

Replace the DPID in the curl commands with yours, e.g. curl -d '{"switch":
"00:00:5c:26:0a:5a:c8:b2",  (that longer-than-usual MAC-looking ID).

"ovs-dpctl dump-flows br-int" will display the datapaths being instantiated
in OpenvSwitch and is handy for debugging and troubleshooting.
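For example, to keep an eye on the datapath while you push entries (watch is standard on Ubuntu; the one-second interval is just a convenient choice):

Shell

$ watch -n 1 "ovs-dpctl dump-flows br-int"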

Figure 2. MAC-to-port mapping (forwarding table) for the labs. Build this from
the "ovs-appctl fdb/show br-int" output.
Throughout the lab I have my VM hosts pinging the gateway so I can
watch what happens as I instantiate static flows into the OpenvSwitch (OVS)
flow table.

OPENFLOW WEBUI GUI

For starters, it might be easier for some to watch the web page throughout the
lab. This is a nice Django front end put together by Wes Felter and some of
his guys at IBM. There are some bugs which I'm sure the Floodlight guys would
like anyone to clean up. If you leave the page open it continues to refresh
until it consumes the planet as it polls the controller, so just close and
reopen it every now and then.

The WebUI loads by default with the jar binary:

Shell

java -jar floodlight.jar

Shell

http://<yourIP>:8080/ui/index.html


Figure 3. WebUI starts
automatically and binds to port 8080

It might be more comfortable for some to use the WebUI/GUI. It is a nice,
clean web front end at that!

All three labs are in this screencast.

LAB 1 STATIC MAC ENTRIES FOR OUR 3 HOSTS

Figure 3. Three hosts with static mac entries for each
port.

STATIC FLOW PUSH INTO THE OPENFLOW PIPELINE

Before we run we crawl; before we dynamically forward we statically forward!
It seems natural that we usually start with static entries when teaching the
mechanics of routing with network IGPs. Here we are defining static data
paths. We match (or don't match) a rule and have an associated action, which
in OpenFlow v1.1 and up eventually kicks off a fairly complex set of flow
tables in a pipeline.

The closest equivalent command for a data path in a traditional instruction
set on today's switches would be 'mac-address-table static 0000.0000.cc10
vlan 100 interface GigabitEthernet0/1'. We are not setting a VLAN ID here, but
it would be as easy as adding "dataLayerVirtualLan":x to the flow push; a
sketch of that follows below. That is obviously not scalable, but I think it
is important to understand how datapaths get pushed to the OF-enabled switch.
Normally, even in the SDN world, those MAC addresses are learned through
flooding to all ports (FFFF.FFFF.FFFF) on the broadcast domain. The controller
then learns of a MAC and starts an aging timer so as not to exhaust its tables
if no more traffic is received, but keeps the entry cached if the host keeps
talking by restarting the timer each time a frame is received from that
source MAC.
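As a sketch only, a destination-MAC flow that also matches a VLAN tag might look like the call below. The "dataLayerVirtualLan" field name comes from the proposed API document linked above; depending on your Floodlight build the static flow pusher may expect "vlan-id" instead, so verify against the docs for your version.

Shell

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow-vlan100", "cookie":"0", "priority":"32768", "dst-mac":"00:00:00:00:cc:10", "dataLayerVirtualLan":"100", "active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json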

Push static flows for each destination MAC address in the switch to an
assigned port. We have a match and an action explicitly defined. All we are
doing is adding static MAC address entries instead of having them defined
dynamically through flooding. Note that each name is unique. If copying and
pasting, make sure to strip formatting.

As you add the flows, keep in mind that each curl you do will overwrite any
previous entry with the same name in the table. Notice each flow pushed has a
unique name. It's almost ACLs, but not quite.

  • Install curl

Shell

$apt-get install curl

With OVS and the OF controller running, run each of these from your command
line.
Remember to replace the DPID ("switch": "00:00:5c:26:0a:5a:c8:b2") and
the IP address 192.168.1.208 with your lab addresses. Each curl command is one
line.

INSTANTIATE THE OPENFLOW FORWARDING RULES

  • Host 1

Shell

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow1", "cookie":"0", "priority":"32768", "dst-mac":"00:00:00:00:cc:10","active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

  • Host 2

Shell

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow2", "cookie":"0", "priority":"32768", "dst-mac":"00:11:22:cc:cc:10","active":"true", "actions":"output=6"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

  • Host 3

Shell

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow3", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=7"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

LIST THE FLOWS

Now, through the API, we can pull all static flows that have been pushed with
this call. Notice all of the tuples (header fields, e.g. SrcMac, DstIP, etc.)
being listed. Look for the "match" and "action" you pushed.

Shell

$ curl http://192.168.1.208:8080/wm/staticflowentrypusher/list/00:00:5c:26:0a:5a:c8:b2/json

CLEAR OR DELETE THE STATIC FLOWS

To clear all of the static flows, the API call looks like this. The API also
has a documented delete function for removing individual flows by name:

Shell

$ curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json
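A sketch of that delete, removing a single entry by name (static-flow1 is the example name pushed above; confirm the exact call against your Floodlight version's static flow pusher docs):

Shell

$ curl -X DELETE -d '{"name":"static-flow1"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json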


OPENFLOW STARTER TUTORIAL LAB #2



OpenFlow Starter Tutorial Lab #2 : This lab restricts two hosts to talking
only to each other, using source-based forwarding via the static flow pusher
RESTful API. You can add any field you want to make the forwarding decisions
on. Remember to name the flows with unique names, or else you will overwrite
previously instantiated flows. Previous posts in the series include the setup;
links to those are at the bottom of this post.

Figure 1. OpenFlow Starter Tutorial Lab #2 Topology

Based on the source MAC address we can lock two ports into only talking to
each other. This is used today for security reasons in sensitive areas and
allows very granular port-to-port mapping. We are adding two flows: just as a
host needs a flow set up to talk to another host, it also needs a return flow
to be established.

Delete the old static flows from Lab 1.

Shell

$curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json

PUSH THE TWO STATIC OPENFLOW RESTFUL API CALLS TO CREATE YOUR FLOWMOD

Shell

#To ping from port 1 to 6

$curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow1", "cookie":"0", "priority":"32768", "src-mac":"00:11:22:cc:cc:10","active":"true", "actions":"output=6"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

#To ping from port 6 to 1

$curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow2", "cookie":"0", "priority":"32768", "src-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=1"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Ping the hosts from those two ports. They should only be able to ping each
other, not your gateway or anything else, since the closest match is the
static flow pushed.

Once I add these, my gateway no longer pings, because the only flows those two
source MAC addresses explicitly match forward out each other's ports. So while
they can talk to each other, they cannot talk anywhere else.
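A quick check from one of the two locked-down guests (the peer's address is whatever you assigned it earlier; 192.168.1.1 is this lab's example gateway):

Shell

# the other locked host should answer, the gateway should not
$ ping -c 3 <IP of the other host>

$ ping -c 3 192.168.1.1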

While this is clearly not manageable at scale, it should get your wheels
turning on the possibilities this opens up when you start thinking about how
powerful this granularity can become in the security world if driven
programmatically from policy.

OPENFLOW STARTER TUTORIAL LAB #3


OpenFlow Starter Tutorial Lab #3 : Move individual flows

The prerequisite install and the beginning of the lab can be found here.

Figure 1. OpenFlow starter tutorial Lab #3 topology. Add an entry to
the wrong port and watch it break.

Let’s clear all of our flows and get everything pinging the gateway
again.

Shell

$curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json

Add our three earlier entries from Lab 1:

Shell

$curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow1", "cookie":"0", "priority":"32768", "dst-mac":"00:00:00:00:cc:10","active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Shell

$curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow2", "cookie":"0", "priority":"32768", "dst-mac":"00:11:22:cc:cc:10","active":"true", "actions":"output=6"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Shell

$curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow3", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=7"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

TCPDUMP ANALYSIS

Start tcpdump on the host you will send Host 3's traffic to. In my case I am
starting tcpdump on Host 1, where I am going to send Host 3's traffic.

Shell

$ sudo tcpdump -i eth0 host <IP of host 3>

The filter "host <ip>" says to only capture traffic to or from that host.
We should never see unicast traffic from one host to another under proper
conditions on a packet-switched network.
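If that host has other background chatter, you can narrow the capture to just its ICMP (standard tcpdump filter syntax, same interface and placeholder as above):

Shell

$ sudo tcpdump -i eth0 icmp and host <IP of host 3>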

INSTANTIATE BAD FLOWS

Now let's push a MAC to a bad port and watch it break. This will overwrite
'static-flow3'. This will break Host 3.

Shell

$curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"wrong-port", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Figure 2. TCPdump output when I push Host 3's forwarding datapath to
Host 1.

As soon as you add the "wrong-port" static flow, you begin getting ICMP
replies from the gateway until that times out. This has many more security
type implications. Why not have your action forward to two ports instead of
just one? The second port could be an IDS monitoring traffic: instead of
trying to process the firehose of a typical port mirror, you can get as
granular as you want and watch only the traffic matching the tuples defined
in the header fields (src_mac, dst_ip, VID, etc.). Now you can use a fraction
of the hardware and only process what is important to your use case. Load
balancing is another obvious one, as is policy routing that may actually be
scalable if managed programmatically by northbound APIs.
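A sketch of that mirror idea, assuming the static flow pusher accepts a comma-separated action list (verify against your build): this would keep Host 3's traffic flowing to its real port while copying it to a hypothetical IDS hanging off port 8.

Shell

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"mirror-host3", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10", "active":"true", "actions":"output=7,output=8"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json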

Figure 3. The API is the end game IMO

CONCLUSION

That's it; I hope this demystifies a bit of OpenFlow for you. I still have
lots to learn, as it is a never-ending cycle, but going through a couple of
labs seems to help nail some of this down and show that the complexity, or
more accurately the abstraction, will bring more simplicity to operators
(some day, theoretically, so very very far away). This lab setup can scale
out to a wide range of scenarios beyond the couple of little examples here.
I would love to hear what others are doing.

From an end-user perspective, these are the same ideas we have always had
around best-match forwarding of prefixes; we are just adding more ways to
match and more fields to match on. The API is what is going to be very
important, in my opinion, and will open up the value over the next couple of
years as the northbound apps begin to surface. Sorry there is no commentary
on the videos; I am swamped, but I think it is fairly straightforward. I only
added the video in case someone gets stuck. Feel free to contact me for
assistance or jump on irc.freenode.net in #openflow.

MISCELLANEOUS API CALLS

Find all flows

Shell

$curl http://192.168.1.208:8080/wm/core/switch/00:00:5c:26:0a:5a:c8:b2/flow/json

List all static flows

Shell

$curl http://192.168.1.208:8080/wm/staticflowentrypusher/list/00:00:5c:26:0a:5a:c8:b2/json

Clear all flows

Shell

$curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json
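The core API also exposes per-switch statistics the same way; for example, port counters (the /port/json stat type is part of Floodlight's core switch REST API, but confirm the path against your build's docs):

Shell

$curl http://192.168.1.208:8080/wm/core/switch/00:00:5c:26:0a:5a:c8:b2/port/json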


ADDITIONAL OPENFLOW AND SDN LINKS AND RESOURCES

http://openvswitch.org/ Martin Casado's
group has put an amazing vSwitch out there. I doubt there will be many
vSwitches that are not reusing his work in some form or fashion over the next
few years.

http://floodlight.openflowhub.org/
Thanks to Nick Bastin for answering my question on the #openflow channel. He is
a great asset to the community.

http://www.noxrepo.org/ Another nice
OpenFlow controller is POX, a Python-based, platform-agnostic project that
Murphy McCauley is doing a great job with. As soon as I dig into the API I am
going to do a similar tutorial with it. I need the API docs if anyone has
them, so hook me up.

I am typically always /nick networkstatic on irc.freenode.net in #openvswitch
#openflow #openstack and #packetpushers if anyone has any questions.
