WAN Optimization - WAN Optimization Technology

Part 1: Adaptive Compression

Configure > Optimization > Performance

Dynamically detects LZ data compression performance for a connection and momentarily turns compression off (sets the compression level to 0) if it is not achieving optimal results.

Improves end-to-end throughput over the LAN by maximizing the WAN throughput.

By default, this setting is disabled.

Part 2: Admission Control - Connection Counts

Occurs when the optimized connection count exceeds the model's threshold.

The SteelHead continues to optimize existing connections, but new connections are passed through until the connection count falls below the "enable" threshold.
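The pause/resume behavior is a simple hysteresis: interception pauses once the cutoff threshold is reached and resumes only after the count falls below the lower "enable" threshold. A minimal sketch of that logic (the threshold values are illustrative, not model-specific):

```python
# Hysteresis-style admission control sketch.
# Threshold values are illustrative; real limits depend on the SteelHead model.
CUTOFF_THRESHOLD = 200   # pause intercepting new connections at/above this
ENABLE_THRESHOLD = 180   # resume intercepting once we fall below this

class AdmissionControl:
    def __init__(self):
        self.paused = False

    def should_optimize(self, current_connections: int) -> bool:
        """Return True if a NEW connection should be optimized."""
        if self.paused:
            # Stay paused until the count drops below the enable threshold.
            if current_connections < ENABLE_THRESHOLD:
                self.paused = False
        elif current_connections >= CUTOFF_THRESHOLD:
            self.paused = True
        # While paused, new connections are passed through unoptimized;
        # existing optimized connections are unaffected.
        return not self.paused

ac = AdmissionControl()
print(ac.should_optimize(150))  # True  - below cutoff
print(ac.should_optimize(205))  # False - cutoff exceeded, intercept paused
print(ac.should_optimize(190))  # False - still above the enable threshold
print(ac.should_optimize(170))  # True  - dropped below the enable threshold
```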

KBase: "admission control connection"

Logs:

Nov  3 09:38:35 sh01 sport[4539]: [admission_control.NOTICE] - {- -} Connection limit achieved.
Nov  3 09:38:35 sh01 sport[4539]: [admission_control.NOTICE] - {- -} Memory Usage: 295 Current Connections: 136
Nov  3 09:38:35 sh01 sport[4539]: [admission_control.WARN] - {- -} Pausing intercept…
Nov  3 09:38:45 sh01 statsd[4511]: [statsd.NOTICE]: Alarm triggered for rising error for event admission_conn

To automatically generate a sysdump:

(config) # debug alarm admission_conn enable

Part 3: Auto Discovery/Enhanced Auto Discovery

RiOS Auto-discovery Process

Step 1: The client sends a SYN toward the server, which the client-side SteelHead intercepts.

Step 2: The client-side SteelHead adds TCP option 0x4C (76) to the SYN, making it a SYN+, and forwards it toward the server-side SteelHead.

Step 3: The server-side SteelHead sees option 0x4C (76), also known as the TCP probe, and responds with a SYN/ACK+. At this point the inner TCP session has been established.

Step 4: The server-side SteelHead sends a SYN to the server.

Step 5: The server responds with a SYN/ACK to the server-side SteelHead, establishing the server-side outer TCP session.

Step 6: The client-side SteelHead sends a SYN/ACK to the client. At this point the client-side outer TCP session has been established.

TCP Option:

The TCP option used for auto-discovery is 0x4c(76).

The client-side SteelHead appliance attaches a 10-byte option to the TCP header;

The server-side SteelHead appliance attaches a 14-byte option in return.

Note: this happens only during the initial discovery process, not during connection setup between the SteelHead appliances or for the outer TCP sessions.
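As a rough illustration of the probe's shape (the option kind 0x4C and the 10/14-byte total lengths come from the notes above; the payload layout is a placeholder, since the real contents are proprietary):

```python
import struct

# Sketch of the auto-discovery TCP option (kind 0x4C = 76).
# Only the option kind and the 10-byte (client) / 14-byte (server)
# total lengths come from the notes; the payload bytes are placeholders.
AUTO_DISCOVERY_KIND = 0x4C

def build_probe(payload: bytes) -> bytes:
    """Build a TCP option: kind (1 byte), length (1 byte), payload."""
    length = 2 + len(payload)
    return struct.pack("!BB", AUTO_DISCOVERY_KIND, length) + payload

def parse_probe(option: bytes):
    """Return the payload if this is an auto-discovery probe, else None."""
    kind, length = struct.unpack("!BB", option[:2])
    if kind != AUTO_DISCOVERY_KIND:
        return None
    return option[2:length]

client_probe = build_probe(b"\x00" * 8)    # 10 bytes total
server_probe = build_probe(b"\x00" * 12)   # 14 bytes total
print(len(client_probe), len(server_probe))  # 10 14
```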

Enhanced Auto-Discovery

Automatically finds and optimizes between the most distant SteelHead pair.

Eliminates the need for manual peering rules.

Also called "Auto Peering".

By default, automatic peering is enabled.

Supports an unlimited number of SteelHeads in transit between client and server.

Part 4: Connection Pooling

Connection pooling enhances network performance by providing a pool of pre-existing idle connections instead of having the SteelHead create a new connection for every request. This feature is useful for application protocols, such as HTTP, that use many rapidly created, short-lived TCP connections.

By default, RiOS establishes 20 inner channels plus an Out-of-Band (OOB) channel between two Steelhead appliances when they first communicate. RiOS uses the standard TCP keep-alive mechanism to monitor the state of these channels and the availability of the other Steelhead appliance. These TCP keep-alive packets are at least 52 bytes per packet (IP header, TCP header, and timestamp TCP option) and are generated every 20 seconds for the OOB channel and every 300 seconds for the inner channels. Over the period of an hour, this generates 840 packets (including return packets), resulting in at least 43680 bytes of traffic for the 21 connections to each remote Steelhead. Additionally, the total size of the packet over your WAN media will also depend on the particular link layer (such as the Ethernet header), as well as any other TCP options that other WAN devices in your network add to these packets.
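The keep-alive arithmetic in the paragraph above can be checked with a few lines of Python:

```python
# Reproduce the keep-alive traffic math from the paragraph above.
PACKET_BYTES = 52        # minimum: IP header + TCP header + timestamp option
OOB_INTERVAL = 20        # seconds between OOB-channel keep-alives
INNER_INTERVAL = 300     # seconds between inner-channel keep-alives
INNER_CHANNELS = 20
SECONDS_PER_HOUR = 3600

oob_packets = SECONDS_PER_HOUR // OOB_INTERVAL                         # 180
inner_packets = INNER_CHANNELS * (SECONDS_PER_HOUR // INNER_INTERVAL)  # 240
total_packets = 2 * (oob_packets + inner_packets)  # x2 for return packets
total_bytes = total_packets * PACKET_BYTES

print(total_packets, total_bytes)  # 840 43680
```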

Part 5: High-Speed TCP && MX-TCP

HSTCP is a feature you can enable on Steelhead appliances to help reduce WAN data transfer inefficiencies caused by limitations of regular TCP. Enabling the HSTCP feature allows for more complete utilization of these “long fat pipes”. HSTCP is an IETF standard (defined in RFC 3649 and RFC 3742) and has been shown to provide significant performance improvements in networks with high BDP values.
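A quick way to see why regular TCP struggles on long fat pipes is to compute the bandwidth-delay product (BDP), the amount of data that must be in flight to keep the link full. The link speed and RTT below are illustrative values, not from the notes:

```python
# Bandwidth-delay product: data "in flight" needed to keep a link full.
# Standard TCP windows are far too small for high-BDP links, which is the
# problem HSTCP addresses.
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """BDP in bytes = bandwidth (bits/s) * RTT (s) / 8."""
    return bandwidth_bps * rtt_seconds / 8

# Example: a 100 Mbps link with 100 ms RTT needs ~1.25 MB in flight.
print(bdp_bytes(100e6, 0.100))  # 1250000.0
```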

Part 6: In-Path Rules

Applied on the Client Side SteelHead.

5 types of rules:

- Pass-through rules (define traffic to pass through, not optimize)

- Auto-discovery rules (define traffic to optimize via auto-discovery)

- Fixed-target rules (manually define traffic and the target SteelHeads to optimize; no auto-discovery)

- Discard (packets are silently dropped)

- Deny (the connection is reset)

Rules are processed top-down until there is a match.

In-path rules are only inspected when SYNs arrive on LAN ports.
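A minimal sketch of the top-down, first-match processing described above (the rule table, subnets, and ports are hypothetical examples, not defaults):

```python
import ipaddress

# Top-down, first-match evaluation of in-path rules.
# The rule table below is hypothetical: (action, destination subnet, ports).
RULES = [
    ("pass-through",   "10.0.0.0/8",     [22]),   # e.g. don't optimize SSH
    ("fixed-target",   "192.168.1.0/24", None),   # optimize via a named peer
    ("auto-discovery", "0.0.0.0/0",      None),   # default: try to optimize
]

def match_rule(dst_ip: str, dst_port: int) -> str:
    for action, subnet, ports in RULES:
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(subnet):
            if ports is None or dst_port in ports:
                return action   # first match wins; no further rules checked
    return "auto-discovery"     # implicit default if nothing matches

print(match_rule("10.1.2.3", 22))       # pass-through
print(match_rule("192.168.1.10", 445))  # fixed-target
print(match_rule("8.8.8.8", 80))        # auto-discovery
```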

Part 7: Peering Rules

Used to configure how a SteelHead responds to auto-discovery probes.

Can be used to pass probes through when SteelHeads are connected serially.

Enhanced auto-discovery will automatically discover the end SteelHeads.

Can be used to define which SteelHeads we will accept connections from.

3 types:

- Auto - automatically determine the best response

- Accept - accept peering requests that match the rule

- Pass - pass through peering requests that match the rule

Part 8: Pre-Population (CIFS & MAPI)

CIFS prepopulation

Configure > Optimization > CIFS Prepopulation page.

The prepopulation operation effectively performs the first SteelHead appliance read of the data on the prepopulation share. Subsequently, the SteelHead appliance handles read and write requests as effectively as with a warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN.

There are two reasons why this is important.

1. CIFS pre-population requests always ingress and egress the SteelHead using the primary port.

2. The data store will not be warmed if the traffic doesn't pass through the SteelHead on its way to the primary port.

MAPI Prepopulation

Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation, the TCP sessions are broken. With MAPI prepopulation, the SteelHead appliance can act as if it were the mail client: if the client closes the connection, the client-side SteelHead appliance keeps an open connection to the server-side SteelHead appliance, and the server-side SteelHead appliance keeps its connection open to the server. This allows data to be pushed through the data store before the user logs on to the server again. The default timer is 96 hours; after that, the connection is reset.

Part 9: SDR (Default, M, and Adaptive)

Scalable Data Referencing (SDR)

Bandwidth optimization is delivered through SDR (Scalable Data Referencing). SDR uses a proprietary algorithm to break up TCP data streams into data chunks that are stored on the hard disk (data store) of the SteelHead appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the peer SteelHead appliance across the WAN. If the same byte sequence is seen again in the TCP data stream, the reference is sent across the WAN instead of the raw data chunk. The peer SteelHead appliance uses this reference to reconstruct the original data chunk and the TCP data stream. Data and references are maintained in persistent storage in the data store within each SteelHead appliance. There are no consistency issues even in the presence of replicated data.

How Does SDR Work?

When data is sent for the first time across a network (no commonality with any file ever sent before), all data and references are new and are sent to the Steelhead appliance on the far side of the network. This new data and the accompanying references are compressed using conventional algorithms so as to improve performance, even on the first transfer.
When data is changed, new data and references are created. Thereafter, whenever new requests are sent across the network, the references created are compared with those that already exist in the local data store. Any data that the Steelhead appliance determines already exists on the far side of the network are not sent—only the references are sent across the network.
As files are copied, edited, renamed, and otherwise changed or moved, the Steelhead appliance continually builds the data store to include more and more data and references. References can be shared by different files and by files in different applications if the underlying bits are common to both. Since SDR can operate on all TCP-based protocols, data commonality across protocols can be leveraged so long as the binary representation of that data does not change between the protocols. For example, when a file transferred via FTP is then transferred using WFS (Windows File System), the binary representation of the file is basically the same and thus references can be sent for that file.
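A toy sketch of the reference-based deduplication described above. Real SDR uses a proprietary chunking algorithm and integer labels; this sketch substitutes fixed-size chunks and SHA-1 digests as stand-in references:

```python
import hashlib

# Toy illustration of SDR-style reference-based deduplication.
# Real SDR uses proprietary chunking and integer labels; here we use
# fixed-size chunks and SHA-1 digests as stand-in references.
CHUNK_SIZE = 8

def send(stream: bytes, sent_refs: set, wire: list):
    """Split a stream into chunks; send only a reference for known chunks."""
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        ref = hashlib.sha1(chunk).digest()
        if ref in sent_refs:
            wire.append(("ref", ref))          # peer already holds this chunk
        else:
            sent_refs.add(ref)
            wire.append(("data", ref, chunk))  # new data: send raw (and compress)

sent_refs, wire = set(), []
send(b"ABCDEFGH" * 3, sent_refs, wire)   # three identical 8-byte chunks
refs = sum(1 for item in wire if item[0] == "ref")
print(refs)  # 2 - only the first copy of the chunk crossed the "WAN" as data
```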

SDR Flavors (Adaptive Data Streamlining)

Default

- Disk based data store

- Excellent BW reduction

SDR-M

- RAM based data store

- Excellent LAN-side throughput

SDR-Adaptive

- Blended data store / compression model

- Monitors both read and write disk I/O response times and adapts the data reduction method based on statistical trends

- Good LAN-side throughput and BW reduction

Part 10: Streamlining Techniques (Data, Transport & Application)

Data Streamlining

Data reduction

- Eliminate redundant data on the WAN

- 60% - 95% reduction in bandwidth utilization

Compression

- LZ-Compression for "new" data segments

- useful for data transferred on first pass

QoS

- (Optional) Prioritize traffic based on bandwidth and latency

- compatible with existing QoS implementations

Disaster Recovery Intelligence

- Automatically adapt algorithms to large-scale DR transfers

- Optimize reads, writes, and segment handling for massive loads.

Transport Streamlining

SSL Acceleration

- Supports end-to-end acceleration of secure traffic

- Maintains the preferred trust model
