Part 1: Adaptive Compression
Configure > Optimization > Performance
Dynamically detects LZ data compression performance for a connection and momentarily turns compression off (sets the compression level to 0) if it is not achieving optimal results.
Improves end-to-end throughput over the LAN by maximizing the WAN throughput.
By default, this setting is disabled.
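The adaptive behavior above can be pictured as a simple feedback check on the compression ratio. A minimal Python sketch of the idea (the 1.05 ratio threshold and per-sample test are illustrative assumptions, not RiOS internals):

import os
import zlib

MIN_RATIO = 1.05  # assumption: below this, compression is not paying off

def pick_level(sample: bytes, current_level: int = 6) -> int:
    """Return 0 (compression off) if a recent traffic sample barely shrinks."""
    ratio = len(sample) / len(zlib.compress(sample, current_level))
    return 0 if ratio < MIN_RATIO else current_level

print(pick_level(b"text text text " * 100))  # compressible -> stays at level 6
print(pick_level(os.urandom(1500)))          # incompressible -> returns 0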
Part 2: Admission Control - Connection Counts
Occurs when the optimized connection count exceeds the model's threshold.
The appliance continues to optimize existing connections, but new connections are passed through until the connection count falls below the "enable" threshold.
KBase: "admission control connection"
Logs:
Nov 3 09:38:35 sh01 sport[4539]: [admission_control.NOTICE] - {- -} Connection limit achieved.
Nov 3 09:38:35 sh01 sport[4539]: [admission_control.NOTICE] - {- -} Memory Usage: 295 Current Connections: 136
Nov 3 09:38:35 sh01 sport[4539]: [admission_control.WARN] - {- -} Pausing intercept…
Nov 3 09:38:45 sh01 statsd[4511]: [statsd.NOTICE]: Alarm triggered for rising error for event admission_conn
To automatically generate a sysdump:
(config)#debug alarm admission_conn enable
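The pause/resume behavior is a hysteresis loop: interception stops at the connection limit and resumes only once the count drops below the lower "enable" threshold. A Python sketch (the 200/180 thresholds are made up; real cutoff and enable values depend on the appliance model):

CUTOFF = 200  # assumed model limit: stop intercepting new connections here
ENABLE = 180  # assumed re-enable threshold: resume interception below this

class AdmissionControl:
    def __init__(self) -> None:
        self.paused = False

    def should_optimize(self, current_connections: int) -> bool:
        if self.paused and current_connections < ENABLE:
            self.paused = False  # count fell below "enable": intercept again
        elif not self.paused and current_connections >= CUTOFF:
            self.paused = True   # limit reached: new connections pass through
        return not self.paused

ac = AdmissionControl()
print(ac.should_optimize(150))  # True  - below the limit, optimize
print(ac.should_optimize(205))  # False - limit reached, pausing intercept
print(ac.should_optimize(190))  # False - still above the "enable" threshold
print(ac.should_optimize(175))  # True  - below "enable", optimizing again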
Part 3: Auto Discovery/Enhanced Auto Discovery
RiOS Auto-discovery Process
Step 1: The client sends a SYN, which is intercepted by the client-side SteelHead.
Step 2: The client-side SteelHead adds TCP option 0x4c (76) to the SYN, making it a SYN+, and forwards it toward the server-side SteelHead.
Step 3: The server-side SteelHead sees option 0x4c (76), also known as the TCP probe, and responds with a SYN/ACK+. At this point, the inner TCP session has been established.
Step 4: The server-side SteelHead sends a SYN to the server.
Step 5: The server responds with a SYN/ACK to the server-side SteelHead (the server-side outer TCP session).
Step 6: The client-side SteelHead, signaled over the inner TCP session, sends a SYN/ACK to the client. At this point, the client-side outer TCP session has been established.
TCP Option:
The TCP option used for auto-discovery is 0x4c (76).
The client-side SteelHead appliance attaches a 10-byte option to the TCP header;
the server-side SteelHead appliance attaches a 14-byte option in return.
Note: this happens only during the initial discovery process, not during connection setup between the SteelHead appliances or for the outer TCP sessions.
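On the wire, each probe is an ordinary TCP option: one kind byte (0x4c), one length byte, then the payload. A pure-Python sketch of the framing (the payload bytes are placeholders; the actual probe contents are proprietary):

import struct

PROBE_KIND = 0x4C  # 76, the TCP option kind used for auto-discovery

def build_probe_option(payload: bytes) -> bytes:
    """TCP options are encoded as kind (1 byte) + length (1 byte) + value."""
    length = 2 + len(payload)  # the length byte counts the kind and length bytes
    return struct.pack("!BB", PROBE_KIND, length) + payload

client_probe = build_probe_option(bytes(8))   # 10-byte option from the client side
server_probe = build_probe_option(bytes(12))  # 14-byte option in return
print(client_probe.hex(), len(client_probe))  # 4c0a... 10
print(server_probe.hex(), len(server_probe))  # 4c0e... 14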
Enhanced Auto-Discovery
Automatically finds and optimizes between the most distant SteelHead pair.
Eliminates the need for manual peering rules.
Also called "Auto Peering"
By default, automatic peering is enabled.
Supports an unlimited number of SteelHeads in transit between the client and the server.
Part 4: Connection Pooling
Connection pooling enhances network performance by maintaining a pool of pre-existing idle connections instead of having the SteelHead create a new connection for every request. This feature is useful for application protocols, such as HTTP, that use many rapidly created, short-lived TCP connections.
By default, RiOS establishes 20 inner channels plus an Out-of-Band (OOB) channel between two SteelHead appliances when they first communicate. RiOS uses the standard TCP keep-alive mechanism to monitor the state of these channels and the availability of the other SteelHead appliance. These TCP keep-alive packets are at least 52 bytes each (IP header, TCP header, and timestamp TCP option) and are generated every 20 seconds for the OOB channel and every 300 seconds for the inner channels. Over the period of an hour, this generates 840 packets (including return packets), resulting in at least 43,680 bytes of traffic for the 21 connections to each remote SteelHead. The total size of each packet on your WAN media also depends on the particular link layer (such as the Ethernet header), as well as any other TCP options that other WAN devices in your network add to these packets.
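The numbers above check out arithmetically; a quick Python verification using only the constants from the text:

KEEPALIVE_BYTES = 52    # minimum size of each keep-alive packet
OOB_INTERVAL_S = 20     # OOB channel keep-alive interval
INNER_INTERVAL_S = 300  # inner channel keep-alive interval
INNER_CHANNELS = 20
SECONDS_PER_HOUR = 3600

oob_pkts = SECONDS_PER_HOUR // OOB_INTERVAL_S                         # 180
inner_pkts = (SECONDS_PER_HOUR // INNER_INTERVAL_S) * INNER_CHANNELS  # 240
total_pkts = 2 * (oob_pkts + inner_pkts)  # x2 to include return packets
print(total_pkts, total_pkts * KEEPALIVE_BYTES)  # 840 packets, 43680 bytes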
Part 5: High-Speed TCP & MX-TCP
HSTCP is a feature you can enable on SteelHead appliances to help reduce WAN data transfer inefficiencies caused by limitations of regular TCP. Enabling HSTCP allows for more complete utilization of these “long fat pipes”. HSTCP is an IETF standard (defined in RFC 3649 and RFC 3742) and has been shown to provide significant performance improvements in networks with high bandwidth-delay product (BDP) values.
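A bandwidth-delay product calculation shows why a long fat pipe starves regular TCP. The link speed and RTT below are illustrative assumptions:

link_mbps = 622  # assumed figure, e.g. an OC-12 WAN link
rtt_ms = 100     # assumed round-trip time

bdp_bytes = (link_mbps * 1_000_000 / 8) * (rtt_ms / 1000)
print(f"BDP = {bdp_bytes / 1024:.0f} KiB")  # ~7593 KiB in flight to fill the pipe
# A standard 64 KiB TCP window covers only about 0.8% of this BDP, which is
# the inefficiency HSTCP (RFC 3649) is designed to address.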
Part 6: In-Path Rules
Applied on the client-side SteelHead.
5 types of rules:
- Pass-through rules (define traffic to pass through, not optimize)
- Auto-discovery rules (define traffic to auto-discover and optimize)
- Fixed-target rules (manually define the traffic and target SteelHeads to optimize; no auto-discovery)
- Discard (packets are silently dropped)
- Deny (the connection is reset)
Rules are processed top-down until there is a match (see the sketch below).
In-path rules are only inspected when SYNs arrive on LAN ports.
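First-match, top-down processing can be sketched as a short loop over an ordered rule table. This is a hypothetical Python model, not Riverbed's implementation or CLI syntax:

from ipaddress import ip_address, ip_network

RULES = [  # evaluated in order; the first match wins
    ("pass-through",   "10.0.0.0/8",   None),  # e.g. do not optimize this subnet
    ("fixed-target",   "192.0.2.0/24", "steelhead-dc.example.com"),  # hypothetical peer
    ("auto-discovery", "0.0.0.0/0",    None),  # default: try to optimize the rest
]

def match_rule(dst_ip: str):
    """Return the first rule whose destination network contains dst_ip."""
    for action, net, target in RULES:
        if ip_address(dst_ip) in ip_network(net):
            return action, target
    return "pass-through", None  # nothing matched: pass through

print(match_rule("192.0.2.7"))     # ('fixed-target', 'steelhead-dc.example.com')
print(match_rule("198.51.100.5"))  # ('auto-discovery', None)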
Part 7: Peering Rules
Used to configure how SteelHeads respond to auto-discovery probes.
Can be used to pass probes through when SteelHeads are connected serially.
Enhanced auto-discovery will automatically discover the end SteelHeads.
Can be used to define which SteelHeads we will accept connections from.
3 types:
- auto - automatically determine the best response
- accept - accept the peering requests that match the rule
- pass - pass through peering requests that match the rule.
Part 8: Pre-Population (CIFS & MAPI)
CIFS prepopulation
Configure > Optimization > CIFS Prepopulation page.
The prepopulation operation effectively performs the first SteelHead appliance read of the data on the prepopulation share. Subsequently, the SteelHead appliance handles read and write requests as effectively as with a warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN.
There are two reasons why this is important.
1. The CIFS pre-pop request always ingresses and egresses the SteelHead using the primary port.
2. The data store will not be warmed if the traffic doesn't pass through the SteelHead on its way to the primary port.
MAPI Prepopulation
Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation, the TCP sessions are broken. With MAPI prepopulation, the SteelHead appliance can act as if it were the mail client: if the client closes the connection, the client-side SteelHead appliance keeps an open connection to the server-side SteelHead appliance, and the server-side SteelHead appliance keeps its connection open to the server. This allows data to be pushed through the data store before the user logs on to the server again. The default timer is set to 96 hours; after that, the connection is reset.
Part 9: SDR (Default, M and Adaptive)
Scalable Data Referencing (SDR)
Bandwidth optimization is delivered through SDR (Scalable Data Referencing). SDR uses a proprietary algorithm to break up TCP data streams into data chunks that are stored on the hard disk (data store) of the SteelHead appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the peer SteelHead appliance across the WAN. If the same byte sequence is seen again in the TCP data stream, the reference is sent across the WAN instead of the raw data chunk. The peer SteelHead appliance uses this reference to reconstruct the original data chunk and the TCP data stream. Data and references are maintained in persistent storage in the data store within each SteelHead appliance. There are no consistency issues, even in the presence of replicated data.
How Does SDR Work?
When data is sent for the first time across a network (no commonality with any file ever sent before), all data and references are new and are sent to the Steelhead appliance on the far side of the network. This new data and the accompanying references are compressed using conventional algorithms so as to improve performance, even on the first transfer.
When data is changed, new data and references are created. Thereafter, whenever new requests are sent across the network, the references created are compared with those that already exist in the local data store. Any data that the SteelHead appliance determines already exists on the far side of the network is not sent; only the references are sent across the network.
As files are copied, edited, renamed, and otherwise changed or moved, the Steelhead appliance continually builds the data store to include more and more data and references. References can be shared by different files and by files in different applications if the underlying bits are common to both. Since SDR can operate on all TCP-based protocols, data commonality across protocols can be leveraged so long as the binary representation of that data does not change between the protocols. For example, when a file transferred via FTP is then transferred using WFS (Windows File System), the binary representation of the file is basically the same and thus references can be sent for that file.
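The reference mechanism can be modeled as chunk-level deduplication against a shared dictionary. A toy Python version (fixed-size chunks, truncated hashes as references, and in-memory dicts are all simplifications; real SDR uses proprietary segmentation and a persistent disk-based data store):

import hashlib

CHUNK = 64  # illustrative chunk size in bytes

def encode(stream: bytes, store: dict) -> list:
    """Send new chunks as data + reference; send known chunks as reference only."""
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        ref = hashlib.sha256(chunk).digest()[:8]  # stand-in for SDR's integer label
        if ref in store:
            out.append(("ref", ref))
        else:
            store[ref] = chunk
            out.append(("data", ref, chunk))
    return out

def decode(tokens: list, store: dict) -> bytes:
    """Peer side: rebuild the original stream from references and new chunks."""
    parts = []
    for tok in tokens:
        if tok[0] == "data":
            _, ref, chunk = tok
            store[ref] = chunk
            parts.append(chunk)
        else:
            parts.append(store[tok[1]])
    return b"".join(parts)

tx_store, rx_store = {}, {}
msg = b"hello wan " * 40
wire1 = encode(msg, tx_store)  # cold transfer: chunks go as data + reference
wire2 = encode(msg, tx_store)  # warm transfer: references only
assert decode(wire1, rx_store) == msg and decode(wire2, rx_store) == msg
print(sum(t[0] == "ref" for t in wire2), "of", len(wire2), "chunks sent as references")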
SDR Flavors (Adaptive Data Streamlining)
Default
- Disk-based data store
- Excellent BW reduction
SDR-M
- RAM-based data store
- Excellent LAN-side throughput
SDR-Adaptive
- Blended data store / compression model
- Monitors both read and write disk I/O response and, based on statistical trends, adapts between disk- and memory-based data reduction
- Good LAN-side throughput and BW reduction
Part 10: Streamlining Techniques (Data, Transport & Application)
Data Streamlining
Data reduction
- Eliminate redundant data on the WAN
- 60% - 95% reduction in bandwidth utilization
Compression
- LZ compression for "new" data segments
- Useful for data transferred on the first pass
QoS
- (Optional) Prioritize data based on bandwidth and latency
- Compatible with existing QoS implementations
Disaster Recovery Intelligence
- Automatically adapt algorithms to large-scale DR transfers
- Optimize reads, writes, and segment handling for massive loads.
Transport Streamlining
SSL Acceleration
- Supports end-to-end acceleration of secure traffic
- Maintains the preferred trust model