Studying TCP's Throughput and Goodput using NS
- What is Throughput
- Throughput is the amount of data received by the destination.
- The average throughput is the amount of data received per unit of time
- Example:
Suppose a TCP receiver receives 60 M Bytes of data in 1 minute. Then:
- The throughput over that period is 60 M Bytes
- The average throughput is 60 M Bytes/min, or 1 M Byte/sec
- Two kinds of Throughput
- There are 2 kinds of throughputs in networking:
- Indiscriminate throughput (= the average amount of data received by the receiver per unit time, regardless of whether the data is a retransmission or not).
- Good throughput (= the average amount of data received by the receiver per unit time that is NOT a retransmission).
- The first kind of throughput is known as throughput in network literature.
- The second kind of throughput is known as goodput in network literature.
- Obtaining Throughput and Goodput information in NS simulations
- Techniques to obtain the different throughput information in NS:
- The (indiscriminate) throughput information is obtained from the trace information of queues in links, collected during the NS simulation run.
- The goodput information is obtained by attaching a trace application to the TCP Sink (receiving) Agent
- We will first look at how to obtain (indiscriminate) throughput information next
- Further down in this webpage (see: click here), we will look at how to obtain goodput information
- Obtain Throughput information from Trace Information of Queues
- First, you must tell NS to print out trace information for the queue of a link, using:
$ns trace-queue $src_node $dest_node $output_file
This instructs NS to write the packet activity (arrivals, departures, receives, and drops) on the link ($src_node, $dest_node) to the output file $output_file
- Example:
set n0 [$ns node]
set n1 [$ns node]
$ns duplex-link $n0 $n1 2Mb 10ms DropTail

set trace_file [open "out.tr" w]
$ns trace-queue $n0 $n1 $trace_file
- The output file has the following format (one trace record per line); the most important fields in a trace record are:
- Each line of the trace file corresponds to one event of packet activity
- The first character of one trace record indicates the action:
- +: a packet arrives at a queue (it may or may not be dropped !)
- -: a packet leaves a queue
- r: a packet is received by the node at the downstream end of the link
- d: a packet is dropped from a queue
- The last 4 fields contain:
- source id: id of sender (format is: x.y = node x and transport agent y)
- receiver id: id of receiver (format is: x.y = node x and transport agent y)
- sequence number: useful to determine if packet was new or a retransmission
- packet id: is always increasing - useful to determine the number of packets lost.
- The packet size contains the number of bytes in the packet
- Sample trace output:
+ 0.311227 3 4 tcp 592 ------- 0 0.0 4.0 0 0
- 0.311227 3 4 tcp 592 ------- 0 0.0 4.0 0 0
r 0.351867 3 4 tcp 592 ------- 0 0.0 4.0 0 0
- The trace records show a packet from TCP source agent 0 attached to node 0, destined for TCP sink agent 0 attached to node 4
- The size of this packet is 592 bytes.
- To compute the throughput, do:
- Trace the last-hop link to the destination.
- Look for receive (r) events in the trace file.
- Look for the correct source and destination pair
- Add up all the packet sizes in those receive events (a minimal sketch of such a tally script is shown below)
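Below is a minimal Tcl sketch of such a tally script (an illustration only, not the Perl script linked later on this page). It assumes the trace file is named "out.tr", that the flow of interest runs from TCP agent 0.0 to TCP sink 4.0 as in the sample above, and that the fields of a trace record are separated by single spaces:

    set in [open "out.tr" r]
    array set bytes {}
    while {[gets $in line] >= 0} {
        # Trace record fields: event time fromNode toNode type size flags fid src dst seq pktId
        set f [split $line " "]
        set event [lindex $f 0]
        set time  [lindex $f 1]
        set size  [lindex $f 5]
        set src   [lindex $f 8]
        set dst   [lindex $f 9]
        # Keep only receive (r) events for the source/destination pair 0.0 -> 4.0
        if {$event == "r" && $src == "0.0" && $dst == "4.0"} {
            set slot [expr int($time)]                  ;# 1-second interval index
            if {![info exists bytes($slot)]} { set bytes($slot) 0 }
            set bytes($slot) [expr $bytes($slot) + $size]
        }
    }
    close $in

    # Print "interval  throughput (in Mbps)" for each 1-second interval
    foreach slot [lsort -integer [array names bytes]] {
        puts "$slot [expr $bytes($slot) * 8.0 / 1000000]"
    }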
- Example Tracing TCP Throughput
- Recall the network simulated:
- Note that the last link to the destination is (n3, n4)
- The following addition is made to the Reno script to get trace data to the destination:
# Make a NS simulator
set ns [new Simulator]

# Define a 'finish' procedure
proc finish {} {
    exit 0
}

# Create the nodes:
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]

# Create the links:
$ns duplex-link $n0 $n2 2Mb 10ms DropTail
$ns duplex-link $n1 $n2 2Mb 10ms DropTail
$ns duplex-link $n2 $n3 0.3Mb 200ms DropTail
$ns duplex-link $n3 $n4 0.5Mb 40ms DropTail
$ns duplex-link $n3 $n5 0.5Mb 30ms DropTail

# Add a TCP sending module to node n0
set tcp1 [new Agent/TCP/Reno]
$ns attach-agent $n0 $tcp1

# Add a TCP receiving module to node n4
set sink1 [new Agent/TCPSink]
$ns attach-agent $n4 $sink1

# Direct traffic from "tcp1" to "sink1"
$ns connect $tcp1 $sink1

# Setup a FTP traffic generator on "tcp1"
set ftp1 [new Application/FTP]
$ftp1 attach-agent $tcp1
$ftp1 set type_ FTP          ;# (not necessary)

# Schedule start/stop times
$ns at 0.1   "$ftp1 start"
$ns at 100.0 "$ftp1 stop"

# Set simulation end time
$ns at 125.0 "finish"        ;# will invoke "exit 0"

##################################################
## Obtain trace data at destination (n4)
##################################################
set trace_file [open "out.tr" w]
$ns trace-queue $n3 $n4 $trace_file

# Run simulation !!!!
$ns run
- Example Program: (Demo above code)
- Prog file: click here
Run the program with NS and you will get a trace file
- Processing the Trace File
- Take a look at the output trace file "out.tr":
- + 0.311227 3 4 tcp 40 ------- 0 0.0 4.0 0 0
(@ 0.311227 sec: 40 bytes of TCP data arrives at node 3)
- - 0.311227 3 4 tcp 40 ------- 0 0.0 4.0 0 0
(@ 0.311227 sec: 40 bytes of TCP data departed from node 3)
- r 0.351867 3 4 tcp 40 ------- 0 0.0 4.0 0 0
(@ 0.351867 sec: 40 bytes of TCP data is received by node 4)
- + 0.831888 3 4 tcp 592 ------- 0 0.0 4.0 1 2
(@ 0.831888 sec: 592 bytes of TCP data arrives at node 3)
- - 0.831888 3 4 tcp 592 ------- 0 0.0 4.0 1 2
(@ 0.831888 sec: 592 bytes of TCP data departed from node 3)
- + 0.847675 3 4 tcp 592 ------- 0 0.0 4.0 2 3
(@ 0.847675 sec: 592 bytes of TCP data arrives at node 3)
- - 0.847675 3 4 tcp 592 ------- 0 0.0 4.0 2 3
(@ 0.847675 sec: 592 bytes of TCP data departed from node 3)
- r 0.88136 3 4 tcp 592 ------- 0 0.0 4.0 1 2
(@ 0.88136 sec: 592 bytes of TCP data is received by node 4)
- r 0.897147 3 4 tcp 592 ------- 0 0.0 4.0 2 3
+ 1.361381 3 4 tcp 592 ------- 0 0.0 4.0 3 6
- 1.361381 3 4 tcp 592 ------- 0 0.0 4.0 3 6
+ 1.377168 3 4 tcp 592 ------- 0 0.0 4.0 4 7
- 1.377168 3 4 tcp 592 ------- 0 0.0 4.0 4 7
+ 1.392955 3 4 tcp 592 ------- 0 0.0 4.0 5 8
- 1.392955 3 4 tcp 592 ------- 0 0.0 4.0 5 8
+ 1.408741 3 4 tcp 592 ------- 0 0.0 4.0 6 9
- 1.408741 3 4 tcp 592 ------- 0 0.0 4.0 6 9
r 1.410853 3 4 tcp 592 ------- 0 0.0 4.0 3 6
- Computing the throughput received in each time interval is nothing more than tallying the total number of bytes received in each interval.
- Example:
- From the above trace file, we can tally the number of bytes received in the first one-second interval by adding up the sizes of the packets received during that second.
These are the packets that are received by TCP entity "4.0" in the first second:
r 0.351867 3 4 tcp 40 ------- 0 0.0 4.0 0 0
r 0.88136 3 4 tcp 592 ------- 0 0.0 4.0 1 2
r 0.897147 3 4 tcp 592 ------- 0 0.0 4.0 2 3
The total number of bytes received in the first second is therefore:
40 + 592 + 592 = 1224 bytes
or:
1224 × 8 = 9792 bits
So the throughput in the first second is 9792 bits/sec = 0.009792 Mbps
Repeat the process for the second second, and so on...
You do NOT want to do that by hand...
- You need to write a program to identify the correct source/destination pair and add up all the packet sizes...
- Someone has done that !!!
- Here is a PERL program that can be used to process trace files: click here
Save it and run it using:
perl throughput.pl trfile destNode srcNode.port# destNode.port# time-granularity
The time granularity lets you compute the throughput over consecutive time intervals; each interval has the length given by the time granularity.
- Example:
You can use the following command to get average throughput data from the trace file obtained in the NS script above:
perl throughput.pl out.tr 4 0.0 4.0 1 > out
We need these parameter values to extract the correct throughput data; in a trace record such as:
r 0.351867 3 4 tcp 592 ------- 0 0.0 4.0 0 0
- 4 is the node ID
- 0.0 is the ID of the TCP source
- 4.0 is the ID of the TCP sink
- The parameter "out.tr" is the input filename
- The parameter "1" is interval size (here: interval size is 1 second)
- The throughput.pl script will create an output file that is suitable for plotting.
To plot the progression of the average throughput, use gnuplot:
gnuplot> plot "out" using 1:2 title "TCP throughput" with lines 1
- The difference between Throughput and Goodput
- Consider the following receive events in the trace file that happen between 3 and 4 seconds (you can obtain the trace file by running the example program: click here)
r 3.01512 3 4 tcp 592 ------- 0 0.0 4.0 32 63
r 3.030907 3 4 tcp 592 ------- 0 0.0 4.0 33 64
r 3.046693 3 4 tcp 592 ------- 0 0.0 4.0 34 65
r 3.06248 3 4 tcp 592 ------- 0 0.0 4.0 35 66
r 3.078267 3 4 tcp 592 ------- 0 0.0 4.0 36 67
r 3.094053 3 4 tcp 592 ------- 0 0.0 4.0 37 68
r 3.10984 3 4 tcp 592 ------- 0 0.0 4.0 38 69
r 3.125627 3 4 tcp 592 ------- 0 0.0 4.0 39 70
r 3.141413 3 4 tcp 592 ------- 0 0.0 4.0 40 71
r 3.1572 3 4 tcp 592 ------- 0 0.0 4.0 41 72
r 3.172987 3 4 tcp 592 ------- 0 0.0 4.0 42 73
r 3.188773 3 4 tcp 592 ------- 0 0.0 4.0 43 74
r 3.20456 3 4 tcp 592 ------- 0 0.0 4.0 44 75
r 3.220347 3 4 tcp 592 ------- 0 0.0 4.0 45 76
r 3.236133 3 4 tcp 592 ------- 0 0.0 4.0 46 77
r 3.25192 3 4 tcp 592 ------- 0 0.0 4.0 47 78
r 3.267707 3 4 tcp 592 ------- 0 0.0 4.0 48 79
r 3.283493 3 4 tcp 592 ------- 0 0.0 4.0 49 80
                    <=== the packet with sequence number 50 is lost !!!
r 3.29928 3 4 tcp 592 ------- 0 0.0 4.0 51 82
r 3.315067 3 4 tcp 592 ------- 0 0.0 4.0 53 84
r 3.330853 3 4 tcp 592 ------- 0 0.0 4.0 55 86
r 3.34664 3 4 tcp 592 ------- 0 0.0 4.0 57 88
r 3.362427 3 4 tcp 592 ------- 0 0.0 4.0 59 90
r 3.378213 3 4 tcp 592 ------- 0 0.0 4.0 61 92
r 3.528827 3 4 tcp 592 ------- 0 0.0 4.0 63 110
r 3.544613 3 4 tcp 592 ------- 0 0.0 4.0 64 111
r 3.5604 3 4 tcp 592 ------- 0 0.0 4.0 65 113
r 3.576187 3 4 tcp 592 ------- 0 0.0 4.0 66 114
r 3.591973 3 4 tcp 592 ------- 0 0.0 4.0 67 116
r 3.60776 3 4 tcp 592 ------- 0 0.0 4.0 68 117
r 3.623547 3 4 tcp 592 ------- 0 0.0 4.0 69 119
r 3.639333 3 4 tcp 592 ------- 0 0.0 4.0 70 120
r 3.65512 3 4 tcp 592 ------- 0 0.0 4.0 71 122
r 3.670907 3 4 tcp 592 ------- 0 0.0 4.0 72 123
r 3.686693 3 4 tcp 592 ------- 0 0.0 4.0 73 125
r 3.70248 3 4 tcp 592 ------- 0 0.0 4.0 74 126
r 3.718267 3 4 tcp 592 ------- 0 0.0 4.0 75 128
r 3.734053 3 4 tcp 592 ------- 0 0.0 4.0 76 129
r 3.74984 3 4 tcp 592 ------- 0 0.0 4.0 77 131
r 3.765627 3 4 tcp 592 ------- 0 0.0 4.0 78 132
r 3.781413 3 4 tcp 592 ------- 0 0.0 4.0 79 134
r 3.7972 3 4 tcp 592 ------- 0 0.0 4.0 80 135
r 3.812987 3 4 tcp 592 ------- 0 0.0 4.0 81 137
r 3.828773 3 4 tcp 592 ------- 0 0.0 4.0 83 139
r 3.84456 3 4 tcp 592 ------- 0 0.0 4.0 85 141
r 3.860347 3 4 tcp 592 ------- 0 0.0 4.0 87 143
r 3.876133 3 4 tcp 592 ------- 0 0.0 4.0 89 145
r 3.89192 3 4 tcp 592 ------- 0 0.0 4.0 91 147
r 3.907707 3 4 tcp 592 ------- 0 0.0 4.0 93 149
r 3.923493 3 4 tcp 592 ------- 0 0.0 4.0 95 151
r 3.93928 3 4 tcp 592 ------- 0 0.0 4.0 97 153
r 3.955067 3 4 tcp 592 ------- 0 0.0 4.0 99 156
r 3.970853 3 4 tcp 592 ------- 0 0.0 4.0 101 159
r 3.98664 3 4 tcp 592 ------- 0 0.0 4.0 102 161
- The packet with sequence number 50 was lost and is NOT received between time 3 and 4 seconds (see the trace above)!!
- Therefore, packets with sequence numbers 51 and higher CANNOT be delivered to the receiver between time 3 and 4 seconds.
- Between time 3 and 4, the receiver has received a total of 54 packets (just count the number of entries above) - each packet has 592 bytes.
Therefore, the throughput between 3 and 4 seconds is:
54 x 592 bytes = 31968 bytes/sec = 255744 bits/sec = 0.255744 Mbps
- However... because packets with sequence numbers 51 and above cannot be received without receiving packet 50 first, these packets are NOT counted in the goodput of that interval...
Only packets with sequence numbers 32 to 49 (18 packets) are counted as "good" throughput.
Therefore, the goodput between 3 and 4 seconds is:
18 x 592 bytes = 10656 bytes/sec = 85248 bits/sec = 0.085248 Mbps
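For comparison with the TraceApp technique described in the next section, the same in-order rule can be applied directly to the trace file. The following rough Tcl sketch (an illustration only, under the same assumptions as the earlier tally sketch: trace file "out.tr", flow 0.0 -> 4.0) credits a segment to the goodput of an interval only when every smaller sequence number has already arrived; segments that arrive out of order are held back and credited to the interval in which the missing segment finally shows up:

    set in [open "out.tr" r]
    set expected 0            ;# next in-order sequence number for flow 0.0 -> 4.0
    array set buffered {}     ;# out-of-order segments waiting for a gap: seq -> size
    array set goodbytes {}    ;# per-second goodput tally: second -> bytes
    while {[gets $in line] >= 0} {
        set f [split $line " "]
        if {[lindex $f 0] != "r" || [lindex $f 8] != "0.0" || [lindex $f 9] != "4.0"} {
            continue
        }
        set slot [expr int([lindex $f 1])]          ;# 1-second interval index
        set size [lindex $f 5]
        set seq  [lindex $f 10]
        if {![info exists goodbytes($slot)]} { set goodbytes($slot) 0 }
        if {$seq == $expected} {
            # In-order arrival: deliverable now, plus any buffered segments it releases
            set goodbytes($slot) [expr $goodbytes($slot) + $size]
            incr expected
            while {[info exists buffered($expected)]} {
                set goodbytes($slot) [expr $goodbytes($slot) + $buffered($expected)]
                unset buffered($expected)
                incr expected
            }
        } elseif {$seq > $expected} {
            set buffered($seq) $size     ;# not deliverable yet: a gap precedes it
        }
        # seq < expected: a retransmission of data already delivered; not goodput
    }
    close $in

    # Print "interval  goodput (in Mbps)" for each 1-second interval
    foreach slot [lsort -integer [array names goodbytes]] {
        puts "$slot [expr $goodbytes($slot) * 8.0 / 1000000]"
    }

The TraceApp method described next obtains the same information with far less bookkeeping, because the TCPSink agent already performs the in-order delivery.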
- Obtaining Goodput information
- The following facts will allow you to obtain the "goodput" from an NS simulation:
- An application (i.e., a class that is derived from the NS class Application) can attach itself to a TCPSink agent.
- The TCPSink agent will call (= invoke) the application's recv(bytes) method whenever the TCPSink agent delivers a data segment.
- Data segments are always delivered in sequence. In other words, the TCPSink agent will not deliver segment n unless segment n-1 has been delivered.
- Using this fact, there is a very easy way to obtain the "goodput":
- Just make the TCPSink agent (that receives and delivers data packets in the correct sequence to the receiving application) do the computation.
- In order to do this, all we need to do is write an OTcl class TraceApp that is a subclass of the NS class Application
In the class TraceApp we need to write a method recv(x) - the parameter x is the number of bytes in the data segment.
- A simple TraceApp that tallies the number of bytes received by the TCPSink agent:
Class TraceApp -superclass Application

TraceApp instproc init {args} {
    $self set bytes_ 0
    eval $self next $args
}

TraceApp instproc recv {byte} {
    $self instvar bytes_
    set bytes_ [expr $bytes_ + $byte]
    return $bytes_
}
- The -superclass keyword indicates that the class TraceApp is a child class of the NS class Application (Note: FTP is also a child of Application)
- There are 2 methods defined in the TraceApp class:
- init{args} - this method is invoked by the new operator when an object of the class TraceApp is created.
(You can view this method as the constructor)
The init{args} method sets the variable bytes_ to 0 (this variable is used to tally the number of bytes received)
- recv{byte} - this method is provided so that the TCPSink agent can invoke it.
(In fact, the recv{byte} method is defined in the parent class Application - by defining a new recv{byte} method in the child class TraceApp, we have overridden the one in the parent class Application.)
The most important statement in recv{byte} is:
set bytes_ [expr $bytes_ + $byte]
- byte is the input parameter to the recv{byte} method
- bytes_ is an instance variable of the TraceApp object (declared with instvar) used to tally the number of bytes received
- So the method increments bytes_ by byte - i.e., it does the tally
- There is still some work left to do:
- Create a TraceApp object
- Attach the TraceApp to the TCPSink
- Activate the TraceApp object
- The following code shows the steps in Tcl code:
set sink1 [new Agent/TCPSink]       ;# Create a TCPSink object
set traceapp [new TraceApp]         ;# Create a TraceApp object
$traceapp attach-agent $sink1       ;# Attach traceapp to the TCPSink
$ns at 0.0 "$traceapp start"        ;# Start the traceapp object
- The above code will have the TraceApp object accumulate goodput information from the start to the end of the simulation.
- If we want goodput information over smaller time intervals, we can use the same technique that we saw when we obtained CWND information.
- The following self-scheduling Tcl function can be used to obtain goodput information (works just like the one that obtains CWND information):
proc plotThroughput {tcpSink outfile} {
    global ns

    set time_incr 1.0                       ;# length of one sampling epoch (seconds)
    set now [$ns now]                       ;# Read current time
    set nbytes [$tcpSink set bytes_]        ;# Read number of bytes tallied
    $tcpSink set bytes_ 0                   ;# Reset for next epoch

    ### Prints "TIME throughput" (in Mbps) to output file
    set throughput [expr ($nbytes * 8.0 / 1000000) / $time_incr]
    puts $outfile "$now $throughput"

    $ns at [expr $now + $time_incr] "plotThroughput $tcpSink $outfile"
}
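The function above still has to be called once from the simulation script. A small sketch of how it might be hooked in (assumptions: the output file name "out2.tput" matches the file mentioned below, the first sample is taken at time 1.0, and the object passed in is the TraceApp created earlier, since bytes_ is the TraceApp's tally variable):

    set tput_file [open "out2.tput" w]                  ;# goodput output file
    $ns at 1.0 "plotThroughput $traceapp $tput_file"    ;# first call; the proc re-schedules itself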
- Example Program: (Demo above code)
- Prog file: click here
When you run the program, it will create an output file "out2.tput" that contains the goodput data, which you can plot using gnuplot:
plot "out2.tput" using 1:2 with lines 1