Studying TCP's Throughput and Goodput using NS



  • What is Throughput

    • Throughput is the amount of data received by the destination.
    • The average throughput is the throughput per unit of time (i.e., the amount of data received divided by the length of the measurement period)
    • Example:

      Suppose a TCP receiver receives 60 M Bytes of data in 1 min, then:

      • The throughput of the period is 60 M Bytes
      • The average throughput is 60 M Bytes/min or 1 M Bytes/sec


  • Two kinds of Throughput

    • There are 2 kinds of throughput in networking:

      1. Indiscriminate throughput (= the average amount of data received by the receiver per unit time, regardless of whether the data is a retransmission or not).
      2. Good throughput (= the average amount of data received by the receiver per unit time that is NOT a retransmission).
    • The first kind of throughput is simply known as throughput in the networking literature.
    • The second kind of throughput is known as goodput in the networking literature.


  • Obtaining Throughput and Goodput information in NS simulations
    • Techniques to obtain the different throughput information in NS:

      1. The (indiscriminate) throughput information is obtained from the trace information of queues in links, recorded during the NS simulation run.
      2. The goodput information is obtained by attaching a trace application to the TCP Sink (receiving) Agent.
    • We will first look at how to obtain the (indiscriminate) throughput information.
    • Further down in this webpage, we will look at how to obtain the goodput information.


  • Obtaining Throughput information from the Trace Information of Queues

    • First, you tell NS to print out trace information for a queue in a link, using:

         $ns  trace-queue  $src_node  $dest_node  $output_file
      

      This instructs NS to write all packet activity on the link ($src_node, $dest_node) to the output file $output_file


    • Example:
        set n0 [$ns node]
        set n1 [$ns node]
      
        $ns duplex-link $n0 $n1   2Mb  10ms DropTail       
      
        set trace_file [open  "out.tr"  w]
      
        $ns  trace-queue  $n0  $n1  $trace_file
      


    • The output file has the following format:
    • The most important information in a trace record:
      • Each line of the trace file corresponds to one event of packet activity
      • The first character of a trace record indicates the action:
        • +: a packet arrives at a queue (it may or may not be dropped !)
        • -: a packet departs from (is dequeued from) a queue
        • r: a packet is received by the node at the receiving end of the link
        • d: a packet is dropped from a queue
      • The last 4 fields contain (see the annotated record below):
        • source id: id of the sender (format: x.y = node x and transport agent y)
        • receiver id: id of the receiver (format: x.y = node x and transport agent y)
        • sequence number: useful to determine whether a packet is new or a retransmission
        • packet id: always increasing - useful to determine the number of packets lost
      • The packet size field gives the number of bytes in the packet
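
      Putting the fields together, here is one sample record with each field labeled (the field order follows the standard NS2 wired trace format):

          r  0.351867  3  4  tcp  592  -------  0  0.0  4.0  0  0
          |  |         |  |  |    |    |        |  |    |    |  |
          |  |         |  |  |    |    |        |  |    |    |  +-- packet id
          |  |         |  |  |    |    |        |  |    |    +-- sequence number
          |  |         |  |  |    |    |        |  |    +-- receiver id
          |  |         |  |  |    |    |        |  +-- source id
          |  |         |  |  |    |    |        +-- flow id
          |  |         |  |  |    |    +-- flags
          |  |         |  |  |    +-- packet size (bytes)
          |  |         |  |  +-- packet type
          |  |         |  +-- to-node of the link
          |  |         +-- from-node of the link
          |  +-- time (seconds)
          +-- event (+, -, r, or d)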

    • Sample trace output:
         + 0.311227 3 4 tcp 592 ------- 0 0.0 4.0 0 0
         - 0.311227 3 4 tcp 592 ------- 0 0.0 4.0 0 0
         r 0.351867 3 4 tcp 592 ------- 0 0.0 4.0 0 0
      
      • These trace records show a packet sent by TCP agent 0 attached at node 0 (source id 0.0) to TCP agent 0 attached at node 4 (receiver id 4.0)
      • The size of this packet is 592 bytes.


    • To compute the throughput, do the following (a minimal Tcl sketch follows the list):
      1. Trace the last hop link of the destination.
      2. Look for receive (r) events in the trace file.
      3. Look for the correct source and destination pair
      4. Add up all the packet sizes in the receive events
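
      The following is a minimal Tcl sketch of these four steps (it is not the Perl script introduced later; the trace filename out.tr and the flow 0.0 -> 4.0 are the ones used in the example below, so adjust them for your own script):

         # Tally the bytes of all receive ("r") events for flow 0.0 -> 4.0
         set total_bytes 0
         set fp [open "out.tr" r]
         while {[gets $fp line] >= 0} {
             # Fields: event time from to type size flags fid src dst seq pktid
             set f [regexp -all -inline {\S+} $line]
             if {[lindex $f 0] == "r" &&
                 [lindex $f 8] == "0.0" && [lindex $f 9] == "4.0"} {
                 set total_bytes [expr $total_bytes + [lindex $f 5]]
             }
         }
         close $fp
         puts "Total bytes received: $total_bytes"

      Dividing total_bytes by the trace duration gives the average throughput; the Perl script introduced later does the same tally per time interval.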


  • Example: Tracing TCP Throughput

    • Recall the network simulated: nodes n0 and n1 connect to n2, a slow bottleneck link joins n2 and n3, and n3 connects to n4 and n5 (see the links created in the script below)

    • Note that the last link to the destination is (n3, n4)
    • The following addition is made to the Reno script to get trace data to the destination:
        #Make a NS simulator
        set ns [new Simulator]	
      
      # Define a 'finish' procedure
        proc finish {} {
           exit 0
        }
      
        # Create the nodes:
        set n0 [$ns node]
        set n1 [$ns node]
        set n2 [$ns node]
        set n3 [$ns node]
        set n4 [$ns node]
        set n5 [$ns node]
      
        # Create the links:
        $ns duplex-link $n0 $n2   2Mb  10ms DropTail
        $ns duplex-link $n1 $n2   2Mb  10ms DropTail
        $ns duplex-link $n2 $n3 0.3Mb 200ms DropTail
        $ns duplex-link $n3 $n4 0.5Mb  40ms DropTail
        $ns duplex-link $n3 $n5 0.5Mb  30ms DropTail
      
        # Add a TCP sending module to node n0
        set tcp1 [new Agent/TCP/Reno]
        $ns attach-agent $n0 $tcp1
      
        # Add a TCP receiving module to node n4
        set sink1 [new Agent/TCPSink]
        $ns attach-agent $n4 $sink1
      
        # Direct traffic from "tcp1" to "sink1"
        $ns connect $tcp1 $sink1
      
        # Setup a FTP traffic generator on "tcp1"
        set ftp1 [new Application/FTP]
        $ftp1 attach-agent $tcp1
      $ftp1 set type_ FTP               ;# (not necessary)
      
        # Schedule start/stop times
        $ns at 0.1   "$ftp1 start"
        $ns at 100.0 "$ftp1 stop"
      
        # Set simulation end time
      $ns at 125.0 "finish"             ;# Will invoke "exit 0"
      
        ##################################################
      ## Obtain trace data at destination (n4)
        ##################################################
      
        set trace_file [open  "out.tr"  w]
      
        $ns  trace-queue  $n3  $n4  $trace_file
      
        # Run simulation !!!!
        $ns run
      

    • Example Program: (demonstrates the above code)

      Run the program with NS (as sketched below) and you will get a trace file
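
      A hedged usage example (the script filename reno-trace.tcl is hypothetical; use whatever name you saved the script under):

         ns  reno-trace.tcl

      After the run, the queue trace for the link (n3, n4) is in the file out.tr.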



  • Processing the Trace File

    • Take a look at the output trace file "out.tr":

          + 0.311227 3 4 tcp 40 ------- 0 0.0 4.0 0 0

      (@ 0.311227 sec: 40 bytes of TCP data arrives at node 3)

          - 0.311227 3 4 tcp 40 ------- 0 0.0 4.0 0 0

      (@ 0.311227 sec: 40 bytes of TCP data departed from node 3)

          r 0.351867 3 4 tcp 40 ------- 0 0.0 4.0 0 0

      (@ 0.351867 sec: 40 bytes of TCP data is received by node 4)

          + 0.831888 3 4 tcp 592 ------- 0 0.0 4.0 1 2

      (@ 0.831888 sec: 592 bytes of TCP data arrives at node 3)

          - 0.831888 3 4 tcp 592 ------- 0 0.0 4.0 1 2

      (@ 0.831888 sec: 592 bytes of TCP data departed from node 3)

          + 0.847675 3 4 tcp 592 ------- 0 0.0 4.0 2 3

      (@ 0.847675 sec: 592 bytes of TCP data arrives at node 3)

          - 0.847675 3 4 tcp 592 ------- 0 0.0 4.0 2 3

      (@ 0.847675 sec: 592 bytes of TCP data departed from node 3)

          r 0.88136 3 4 tcp 592 ------- 0 0.0 4.0 1 2

      (@ 0.88136 sec: 592 bytes of TCP data is received by node 4)

          r 0.897147 3 4 tcp 592 ------- 0 0.0 4.0 2 3

      (@ 0.897147 sec: 592 bytes of TCP data is received by node 4)

      And so on:

          + 1.361381 3 4 tcp 592 ------- 0 0.0 4.0 3 6
          - 1.361381 3 4 tcp 592 ------- 0 0.0 4.0 3 6
          + 1.377168 3 4 tcp 592 ------- 0 0.0 4.0 4 7
          - 1.377168 3 4 tcp 592 ------- 0 0.0 4.0 4 7
          + 1.392955 3 4 tcp 592 ------- 0 0.0 4.0 5 8
          - 1.392955 3 4 tcp 592 ------- 0 0.0 4.0 5 8
          + 1.408741 3 4 tcp 592 ------- 0 0.0 4.0 6 9
          - 1.408741 3 4 tcp 592 ------- 0 0.0 4.0 6 9
          r 1.410853 3 4 tcp 592 ------- 0 0.0 4.0 3 6
    • Computing the throughput received in each time interval is nothing more than tallying the total number of bytes received in each interval.
    • Example:
        From the above trace file, we can tally the number of bytes received in the first second by adding up the packet sizes of all receive events in that second.

      These are the packets that are received by TCP entity "4.0" in the first second:

          r 0.351867 3 4 tcp 40 ------- 0 0.0 4.0 0 0
          r 0.88136 3 4 tcp 592 ------- 0 0.0 4.0 1 2
          r 0.897147 3 4 tcp 592 ------- 0 0.0 4.0 2 3

      The total number of bytes received in the first second is therefore:

      40 + 592 + 592 = 1224 bytes

      or:

      1224 × 8 = 9792 bits

      So the throughput in the first second is 9792 bits/sec = 0.009792 Mbps

      Repeat the process for the second second, and so on...

      You do NOT want to do that by hand...



    • You need to write a program to identify the correct source/destination pair and add up all the packet sizes...
    • Someone has done that !!!
    • Here is a PERL program (throughput.pl) that can be used to process trace files:

      Save it and run it using:

        perl  throughput.pl  trfile  destNode  srcNode.port#  destNode.port#  time-granularity
      

      The time-granularity parameter lets you compute the throughput over successive time intervals; each interval has the size given by the time-granularity. Each line of the output pairs a time with the throughput measured in that interval (columns 1 and 2 in the gnuplot command below).

    • Example:

      You can use the following command to get average throughput data from the trace file obtained in the NS script above:

         perl  throughput.pl  out.tr  4  0.0  4.0  1  > out
      

      We need to use these parameters to extract the throughput data because the trace records look like this:

         r 0.351867 3 4 tcp 592 ------- 0 0.0 4.0 0 0
      
      • "out.tr" is the input trace file name
      • 4 is the ID of the destination node
      • 0.0 is the ID of the TCP source agent
      • 4.0 is the ID of the TCP sink agent
      • "1" is the interval size (here: 1 second)


    • The throughput.pl script will create an output file that is suitable for plotting.

      To plot the progression of the average throughput, use gnuplot:

        gnuplot
        >> plot "out" using 1:2 title "TCP throughput" with lines 1
      



  • The difference between Throughput and Goodput
    • Consider the following receive events in the trace file that happen between 3 and 4 seconds (obtained by running the example program above):

          r 3.01512 3 4 tcp 592 ------- 0 0.0 4.0 32 63
          r 3.030907 3 4 tcp 592 ------- 0 0.0 4.0 33 64
          r 3.046693 3 4 tcp 592 ------- 0 0.0 4.0 34 65
          r 3.06248 3 4 tcp 592 ------- 0 0.0 4.0 35 66
          r 3.078267 3 4 tcp 592 ------- 0 0.0 4.0 36 67
          r 3.094053 3 4 tcp 592 ------- 0 0.0 4.0 37 68
          r 3.10984 3 4 tcp 592 ------- 0 0.0 4.0 38 69
          r 3.125627 3 4 tcp 592 ------- 0 0.0 4.0 39 70
          r 3.141413 3 4 tcp 592 ------- 0 0.0 4.0 40 71
          r 3.1572 3 4 tcp 592 ------- 0 0.0 4.0 41 72
          r 3.172987 3 4 tcp 592 ------- 0 0.0 4.0 42 73
          r 3.188773 3 4 tcp 592 ------- 0 0.0 4.0 43 74
          r 3.20456 3 4 tcp 592 ------- 0 0.0 4.0 44 75
          r 3.220347 3 4 tcp 592 ------- 0 0.0 4.0 45 76
          r 3.236133 3 4 tcp 592 ------- 0 0.0 4.0 46 77
          r 3.25192 3 4 tcp 592 ------- 0 0.0 4.0 47 78
          r 3.267707 3 4 tcp 592 ------- 0 0.0 4.0 48 79
          r 3.283493 3 4 tcp 592 ------- 0 0.0 4.0 49 80

      packet 50 is lost !!!

          r 3.29928 3 4 tcp 592 ------- 0 0.0 4.0 51 82
          r 3.315067 3 4 tcp 592 ------- 0 0.0 4.0 53 84
          r 3.330853 3 4 tcp 592 ------- 0 0.0 4.0 55 86
          r 3.34664 3 4 tcp 592 ------- 0 0.0 4.0 57 88
          r 3.362427 3 4 tcp 592 ------- 0 0.0 4.0 59 90
          r 3.378213 3 4 tcp 592 ------- 0 0.0 4.0 61 92
          r 3.528827 3 4 tcp 592 ------- 0 0.0 4.0 63 110
          r 3.544613 3 4 tcp 592 ------- 0 0.0 4.0 64 111
          r 3.5604 3 4 tcp 592 ------- 0 0.0 4.0 65 113
          r 3.576187 3 4 tcp 592 ------- 0 0.0 4.0 66 114
          r 3.591973 3 4 tcp 592 ------- 0 0.0 4.0 67 116
          r 3.60776 3 4 tcp 592 ------- 0 0.0 4.0 68 117
          r 3.623547 3 4 tcp 592 ------- 0 0.0 4.0 69 119
          r 3.639333 3 4 tcp 592 ------- 0 0.0 4.0 70 120
          r 3.65512 3 4 tcp 592 ------- 0 0.0 4.0 71 122
          r 3.670907 3 4 tcp 592 ------- 0 0.0 4.0 72 123
          r 3.686693 3 4 tcp 592 ------- 0 0.0 4.0 73 125
          r 3.70248 3 4 tcp 592 ------- 0 0.0 4.0 74 126
          r 3.718267 3 4 tcp 592 ------- 0 0.0 4.0 75 128
          r 3.734053 3 4 tcp 592 ------- 0 0.0 4.0 76 129
          r 3.74984 3 4 tcp 592 ------- 0 0.0 4.0 77 131
          r 3.765627 3 4 tcp 592 ------- 0 0.0 4.0 78 132
          r 3.781413 3 4 tcp 592 ------- 0 0.0 4.0 79 134
          r 3.7972 3 4 tcp 592 ------- 0 0.0 4.0 80 135
          r 3.812987 3 4 tcp 592 ------- 0 0.0 4.0 81 137
          r 3.828773 3 4 tcp 592 ------- 0 0.0 4.0 83 139
          r 3.84456 3 4 tcp 592 ------- 0 0.0 4.0 85 141
          r 3.860347 3 4 tcp 592 ------- 0 0.0 4.0 87 143
          r 3.876133 3 4 tcp 592 ------- 0 0.0 4.0 89 145
          r 3.89192 3 4 tcp 592 ------- 0 0.0 4.0 91 147
          r 3.907707 3 4 tcp 592 ------- 0 0.0 4.0 93 149
          r 3.923493 3 4 tcp 592 ------- 0 0.0 4.0 95 151
          r 3.93928 3 4 tcp 592 ------- 0 0.0 4.0 97 153
          r 3.955067 3 4 tcp 592 ------- 0 0.0 4.0 99 156
          r 3.970853 3 4 tcp 592 ------- 0 0.0 4.0 101 159
          r 3.98664 3 4 tcp 592 ------- 0 0.0 4.0 102 161
    • The packet with sequence number 50 was lost and is NOT received between time 3 and 4 seconds (see the trace above) !!
    • Therefore, packets with sequence numbers 51 and higher CANNOT be delivered to the receiving application between time 3 and 4 seconds (they arrive, but must be buffered until packet 50 is retransmitted and received).
    • Between time 3 and 4 seconds, the receiver has received a total of 54 packets (just count the number of entries above) - each packet has 592 bytes.

      Therefore, the throughput between 3 and 4 seconds is:

       54 x 592 bytes = 31968 bytes/sec = 255744 bits/sec = 0.255744 Mbps
      
    • However... because packets with sequence numbers 51 and above cannot be delivered without receiving packet 50 first, these packets are NOT counted in the goodput of that interval (a Tcl sketch of this in-sequence accounting follows below)...

      Only the packets with sequence numbers 32 to 49 (18 packets) are counted as "good" throughput.

      Therefore, the goodput between 3 and 4 seconds is:

       18 x 592 bytes = 10656 bytes/sec = 85248 bits/sec = 0.085248 Mbps
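
      Here is a minimal Tcl sketch of this in-sequence accounting (not part of the original Perl script; it assumes the out.tr trace and the 0.0 -> 4.0 flow of the running example). It mimics the receiver's delivery rule, so out-of-order segments are credited to the interval in which the missing segment finally arrives:

         # Per-interval goodput: credit a segment only when it can be
         # delivered in sequence (all earlier segments have arrived)
         set next_seq 0                    ;# Next deliverable sequence number
         array set seg_size {}             ;# Buffered out-of-order segments
         array set good_bytes {}           ;# Goodput tally per 1-second bucket
         set fp [open "out.tr" r]
         while {[gets $fp line] >= 0} {
             set f [regexp -all -inline {\S+} $line]
             if {[lindex $f 0] != "r"} { continue }
             if {[lindex $f 8] != "0.0" || [lindex $f 9] != "4.0"} { continue }
             set interval [expr int([lindex $f 1])]
             set seg_size([lindex $f 10]) [lindex $f 5]
             # Deliver (credit) every segment that is now in sequence
             while {[info exists seg_size($next_seq)]} {
                 if {![info exists good_bytes($interval)]} {
                     set good_bytes($interval) 0
                 }
                 incr good_bytes($interval) $seg_size($next_seq)
                 unset seg_size($next_seq)
                 incr next_seq
             }
         }
         close $fp
         foreach i [lsort -integer [array names good_bytes]] {
             puts "$i $good_bytes($i)"     ;# interval  good bytes
         }

      For the trace above, the 3-second bucket is credited with 18 x 592 bytes, matching the hand count. The next section shows a much simpler way: let the simulation itself do the tally.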
      


  • Obtaining Goodput information
    • The following facts allow you to obtain the "goodput" from an NS simulation:

      An application (i.e., a class that is derived from the NS class Application) can attach itself to a TCPSink agent.

      The TCPSink agent will call (= invoke) the application's recv(bytes) method whenever the TCPSink agent delivers a data segment.

      Data segments are always delivered in sequence. In other words, the TCPSink agent will not deliver segment n unless segment n-1 has been delivered.

    • Using these facts, there is a very easy way to obtain the "goodput":
      • Just attach an application to the TCPSink agent (which receives and delivers data packets to the application in the correct sequence) and let that application do the tally.
    • In order to do this, all we need to do is write an OTcl class TraceApp that is a subclass of the NS class Application

      In the class TraceApp we need to write a method recv(x) - the parameter x is the number of bytes in the data segment.



    • A simple TraceApp that tallies the number of bytes received by the TCPSink agent:
         Class TraceApp -superclass Application      ;# TraceApp is a child class of Application

         TraceApp instproc init {args} {
                 $self set bytes_ 0                  ;# The tally of bytes delivered in sequence
                 eval $self next $args               ;# Invoke the parent class constructor
         }

         TraceApp instproc recv {byte} {
                 $self instvar bytes_
                 set bytes_ [expr $bytes_ + $byte]   ;# Add this segment's size to the tally
                 return $bytes_
         }
      
    • The -superclass option indicates that the class TraceApp is a child class of the NS class Application (note: FTP is also a child class of Application)
    • There are 2 methods defined in the TraceApp class:
      • init{args} - this method is invoked by the new operator when an object of the class TraceApp is created.

        (You can view this method as the constructor)

        The init{args} method sets the instance variable bytes_ to 0 (this variable is used to tally the number of bytes received)

      • recv{byte} - this method is provided so that the TCPSink agent can invoke it.

        (In fact, the recv{byte} method is defined in the parent class Application - by defining a new recv{byte} method in the child class TraceApp, we have overridden the one in the parent class Application)

        The most important statement in recv{byte} is:

           set bytes_ [expr $bytes_ + $byte]
        
        • byte is the input parameter to the recv{byte} method
        • bytes_ is an instance variable of the TraceApp object (made accessible inside recv{byte} by the instvar statement), used to tally the number of bytes received
        • So the method increments bytes_ by byte - i.e., it does the tally


    • There is still some work left to do:
      1. Create a TraceApp object
      2. Attach the TraceApp to the TCPSink
      3. Activate the TraceApp object
    • The following Tcl code shows these steps (a note on reading the tally follows the code):
         set sink1 [new Agent/TCPSink]      ;# Create a TCPSink object
         set traceapp [new TraceApp]        ;# Create a TraceApp object   
      
         $traceapp attach-agent $sink1      ;# Attach traceapp to TCPSink
      
         $ns  at  0.0  "$traceapp  start"   ;# Start the traceapp object
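
      Aside: since bytes_ is an ordinary instance variable, the tally can be read at any time. A minimal sketch (assuming the $traceapp object above and a simulation that ends at 125.0 sec, as in the earlier script):

         $ns  at  124.9  {puts "Total good bytes received: [$traceapp set bytes_]"}

      The braces defer substitution, so the tally is read when the event fires at 124.9 sec, not when the event is scheduled.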
      


    • The above code will have the TraceApp object accumulate goodput information from the start to the end of the simulation.
    • If we want goodput information over smaller time intervals, we can use the self-scheduling technique that we saw when obtaining CWND information.
    • The following self-scheduling Tcl procedure can be used to obtain goodput information over 1-second epochs (it works just like the one that obtains CWND information; a usage sketch follows the code):
      proc plotThroughput {traceApp outfile} {
         global ns

         ## traceApp is the TraceApp object attached to the TCPSink
         set time_incr 1.0				;# Epoch length (in seconds)

         set now [$ns now]				;# Read current time

         set nbytes [$traceApp set bytes_]		;# Read number of bytes tallied

         $traceApp set bytes_ 0			;# Reset for next epoch

      ### Prints "TIME goodput" to output file
         set throughput [expr ($nbytes * 8.0 / 1000000) / $time_incr]
         puts  $outfile  "$now $throughput"

      ### Re-schedule this procedure one epoch later
         $ns at [expr $now + $time_incr] "plotThroughput $traceApp $outfile"
      }
      
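      The procedure must be kicked off once at the start of the simulation. A minimal usage sketch (assuming the $traceapp object created above; the output filename out2.tput matches the demo program below):

         set tput_file [open  "out2.tput"  w]

         $ns  at  0.0  "plotThroughput  $traceapp  $tput_file"

      Each invocation prints one (time, goodput) pair and re-schedules itself one epoch later, so out2.tput accumulates one line per second.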


    • Example Program: (demonstrates the above code)

      When you run the program, it will create an output file "out2.tput" that contains goodput data, which you can plot using gnuplot:

      plot "out2.tput" using 1:2 with lines 1

http://www.mathcs.emory.edu/~cheung/Courses/558-old/Syllabus/90-NS/3-Perf-Anal/TCP-Throughput.html
