Installing NS2 on Ubuntu 11.04
sudo apt-get install ns2 nam

Here I have used NS-2 to simulate a network with the topology as shown below.
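The topology, as defined by the duplex-link commands in the scripts below, looks like this (L2 is the n2-n3 link):

n0 --\               /-- n4
      n2 --(L2)-- n3
n1 --/               \-- n5

The four edge links (n0-n2, n1-n2, n3-n4, n3-n5) are all 2Mb with 10ms delay, while L2 is 1.7Mb with 20ms delay.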
I have written Tcl scripts to create the above topology and to show that:
- When two TCP flows compete for the bandwidth, they share it fairly
- When a TCP flow and a UDP flow compete for the bandwidth, there is no fair sharing
Installing xgraph on Ubuntu 11.04
sudo apt-get install xgraph

To start the simulation
ns XXX.tcl

How two TCP flows compete for the bandwidth
tcp_tcp.tcl
This script simulates two TCP flows: one between n0 and n4, and one between n1 and n5. The bandwidth of L2 (the link between n2 and n3) has been selected so that it becomes the bottleneck link, and the "record" procedure is used to measure the throughput at the receivers.
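To start this simulation: ns tcp_tcp.tcl. A quick sanity check on the numbers used in the script: n0 and n1 can each feed 2Mb into n2, so up to 4Mb of traffic tries to enter the 1.7Mb n2-n3 link; that is what makes L2 the bottleneck. If the two TCP flows share it fairly, each should settle at roughly 1.7/2 ≈ 0.85 Mbit/s. The "record" procedure produces these values by reading each sink's bytes_ counter every 0.5 seconds and converting it to Mbit/s with the expression bytes/0.5*8/1000000.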
#Create a simulator object
set ns [new Simulator]

#Define different colors for data flows (for NAM)
$ns color 1 Blue
$ns color 2 Red

#Open the trace files outX.tr for Xgraph and out.nam for nam
set f0 [open out_tcp0.tr w]
set f1 [open out_tcp1.tr w]

#Open the NAM trace file
set nf [open out.nam w]
$ns namtrace-all $nf

#Define a 'finish' procedure
proc finish {} {
    global ns nf f0 f1
    $ns flush-trace
    #Close the NAM trace file
    close $nf
    #Close the output files
    close $f0
    close $f1
    #Execute xgraph to display the results
    exec xgraph out_tcp0.tr out_tcp1.tr -geometry 600x400 &
    #Execute NAM on the trace file
    exec nam out.nam &
    exit 0
}

#Create six nodes
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]

#Create links between the nodes
$ns duplex-link $n0 $n2 2Mb 10ms DropTail
$ns duplex-link $n1 $n2 2Mb 10ms DropTail
$ns duplex-link $n2 $n3 1.7Mb 20ms DropTail
$ns duplex-link $n3 $n4 2Mb 10ms DropTail
$ns duplex-link $n3 $n5 2Mb 10ms DropTail

#Set Queue Size of link (n2-n3) to 20
$ns queue-limit $n2 $n3 20

#Give node position (for NAM)
$ns duplex-link-op $n0 $n2 orient right-down
$ns duplex-link-op $n1 $n2 orient right-up
$ns duplex-link-op $n2 $n3 orient right
$ns duplex-link-op $n3 $n4 orient right-up
$ns duplex-link-op $n3 $n5 orient right-down

#record procedure
proc record {} {
    global sink sink1 f0 f1
    #Get an instance of the simulator
    set ns [Simulator instance]
    #Set the time after which the procedure should be called again
    set time 0.5
    #How many bytes have been received by the traffic sinks?
    set bw0 [$sink set bytes_]
    set bw1 [$sink1 set bytes_]
    #Get the current time
    set now [$ns now]
    #Calculate the bandwidth (in Mbit/s) and write it to the files
    puts $f0 "$now [expr $bw0/$time*8/1000000]"
    puts $f1 "$now [expr $bw1/$time*8/1000000]"
    #Reset the bytes_ values on the traffic sinks
    $sink set bytes_ 0
    $sink1 set bytes_ 0
    #Re-schedule the procedure
    $ns at [expr $now+$time] "record"
}

#Setup a TCP connection
set tcp [new Agent/TCP]
$tcp set class_ 2
$ns attach-agent $n0 $tcp
set sink [new Agent/TCPSink]
$ns attach-agent $n4 $sink
$ns connect $tcp $sink
$tcp set fid_ 1

#Setup a FTP over TCP connection
set ftp [new Application/FTP]
$ftp attach-agent $tcp
$ftp set type_ FTP

#Setup a second TCP connection
set tcp1 [new Agent/TCP]
$tcp1 set class_ 2
$ns attach-agent $n1 $tcp1
set sink1 [new Agent/TCPSink]
$ns attach-agent $n5 $sink1
$ns connect $tcp1 $sink1
$tcp1 set fid_ 2

#Setup a FTP over TCP connection
set ftp1 [new Application/FTP]
$ftp1 attach-agent $tcp1
$ftp1 set type_ FTP

#Start logging the received bandwidth
$ns at 0.0 "record"

#Schedule events for the FTP agents
$ns at 0.1 "$ftp start"
$ns at 0.8 "$ftp1 start"
$ns at 4.0 "$ftp1 stop"
$ns at 4.8 "$ftp stop"

#Call the finish procedure after 5 seconds of simulation time
$ns at 5.0 "finish"

#Run the simulation
$ns run

Graph
The graph above clearly shows how the two TCP flows fairly share the bandwidth.
How a TCP flow and a UDP flow compete for the bandwidth
tcp_udp.tcl
This script simulates one TCP flow (between n0 and n4) and one UDP flow (between n1 and n5). Again, the bandwidth of L2 (the link between n2 and n3) has been selected so that it becomes the bottleneck link, and the "record" procedure is used to measure the throughput at the receivers.
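To start this simulation: ns tcp_udp.tcl. Note the rates involved: the CBR source sends over UDP at 2mb, which on its own already exceeds the 1.7Mb capacity of L2, so the UDP flow alone is enough to saturate the bottleneck link.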
#Create a simulator object
set ns [new Simulator]

#Define different colors for data flows (for NAM)
$ns color 1 Blue
$ns color 2 Red

#Open the trace files outX.tr for Xgraph and out_udptcp.nam for nam
set f0 [open out_tcp.tr w]
set f1 [open out_udp.tr w]

#Open the NAM trace file
set nf [open out_udptcp.nam w]
$ns namtrace-all $nf

#Define a 'finish' procedure
proc finish {} {
    global ns nf f0 f1
    $ns flush-trace
    #Close the NAM trace file
    close $nf
    #Close the output files
    close $f0
    close $f1
    #Execute xgraph to display the results
    exec xgraph out_tcp.tr out_udp.tr -geometry 600x400 &
    #Execute NAM on the trace file
    exec nam out_udptcp.nam &
    exit 0
}

#Create six nodes
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]

#Create links between the nodes
$ns duplex-link $n0 $n2 2Mb 10ms DropTail
$ns duplex-link $n1 $n2 2Mb 10ms DropTail
$ns duplex-link $n2 $n3 1.7Mb 20ms DropTail
$ns duplex-link $n3 $n4 2Mb 10ms DropTail
$ns duplex-link $n3 $n5 2Mb 10ms DropTail

#Set Queue Size of link (n2-n3) to 20
$ns queue-limit $n2 $n3 20

#Give node position (for NAM)
$ns duplex-link-op $n0 $n2 orient right-down
$ns duplex-link-op $n1 $n2 orient right-up
$ns duplex-link-op $n2 $n3 orient right
$ns duplex-link-op $n3 $n4 orient right-up
$ns duplex-link-op $n3 $n5 orient right-down

#record procedure
proc record {} {
    global sink sink1 f0 f1
    #Get an instance of the simulator
    set ns [Simulator instance]
    #Set the time after which the procedure should be called again
    set time 0.5
    #How many bytes have been received by the traffic sinks?
    set bw0 [$sink set bytes_]
    set bw1 [$sink1 set bytes_]
    #Get the current time
    set now [$ns now]
    #Calculate the bandwidth (in Mbit/s) and write it to the files
    puts $f0 "$now [expr $bw0/$time*8/1000000]"
    puts $f1 "$now [expr $bw1/$time*8/1000000]"
    #Reset the bytes_ values on the traffic sinks
    $sink set bytes_ 0
    $sink1 set bytes_ 0
    #Re-schedule the procedure
    $ns at [expr $now+$time] "record"
}

#Setup a TCP connection
set tcp [new Agent/TCP]
$tcp set class_ 2
$ns attach-agent $n0 $tcp
set sink [new Agent/TCPSink]
$ns attach-agent $n4 $sink
$ns connect $tcp $sink
$tcp set fid_ 1

#Setup a FTP over TCP connection
set ftp [new Application/FTP]
$ftp attach-agent $tcp
$ftp set type_ FTP

#Setup a UDP connection
set udp [new Agent/UDP]
$ns attach-agent $n1 $udp
set sink1 [new Agent/LossMonitor]
$ns attach-agent $n5 $sink1
$ns connect $udp $sink1
$udp set fid_ 2

#Setup a CBR over UDP connection
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp
$cbr set type_ CBR
$cbr set packet_size_ 1000
$cbr set rate_ 2mb
$cbr set random_ false

#Start logging the received bandwidth
$ns at 0.0 "record"

#Schedule events for the CBR and FTP agents
$ns at 0.1 "$cbr start"
$ns at 0.8 "$ftp start"
$ns at 4.0 "$ftp stop"
$ns at 4.8 "$cbr stop"

#Call the finish procedure after 5 seconds of simulation time
$ns at 5.0 "finish"

#Run the simulation
$ns run

Graph
The graph above clearly shows that when a TCP flow and a UDP flow compete, there is no fair sharing of the bandwidth; UDP gets most of it.
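This behaviour is expected: TCP reacts to drops at the bottleneck queue by cutting its sending rate, while the CBR source keeps transmitting at 2mb regardless of loss, so the TCP flow keeps backing off and UDP keeps most of the link. Note that the UDP receiver is an Agent/LossMonitor rather than an Agent/TCPSink; besides the bytes_ counter that "record" reads, a LossMonitor also counts dropped packets (its nlost_ variable), which makes it easy to confirm that the UDP flow is indeed losing packets at the bottleneck.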
Hello. I just used this guide to learn. Implementing the same code as yours, in the first section I get a different graph, in which tcp1 is always smaller than tcp0, and the fairness is not really visible. Can you help me understand why my results differ from the graph in your picture?