Scenario 1: You have two servers located a large geographical distance apart, say one in the US and the other in the UK. You are copying a large file between these two locations via scp but are only averaging 200Kbps. You know it should be faster and want to increase the TCP window size according to the bandwidth-delay product, but you want a reliable way to test whether the changes actually improve anything.
Scenario 2: You have a local two-server gigabit network where each server has a good GigE network card connected through a decent gigabit switch. You want higher transfer speeds between the servers and are considering enabling jumbo frames on the servers and the switch, but you need a way to verify that the change has actually increased throughput.
Scenario 3: You use VoIP for your phone system and have a remote office with terrible call quality to your Asterisk server. You suspect the network connection is to blame but need a way to measure the amount of jitter and packet loss.
For all three of the above scenarios, Iperf is a great tool for measuring the maximum throughput and the quality of a network connection between two separate servers. Iperf is available for pretty much every Linux distribution, and for Windows as well.
Using Iperf in the simplest way possible, you can perform a TCP bandwidth test between two servers. By default Iperf uses TCP and listens on port 5001. When testing, one machine acts as the server and the other as the client; in this configuration you will be measuring bandwidth from the client to the server.
Let's test performance between two servers on a local 100Mbit LAN. First, start Iperf in server mode on the machine that will act as the server:
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
Notice it has opened a listening socket on TCP port 5001; make sure no firewall is blocking this port. You can override the default and specify a different port with the -p option.
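For example, to listen on port 5002 instead (5002 is just an arbitrary choice here; any free port works), start the server like this; the client shown below would then need the matching -p 5002 as well:

$ iperf -s -p 5002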
Now, on the other machine, which will act as the client, let's start a test:
$ iperf -c 10.20.20.100
------------------------------------------------------------
Client connecting to 10.20.20.100 TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.20.105 port 53423 connected with 10.20.20.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec   117 MBytes  97.2 Mbits/sec
I used the -c (client) option and specified the IP address of the machine running Iperf in server mode, which is 10.20.20.100. This test measured a throughput of 97.2Mbps, which is excellent for a 100Mbps link.
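This is also an easy way to experiment with the window-size tuning from scenario 1: Iperf's -w option requests a specific TCP window (socket buffer) size, so you can compare runs before and after a change without touching system-wide settings. The 4MB value below is only an example; size it to your own bandwidth-delay product, and note the operating system may cap what is actually granted:

$ iperf -s -w 4M
$ iperf -c 10.20.20.100 -w 4M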
This test was only in one direction, from the client to the server. To test throughput in both directions, add either the -r option, which runs the two directions one after the other, or the -d option, which runs the bidirectional test simultaneously.
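For example, run either of these from the client (using the same example server address):

$ iperf -c 10.20.20.100 -r
$ iperf -c 10.20.20.100 -d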
The default length of the test is 10 seconds. If you want to benchmark a connection over a sustained period, use the -t option to specify the run time in seconds.
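For example, here is a 60-second run; the -i option, added as an optional extra, prints an interim report every 10 seconds so you can watch for throughput dips during the run:

$ iperf -c 10.20.20.100 -t 60 -i 10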
UDP testing is a crucial test for diagnosing issues with VoIP and other UDP-based protocols, since a UDP test can measure jitter and packet loss between servers. To perform a UDP test, add the -u option on both ends. First, start the server:
$ iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 126 KByte (default)
Then start a UDP test on the client:
$ iperf -c 10.20.20.100 -u
When completed you will have a report similar to the following:
------------------------------------------------------------
[  3] local 69.160.56.100 port 5001 connected with 69.160.56.105 port 36002
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.002 ms    0/  893 (0%)
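Note that 1.05 Mbits/sec reflects Iperf's default UDP target bandwidth of 1 Mbit/sec, not the capacity of the link. To push traffic at a rate closer to your real VoIP or application load, raise the target with the -b option (the 10M below is just an example rate):

$ iperf -c 10.20.20.100 -u -b 10M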
The above results are typical for a small local network with little congestion: very low jitter and zero packet loss. Connections over larger distances with more congestion can suffer excessive jitter and packet loss, which results in poor voice quality over VoIP. Packet loss above one or two percent will also cause many TCP retransmissions, which can further increase network congestion.
A couple of notes to keep in mind about these tests. Iperf performs its tests from memory to memory on each server. In the real world, copying files between servers also involves disk I/O. Depending on the type of drives, the interfaces being used, any RAID card, and operating system file caching, you may see different results than what Iperf reports, since it only measures memory-to-memory performance and does not include any latency from disk I/O. Iperf can also transmit the contents of a file with the -F option, which you should use if you want to rule out your disks as a bottleneck.
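For example, to send an existing file from the client instead of generated data (the path below is just a placeholder for a suitably large file on your system):

$ iperf -c 10.20.20.100 -F /path/to/largefile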
If you are seeing very poor performance transferring files between servers and have ruled out physical layer issues, look at the network card and the expansion slot it sits in. Many cheaper network cards do most of their processing in software drivers, pushing the work onto the host CPU. Bus speeds also matter: a gigabit network card in a 32-bit, 66MHz PCI slot is limited to the maximum throughput that slot can provide, which is 266.7MBps (32 bits x 66.67MHz is roughly 2.1Gbit/s, or 266.7MB/s, shared with every other device on that bus). Proper 64-bit PCI-X or PCIe NICs should always be used for best performance with gigabit connections.