Gentoo Forums
Network Speed [SOLVEDish]
Tun
n00b


Joined: 19 Jan 2004
Posts: 58
Location: Stockport, England

PostPosted: Wed Aug 11, 2004 7:57 pm    Post subject: Network Speed [SOLVEDish]

Hi,

Bit of a 'newbie' question but I've played around for a few hours and am getting nowhere fast.

My network file transfer speed over nfs is <1MB/sec, which seems poor; I'm sure it should be faster than that. Am I right to expect faster speeds?

Both machines are Gentoo, 2.6 kernel. nfs v3 running.

Machine #1: old 400MHz Dell, NatSemi 10/100 nic, buffered disk reads 23MB/sec

Machine #2: XP2000+, SiS 900 10/100 nic, buffered disk reads 50MB/sec

A 5 port 10/100 switch connects the machines, and the cables between them are rated for 100.

I've set the rsize=8192 and wsize=8192 on the client.
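
For reference, my client fstab line is something like this (quoting from memory, so the exact paths and extra options may be slightly off):
Code:

socrates:/mnt/share   /home/james/socserv   nfs   rw,hard,intr,rsize=8192,wsize=8192   0 0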

What am I missing? Any ideas what I should be trying next? Kernel patches to allow bigger than 8192 r/wsize? NFS over TCP to allow this size increase?

(scp transfers at roughly 2.6MB a second)

Thanks in advance


Last edited by Tun on Thu Aug 12, 2004 6:54 pm; edited 1 time in total
Hrk
Tux's lil' helper


Joined: 24 May 2003
Posts: 90
Location: Rome, Italy

PostPosted: Wed Aug 11, 2004 9:06 pm

Please do not misunderstand me, but can you check whether the network is really running at 100Mb/s? If it were only doing 10Mb/s, 1MB/s of traffic would be about right.

If I do a
Code:

dmesg | grep eth | grep link

I get
Code:

eth0: link up, 100Mbps, full-duplex, lpa 0x41E1

My NFS transfers reach around 4-5MB/s with 100% CPU usage, on both client and server. Both my machines are slower than yours: hdparm -t gives me around 50MB/sec and 12MB/sec on client and server respectively.
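
(The hdparm figures come from something like the following, run on each machine; your device name will differ:)
Code:

hdparm -t /dev/hda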
Tun
n00b


Joined: 19 Jan 2004
Posts: 58
Location: Stockport, England

PostPosted: Wed Aug 11, 2004 10:10 pm

Thanks for replying. I've not misunderstood, your English is good.

Machine #1 (eth0 is my ISP)
Code:
socrates root # dmesg | grep eth
eth1: NatSemi DP8381[56] at 0xd0852000, 00:02:e3:17:f6:52, IRQ 9.
eth1: link up.
eth1: Setting full-duplex based on negotiated link capability.

Machine #2
Code:
plato root # dmesg | grep eth0             
eth0: SiS 900 Internal MII PHY transceiver found at address 1.
eth0: Using transceiver found at address 1 as default
eth0: SiS 900 PCI Fast Ethernet at 0xd000, IRQ 10, 00:06:4f:03:6f:8c.
eth0: Media Link On 100mbps full-duplex

The "negotiated link capability" in #1 dmesg along with what you said is making me think it's dropping the connection down to 10Mb/sec.

So I looked into it further and found ethtool, which gives:

Code:
socrates root # ethtool eth1
Settings for eth1:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 15
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbags
        Wake-on: pubg
        SecureOn password: 00:00:00:00:00:00
        Current message level: 0x000040c5 (16581)
        Link detected: yes

This indicates 100Mb/s as expected.

I did a
Code:

socrates root # ethtool -s eth1 speed 100
socrates root # ethtool -s eth1 autoneg off

to force the speed and switch off auto-negotiation. But still the same :(

Code:

james@plato james $ time cp -R socserv/music/killers/ .
real    1m3.822s
user    0m0.012s
sys     0m0.424s
james@plato james $ du -sk killers/
62904   killers/
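
(A thought, in case it matters: I've since read that ethtool prefers speed, duplex and autoneg to be set in a single command, so my two separate invocations above may not have fully taken effect. Something like this is supposedly the safer form, though I haven't verified it changes anything here:)
Code:

socrates root # ethtool -s eth1 speed 100 duplex full autoneg off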

I also took a look at CPU usage while copying via nfs (using top), and usage is very small: 4 nfsd threads amounting to no more than 2% each.

Thanks for letting me know what your setup does; at least that confirms I have an issue to chase down. Any more suggestions?
devon
l33t


Joined: 23 Jun 2003
Posts: 943

PostPosted: Thu Aug 12, 2004 2:25 am

You may want to try running ttcp or iperf between the Gentoo boxes to test the network speed without the overhead of NFS.
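
For example, with iperf (assuming it is emerged on both boxes), you run a server on one machine and point the client at it:
Code:

server # iperf -s
client # iperf -c servername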
Gherald2
Guru


Joined: 02 Jul 2003
Posts: 326
Location: Madison, WI USA

PostPosted: Thu Aug 12, 2004 2:28 am

I've always used netperf (emerge netperf). You just run "netperf -H hostname" and it gives you a throughput in bits/s.

What are ttcp and iperf like? Any different?
_________________
Unregistered Linux User #17598363
devon
l33t


Joined: 23 Jun 2003
Posts: 943

PostPosted: Thu Aug 12, 2004 2:49 am

Gherald wrote:
What are ttcp and iperf like? Any different?

AFAIK, they all perform basic bandwidth tests. I believe the general consensus on the forums is that ttcp is a little outdated; however, some Cisco IOS versions ship with ttcp, so it is handy to know. I could not remember all the network performance tools, so I just typed the two I remembered. :)

Gherald wrote:
You just run "netperf -H hostname" and it gives you a throughput in bits/s

I just emerged netperf to try it, and it looks like the receiving machine needs to have a netserver running on it.
Code:
$ sudo netperf -H www.domain.example
establish_control: control socket connect failed: Connection refused
Are you sure there is a netserver running on www.domain.example at port 12865?

Am I mistaken?
Gherald2
Guru


Joined: 02 Jul 2003
Posts: 326
Location: Madison, WI USA

PostPosted: Thu Aug 12, 2004 4:17 am

No, you aren't mistaken; sorry for neglecting to mention it. You need to emerge netperf (or apt-get it from non-free) on both the client and the server, then start /etc/init.d/netperf on the server, and perhaps add it with rc-update if you like it.
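
Something like this on the server (the init script name is from memory, so double-check it):
Code:

emerge netperf
/etc/init.d/netperf start
rc-update add netperf default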
_________________
Unregistered Linux User #17598363
Hrk
Tux's lil' helper


Joined: 24 May 2003
Posts: 90
Location: Rome, Italy

PostPosted: Thu Aug 12, 2004 7:47 am

Tun wrote:

Code:

james@plato james $ time cp -R socserv/music/killers/ .
real    1m3.822s
user    0m0.012s
sys     0m0.424s
james@plato james $ du -sk killers/
62904   killers/


Hey, it's almost perfectly 1MB/sec: 62904KB / 63.8s ≈ 986KB/sec. :-)

Tun wrote:

I also took a look at CPU usage while copying via nfs (using top), and usage is very small: 4 nfsd threads amounting to no more than 2% each.

I somewhat envy you: I do not think that 100% CPU usage is good. I could understand some stress on the CPU from the network device (the HDs have UDMA5/UDMA6 enabled), but this is way too much for my tastes. On the server, I have 3-4 nfsd processes fighting for the CPU, more or less equally.

Yesterday, DMA died on the client's Maxtor HD: transfers still took 100% CPU time, but dropped to 120KB/sec :-) (I shouldn't smile...)

I will now give the netperf mentioned in the other post a try.
Hrk
Tux's lil' helper


Joined: 24 May 2003
Posts: 90
Location: Rome, Italy

PostPosted: Thu Aug 12, 2004 8:08 am

I have a question relating to netperf. I have installed it and run it on my two machines. The results aren't bad at all, but there's something which puzzles me.

aiace = XP1800+, 512MB, Maxtor 40GB UDMA6 (on VIA VT233A controller)
ulisse = Pentium 200 MMX, 98MB, Seagate 60GB UDMA5 (on Promise FastTrak PCI controller)

Code:

aiace harlock # netperf -c -C -H ulisse
TCP STREAM TEST to ulisse
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % T      % T      us/KB   us/KB

 87380  16384  16384    10.01        64.38   2.10     79.69    2.670   101.403

And the opposite:
Code:

ulisse root # netperf -c -C -H aiace
TCP STREAM TEST to aiace
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % T      % T      us/KB   us/KB

 87380  16384  16384    10.00        56.53   90.16    3.80     130.649  5.504


The throughput is more or less 60Mbit/sec. Why this strange number? I mean, I could understand 10Mbit or 100Mbit... (Although, looking at the utilization columns, ulisse's CPU sits at 80-90% during the tests, so perhaps the Pentium 200 simply cannot push a full 100Mbit.)

It's true that ulisse was (and still is) downloading a file via BitTorrent, but that is taking little CPU (around 5%).

The other thing which puzzles me is that aiace uses so little CPU (~3%) in both tests, whereas "real" nfs transfers suck 100% CPU. I think this means my NFS setup may be screwed. :-)
Tun
n00b


Joined: 19 Jan 2004
Posts: 58
Location: Stockport, England

PostPosted: Thu Aug 12, 2004 8:51 am

Thanks for the extra suggestions. I'm at work now, so I'll be able to try them in 6 hours or so when I get home. Will let you know how it goes.
Tun
n00b


Joined: 19 Jan 2004
Posts: 58
Location: Stockport, England

PostPosted: Thu Aug 12, 2004 4:54 pm

Getting closer. It looks like my network setup is okay: I've run the performance tests and am getting close to 100Mb/sec throughput in both directions, over both tcp and udp. Results at the bottom if anybody is interested.

This has made me look more closely at nfs (and possibly portmap). /var/log/messages and dmesg both give me
Code:
nfs warning: mount version older than kernel

each time I mount.

The troubleshooting section of the nfs howto covers this in Section 7.6c.

So I upgraded mount (emerge util-linux) and re-emerged nfs-utils just in case. I don't use an automounter, so I can ignore am-utils.
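
(For the record, you can check which mount you now have with the following; I'm not sure exactly which version the 2.6 kernel wants, though:)
Code:

mount -V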

Still no luck :( It's still slow and I still get the warning. Any more suggestions?

I'll keep on googling for an answer.

Thanks again for the suggestions so far, this is driving me mad :evil: but I'm learning a lot of useful stuff.


Code:
plato root # netperf -c -C -H socrates
TCP STREAM TEST to socrates
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % T      % T      us/KB   us/KB

 87380  16384  16384    10.01        94.00   1.60     31.48    1.393   27.430

plato root # iperf -c socrates
------------------------------------------------------------
Client connecting to socrates, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.2 port 32828 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   112 MBytes  94.1 Mbits/sec

plato root # iperf -c socrates -u -b100M
------------------------------------------------------------
Client connecting to socrates, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  103 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.2 port 32771 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  5] Server Report:
[  5]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec  0.018 ms    7/81401 (0.0086%)
[  5] Sent 81401 datagrams


Code:
socrates root # netperf -c -C -H plato   
TCP STREAM TEST to plato
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % T      % T      us/KB   us/KB

 87380  16384  16384    10.01        94.05   10.19    5.10     8.879   4.440

socrates root # iperf -c plato
------------------------------------------------------------
Client connecting to plato, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.1 port 32828 connected with 192.168.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   112 MBytes  94.1 Mbits/sec

socrates root # iperf -c plato -u -b100M
------------------------------------------------------------
Client connecting to plato, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  106 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.1 port 32845 connected with 192.168.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  5] Server Report:
[  5]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec  0.117 ms    2/81414 (0.0025%)
[  5] Sent 81414 datagrams
Hrk
Tux's lil' helper


Joined: 24 May 2003
Posts: 90
Location: Rome, Italy

PostPosted: Thu Aug 12, 2004 5:47 pm

Hmm... I'm wondering something. How many nfsd processes do you have running at the same time? If I remember correctly, when I decreased their number, I got lower transfer rates.
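
(A quick way to count them; the brackets stop grep from matching itself:)
Code:

ps ax | grep '[n]fsd' | wc -l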
Anarcho
Advocate


Joined: 06 Jun 2004
Posts: 2970
Location: Germany

PostPosted: Thu Aug 12, 2004 6:07 pm

I'm also suffering from poor nfs performance.

I have a Gigabit network running, but nfs only manages about 8-10 MB/s, whereas when I copy with ftp I get ~23 MB/s.

My netperf:

Code:
TCP STREAM TEST to server
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % T      % T      us/KB   us/KB

 87380  16384  16384    10.00       394.55   18.00    76.09    3.737   15.798


and iperf:



Code:
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.120 port 33723 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   491 MBytes   412 Mbits/sec
Tun
n00b


Joined: 19 Jan 2004
Posts: 58
Location: Stockport, England

PostPosted: Thu Aug 12, 2004 6:54 pm

:lol: BINGO :lol:

Just added tcp support to the kernel and tcp to the fstab line, and I'm getting 8MB/sec !!

I feel like I'm cheating and should try to get it working with UDP, but I've spent enough time on this. I'm just happy that I've finally got it running at a decent speed.
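
For anyone finding this thread later, the changes were roughly these: on the server, enable NFS-over-TCP support in the kernel (I believe the option is CONFIG_NFSD_TCP, under File systems -> Network File Systems, but check your kernel), then add tcp to the client's mount options, e.g.
Code:

socrates:/mnt/share   /home/james/socserv   nfs   rw,hard,intr,rsize=8192,wsize=8192,tcp   0 0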

Quote:

Hmm... I'm wondering something. How many nfsd processes do you have running at the same time? If I remember correctly, when I decreased their number, I got lower transfer rates.


I've checked /etc/conf.d/nfs and RPCNFSDCOUNT=8 is set, so I don't think it's that. Cheers for your help :)
Hrk
Tux's lil' helper


Joined: 24 May 2003
Posts: 90
Location: Rome, Italy

PostPosted: Thu Aug 12, 2004 7:24 pm

Tun wrote:

Just added tcp support to the kernel and tcp to the fstab line, and I'm getting 8MB/sec !!


I'm happy for you, but I thought that TCP added some overhead over UDP... so I am totally surprised by these results. (Thinking about it, though: with NFS over UDP a single lost fragment forces the whole 8K RPC to be resent, while TCP retransmits only the missing segment, so TCP can actually win on a lossy or congested link.) I am recompiling my kernel to give this a try. :-)
r4v5
n00b


Joined: 11 Sep 2004
Posts: 5

PostPosted: Sat Sep 11, 2004 12:55 pm

"negotiated link" stuff doesn't mean that it's dropping it to 10. Rather, some devices do not support full duplex (transmitting and receiving at the same time); these can only transfer a combined total of 100Mbit/sec (or something like that). Tx and Rx are shared. Full duplex, however, allows a device to send and receive at the same time, so you can transfer to the device at 100 and from the device at 100 at the same time. (all performance claims maximum, not typical. ymmv. the poster takes no responsibility, period.)
Now, if a half-duplex device talks to a full-duplex device, it gets confused and lost. Therefore, full duplex is only used when both sides agree. That's the negotiation stuff.

...i think. I might be wrong.
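
(If you want to see what was actually negotiated, mii-tool is handy, by the way, assuming your driver supports it:)
Code:

mii-tool -v eth0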

I had the same issue with NFS, moving at around 300k/sec. I do not know what caused it, as the same hardware running Slack was easily able to talk to the same box, receiving over nfs at much higher speeds. So I waited it out and vowed to use scp next time.