List:       linux-nfs
Subject:    Re: [NFS] Linux nfs client problems with HPUX servers.
From:       Karl Nelson <kenelson () ece ! ucdavis ! edu>
Date:       2000-06-26 15:31:26

> 
> If you can start xosview or a similar utility on your servers/clients,
> could you check the NFS throughput variation? Here, I have the same
> problem. Linux-2.2.15-6mdk -> Linux-2.2.14-15mdk is ok (2.5-4MB/sec,
> 100MBit Ethernet), but the other way round is terrible (shaky, with pauses,
> etc).

The variation in those tests is abysmally high.  I had to take
about 10 samples and average them to get usable numbers.  The variation
corresponded to switching of the block sizes and to the number of
partial pages emitted.

If anything got clobbered in transit, linux went from delivering
4k packets to dumping 1 to 3k packets, so my throughput over
10baseT was anywhere from 0.2 MB/s down to 0.07 MB/s.  Larger files
were worse, meaning that a large file always hit the bad end of the
range; it was simply a matter of time before the linux client settled
into a pattern of very low throughput.  nfsstat did not show a large
number of retries or other network problems.
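
Roughly the kind of sampling loop I mean (the mount point, file size
and sample count are placeholders, not the exact setup used here):

#!/usr/bin/env python
# Write a 2 MB file to an NFS mount several times and average the rate;
# a single run varies far too much to mean anything on its own.
import os, time

MOUNT   = "/mnt/nfs-test"          # placeholder mount point
SIZE    = 2 * 1024 * 1024          # 2 MB, same size as the transfers traced below
SAMPLES = 10
buf     = b"\0" * 8192             # 8 KB application writes

rates = []
for i in range(SAMPLES):
    path = os.path.join(MOUNT, "tput-%d.tmp" % i)
    start = time.time()
    f = open(path, "wb")
    for _ in range(SIZE // len(buf)):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())           # make sure the data really reached the server
    f.close()
    rates.append(SIZE / (time.time() - start) / (1024 * 1024))
    os.unlink(path)

print("samples (MB/s):", " ".join("%.2f" % r for r in rates))
print("mean %.2f  min %.2f  max %.2f"
      % (sum(rates) / len(rates), min(rates), max(rates)))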
  

Compare the hpux client communication (one write request and its reply):

1.985722 0.001024  eth0 < .1f85a53d > .sunrpc: 1472  proc #8 (frag 35713:1480@0+)
1.986974 0.001252  eth0 <  > : (frag 35713:1480@1480+)
1.988204 0.001230  eth0 <  > : (frag 35713:1480@2960+)
1.989434 0.001230  eth0 <  > : (frag 35713:1480@4440+)
1.990687 0.001253  eth0 <  > : (frag 35713:1480@5920+)
1.991478 0.000791  eth0 <  > : (frag 35713:936@7400)
1.991712 0.000234  eth0 > .nfs > .528852285: reply ok 96
Total time:  2.021402s  (for 2M)
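
Reading that trace back, each request is one big write and the reply
goes out a few milliseconds after the first fragment arrives (the
numbers below come straight from the fragment sizes and timestamps
above):

wire_bytes = 5 * 1480 + 936          # frag 35713: 8336 bytes, i.e. an ~8 KB write
turnaround = 1.991712 - 1.985722     # first fragment in to reply out, ~6 ms
print("%d bytes in %.1f ms -> %.2f MB/s per request"
      % (wire_bytes, turnaround * 1e3, wire_bytes / turnaround / 1e6))
# 8336 bytes in 6.0 ms -> 1.39 MB/s per request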


with the linux client communication (to a linux server):

3.672894 0.000806  eth0 <  > : (frag 31962:756@1480)
3.674630 0.001736  eth0 < .113c9395 > .sunrpc: 1472  proc #8 (frag 31962:1480@0+)
3.674744 0.000114  eth0 > .nfs > .289182613: reply ok 96
3.675277 0.000533  eth0 <  > : (frag 31963:756@1480)
3.676925 0.001648  eth0 < .123c9395 > .sunrpc: 1472  proc #8 (frag 31963:1480@0+)
3.677030 0.000105  eth0 > .nfs > .305959829: reply ok 96
3.679409 0.002379  eth0 <  > : (frag 31964:1324@2960)
3.682118 0.002709  eth0 <  > : (frag 31964:1480@1480+)
3.683348 0.001230  eth0 < .133c9395 > .sunrpc: 1472  proc #8 (frag 31964:1480@0+)
3.683503 0.000155  eth0 > .nfs > .322737045: reply ok 96
3.683995 0.000492  eth0 <  > : (frag 31965:756@1480)
3.685252 0.001257  eth0 < .143c9395 > .sunrpc: 1472  proc #8 (frag 31965:1480@0+)
3.685364 0.000112  eth0 > .nfs > .339514261: reply ok 96
Total time: 3.887609s (for 2M)
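
The linux client's requests carry far less data each, which by itself
accounts for a good part of the slowdown.  Same arithmetic as above
(fragment sizes and totals taken from the two traces):

small_write = 1480 + 756             # frags 31962, 31963, 31965: 2236 bytes, ~2 KB
large_write = 2 * 1480 + 1324        # frag 31964: 4284 bytes, ~4 KB
print(small_write, large_write)      # 2236 4284, versus 8336 from the hpux client

print("hpux client : %.2f MB/s" % (2.0 / 2.021402))   # ~0.99 MB/s
print("linux client: %.2f MB/s" % (2.0 / 3.887609))   # ~0.51 MB/s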


And this doesn't even cover the kind of variation I witnessed.
Paired with the hpux client, the results got a lot more random:

7.504947 0.000034  eth0 > .97419395 > .sunrpc: 1472  proc #8 (frag 34000:1480@0+)
7.504987 0.000040  eth0 < .nfs > .2487325589: reply ok 96
7.506321 0.001334  eth0 < .nfs > .2520880021: reply ok 96
7.506518 0.000197  eth0 >  > : (frag 34001:756@1480)
7.506549 0.000031  eth0 > .98419395 > .sunrpc: 1472  proc #8 (frag 34001:1480@0+)
7.506695 0.000146  eth0 >  > : (frag 34002:1324@2960)
7.506721 0.000026  eth0 >  > : (frag 34002:1480@1480+)
7.506745 0.000024  eth0 > .99419395 > .sunrpc: 1472  proc #8 (frag 34002:1480@0+)
7.523174 0.016429  eth0 < .nfs > .2537657237: reply ok 96  ***
7.523282 0.000108  eth0 >  > : (frag 34003:756@1480)
7.523318 0.000036  eth0 > .9a419395 > .sunrpc: 1472  proc #8 (frag 34003:1480@0+)
7.523402 0.000084  eth0 < .nfs > .2554434453: reply ok 96
7.523504 0.000102  eth0 >  > : (frag 34004:1324@2960)
7.523536 0.000032  eth0 >  > : (frag 34004:1480@1480+)
7.523560 0.000024  eth0 > .9b419395 > .sunrpc: 1472  proc #8 (frag 34004:1480@0+)
7.545043 0.021483  eth0 < .nfs > .2571211669: reply ok 96  ***
7.567171 0.022128  eth0 < .nfs > .2587988885: reply ok 96  ***
7.567376 0.000205  eth0 < .nfs > .2604766101: reply ok 96
Total time: 8.032432s (for 2 M)
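
For anyone who wants to pick these stalls out of their own captures:
the second field in these dumps is the gap since the previous packet,
so it is enough to flag anything far above the usual sub-millisecond
turnaround.  A rough sketch (the file name and the 10 ms threshold are
placeholders):

import sys

STALL = 0.010                        # normal turnaround here is well under 1 ms

def stalls(lines):
    # Yield (gap, line) for packets whose inter-packet gap exceeds STALL.
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        try:
            gap = float(fields[1])   # second field: delta from previous packet
        except ValueError:
            continue
        if gap > STALL:
            yield gap, line.rstrip()

for gap, line in stalls(open(sys.argv[1])):      # e.g. python stalls.py trace.txt
    print("%6.1f ms  %s" % (gap * 1000, line))

On the excerpt above that flags exactly the three marked replies (16.4,
21.5 and 22.1 ms).  A 2 MB transfer in ~4 KB writes is on the order of
500 requests, so pauses like that on even a fraction of them account
for most of the 8 seconds.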

--Karl
