
List:       lartc
Subject:    Re: [LARTC] newbie: TC[NG] with (256kbit/s down and 768kbit/s up)
From:       Andy Furniss <andy.furniss@dsl.pipex.com>
Date:       2004-05-05 12:33:21
Message-ID: 4098DF11.8030803@dsl.pipex.com

Andreas Klauer wrote:
> On Wednesday 05 May 2004 10:34, Andy Furniss wrote:
> 
>>Andreas Klauer wrote:
>>
>>>Maybe my script will do: http://www.metamorpher.de/ipshape/
> 
> 
> I renamed it to 'Fair NAT' and moved it to 
> http://www.metamorpher.de/fairnat/, because there already was another 
> script called ipshape. I didn't like the name anyway :-)
> 
> 
>>Nice script - one thing I found was that HTB dequeued packets in pairs -
>>with MTU 1500 and your 128kbit up this will hurt latency a bit.
>>
>>The solution was to change it from 1 to 0:
>>
>>#define HTB_HYSTERESIS 0 in net/sched/sch_htb.c
> 
> 
> Thanks for the suggestion. I just recompiled the kernel - we'll see if I 
> notice any change. However, I don't yet fully understand what HYSTERESIS 
> actually does. There's a FAQ on docum.org, but I still don't get it.
> What does 'packets in pairs' mean? Multiple packets at once sounds to me 
> like burst.

YMMV of course - I have posted this here before.

I was using tcpdump a while back to suss out how (e)sfq worked. I had a very 
simple test setup which just throttled bulk traffic to 51kbit (my link 
is 256/512). I had burst set low and quantum set to my MTU. Sniffing tcp 
after shaping, I could see from the timestamps that the packets were 
being released in pairs - the overall rate was OK though. I changed the 
packet scheduler clock source from jiffies to cpu - no difference. Then I 
remembered seeing the hysteresis page on Stef's site, tried that, and it 
fixed the pairing.
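
If you want to see the pairing yourself, something like this works - the 
interface name is just an example, use whatever your WAN device is. -ttt 
makes tcpdump print the gap since the previous packet, so pairs show up 
as a near-zero delta followed by one long gap instead of evenly spaced 
packets:

# watch inter-packet gaps on the outgoing interface
# (ppp0 is just an example device name)
tcpdump -ttt -ni ppp0 tcp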

I saw an improvement in my latency when my upstream was full - doing the 
maths, it behaves as expected now, ie. the worst case delay is my 
baseline latency plus the time to serialise one MTU-sized packet at my 
link speed.
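
Spelled out (my notation, nothing official):

    worst_case_delay = base_latency + (MTU * 8) / uplink_rate

because at worst one full-size packet has only just started transmitting 
when your interactive packet reaches the front of the queue.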

If your real (ie. not a cable modem) upstream is 128k then a 1500 byte 
packet is going to take roughly 90ms to serialise - so in theory when 
your up is full you should be able to notice the difference in the max 
reading from ping. It will pull the avg down as well.
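
Doing that sum with the raw line rate and no framing overhead:

    1500 bytes * 8 = 12000 bits
    12000 bits / 128000 bit/s = 93.75 ms

which is where the ~90ms figure comes from - link-layer overhead will 
push it a little higher in practice.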

There are reasons it may make no difference for you though -

Your setup shares all traffic per IP - so if others are using their 
share of the uprate you will queue anyway. I only shape bulk per IP, so 
in theory my interactive packets never queue (the rate/burst for my 
interactive class is way higher than that traffic should ever reach - 
easy on a home setup, probably not so easy in the real world).
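
As a sketch of what I mean - the device, class ids and numbers here are 
made up for illustration, not lifted from my script:

# interactive class: rate/ceil well above what interactive traffic
# should ever generate, so its packets (almost) never wait
tc class add dev ppp0 parent 1:1 classid 1:10 htb \
    rate 200kbit ceil 230kbit burst 6k prio 0

# bulk class: shaped hard, gets whatever is left over
tc class add dev ppp0 parent 1:1 classid 1:20 htb \
    rate 51kbit ceil 230kbit prio 7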

I use MTU/quantum 1478 - which may or may not have caused the pairing in 
the first place - I didn't test 1500.

I explicitly set low burst/cburst values for bulk - I don't know what 
the defaults will be for you, since you don't set them - but I guess 
they should soon get used up anyway.
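
For example, changing the bulk class from the sketch above - again 
illustrative values on a made-up device, not my real settings:

# pin quantum to the MTU and keep burst/cburst to about one packet,
# so HTB never hands the class a multi-packet credit at once
tc class change dev ppp0 parent 1:1 classid 1:20 htb \
    rate 51kbit ceil 230kbit burst 1600 cburst 1600 quantum 1478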

Andy.


> 
> I wish they would make such things available in the kernel configuration 
> menu, with a proper explanation. If you look in the code, there are loads of 
> things that can be customized in the kernel by changing defines directly, but 
> you can rarely change them via kernel config. :-(
> 
> Andreas


_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
