List:       squid-cvs
Subject:    delay_pool_write squid TODO_delay_pool_write,1.1.2.5,1.1.2.6
From:       Adrian Chadd <adri@users.sourceforge.net>
Date:       2008-07-25 7:58:06
Message-ID: 20080725075809.11589.qmail@squid-cache.org

Update of cvs.devel.squid-cache.org:/cvsroot/squid/squid

Modified Files:
      Tag: delay_pool_write
	TODO_delay_pool_write 
Log Message:
Update TODO list!



Index: TODO_delay_pool_write
===================================================================
RCS file: /cvsroot/squid/squid/Attic/TODO_delay_pool_write,v
retrieving revision 1.1.2.5
retrieving revision 1.1.2.6
diff -C2 -d -r1.1.2.5 -r1.1.2.6
*** TODO_delay_pool_write	20 Jul 2008 13:57:25 -0000	1.1.2.5
--- TODO_delay_pool_write	25 Jul 2008 07:58:04 -0000	1.1.2.6
***************
*** 6,45 ****
    class counters.)
  
- * Randomise the dequeueing of the slow write fds so each gets a fairer chance
-   at IO when a pool bucket is exhausted.
- 
  * Per-client and per-connection throughput classes.
  
- * Class 5 - per-connection - the statistics returned are a bit pointless as there's no
-   way to return exactly -which- FDs are being delayed under the given pool. Sigh!
- 
- * Should verify that the Class 5 per-fd buckets are being refilled correctly.
-   Since there's no way to know an FD is -open- in a particular delay pool then -all- the
-   "buckets" are being refilled, used or not. It's not a big deal for development but
-   it will become a big deal later on when multiple class 5 pools are used on a busy system
-   with lots of FDs.
- 
-   Solution? Perhaps assume one pool per FD, keep a separate FD array mapping an FD to a
-   class 5 pool, and then walk -that- array to update? That way the array could be kept
-   up to date with fd_close() and the stats could also then be changed to make sense!
- 
- * The write-side pool IO rates are still wrong. They're not 50% wrong, but they're not
-   exactly correct as they should be on a single connection. Re-enable the debugging
-   to watch the write queueing in commHandleWrite() and figure it out.
- 
- * .. the IO rates are now slightly closer to where they should be but the mgr:delay2
-   output is confusing. Two 8k connections are going but the output is thus:
- 
- pools.pool.1.current=71808
- pools.pool.1.bytes=0
- pools.pool.1.fd.14.aggregate=0
- pools.pool.1.fd.16.aggregate=8192
- 
-   - bytes not being updated!
-   - the aggregate counters in the FDs bounce every second, 0/8192, 8192/0
-   - even though there are two 8k connections, the current pool only shows 71808
-     (80000 - 8192) rather than 80000 - (2*8192). I wonder why.
- 
- 
  * Why the hell are the delay pool ACLs being evaluated a whole lot of times per
    request, rather than once each? That needs to be looked at sometime.
--- 6,11 ----
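
For reference, a minimal sketch of the "randomise the dequeueing" item above,
assuming a flat array of deferred-write fds and a hypothetical
commResumeWrite() hook (neither is the actual Squid code): start the walk at a
random offset so the fds at the head of the queue don't always win the freshly
refilled bucket.

#include <stdlib.h>

extern void commResumeWrite(int fd);    /* hypothetical resume hook */

/*
 * Kick the deferred writers starting from a random offset instead of
 * always from the head of the queue.
 */
static void
delay_pool_kick_writes(int *queued_fds, int nfds)
{
    int i, start;
    if (nfds <= 0)
        return;
    start = rand() % nfds;
    for (i = 0; i < nfds; i++)
        commResumeWrite(queued_fds[(start + i) % nfds]);
}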
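
Likewise, a sketch of the "separate FD array" idea from the class 5 refill
item (made-up names, one pool per fd): the map is torn down from fd_close(),
so the refill walk, and the cachemgr stats, only touch fds that really belong
to a class 5 pool.

#define SKETCH_MAX_FD   1024
#define NO_DELAY_POOL   (-1)

static int fd_to_class5_pool[SKETCH_MAX_FD];  /* pool id, or NO_DELAY_POOL */
static int per_fd_bucket[SKETCH_MAX_FD];      /* bytes this fd may still write */

static void
class5_map_init(void)
{
    int fd;
    for (fd = 0; fd < SKETCH_MAX_FD; fd++)
        fd_to_class5_pool[fd] = NO_DELAY_POOL;
}

static void
class5_fd_register(int fd, int pool)          /* fd joins a class 5 pool */
{
    fd_to_class5_pool[fd] = pool;
}

static void
class5_fd_closed(int fd)                      /* call from fd_close() */
{
    fd_to_class5_pool[fd] = NO_DELAY_POOL;
}

/* Refill only the fds mapped to this pool, instead of every bucket. */
static void
class5_refill(int pool, int increment, int max_bytes)
{
    int fd;
    for (fd = 0; fd < SKETCH_MAX_FD; fd++) {
        if (fd_to_class5_pool[fd] != pool)
            continue;
        per_fd_bucket[fd] += increment;
        if (per_fd_bucket[fd] > max_bytes)
            per_fd_bucket[fd] = max_bytes;
    }
}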
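
On the mgr:delay2 numbers: with an 80000 byte bucket and two connections each
moving 8192 bytes/s, the expected level is 80000 - (2 * 8192) = 63616, while
the observed 71808 is 80000 - 8192. That, the bouncing per-fd aggregates and
the stuck bytes counter all look consistent with only one fd's traffic being
charged per second. A sketch of what the accounting would look like if every
completed write were charged to both the per-fd and the shared counters
(hypothetical names, not the real cachemgr plumbing):

static int pool_current = 80000;         /* pools.pool.1.current */
static int pool_bytes = 0;               /* pools.pool.1.bytes */
static int fd_aggregate[1024];           /* pools.pool.1.fd.N.aggregate */

/* Charge one completed write against both the fd and the shared pool. */
static void
delay_pool_charge(int fd, int nbytes)
{
    fd_aggregate[fd] += nbytes;          /* per-fd: should grow steadily */
    pool_current -= nbytes;              /* bucket level: drained by both fds */
    pool_bytes += nbytes;                /* lifetime total: currently stuck at 0 */
}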
