
List:       apr-dev
Subject:    Re: svn commit: r1788334 - in /apr/apr/trunk: CHANGES include/apr_allocator.h memory/unix/apr_pools.
From:       Yann Ylavic <ylavic.dev@gmail.com>
Date:       2017-03-28 17:24:55
Message-ID: CAKQ1sVOXXY2TtDeXW7KviU+QNJoBw+VTZkCEke-c_mJ-MBG9kw@mail.gmail.com

On Tue, Mar 28, 2017 at 2:39 PM, Ivan Zhakov <ivan@visualsvn.com> wrote:
>
> I'm not sure that the *actual* allocation size can be predicted in all
> cases and for every possible apr_allocator_t implementation.

Are there multiple apr_allocator_t implementations?

> Currently
> apr_bucket_alloc_aligned_floor() and apr_bucket_alloc() logic should
> be kept in sync, which is error prone imho.

I'd say that, thanks to apr_allocator_align(), they at least do not
depend on the allocator internals (look at the big assumption in the
!APR_VERSION_AT_LEAST(1,6,0) case in
apr_bucket_alloc_aligned_floor()...).
If one changes SMALL_NODE_SIZE or SIZEOF_NODE_HEADER_T it should
continue to work, but yes, if the bucket_alloc implementation changes
radically it may break (I hope that before such a change one would
grep for SMALL_NODE_SIZE and SIZEOF_NODE_HEADER_T in the file :)
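
For illustration only, a rough sketch of what r1788334 gives us
(assuming the prototypes as committed to apr_allocator.h on trunk):

    apr_allocator_t *a;
    apr_size_t real;

    if (apr_allocator_create(&a) == APR_SUCCESS) {
        /* True size the allocator would really use for an 8000 bytes
         * request, internal rounding/overhead included, whatever the
         * allocator's boundary actually is. */
        real = apr_allocator_align(a, 8000);
        apr_allocator_destroy(a);
    }

so apr_bucket_alloc_aligned_floor() can ask the allocator instead of
assuming its internals.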

I don't see why we couldn't implement apr_bucket_alloc_aligned_floor()
on top of any new allocation scheme, though. Do you have a specific
example of such a breaking change?

>
> After looking at the apr_bucket_alloc code, I suggest adding
> apr_bucket_node_size(const void *node), which returns the actual
> allocated node size. The apr_file_bucket implementation could allocate
> the desired buffer size and then query the actual allocated size via
> apr_bucket_node_size. The Apache Serf bucket allocator uses a similar
> approach.

That's still after the allocation has taken place.

Let's say a user configures the buffer size of file buckets to 8192
bytes (users like powers of two ;)

With the help of apr_bucket_alloc_aligned_floor() we can allocate
exactly 8192 bytes, yet have only ~8100 of them available for reads (a
bit like APR_BUCKET_BUFF_SIZE=8000, which takes the allocator's
overhead into account, approximately...).
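
Roughly, a sketch of what I mean (against trunk at this point; the
exact prototypes are in apr_buckets.h, and 'list' would be e.g. the
connection's bucket allocator or one from apr_bucket_alloc_create()):

    apr_size_t conf = 8192;  /* user-configured file buffer size */
    apr_size_t avail;
    char *buf;

    /* Largest size whose apr_bucket_alloc() still fits in the same
     * underlying (aligned) allocation, i.e. ~8100 here once the node
     * header is accounted for. */
    avail = apr_bucket_alloc_aligned_floor(list, conf);

    buf = apr_bucket_alloc(avail, list);  /* really consumes 8192 bytes */
    /* ... read up to 'avail' bytes into buf ... */
    apr_bucket_free(buf);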

With your method, we'd allocate 12288 bytes and then either use only
3/4 of them (internal fragmentation) or really try to read ~12200
bytes; in both cases this is more than what the user asked for (see
the sketch below).
I tend to think that the configured value is an upper bound, more
related to some controlled memory consumption (or some device
capacity) than anything else.
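
For comparison, a sketch of the flow with the suggested (hypothetical,
not in APR today) apr_bucket_node_size(), reusing the variables from
the sketch above:

    apr_size_t real;

    buf = apr_bucket_alloc(conf, list);  /* may reserve 12288 bytes here */
    real = apr_bucket_node_size(buf);    /* query the real size after the fact */
    /* Either read only 'conf' bytes (wasting real - conf), or read up
     * to 'real' bytes (more than the user configured). */
    apr_bucket_free(buf);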


Regards,
Yann.