
List:       dpdk-users
Subject:    Re: [dpdk-users] what's the cache size of rte_mempool_create()?
From:       Olivier Matz <olivier.matz@6wind.com>
Date:       2021-09-29 20:25:14
Message-ID: YVTLqojNbRc6JLss@platinum

Hi,

On Wed, Sep 29, 2021 at 12:48:24PM +0200, Thomas Monjalon wrote:
> +Cc mempool maintainers
> 
> 08/09/2021 11:18, topperxin:
> > HI list
> > A question about the value of the cache_size argument of the
> > rte_mempool_create() function. The definition of this function is below:
> > 
> > struct rte_mempool *
> > rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
> >                    unsigned cache_size, unsigned private_data_size,
> >                    rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> >                    rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
> >                    int socket_id, unsigned flags);
> > 
> > My question is: what does the cache_size value mean? What is the
> > difference between setting cache_size = 0 and cache_size = 512? The
> > DPDK 20.11 documentation says that setting the cache size to 0 can be
> > useful to avoid losing objects in the cache. I can't understand this
> > point: does it mean that if we set the cache size to non-zero, we run
> > the risk that some packets will be lost?

In summary, a mempool cache is a per-core table where object pointers
(mbuf pointers in the case of an mbuf pool) are stored temporarily. When
the cache is full, pointers are returned to the common pool. When the
cache is empty, it is refilled from the common pool.

The advantage is to reduce the contention on the common pool, increasing
performance (the per-core cache does not need locks or atomic
operations). The drawback is that a core cannot get objects from the
cache of another core. In short, when you want to be sure to be able to
allocate 1000 objects, you need to create a mempool with a size of
1000 + (nb_core * cache_size).

If cache size is 0, there is no cache, i.e. all mempool get/put are done
from the common pool, which is slower.

More in the documentation:
https://doc.dpdk.org/guides/prog_guide/mempool_lib.html

Regards,
Olivier

> > 
> > Thanks for your tips.
> > 
> > 
> > BR.
> 


