
List:       linux-block
Subject:    Re: [PATCH V2 3/3] blk-mq: dequeue request one by one from sw queue iff hctx is busy
From:       Ming Lei <ming.lei@redhat.com>
Date:       2018-06-29 15:34:53
Message-ID: 20180629153447.GA15227@ming.t460p

On Fri, Jun 29, 2018 at 08:58:16AM -0600, Jens Axboe wrote:
> On 6/29/18 2:12 AM, Ming Lei wrote:
> > Dequeueing requests one by one from the sw queue is not efficient,
> > but we have to do that when the queue is busy for better merge performance.
> > 
> > This patch uses an EWMA to figure out whether the queue is busy, and only
> > dequeues requests one by one from the sw queue when it is.
> > 
> > Kashyap verified that this patch basically brings back random IO
> > performance on megaraid_sas with the none io scheduler. Meanwhile I
> > tried this patch on HDD, and did not see an obvious performance loss
> > on sequential IO tests either.
> 
> In addition to the other reviewers' comments, please also export ->busy
> via the blk-mq debugfs code.

Good idea!
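
(For context, the busy tracking described in the patch could be sketched as
a simple integer EWMA like the following. This is an illustrative standalone
sketch, not the patch itself; the weight and unit values are assumptions.)

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative EWMA busy tracking: a decayed average that rises toward a
 * fixed ceiling while dispatch keeps failing (busy) and decays toward zero
 * once dispatch succeeds again. EWMA_WEIGHT and EWMA_UNIT are assumptions
 * for illustration, not values taken from the patch. */
#define EWMA_WEIGHT	8
#define EWMA_UNIT	16

static unsigned int ewma_update(unsigned int ewma, bool busy)
{
	ewma *= EWMA_WEIGHT - 1;
	if (busy)
		ewma += EWMA_UNIT;	/* pull the average up */
	ewma /= EWMA_WEIGHT;		/* decay */
	return ewma;
}
```

A hctx could then be treated as busy once the average crosses a threshold
(say, EWMA_UNIT / 2), so a single busy dispatch does not flip the dequeue
mode, but a sustained run of them does.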

> 
> > diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> > index e3147eb74222..a5113e22d720 100644
> > --- a/include/linux/blk-mq.h
> > +++ b/include/linux/blk-mq.h
> > @@ -34,6 +34,7 @@ struct blk_mq_hw_ctx {
> >  
> >  	struct sbitmap		ctx_map;
> >  
> > +	unsigned int		busy;
> >  	struct blk_mq_ctx	*dispatch_from;
> >  
> >  	struct blk_mq_ctx	**ctxs;
> 
> This adds another hole. Consider reordering it a bit, a la:
> 
> 	struct blk_mq_ctx       *dispatch_from;
> 	unsigned int            busy;
> 
> 	unsigned int            nr_ctx;
> 	struct blk_mq_ctx       **ctxs;
> 
> to eliminate a hole, instead of adding one more.

OK
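
(For readers following along, the hole Jens refers to can be demonstrated
with a standalone sketch. The field names are borrowed for illustration
only; this is not the kernel struct, and the sizes assume a typical 64-bit
ABI with 8-byte pointers.)

```c
#include <assert.h>
#include <stdio.h>

/* A lone 4-byte int sandwiched between 8-byte pointers forces 4 bytes of
 * padding after it, and the trailing int adds 4 more bytes of tail padding
 * to keep the struct's size a multiple of its alignment. */
struct with_hole {
	void		*dispatch_from;	/* 8 bytes */
	unsigned int	busy;		/* 4 bytes + 4-byte hole */
	void		**ctxs;		/* 8 bytes */
	unsigned int	nr_ctx;		/* 4 bytes + 4-byte tail padding */
};

/* Pairing the two 4-byte members packs them into one 8-byte slot,
 * eliminating both the hole and the tail padding. */
struct packed_pair {
	void		*dispatch_from;
	unsigned int	busy;
	unsigned int	nr_ctx;		/* fills the slot after busy */
	void		**ctxs;
};
```

On x86-64 the first layout is 32 bytes for 24 bytes of data, while the
reordered one is 24 bytes with no padding at all; tools like pahole show
these holes directly.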

Thanks,
Ming
