
List:       linux-block
Subject:    Re: [PATCH 2/2] xfs: add 'discard_sync' mount flag
From:       Dave Chinner <david () fromorbit ! com>
Date:       2018-04-30 23:23:20
Message-ID: 20180430232319.GV23861 () dastard

On Mon, Apr 30, 2018 at 05:00:14PM -0600, Jens Axboe wrote:
> On 4/30/18 4:40 PM, Jens Axboe wrote:
> > On 4/30/18 4:28 PM, Dave Chinner wrote:
> >> Yes, it does, but so would having the block layer throttle device
> >> discard requests in flight to a queue depth of 1. And then we don't
> >> have to change XFS at all.
> > 
> > I'm perfectly fine with making that change by default, and much easier
> > for me since I don't have to patch file systems.
> 
> Totally untested, but this should do the trick. It ensures we have
> a QD of 1 (per caller), which should be sufficient.
> 
> If people tune down the discard size, then you'll be blocking waiting
> for discards on issue.
> 
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index a676084d4740..0bf9befcc863 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -11,16 +11,19 @@
>  #include "blk.h"
>  
>  static struct bio *next_bio(struct bio *bio, unsigned int nr_pages,
> -		gfp_t gfp)
> +			    gfp_t gfp)
>  {
> -	struct bio *new = bio_alloc(gfp, nr_pages);
> -
> +	/*
> +	 * Devices suck at discard, so if we have to break up the bio
> +	 * size due to the max discard size setting, wait for the
> +	 * previous one to finish first.
> +	 */
>  	if (bio) {
> -		bio_chain(bio, new);
> -		submit_bio(bio);
> +		submit_bio_wait(bio);
> +		bio_put(bio);
>  	}

This only addresses the case where __blkdev_issue_discard() breaks
up a single large discard, right? It seems like a brute force
solution, too, because it will do so even when the underlying device
is idle and there's no need to throttle.

Shouldn't the throttling logic at least look at device congestion?
i.e. if the device is not backlogged, then we should be able to
issue the discard without problems. 

I ask because this patch only throttles the "discard large extent"
case when the discard limit is set low - i.e. your exact problem
case. We know that XFS can issue large numbers of discontiguous
async discards in a single batch; this patch does not address that
case, so it will still cause starvation problems.

If we look at device congestion in determining how to throttle/back
off during discard issuing, then it doesn't matter what
max_discard_sectors is set to - it will throttle in all situations
that cause device overloads and starvations....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
