
List:       linux-raid
Subject:    Re: Intermittent stalling of all MD IO, Debian buster (4.19.0-16)
From:       Guoqing Jiang <jgq516 () gmail ! com>
Date:       2021-06-18 5:35:08
Message-ID: 33236a83-a14d-a9e0-5384-91aa007858dc () gmail ! com

Hi Andy,

On 6/16/21 11:05 PM, Andy Smith wrote:
> Hi Guoqing,
>
> Thanks for looking at this.
>
> On Wed, Jun 16, 2021 at 11:57:33AM +0800, Guoqing Jiang wrote:
>> The above looks like the bio for the sb write was throttled by wbt,
>> which caused the first call trace.
>> I am wondering whether there were intensive IOs happening on the
>> underlying device of md5, which triggered wbt to throttle the sb
>> write, or can you access the underlying device directly?
> Next time it occurs I can check if I am able to read from the SSDs
> that make up the MD device, if that information would be helpful.
>
> I have never been able to replicate the problem in a test
> environment so it is likely that it needs to be under heavy load for
> it to happen.

I guess so, and a reliable reproducer would definitely help us analyze
the root cause.
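
Regarding checking the SSDs directly the next time it happens:
something like the sketch below (untested, and the member device names
are only placeholders for your setup) would tell us whether a small
O_DIRECT read from each underlying device still completes, and how
long it takes.

#!/usr/bin/env python3
# Untested sketch: time a small O_DIRECT read from each underlying
# device while md5 appears stalled, to see whether the members still
# respond. The device names are assumptions -- adjust to your setup.
import mmap
import os
import time

DEVICES = ["/dev/sda", "/dev/sdb"]   # assumed members of md5
BLOCK = 4096                         # one aligned block is enough

for dev in DEVICES:
    buf = mmap.mmap(-1, BLOCK)       # page-aligned buffer, required for O_DIRECT
    fd = os.open(dev, os.O_RDONLY | os.O_DIRECT)
    try:
        start = time.monotonic()
        nread = os.preadv(fd, [buf], 0)
        elapsed = time.monotonic() - start
        print(f"{dev}: read {nread} bytes in {elapsed * 1000:.1f} ms")
    finally:
        os.close(fd)
        buf.close()

If the reads come back quickly while md5 is stuck, that would point at
throttling above the devices rather than at the hardware.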

>> And there was a report [1] for raid5 which may be related to wbt
>> throttling as well; not sure if the
>> change [2] could help or not.
>>
>> [1]. https://lore.kernel.org/linux-raid/d3fced3f-6c2b-5ffa-fd24-b24ec6e7d4be@xmyslivec.cz/
>> [2]. https://lore.kernel.org/linux-raid/cb0f312e-55dc-cdc4-5d2e-b9b415de617f@gmail.com/
> All of my MD arrays tend to be RAID-1 or RAID-10, two devices, no
> journal, internal bitmap. I see the reporter of this problem was
> using RAID-6 with an external write journal. I can still build a
> kernel with this patch and try it out, if you think it could possibly
> help.

Yes, because both issues show wbt-related call traces even though the
raid level is different.
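
If you want to rule wbt out in the meantime, the per-queue latency
target can be inspected through the wbt_lat_usec attribute of the
member devices, and writing 0 there disables wbt for that device. A
rough sketch (device names are again only placeholders):

#!/usr/bin/env python3
# Rough sketch: show (and optionally disable) the wbt latency target
# on the md member devices. Writing 0 to wbt_lat_usec disables wbt for
# that queue; needs root. The device names are assumptions.
import sys

MEMBERS = ["sda", "sdb"]   # assumed members of md5

for name in MEMBERS:
    path = f"/sys/block/{name}/queue/wbt_lat_usec"
    with open(path) as f:
        print(f"{path} = {f.read().strip()}")
    if "--disable" in sys.argv:
        with open(path, "w") as f:
            f.write("0")   # 0 disables wbt for this device
        print(f"{path} set to 0 (wbt disabled)")

This is only meant as a way to test whether wbt is involved, not as a
permanent setting.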

> The long time between incidents obviously makes things
> extra challenging.
>
> The next step I have taken is to put the buster-backports kernel
> package (5.10.24-1~bpo10+1) on two test servers, and will also boot
> the production hosts into this if they should experience the problem
> again.

Good luck :).

Thanks,
Guoqing
