
List:       linux-raid
Subject:    Re: Best way to add caching to a new raid setup.
From:       Nix <nix () esperi ! org ! uk>
Date:       2020-08-31 19:20:54
Message-ID: 87sgc2mxk9.fsf () esperi ! org ! uk

On 28 Aug 2020, Ram Ramesh verbalised:
> MythTV is a server-client DVR system. I have a client next to each of
> my TVs and one backend with a large disk (this will have RAID with a
> cache). At any time many clients will be accessing different programs,
> and any scheduled recordings will also be going on in parallel. So you
> will see a lot of seeks, but still all will be based on a limited
> number of threads (I only have 3 TVs and maybe one other PC acting as
> a client). So: lots of IOs, mostly sequential, across a small number
> of threads. I think most cache algorithms should be able to benefit
> from random access to blocks on the SSD.

FYI: bcache documents how its caching works. I recommend ignoring the
write cache, since nearly all the data corruption and starvation bugs in
bcache have been in the write-caching code, and it doesn't look like
write caching would benefit your use case anyway (if you want an SSD
write cache, just use RAID journalling). With write caching off, bcache
is very hard to break: if by some mischance the cache does become
corrupted, you can decouple it from the backing RAID array and just keep
using the array until you recreate the cache device and reattach it.
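As a sketch of what that recovery looks like (the device names and the
cache-set UUID below are made-up examples; the sysfs paths are the ones
documented in the kernel's bcache admin guide). The commands are printed
rather than executed, since they need a live bcache device:

```shell
#!/bin/sh
# Hypothetical layout: /dev/bcache0 is the cached device, backed by the
# RAID array /dev/md0, with the cache on an SSD partition /dev/sda4.

CSET_UUID=deadbeef-0000-0000-0000-000000000000   # made-up cache-set UUID

# Detach the (corrupted) cache set from the backing device; I/O then
# goes straight to the backing RAID array with no cache in the path:
echo "echo $CSET_UUID > /sys/block/bcache0/bcache/detach"

# After wiping and re-making the cache device:
#   wipefs -a /dev/sda4 && make-bcache -C /dev/sda4
# re-attach the fresh cache set by its new UUID:
echo "echo \$NEW_CSET_UUID > /sys/block/bcache0/bcache/attach"
```

The backing device keeps working unattached the whole time, which is why
losing the (read-only) cache costs you nothing but hit rate.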

bcache tracks the "sequentiality" of recent reads and avoids caching big
sequential I/O, on the grounds that doing so is a likely waste of SSD
lifetime: HDDs can do contiguous reads quite fast, so what you want to
cache is seeky reads. This means that your MythTV reads will only be
cached when there are multiple contending reads going on. That doesn't
seem terribly useful, since for a media player any given contending read
is probably not going to be of metadata and is probably not going to be
repeated for a very long time (unless you particularly like repeatedly
rewatching the same things). So you won't get much of a speedup or
reduction in contention.
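If you did want bcache to cache big sequential reads anyway, the cutoff
is tunable; a sketch (the device name /dev/bcache0 is a made-up example,
the sysfs knobs are from the kernel's bcache admin guide, and commands
are printed rather than executed since they need a live bcache device):

```shell
#!/bin/sh
# Reads detected as more sequential than sequential_cutoff bytes bypass
# the cache; the default is 4 MiB, and 0 disables the heuristic so
# everything gets cached.
CUTOFF=$((4 * 1024 * 1024))
echo "default sequential_cutoff: $CUTOFF bytes"

echo "echo 0 > /sys/block/bcache0/bcache/sequential_cutoff  # cache everything"

# Hit-rate stats live next door, to judge whether the cache earns its keep:
echo "cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio"
```

For a media workload I'd expect the hit ratio to stay low with the
cutoff at 0 too, for the re-read reasons above: you'd just burn SSD
erase cycles faster.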

Where caches like bcache and the LVM cache help is when small seeky
reads are likely to be repeated, which is very common with filesystem
metadata and a lot of other workloads, but not common at all for media
files in my experience.

(FYI: my setup is spinning rust <- md-raid6 <- bcache <- LVM PV, with
one LVM PV omitting the bcache layer and both combined into one VG. My
bulk media storage is on the non-bcached PV. The filesystems are almost
all xfs, some of them with cryptsetup in the way too. One warning:
bcache works by stuffing a header onto the data, and does *not* pass
through RAID stripe size info etc: you'll need to pass a suitable
--data-offset to make-bcache to ensure that I/O is RAID-aligned, and
pass in the stripe size etc to the layered operations by hand. I
verified this by mkfsing everything and then running blktrace on the
underlying RAID devices while doing some simple I/O, to make sure the
RAID layer was seeing nice stripe-aligned I/O. This is probably total
overkill for a media server, but this was my do-everything server, so I
cared very much about small random I/O performance. It was particularly
fun given that one LVM PV had a bcache header and the other one didn't,
and I wanted the filesystems to have suitable alignment for *both* of
them at once... it was distinctly fiddly to get right.)
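A sketch of the --data-offset arithmetic (the geometry here, a 6-drive
RAID6 with 512 KiB chunks, is a made-up example; --data-offset is in
512-byte sectors per the make-bcache man page, and the make-bcache and
mkfs.xfs invocations are printed, not run):

```shell
#!/bin/sh
# Aligning the bcache data area to a full RAID stripe means an aligned
# write from the filesystem stays aligned underneath the bcache header.
CHUNK_KIB=512          # md chunk size (example)
DATA_DISKS=4           # 6-drive RAID6 = 4 data disks (example)

STRIPE_KIB=$((CHUNK_KIB * DATA_DISKS))        # full-stripe width in KiB
STRIPE_SECTORS=$((STRIPE_KIB * 1024 / 512))   # same, in 512-byte sectors

echo "stripe width: ${STRIPE_KIB} KiB = ${STRIPE_SECTORS} sectors"
echo "make-bcache -B --data-offset ${STRIPE_SECTORS} /dev/md0"

# mkfs then wants the same geometry passed in by hand, since bcache
# doesn't forward it (hypothetical device name):
echo "mkfs.xfs -d su=${CHUNK_KIB}k,sw=${DATA_DISKS} /dev/bcache0"
```

Then, as above, a blktrace of the md device while doing test I/O is the
only way to be sure the whole stack really lines up.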
