
List:       lustre-discuss
Subject:    Re: [lustre-discuss] Mixing ZFS and LDISKFS
From:       Backer via lustre-discuss <lustre-discuss@lists.lustre.org>
Date:       2024-01-13 0:48:12
Message-ID: CAPq+oA+ze7kLra1vYC5rXorNmMutbVgNWS-mOmLDvavo+YHE0g@mail.gmail.com

Sounds good. Thank you!

On Fri, 12 Jan 2024 at 19:28, Andreas Dilger <adilger@whamcloud.com> wrote:

> All of the OSTs and MDTs are "independently managed" (have their own
> connection state between each client and target) so this should be
> possible, though I don't know of sites that are doing this.  Possibly this
> makes sense to put NVMe flash OSTs on ldiskfs, and HDD OSTs on ZFS, and
> then put them in OST pools so that they are managed separately.
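
A rough sketch of the pool setup described above, run from the MGS node
(the fsname "testfs", pool names, and OST index ranges are placeholders,
not taken from this thread):

    # group the ldiskfs/NVMe OSTs and the ZFS/HDD OSTs into separate pools
    lctl pool_new testfs.flash
    lctl pool_add testfs.flash testfs-OST[0-3]
    lctl pool_new testfs.capacity
    lctl pool_add testfs.capacity testfs-OST[4-11]

    # on a client, direct new files in a directory to one pool or the other
    lfs setstripe -p flash    /mnt/testfs/scratch
    lfs setstripe -p capacity /mnt/testfs/archive

Files created under each directory then allocate only from OSTs in that
pool, which is what keeps the two backend types managed separately.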
>
> On Jan 12, 2024, at 10:38, Backer <backer.kolo@gmail.com> wrote:
>
> Thank you Andreas! How about mixing OSTs?  The requirement is to use ZFS
> to build RAID from small volumes and present them as one large OST, to
> reduce the number of OSTs overall as the cluster is being extended.
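
For reference, a minimal sketch of that layout: several small disks combined
into one raidz2 zpool backing a single large ZFS OST (the zpool name, device
paths, OST index, and MGS NID are placeholders):

    mkfs.lustre --ost --backfstype=zfs --fsname=testfs --index=4 \
        --mgsnode=10.0.0.1@tcp ostpool4/ost4 \
        raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    mkdir -p /mnt/lustre/ost4
    mount -t lustre ostpool4/ost4 /mnt/lustre/ost4

One OST per zpool keeps the OST count down while ZFS handles the RAID layer.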
>
> On Fri, 12 Jan 2024 at 11:26, Andreas Dilger <adilger@whamcloud.com>
> wrote:
>
>> Yes, some systems use ldiskfs for the MDT (for performance) and ZFS for
>> the OSTs (for low-cost RAID).  The IOPS performance of ZFS is low vs.
>> ldiskfs, but the streaming bandwidth is fine.
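
A minimal sketch of that split on the MDT side, with ldiskfs on an NVMe
device for metadata IOPS (the device name and fsname are placeholders;
ldiskfs is the default backfstype and is shown explicitly only for clarity):

    mkfs.lustre --mgs --mdt --backfstype=ldiskfs --fsname=testfs --index=0 /dev/nvme0n1
    mkdir -p /mnt/lustre/mdt0
    mount -t lustre /dev/nvme0n1 /mnt/lustre/mdt0

The ZFS OSTs are then formatted as in the raidz2 OST sketch above.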
>>
>> Cheers, Andreas
>>
>> > On Jan 12, 2024, at 08:40, Backer via lustre-discuss <
>> lustre-discuss@lists.lustre.org> wrote:
>> >
>> > Hi,
>> >
>> > Could we mix ZFS and LDISKFS together in a cluster?
>> >
>> > Thank you,
>> >
>> >
>> > _______________________________________________
>> > lustre-discuss mailing list
>> > lustre-discuss@lists.lustre.org
>> > http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
>

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

