List: ceph-users
Subject: [ceph-users] Reply: Re: mkfs rbd image is very slow
From: shadow_lin <shadow_lin@163.com>
Date: 2017-10-31 7:05:15
Message-ID: 2c06928e.295a.15f713fa2c7.Coremail.shadow_lin@163.com
Hi Jason,
Thank you for your advice.
The no-discard option works great. It now takes 5 minutes to format the 5TB rbd image with xfs, and \
only seconds with ext4. Is there any drawback to formatting an rbd image with the no-discard \
option? Thanks
2017-10-31
lin.yunfan
From: Jason Dillaman <jdillama@redhat.com>
Sent: 2017-10-30 03:07
Subject: Re: [ceph-users] mkfs rbd image is very slow
To: "shadow_lin" <shadow_lin@163.com>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Try running "mkfs.xfs -K" which disables discarding to see if that
improves the mkfs speed. The librbd-based implementation encountered a
similar issue before when certain OSs sent very small discard extents
for very large disks.
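The advice above can be sketched as follows. The pool and image names are hypothetical examples, and the mapped device path will vary; `-K` is the documented mkfs.xfs flag for skipping discards, and `-E nodiscard` is the mke2fs equivalent:

```shell
# Map the rbd image to a block device (pool/image names are examples)
rbd map mypool/myimage   # prints the device path, e.g. /dev/rbd0

# -K tells mkfs.xfs not to issue discard (TRIM) requests at mkfs time,
# avoiding a flood of small discard extents hitting the OSDs
mkfs.xfs -K /dev/rbd0

# The equivalent for ext4 is the nodiscard extended option
mkfs.ext4 -E nodiscard /dev/rbd0
```

Skipping discard at mkfs time only means any pre-existing data on the device is not trimmed first; for a freshly created rbd image, which is thin-provisioned and empty, there is typically nothing to discard anyway.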
On Sun, Oct 29, 2017 at 10:16 AM, shadow_lin <shadow_lin at 163.com> wrote:
> Hi all,
> I am testing ec pool backed rbd image performace and found that it takes a
> very long time to format the rbd image by mkfs.
> I created a 5TB image and mounted it on the client(ubuntu 16.04 with 4.12
> kernel) and use mkfs.ext4 and mkfs.xfs to format it.
> It takes hours to finish the format and the load on some osds are high and I
> can get slow request warning from time to time.
>
> What is a reasonable time to format a 5TB rbd image?What should I do to
> improve it?
> Thanks
>
> 2017-10-29
> ________________________________
> Frank
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
Jason