
List:       ceph-users
Subject:    [ceph-users] Re: Please discuss about Slow Peering
From:       Frank Schilder <frans@dtu.dk>
Date:       2024-05-21 15:16:15
Message-ID: DB9P192MB18507EB74AA2DF0B3B1173E5D6EA2@DB9P192MB1850.EURP192.PROD.OUTLOOK.COM

> Not with the most recent Ceph releases.

Actually, this depends. If it's SSDs whose IOPS benefit from higher iodepth, it is
very likely to improve performance, because to this day each OSD has only one
kv_sync_thread, and that thread is typically the bottleneck under heavy IOPS load.
Having 2-4 kv_sync_threads per SSD, meaning 2-4 OSDs per disk, will help a lot if
this thread is saturated.

For NVMes this is usually not required.
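If you do go that way, this is roughly how the split is usually done (a sketch only;
the device path, service_id and host_pattern below are placeholders, adjust to your
hardware):

    # plain ceph-volume on the OSD host; /dev/nvme0n1 is just an example device
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1

    # or, with cephadm, an OSD service spec with the equivalent option,
    # applied via "ceph orch apply -i osd-spec.yml"
    service_type: osd
    service_id: split_nvme_example
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
      osds_per_device: 4

ceph-volume creates the LVs itself, so there is no need to carve them by hand.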

The question still remains, do you have enough CPU? If you have 13 disks with 4 OSDs
each, you will need a core count of at least 50-ish per host. Newer OSDs might be
able to utilize even more cores on fast disks. You will also need 4 times the RAM.

> I suspect your PGs are too few though.

In addition, on these drives you should aim for 150-200 PGs per OSD (another reason
to go 4x OSDs: 4x the PGs per drive). We have 198 PGs/OSD on average and this helps a
lot with IO, recovery, everything.
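A quick way to see where you currently stand and to let the autoscaler aim for that
range (the value 200 is just an example target, the default is 100):

    # per-OSD PG counts are shown in the PGS column
    ceph osd df tree

    # raise the autoscaler's target PGs per OSD
    ceph config set global mon_target_pg_per_osd 200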

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Anthony D'Atri <anthony.datri@gmail.com>
Sent: Tuesday, May 21, 2024 3:06 PM
To: 서민우
Cc: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] Please discuss about Slow Peering



> I have additional questions,
> We use 13 disks (3.2TB NVMe) per server and allocate one OSD to each disk. In other
> words, one node has 13 OSDs. Do you think this is inefficient?
> Is it better to create more OSDs by creating LVs on the disk?

Not with the most recent Ceph releases.  I suspect your PGs are too few though.




_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io


