
List:       ceph-users
Subject:    [ceph-users] Re: Ceph not showing full capacity
From:       Amudhan P <amudhan83 () gmail ! com>
Date:       2020-10-26 15:19:36
Message-ID: CABhA=29ot9S5cduSUTO8v4CordQ0ODmC7Ax3aKJEOUVLMt+0Uw () mail ! gmail ! com

Hi,

>>  Your first mail shows 67T (instead of 62)

I had just given an approximate number; the number in the first mail is the
correct one.

I have deleted all the pools and created a fresh test pool with pg_num 128,
and now it's showing a full size of 248 TB.

Output from "ceph df":
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    262 TiB  262 TiB  3.9 GiB    52 GiB       0.02
TOTAL  262 TiB  262 TiB  3.9 GiB    52 GiB       0.02

--- POOLS ---
POOL   ID  STORED  OBJECTS  USED  %USED  MAX AVAIL
pool3   8     0 B        0   0 B      0    124 TiB

So, the PG count is not the reason for the lower size being shown.
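
For reference, the 124 TiB MAX AVAIL above is roughly what one would expect
for a fresh 2-replica pool: 262 TiB raw * 0.95 full ratio / 2 replicas is
about 124 TiB. Assuming pool3 above is indeed a 2-replica pool, the inputs
can be checked with:

  ceph osd pool get pool3 size       # replica count of the pool
  ceph osd dump | grep full_ratio    # cluster full ratio, typically 0.95
  ceph osd df                        # per-OSD usage; the fullest OSD caps MAX AVAIL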

I am also trying other options to see what caused this issue.


On Mon, Oct 26, 2020 at 8:20 PM 胡 玮文 <huww98@outlook.com> wrote:

>
> On Oct 26, 2020, at 22:30, Amudhan P <amudhan83@gmail.com> wrote:
>
> 
> Hi Janne,
>
> I agree with you; I was trying to say that a disk which holds more PGs will
> fill up quicker.
>
> But my question is: even though the raw disk space is 262 TB, the 2-replica
> pool's maximum storage is showing only 132 TB in the dashboard, and when
> mounting the pool using CephFS it's showing 62 TB. I can understand that,
> due to replication, it's showing half of the space.
>
>
> Your first mail shows 67T (instead of 62)
>
> Why is it not showing the entire raw disk space as available space?
> Does the number of PGs per pool play any vital role in the available space
> shown?
>
>
> I might be wrong, but I think the size of a mounted CephFS is calculated as
> "used + available". It is not directly related to the raw disk space. You
> have an imbalance issue, so you have less available space, as explained
> previously. So the total size is less than expected.
>
> Maybe you should try to correct the imbalance first and see whether the
> available space and total size go up. Increase pg_num, run the balancer, etc.
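
For example, the imbalance and the fix suggested above could be inspected and
applied with something like the following (the pool name and pg_num value are
placeholders, not taken from this thread):

  ceph osd df                           # compare %USE / VAR across OSDs
  ceph osd pool set <pool> pg_num 128   # raise pg_num for the data pool
  ceph balancer mode upmap              # upmap balancer, if all clients support it
  ceph balancer on
  ceph balancer status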
>
> On Mon, Oct 26, 2020 at 12:37 PM Janne Johansson <icepic.dz@gmail.com>
> wrote:
>
>>
>>
>> On Sun, Oct 25, 2020 at 15:18, Amudhan P <amudhan83@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> For my quick understanding: how are PGs responsible for the space
>>> allocated to a pool?
>>>
>>
>> An object's name decides which PG (from the list of PGs in the pool) it
>> will end up on, so if you have very few PGs, the hashed/pseudorandom
>> placement will be unbalanced at times. As an example, if you have only 8
>> PGs and write 9 large objects, then at least one (but probably two or
>> three) PGs will receive two or more of those 9, and some will receive
>> none, just on pure statistics. If you have 100 PGs, the chance of one PG
>> getting two of those nine objects is much smaller. Overall, with all
>> pools accounted for, one should aim for something like 100 PGs per OSD,
>> but you also need to count the replication factor for each pool, so if
>> you have replication = 3 and a pool gets 128 PGs, it will place 3*128 PGs
>> out on various OSDs according to the CRUSH rules.
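
As a concrete illustration of the name-to-PG mapping described above, the
mapping for any object name can be printed directly (the pool and object
names here are only examples):

  ceph osd map pool3 myobject   # shows the PG and the up/acting OSDs this name hashes to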
>>
>> PGs don't have a size, but will grow as needed, and since the next object
>> to be written can end up anywhere (depending on the hashed result), ceph
>> df must always tell you the worst case when listing how much data this
>> pool has "left". It will always be the OSD with the least space left that
>> limits the pool.
>>
>>
>>> My understanding is that PGs basically help in object placement; when
>>> the number of PGs per OSD is high, there is a high possibility that a PG
>>> gets a lot more data than other PGs.
>>
>>
>> This statement seems incorrect to me.
>>
>>
>>> In this situation, we can balance between the OSDs.
>>> But I can't understand the logic: how does it restrict space to
>>> a pool?
>>>
>>
>>
>> --
>> May the most significant bit of your life be positive.
>>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
