
List:       ceph-users
Subject:    [ceph-users] Re: Filesystem offline after enabling cephadm
From:       Daniel Poelzleithner <poelzi () poelzi ! org>
Date:       2021-12-31 17:43:42
Message-ID: 84529EE0-C936-4FB8-BC68-5209F3032461 () poelzi ! org

Start the MDS with an increased debug level and try to make sense of the logs.
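
For example, a rough sketch using the standard Ceph/cephadm CLI (the MDS
daemon name is just the one from your fs dump below, adjust as needed):

    # raise MDS log verbosity cluster-wide
    ceph config set mds debug_mds 20
    ceph config set mds debug_ms 1
    # restart one standby and follow its journal
    ceph orch daemon restart mds.cephfs.ceph-mds3.mfmaeh
    cephadm logs --name mds.cephfs.ceph-mds3.mfmaeh -- -f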

Maybe try the "repair a damaged filesystem" how-to.
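
Roughly, the documented steps are the following. This is only a sketch, and
it is destructive: export a journal backup first, read
https://docs.ceph.com/en/pacific/cephfs/disaster-recovery-experts/ before
running anything, and only go down this path if the logs actually point at
journal or metadata damage:

    ceph fs fail cephfs
    cephfs-journal-tool --rank=cephfs:0 journal export backup.bin
    cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
    cephfs-journal-tool --rank=cephfs:0 journal reset
    cephfs-table-tool all reset session
    ceph fs set cephfs joinable true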

Kind regards

On December 30, 2021 2:50:20 PM GMT+01:00, "Tecnologia Charne.Net" <tecno@charne.net> wrote:
>Hello!
>Three days have passed and I can't get CephFS to work again.
>I read a lot of the available documentation, posts [0] that mention
>"magic" words, threads [1], blogs, etc.,
>and tried the suggested commands:
>
>* ceph fs set cephfs max_mds 1
>* ceph fs set cephfs allow_standby_replay false
>* ceph fs compat cephfs add_incompat 7 "mds uses inline data"
>* ceph fs set cephfs down false
>* ceph fs set cephfs joinable true...
>* Please, dear mds, become online, please....
>
>But I still have:
>
># ceph mds stat
>cephfs:1 1 up:standby
>
>## ceph fs dump
>e1953
>enable_multiple, ever_enabled_multiple: 0,1
>default compat: compat={},rocompat={},incompat={1=base v0.20,2=client 
>writeable ranges,3=default file layouts on dirs,4=dir inode in separate 
>object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds 
>uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
>legacy client fscid: 1
>
>Filesystem 'cephfs' (1)
>fs_name    cephfs
>epoch    1953
>flags    12
>created    2021-12-29T14:01:39.824756+0000
>modified    2021-12-30T13:37:30.470750+0000
>tableserver    0
>root    0
>session_timeout    60
>session_autoclose    300
>max_file_size    1099511627776
>required_client_features    {}
>last_failure    0
>last_failure_osd_epoch    219687
>compat    compat={},rocompat={},incompat={1=base v0.20,2=client 
>writeable ranges,3=default file layouts on dirs,4=dir inode in separate 
>object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds 
>uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
>max_mds    1
>in    0
>up    {}
>failed
>damaged
>stopped    1
>data_pools    [14]
>metadata_pool    13
>inline_data    disabled
>balancer
>standby_count_wanted    0
>
>
>Standby daemons:
>
>[mds.cephfs.ceph-mds3.mfmaeh{-1:21505137} state up:standby seq 1 
>join_fscid=1 addr 
>[v2:192.168.15.207:6800/4170490328,v1:192.168.15.207:6801/4170490328] 
>compat {c=[1],r=[1],i=[77f]}]
>dumped fsmap epoch 1953
>
>
>
>
>
>Please, when you have returned from celebrating the new year, do you
>have any ideas that might help me?
>
>
>Happy new year!
>
>Javier.-
>
>
>
>
>[0] 
>https://forum.proxmox.com/threads/ceph-16-2-6-cephfs-failed-after-upgrade-from-16-2-5.97742/
>[1] 
>https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KQ5A5OWRIUEOJBC7VILBGDIKPQGJQIWN/
>
>
>On 28/12/21 at 15:02, Tecnologia Charne.Net wrote:
>> Today, I upgraded from Pacific 16.2.6 to 16.2.7.
>> Since some items in the dashboard weren't enabled
>> (Cluster->Hosts->Versions, for example) because I didn't have cephadm
>> enabled, I activated it and adopted every mon, mgr and osd in the
>> cluster, following the instructions in
>> https://docs.ceph.com/en/pacific/cephadm/adoption/
>>
>> Everything was fine until step 10: Redeploy MDS daemons....
>>
>>
>> I have now:
>>
>> # ceph health detail
>> HEALTH_ERR 1 filesystem is degraded; 1 filesystem has a failed mds 
>> daemon; 1 filesystem is offline
>> [WRN] FS_DEGRADED: 1 filesystem is degraded
>>     fs cephfs is degraded
>> [WRN] FS_WITH_FAILED_MDS: 1 filesystem has a failed mds daemon
>>     fs cephfs has 2 failed mdss
>> [ERR] MDS_ALL_DOWN: 1 filesystem is offline
>>     fs cephfs is offline because no MDS is active for it.
>>
>>
>> # ceph fs status
>> cephfs - 0 clients
>> ======
>> RANK  STATE   MDS  ACTIVITY  DNS  INOS  DIRS  CAPS
>>  0    failed
>>  1    failed
>>       POOL         TYPE     USED  AVAIL
>> cephfs_metadata  metadata  1344M  20.8T
>>   cephfs_data      data     530G  8523G
>>    STANDBY MDS
>> cephfs.mon1.qhueuv
>> cephfs.mon2.zrswzj
>> cephfs.mon3.cusflb
>> MDS version: ceph version 16.2.5-387-g7282d81d 
>> (7282d81d2c500b5b0e929c07971b72444c6ac424) pacific (stable)
>>
>>
>> # ceph fs dump
>> e1777
>> enable_multiple, ever_enabled_multiple: 1,1
>> default compat: compat={},rocompat={},incompat={1=base v0.20,2=client 
>> writeable ranges,3=default file layouts on dirs,4=dir inode in 
>> separate object,5=mds uses versioned encoding,6=dirfrag is stored in 
>> omap,7=mds uses inline data,8=no anchor table,9=file layout 
>> v2,10=snaprealm v2}
>> legacy client fscid: 1
>>
>> Filesystem 'cephfs' (1)
>> fs_name  cephfs
>> epoch  1776
>> flags  12
>> created  2019-07-03T14:11:34.215467+0000
>> modified  2021-12-28T17:42:18.197012+0000
>> tableserver  0
>> root  0
>> session_timeout  60
>> session_autoclose  300
>> max_file_size  1099511627776
>> required_client_features  {}
>> last_failure  0
>> last_failure_osd_epoch  218775
>> compat  compat={},rocompat={},incompat={1=base v0.20,2=client 
>> writeable ranges,3=default file layouts on dirs,4=dir inode in 
>> separate object,5=mds uses versioned encoding,6=dirfrag is stored in 
>> omap,7=mds uses inline data,8=no anchor table,9=file layout 
>> v2,10=snaprealm v2}
>> max_mds  1
>> in  0,1
>> up  {}
>> failed  0,1
>> damaged
>> stopped
>> data_pools  [14]
>> metadata_pool  13
>> inline_data  disabled
>> balancer
>> standby_count_wanted  1
>>
>>
>> Standby daemons:
>>
>> [mds.cephfs.mon1.qhueuv{-1:21378633} state up:standby seq 1 
>> join_fscid=1 addr 
>> [v2:192.168.15.200:6800/3327091876,v1:192.168.15.200:6801/3327091876] 
>> compat {c=[1],r=[1],i=[77f]}]
>> [mds.cephfs.mon2.zrswzj{-1:21384283} state up:standby seq 1 
>> join_fscid=1 addr 
>> [v2:192.168.15.203:6800/838079265,v1:192.168.15.203:6801/838079265] 
>> compat {c=[1],r=[1],i=[77f]}]
>> [mds.cephfs.mon3.cusflb{-1:21393659} state up:standby seq 1 
>> join_fscid=1 addr 
>> [v2:192.168.15.205:6800/1887883707,v1:192.168.15.205:6801/1887883707] 
>> compat {c=[1],r=[1],i=[77f]}]
>> dumped fsmap epoch 1777
>>
>>
>> Any clue will be most welcome!
>>
>>
>> Thanks in advance.
>>
>>
>> Javier.-
>>
>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
