List: scst-devel
Subject: Re: [Scst-devel] Blocked kernel threads under kernel 6.3
From: Sietse van Zanen <sietse () wizdom ! nu>
Date: 2023-06-28 8:45:34
Message-ID: fef6651bffe9493fb0c026a5f37dfc03 () wizdom ! nu
Hi Gleb,
That looks to be much better:
[sietse@san ~]$ uptime
10:32:21 up 13:39, 2 users, load average: 0.00, 0.00, 0.00
-Sietse
-----Original Message-----
From: Gleb Chesnokov <gleb.chesnokov@scst.dev>
Sent: Tuesday, June 27, 2023 4:21 PM
To: Sietse van Zanen <sietse@wizdom.nu>; scst-devel@lists.sourceforge.net
Subject: Re: [Scst-devel] Blocked kernel threads under kernel 6.3
Hi Sietse,
On 6/27/23 15:20, Sietse van Zanen wrote:
> Looking at the code, I get the idea that this may actually be expected behavior?
>
> If I trace scst_init_thread():
>
> while (!kthread_should_stop()) {
>     scst_wait_event_lock_irq(scst_init_cmd_list_waitQ,
>                              test_init_cmd_list(),
>                              scst_init_lock);
>     scst_do_job_init();
> }
>
> #define scst_wait_event_lock_irq(wq_head, condition, lock)		\
> do {									\
>     if (condition)							\
>         break;								\
>     __scst_wait_event_lock_irq(wq_head, condition, lock);		\
> } while (0)
>
> #define __scst_wait_event_lock_irq(wq_head, condition, lock)		\
>     (void)___scst_wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0,
>
> static inline bool test_init_cmd_list(void)
> {
>     return (!list_empty(&scst_init_cmd_list) &&
>             !scst_activity_suspended()) ||
>            unlikely(kthread_should_stop()) ||
>            (scst_init_poll_cnt > 0);
> }
>
> So, if I understand correctly, the scst_initd thread basically waits in an
> uninterruptible state until the command queue is non-empty and scst is not
> suspended, or until it is being shut down.
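> This matches how the load average works in general: every task sleeping in
> TASK_UNINTERRUPTIBLE (state D) adds one to the load average, even though it
> uses no CPU. The usual remedy for kthreads that legitimately sleep this way
> is TASK_IDLE (TASK_UNINTERRUPTIBLE | TASK_NOLOAD), which sleeps identically
> but is excluded from the load average. A minimal sketch of the pattern, with
> hypothetical names (my_waitq, work_available(), do_work()) and not
> necessarily what the actual fix looks like:
>
> ```c
> #include <linux/kthread.h>
> #include <linux/wait.h>
>
> static DECLARE_WAIT_QUEUE_HEAD(my_waitq);
> static bool work_available(void);	/* hypothetical */
> static void do_work(void);		/* hypothetical */
>
> static int worker_fn(void *arg)
> {
> 	while (!kthread_should_stop()) {
> 		/* Sleeps in state D and is counted in the load average:
> 		 * wait_event(my_waitq,
> 		 *            work_available() || kthread_should_stop());
> 		 */
>
> 		/* Same uninterruptible sleep, but TASK_NOLOAD keeps it
> 		 * out of the load average: */
> 		wait_event_idle(my_waitq,
> 				work_available() || kthread_should_stop());
>
> 		if (work_available())
> 			do_work();
> 	}
> 	return 0;
> }
> ```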
> -Sietse
>
> *From:* Sietse van Zanen <sietse@wizdom.nu>
> *Sent:* Tuesday, June 27, 2023 11:30 AM
> *To:* scst-devel@lists.sourceforge.net
> *Subject:* [Scst-devel] Blocked kernel threads under kernel 6.3
>
> Hi,
>
> When running under kernel 6.3 there are 9 scst kernel threads that are
> permanently blocked instead of sleeping, leading to a load average of 9.0 on
> an idle system.
> [sietse@san system]$ uptime
> 10:11:34 up 13:07, 1 user, load average: 9.01, 9.01, 9.00
> [sietse@san system]$ ps axo pid,state,args | grep D
> PID S COMMAND
> 136 D [scst_uid]
> 139 D [scst_initd]
> 140 D [scsi_tm]
> 141 D [scst_mgmtd]
> 142 D [scst_usr_cleanupd]
> 143 D [iscsird0_0]
> 144 D [iscsird0_1]
> 145 D [iscsiwr0_0]
> 146 D [iscsiwr0_1]
>
> This is in an initramfs without any user space processes running. It seems
> that as soon as these kernel threads are started, they block. I am testing
> with the 6.3 kernel, so I haven't actually tried to configure scst yet.
>
> Jun 27 11:09:32 san kernel: sysrq: Show Blocked State
> Jun 27 11:09:32 san kernel: task:scst_uid state:D stack:0 pid:137 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? set_next_entity+0xc2/0x100
> Jun 27 11:09:32 san kernel: ? set_next_task_fair+0x74/0x1c0
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: sysfs_work_thread_fn+0x204/0x350
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? scst_sysfs_queue_wait_work+0x1b0/0x1b0
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:scst_initd state:D stack:0 pid:144 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? set_next_entity+0xc2/0x100
> Jun 27 11:09:32 san kernel: ? set_next_task_fair+0x74/0x1c0
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: scst_init_thread+0x226/0x370
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? scst_lookup_tgt_dev+0x30/0x30
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:scsi_tm state:D stack:0 pid:145 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? set_next_entity+0xc2/0x100
> Jun 27 11:09:32 san kernel: ? set_next_task_fair+0x74/0x1c0
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: scst_tm_thread+0x223/0x1850
> Jun 27 11:09:32 san kernel: ? check_preempt_curr+0x59/0xd0
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock_irqsave+0x39/0x80
> Jun 27 11:09:32 san kernel: ? scst_clear_aca+0xe0/0xe0
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:scst_mgmtd state:D stack:0 pid:146 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? set_next_entity+0xc2/0x100
> Jun 27 11:09:32 san kernel: ? set_next_task_fair+0x74/0x1c0
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: scst_global_mgmt_thread+0x194/0x3b0
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? scst_unregister_session_non_gpl+0x10/0x10
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:scst_usr_cleanu state:D stack:0 pid:147 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: ? dev_usr_parse+0x10/0x10
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? _raw_spin_unlock_irqrestore+0x16/0x40
> Jun 27 11:09:32 san kernel: ? debug_print_with_prefix+0x1e1/0x220
> Jun 27 11:09:32 san kernel: ? _raw_spin_unlock+0xd/0x30
> Jun 27 11:09:32 san kernel: ? finish_task_switch+0xc4/0x320
> Jun 27 11:09:32 san kernel: ? dev_usr_parse+0x10/0x10
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: dev_user_cleanup_thread+0x20b/0x560
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? dev_usr_parse+0x10/0x10
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:iscsird0_0 state:D stack:0 pid:148 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? _raw_spin_unlock_irqrestore+0x16/0x40
> Jun 27 11:09:32 san kernel: ? debug_print_with_prefix+0x1e1/0x220
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock_irqsave+0x39/0x80
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock+0x14/0x40
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: istrd+0x244/0xa80
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? iscsi_get_send_cmnd+0x90/0x90
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:iscsird0_1 state:D stack:0 pid:149 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? _raw_spin_unlock_irqrestore+0x16/0x40
> Jun 27 11:09:32 san kernel: ? debug_print_with_prefix+0x1e1/0x220
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock_irqsave+0x39/0x80
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock+0x14/0x40
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: istrd+0x244/0xa80
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? iscsi_get_send_cmnd+0x90/0x90
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:iscsiwr0_0 state:D stack:0 pid:150 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? _raw_spin_unlock_irqrestore+0x16/0x40
> Jun 27 11:09:32 san kernel: ? debug_print_with_prefix+0x1e1/0x220
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock_irqsave+0x39/0x80
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock+0x14/0x40
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: istwr+0x243/0x330
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? iscsi_send+0xb20/0xb20
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> Jun 27 11:09:32 san kernel: task:iscsiwr0_1 state:D stack:0 pid:151 ppid:2 flags:0x00004000
> Jun 27 11:09:32 san kernel: Call Trace:
> Jun 27 11:09:32 san kernel: <TASK>
> Jun 27 11:09:32 san kernel: __schedule+0x618/0x13a0
> Jun 27 11:09:32 san kernel: ? _raw_spin_unlock_irqrestore+0x16/0x40
> Jun 27 11:09:32 san kernel: ? debug_print_with_prefix+0x1e1/0x220
> Jun 27 11:09:32 san kernel: ? preempt_count_add+0x5e/0xb0
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock_irqsave+0x39/0x80
> Jun 27 11:09:32 san kernel: ? _raw_spin_lock+0x14/0x40
> Jun 27 11:09:32 san kernel: schedule+0x50/0x90
> Jun 27 11:09:32 san kernel: istwr+0x243/0x330
> Jun 27 11:09:32 san kernel: ? wake_bit_function+0x60/0x60
> Jun 27 11:09:32 san kernel: ? iscsi_send+0xb20/0xb20
> Jun 27 11:09:32 san kernel: kthread+0xe1/0x100
> Jun 27 11:09:32 san kernel: ? kthread_blkcg+0x30/0x30
> Jun 27 11:09:32 san kernel: ret_from_fork+0x22/0x30
> Jun 27 11:09:32 san kernel: </TASK>
>
> -Sietse
Thank you for the report!
A fix candidate has been merged into the master branch; could you retest the
issue using it?
Thanks,
Gleb
_______________________________________________
Scst-devel mailing list
https://lists.sourceforge.net/lists/listinfo/scst-devel