List:       linux-mm-commits
Subject:    + mm-kmemleak-move-up-cond_resched-call-in-page-scanning-loop.patch added to mm-unstable branch
From:       Andrew Morton <akpm@linux-foundation.org>
Date:       2023-08-25 17:45:23
Message-ID: 20230825174524.1FFD6C433C8@smtp.kernel.org


The patch titled
     Subject: mm/kmemleak: move up cond_resched() call in page scanning loop
has been added to the -mm mm-unstable branch.  Its filename is
     mm-kmemleak-move-up-cond_resched-call-in-page-scanning-loop.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-kmemleak-move-up-cond_resched-call-in-page-scanning-loop.patch


This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days.

------------------------------------------------------
From: Waiman Long <longman@redhat.com>
Subject: mm/kmemleak: move up cond_resched() call in page scanning loop
Date: Fri, 25 Aug 2023 12:49:47 -0400

Commit bde5f6bc68db ("kmemleak: add scheduling point to kmemleak_scan()")
added a cond_resched() call to the struct page scanning loop to prevent
soft lockups from happening.  However, a soft lockup can still happen in
that loop in some corner cases: when the pages that satisfy the
"!(pfn & 63)" check are all skipped by one of the earlier checks in the
loop, the cond_resched() call at the bottom of the loop body is never
reached.

Fix this corner case by moving the cond_resched() check up to the top of
the loop body so that it is called once every 64 pages unconditionally.
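
For illustration only (not part of the patch): a minimal userspace C
sketch of the scanning-loop pattern.  The helpers page_is_skipped() and
cond_resched_stub() are hypothetical stand-ins for the real kernel code;
the sketch only shows that the old bottom-of-loop placement can be
bypassed entirely when every pfn in a range hits a `continue`, while the
new top-of-loop placement still reschedules once every 64 pfns.

/*
 * Illustrative sketch only -- not the actual mm/kmemleak.c code.
 * Simulates the page scanning loop to compare the two placements
 * of cond_resched().
 */
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-in: pretend every page in the range is skipped. */
static bool page_is_skipped(unsigned long pfn)
{
	(void)pfn;
	return true;
}

static unsigned long resched_calls;

/* Stand-in for cond_resched(); just counts how often it is reached. */
static void cond_resched_stub(void)
{
	resched_calls++;
}

static void scan_range(unsigned long start_pfn, unsigned long end_pfn,
		       bool resched_first)
{
	unsigned long pfn;

	resched_calls = 0;
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (resched_first && !(pfn & 63))
			cond_resched_stub();	/* new placement: always reached */

		if (page_is_skipped(pfn))
			continue;		/* skips the rest of the loop body */

		if (!resched_first && !(pfn & 63))
			cond_resched_stub();	/* old placement: after the checks */
	}
	printf("resched_first=%d -> cond_resched() called %lu times\n",
	       resched_first, resched_calls);
}

int main(void)
{
	scan_range(0, 1UL << 20, false);	/* old placement */
	scan_range(0, 1UL << 20, true);		/* new placement */
	return 0;
}

Over this 1M-pfn range where every page is skipped, the sketch reports
zero cond_resched() calls for the old placement and 16384 (one per 64
pfns) for the new one.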

Link: https://lkml.kernel.org/r/20230825164947.1317981-1-longman@redhat.com
Fixes: bde5f6bc68db ("kmemleak: add scheduling point to kmemleak_scan()")
Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/kmemleak.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/mm/kmemleak.c~mm-kmemleak-move-up-cond_resched-call-in-page-scanning-loop
+++ a/mm/kmemleak.c
@@ -1584,6 +1584,9 @@ static void kmemleak_scan(void)
 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 			struct page *page = pfn_to_online_page(pfn);
 
+			if (!(pfn & 63))
+				cond_resched();
+
 			if (!page)
 				continue;
 
@@ -1594,8 +1597,6 @@ static void kmemleak_scan(void)
 			if (page_count(page) == 0)
 				continue;
 			scan_block(page, page + 1, NULL);
-			if (!(pfn & 63))
-				cond_resched();
 		}
 	}
 	put_online_mems();
_

Patches currently in -mm which might be from longman@redhat.com are

mm-kmemleak-move-up-cond_resched-call-in-page-scanning-loop.patch

