List:       linux-kernel
Subject:    [RFC PATCH v1 06/13] mm: add lru_[un]lock_all APIs
From:       daniel.m.jordan@oracle.com
Date:       2018-01-31 23:04:06
Message-ID: 20180131230413.27653-7-daniel.m.jordan@oracle.com

Add heavyweight locking APIs for the few cases where a thread needs
exclusive access to an LRU list.  These take lru_lock as well as every
lock in lru_batch_locks.

At first this API will be used often, in scaffolding code, to ease the
transition from lru_lock to the batch locking scheme.  Later it will
rarely be needed.
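
For illustration, a hypothetical caller (not part of this series) that
needs a stable view of a node's LRU lists would pair the calls like so:

	static void example_lru_walk(struct pglist_data *pgdat)
	{
		unsigned long flags;

		/* IRQs are now off; all batch locks and lru_lock are held. */
		lru_lock_all(pgdat, &flags);

		/* ... exclusive access to pgdat's LRU lists here ... */

		lru_unlock_all(pgdat, &flags);
	}

A caller already running with IRQs disabled can pass NULL instead of
&flags to skip saving and restoring the IRQ state.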

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 include/linux/mm_inline.h | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ec8b966a1c76..1f1657c75b1b 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -178,6 +178,44 @@ static __always_inline void move_page_to_lru_list_tail(struct page *page,
 	__add_page_to_lru_list_tail(page, lruvec, lru);
 }
 
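+/*
+ * Take every lock in lru_batch_locks in ascending order, then lru_lock
+ * itself, giving the caller exclusive access to the node's LRU lists.
+ * Pass @flags to save/restore IRQ state, or NULL to just toggle IRQs.
+ */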
+static __always_inline void lru_lock_all(struct pglist_data *pgdat,
+					 unsigned long *flags)
+{
+	size_t i;
+
+	if (flags)
+		local_irq_save(*flags);
+	else
+		local_irq_disable();
+
+	for (i = 0; i < NUM_LRU_BATCH_LOCKS; ++i)
+		spin_lock(&pgdat->lru_batch_locks[i].lock);
+
+	spin_lock(&pgdat->lru_lock);
+}
+
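+/* Drop the locks taken by lru_lock_all(), in reverse order. */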
+static __always_inline void lru_unlock_all(struct pglist_data *pgdat,
+					   unsigned long *flags)
+{
+	int i;
+
+	spin_unlock(&pgdat->lru_lock);
+
+	for (i = NUM_LRU_BATCH_LOCKS - 1; i >= 0; --i)
+		spin_unlock(&pgdat->lru_batch_locks[i].lock);
+
+	if (flags)
+		local_irq_restore(*flags);
+	else
+		local_irq_enable();
+}
+
 /**
  * page_lru_base_type - which LRU list type should a page be on?
  * @page: the page to test
-- 
2.16.1
