List:       dwarves
Subject:    [PATCH v2 dwarves 6/8] dwarf_loader: increase the size of lookup hash map
From:       Andrii Nakryiko <andrii () kernel ! org>
Date:       2020-10-08 23:39:58
Message-ID: 20201008234000.740660-7-andrii () kernel ! org

From: Andrii Nakryiko <andriin@fb.com>

One of the primary use cases for pahole is BTF deduplication during the
Linux kernel build. In that case, DWARF data containing more than 5 million
types is loaded, so using a hash map with a small number of buckets is quite
expensive due to hash collisions. This patch bumps the size of the hash map
and reduces the overhead of this part of the DWARF loading process.

This shaves off about 1 second out of about 20 seconds total for Linux BTF
dedup.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 dwarf_loader.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
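
(Not part of the patch.) A rough back-of-the-envelope sketch of why the
bucket count matters here, assuming a simple chained hash table over the
~5 million types mentioned in the commit message; the exact hashtags layout
in dwarf_loader.c may differ:

/*
 * With N entries spread over (1UL << bits) buckets, a chained hash
 * table walks roughly N >> bits entries per lookup on average.
 */
#include <stdio.h>

int main(void)
{
	unsigned long types = 5000000UL;   /* ~5M DWARF types (assumed, from the commit message) */
	unsigned int bits[] = { 8, 15 };   /* old and new HASHTAGS__BITS */

	for (int i = 0; i < 2; i++) {
		unsigned long buckets = 1UL << bits[i];
		printf("bits=%2u buckets=%6lu avg chain ~= %lu\n",
		       bits[i], buckets, types / buckets);
	}
	return 0;
}

With 8 bits that is roughly 19.5k entries per bucket versus about 150 with
15 bits, which is consistent with the ~1 second saving reported above.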

diff --git a/dwarf_loader.c b/dwarf_loader.c
index d3586aa5b0dd..0e6e4f741922 100644
--- a/dwarf_loader.c
+++ b/dwarf_loader.c
@@ -93,7 +93,7 @@ static void dwarf_tag__set_spec(struct dwarf_tag *dtag, dwarf_off_ref spec)
 	*(dwarf_off_ref *)(dtag + 1) = spec;
 }
 
-#define HASHTAGS__BITS 8
+#define HASHTAGS__BITS 15
 #define HASHTAGS__SIZE (1UL << HASHTAGS__BITS)
 
 #define obstack_chunk_alloc malloc
-- 
2.24.1
