List:       linux-kernel
Subject:    Re: [PATCH] Making i/dhash_entries cmdline work as it used to.
From:       David Howells <dhowells () redhat ! com>
Date:       2004-07-13 12:33:40
Message-ID: 4348.1089722020 () redhat ! com

> Actually, it doesn't look like we need MAX_SYS_HASH_TABLE_ORDER at all, so
> I'm resending the patch, which now limits the max size of a hash table to
> 1/16 of the total memory pages.  This would keep people from doing dangerous
> things with the hash_entries options.

That's an enormous limit. Consider the fact that you will have a multiplicity
of such hash tables, each potentially eating 1/16th of your total memory
(remember, the bootmem allocator's only real limit is how big a chunk of
memory it can allocate in one go).
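
To put that 1/16th in concrete terms, here's a rough standalone model of the
cap being described (just a sketch; the names and the exact clamping rule are
my assumptions, not the actual patch):

	/*
	 * Standalone model of the cap being discussed: clamp a requested
	 * hash table size to 1/16th of total memory.  The identifiers and
	 * the exact rule here are assumptions, not the patch itself.
	 */
	#include <stdio.h>

	#define PAGE_SIZE 4096ULL

	static unsigned long long clamp_hash_pages(unsigned long long requested,
						   unsigned long long total_pages)
	{
		unsigned long long max_pages = total_pages / 16; /* 1/16th of RAM */

		return requested > max_pages ? max_pages : requested;
	}

	int main(void)
	{
		unsigned long long total = (64ULL << 30) / PAGE_SIZE; /* 64GB box */
		unsigned long long want  = (8ULL << 30) / PAGE_SIZE;  /* ask for 8GB */
		unsigned long long got   = clamp_hash_pages(want, total);

		printf("capped to %llu pages (%llu MB)\n",
		       got, got * PAGE_SIZE >> 20);
		return 0;
	}

Each such table hits its 4GB cap independently on a 64GB box, which is how two
of them commit 8GB between them.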

Do you have numbers to show that committing an eighth of your memory (8GB if
you have 64GB, with two hash tables at 4GB apiece) to hash tables is worth it?
It almost certainly isn't.

For instance, the dcache:

	(*) Assuming 64GB of memory, of which 1/16th allocated to the dcache
	    hash table and 1/16th allocated to the inode hash table.

	(*) dcache hash buckets are 16 bytes apiece on PPC64.

	(*) 4GB hash table gives you 256 Meg hash buckets.

		(gdb) p ((4ULL<<30)/16) / 1024 / 1024
		$1 = 256

	(*) The dentry struct on PPC64 is approx 256 bytes in size.

	(*) The remaining 56GB could provide 224 Meg dentries if allocated to
	    nothing else.

		(gdb) p (56ULL<<30) / 256 / 1024 / 1024
		$2 = 224

	(*) So you'd end up with more buckets in the hash table than dentry
	    structures you could possibly allocate (the sketch below just
	    re-runs this arithmetic).
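
Here's the same arithmetic as a throwaway program, using the figures above
(64GB of RAM, 16-byte buckets, ~256-byte dentries); it simply reproduces the
gdb calculations:

	/* Re-runs the bucket vs. dentry arithmetic for a 64GB PPC64 box. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long long total_mem   = 64ULL << 30;    /* 64GB of RAM */
		unsigned long long hash_size   = total_mem / 16; /* 4GB dcache hash table */
		unsigned long long bucket_size = 16;             /* bytes per hash bucket */
		unsigned long long dentry_size = 256;            /* approx. dentry size on PPC64 */
		unsigned long long left = total_mem - 2 * hash_size; /* 56GB after both tables */

		unsigned long long buckets  = hash_size / bucket_size;
		unsigned long long dentries = left / dentry_size;

		printf("hash buckets: %lluM\n", buckets >> 20);  /* 256M */
		printf("max dentries: %lluM\n", dentries >> 20); /* 224M */
		return 0;
	}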

David
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/