List: linux-admin
Subject: Re: Files per directory
From: Glynn Clements <glynn () gclements ! plus ! com>
Date: 2008-07-10 20:55:55
Message-ID: 18550.30555.84966.977049 () cerise ! gclements ! plus ! com
Yuri Csapo wrote:
> > Is that going to cause performance issues? The current file system is
> > ext3. Would anyone suggest a limit I should set for the maximum, or
> > say if they think 10K files is acceptable?
>
> I'm no expert but the answer is probably: "depends on the application."
>
> As far as I know there's no limit to the number of files in a directory
> currently in ext3. There IS a limit to the number of files (actually
> inodes) in the whole filesystem, which is a completely different thing.
ext3 also caps each inode at 32000 hard links, which means a directory
can't hold more than 31998 subdirectories: each child's ".." entry is a
link back to the parent, and the parent's own "." entry plus its name in
its own parent account for the remaining two.
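That link-count accounting can be seen directly with stat(). A minimal sketch, assuming a filesystem with the classic ext-family semantics (some filesystems, e.g. btrfs, report st_nlink = 1 for directories instead); the directory names are arbitrary:

```python
import os
import tempfile

# A directory's link count is 2 ("." plus its entry in the parent)
# + one ".." link per subdirectory; the 32000-link cap is what yields
# the 31998-subdirectory limit on ext3.
with tempfile.TemporaryDirectory() as d:
    for name in ("a", "b", "c"):
        os.mkdir(os.path.join(d, name))
    subdirs = sum(
        os.path.isdir(os.path.join(d, e)) for e in os.listdir(d)
    )
    nlink = os.stat(d).st_nlink
    print(f"{subdirs} subdirectories, st_nlink={nlink}")
```

On ext2/3/4 the printed st_nlink will be 2 + the subdirectory count.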
However, the original poster wasn't asking about hard limits, but about
efficiency.
If the filesystem wasn't created with the dir_index option, having
thousands of files in a directory will be a major performance problem:
every name lookup scans the directory linearly, so lookup cost grows
with the number of entries.
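You can check whether a filesystem has dir_index with tune2fs. A sketch, assuming e2fsprogs is installed; it builds a throwaway ext3 image in a regular file so no root or real device is needed (the image path and size are arbitrary):

```shell
# Create a small ext3 image and list its feature flags;
# modern mke2fs enables dir_index by default for ext3.
truncate -s 16M /tmp/ext3-demo.img
mke2fs -q -F -t ext3 /tmp/ext3-demo.img
tune2fs -l /tmp/ext3-demo.img | grep -i features

# On a real, unmounted filesystem, dir_index can be enabled after the fact:
#   tune2fs -O dir_index /dev/sdXN
#   e2fsck -fD /dev/sdXN   # -D rebuilds (and thus indexes) directories
```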
Even with the dir_index option, large directories could still be an
issue. I think you would really need to run tests against your actual
workload to see exactly how much of one.
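One rough way to run such a test, as a sketch: populate a directory and time name lookups. The file count and naming scheme are arbitrary, and /tmp may be on a different filesystem than the one you care about, so point the script at a directory on the target filesystem and scale N to match the real workload:

```python
import os
import random
import tempfile
import time

N = 5000  # arbitrary; raise this toward your real file count
with tempfile.TemporaryDirectory() as d:
    # Create N empty files in one directory.
    for i in range(N):
        open(os.path.join(d, f"file{i:05d}"), "w").close()

    # Time stat() on a random sample of names (forces directory lookups).
    sample = random.sample(range(N), 200)
    t0 = time.perf_counter()
    for i in sample:
        os.stat(os.path.join(d, f"file{i:05d}"))
    elapsed = time.perf_counter() - t0
    print(f"{len(sample)} lookups in {elapsed * 1e3:.2f} ms")
```

Comparing the per-lookup time at different values of N shows whether lookups degrade as the directory grows.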
OTOH, even if you keep the directories small, a database consisting of
many small files will be much slower than e.g. BerkeleyDB or DBM.
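For comparison, a minimal sketch of the single-indexed-file approach using Python's standard-library dbm module (the keys, values, and record count here are invented placeholders):

```python
import dbm
import os
import tempfile

# Store many tiny records in one indexed database file
# instead of one filesystem file per record.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "records.db")
    with dbm.open(path, "c") as db:        # "c": create if missing
        for i in range(1000):
            db[f"key{i}"] = f"value for record {i}"
    with dbm.open(path, "r") as db:        # reopen read-only
        fetched = db[b"key42"].decode()
print(fetched)  # -> value for record 42
```

All 1000 records live in a single file, so lookups go through the database's own index rather than thousands of directory entries.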
--
Glynn Clements <glynn@gclements.plus.com>
--
To unsubscribe from this list: send the line "unsubscribe linux-admin" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html