List: hadoop-user
Subject: Underlying file system Block size
From: "Naama Kraus" <naamakraus@gmail.com>
Date: 2008-06-30 19:10:08
Message-ID: 643aa4870806301210tbd9a947x737c8e1da3624a5a@mail.gmail.com
Hi All,
To my knowledge, the HDFS block size is 64 MB - fairly large. Is this a
requirement on the underlying file system, if one wishes to implement Hadoop
on top of it? Or is there a way to use a file system that supports a smaller
block size, such as 1 MB or even less? What is the case for the existing
non-HDFS file system implementations for Hadoop (such as S3 and KFS)?
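For reference, the 64 MB figure is only the default: in Hadoop the block size
is a per-file, client-side parameter rather than a property inherited from any
underlying file system. Assuming a Hadoop release of that era (pre-0.21), a
smaller block size could be requested through the `dfs.block.size` property, a
sketch of which might look like:

```
<!-- hadoop-site.xml fragment (illustrative, not from the original post):
     dfs.block.size is the pre-0.21 property name; the value is in bytes,
     here 1 MB instead of the 64 MB default -->
<property>
  <name>dfs.block.size</name>
  <value>1048576</value>
</property>
```

The same value can also be passed per file through the `blockSize` argument of
`FileSystem.create`, so a small block size does not require any support from
the host file system.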
Thanks for any input,
Naama
--
oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo
00 oo 00 oo
"If you want your children to be intelligent, read them fairy tales. If you
want them to be more intelligent, read them more fairy tales." (Albert
Einstein)