
List:       hadoop-dev
Subject:    Re: Question about HDFS allocations
From:       "Billy" <sales () pearsonwholesale ! com>
Date:       2007-12-31 22:06:08
Message-ID: flbp4m$psr$1 () ger ! gmane ! org

There is also a script for this, but it's not in a release yet; it's in trunk:

start-balancer.sh

It's in the bin folder.

This is from the source code:
* To start:
*   bin/start-balancer.sh [-threshold <threshold>]
*   Example: bin/start-balancer.sh
*     start the balancer with a default threshold of 10%
*   bin/start-balancer.sh -threshold 5
*     start the balancer with a threshold of 5%
* To stop:
*   bin/stop-balancer.sh
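
For example, from the Hadoop install directory (rough sketch; the log name and
location are just my assumption based on how the other daemon scripts log):

bin/start-balancer.sh -threshold 5     # nodes within 5% of the cluster average count as balanced
tail -f logs/hadoop-*-balancer-*.log   # watch its progress
bin/stop-balancer.sh                   # stop it early; otherwise it exits on its own once balanced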

Billy

"Bryan Duxbury" <bryan@rapleaf.com> wrote in 
message news:DF4AA7D7-ACD4-42B6-B9E3-7F3779CEF5DE@rapleaf.com...
> We've been doing some testing with HBase, and one of the problems we ran
> into was that our machines are not homogeneous in terms of disk capacity.
> A few of our machines only have 80gb drives, where the rest have 250s. As
> such, as the equal distribution of blocks went on, these smaller machines
> filled up first, completely overloading the drives, and came to a
> crashing halt. Since one of these machines was also the namenode, it
> broke the rest of the cluster.
>
> What I'm wondering is if there should be a way to tell HDFS to only use
> something like 80% of available disk space before considering a machine
> full. Would this be a useful feature, or should we approach the problem
> from another angle, like using a separate HDFS data partition?
>
> -Bryan
> 
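
On capping how full a datanode gets: one knob worth a look (going from memory,
so check hadoop-default.xml in your build) is dfs.datanode.du.reserved in
hadoop-site.xml. It tells each datanode to keep that many bytes per volume free
for non-DFS use, e.g.:

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- example value only: reserve ~10GB per volume -->
  <value>10737418240</value>
</property>

That won't spread blocks around like the balancer does, but it should keep the
small disks from being filled right to the rim.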



