
List:       cassandra-dev
Subject:    Re: BinaryMemtable
From:       Avinash Lakshman <avinash.lakshman@gmail.com>
Date:       2009-04-03 14:00:00
Message-ID: a06de5520904030700g5af3626dub29581554c6194d0@mail.gmail.com


Of course. Will send it out in a bit.
Avinash

On Fri, Apr 3, 2009 at 6:58 AM, Eric Evans <eevans@rackspace.com> wrote:

> Avinash Lakshman wrote:
>
>> That is what we used to load large amounts of data into Cassandra using
>> M/R. We loaded around 12TB of data from Hadoop into Cassandra before we
>> launched Inbox Search. This way we could do all the heavy lifting in
>> Hadoop and load data at practically network bandwidth (100 MB/sec).
>> Going the normal route with the same load chewed up a lot of CPU on the
>> Cassandra servers because of all the serialization/deserialization.
>>
>
> Avinash, do you have any sample code that demonstrates importing data
> this way?
>
> --
> Eric Evans
> eevans@rackspace.com
>


