
List:       cassandra-dev
Subject:    Re: Cassandra Compactions Speed increase
From:       Ashish Pandey <apandey092@gmail.com>
Date:       2017-09-18 1:29:50
Message-ID: CAPAUiP4aX+dhK2jBqpJ9uENtgomU0q=ncf7vCD=jpmuH-hYzTA@mail.gmail.com


Thanks! I will try changing concurrent compactors and provide more info for
further help after trying it out. I agree that I haven't provided enough
info to get help :).
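
For reference, here is roughly what I plan to check and adjust; the values
below are only placeholders for our cluster, not recommendations:

    # cassandra.yaml (a yaml change needs a node restart to take effect)
    concurrent_compactors: 4
    compaction_throughput_mb_per_sec: 64    # default is 16

    # compaction throughput can also be checked/changed at runtime
    nodetool getcompactionthroughput
    nodetool setcompactionthroughput 64     # MB/s; 0 disables throttling
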
Performance is disk bound (I think the reason is that L0 uses STCS); host
stats don't show any significant CPU or load. Read latency is impacted
(tp99 > 200 ms) under high throughput. Once compaction settles, read latency
improves (free disk increases, disk utilization during reads drops).
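
For completeness, these are roughly the checks I am looking at on the hosts
(keyspace/table are placeholders):

    nodetool compactionstats -H                   # pending tasks and remaining bytes
    nodetool tpstats                              # pending/blocked CompactionExecutor
    nodetool tablehistograms <keyspace> <table>   # read latency and SSTables per read
    iostat -x 1                                   # disk %util and await
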




On Fri, Sep 15, 2017 at 9:43 PM, Jeff Jirsa <jjirsa@gmail.com> wrote:

> Check your settings for concurrent compactors (in your yaml) and
> throughput (can check and set with nodetool) - both of those can be adjusted
>
> Beyond that you need to give a bit more info to help us help you - are you
> CPU bound, disk bound, is the read load the limiter or is compaction? We
> need graphs or thread pool stats or something - there are a hundred or a
> thousand things to tune, and you've given almost no info for us to tell you
> which to look at first.
>
>
> --
> Jeff Jirsa
>
>
> > On Sep 15, 2017, at 9:29 PM, Ashish Pandey <apandey092@gmail.com> wrote:
> >
> > I haven't throttled Cassandra compactions. My assumption is that because
> > we are running a lot of batch jobs which update or delete partition keys,
> > there are too many compactions to finish in time, which I see in the
> > nodetool compactionstats pending tasks.
> >
> > Multiple batch writes (and deletes) of partition keys. Each batch can
> > have overlapping partition keys. High-throughput reads start a couple of
> > hours after the batch writes.
> >
> >> On Fri, Sep 15, 2017 at 9:07 PM, Jeff Jirsa <jjirsa@gmail.com> wrote:
> >>
> >> Do you have compaction throttled now?
> >> What exactly takes 6-8 hours?
> >> Are you writing all the keys at one time, then reading, then deleting them
> >> all, or is it a constant stream of writes and reads?
> >>
> >>
> >> --
> >> Jeff Jirsa
> >>
> >>
> >>> On Sep 15, 2017, at 8:40 PM, Ashish Pandey <apandey092@gmail.com> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> We are using Cassandra 3.9 and experiencing read latency issues.
> >>>
> >>> We are using LeveledCompactionStrategy after experiencing read latency
> >>> issues with SizeTieredCompactionStrategy. For our use case, table keys
> >>> get batch updated and column values for those keys get batch deleted;
> >>> we write to almost the entire set of keys every day.
> >>> After switching to LeveledCompactionStrategy, read latency for the
> >>> table has improved, but we see that compaction takes a lot of time
> >>> (6-8 hrs), and if the reads have to happen before compactions are
> >>> reasonably done, latency is severely impacted. Is there a way to
> >>> increase the speed of compactions? System resources are available in
> >>> our use case for compaction to run faster after the batch updates and
> >>> deletes are run.
> >>>
> >>> This article
> >>> https://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets
> >>> suggests this is an anti-pattern for Cassandra for such use cases, but
> >>> I would like to try whether an increase in compaction speed could help.
> >>>
> >>> Thanks very much, appreciate responses.
> >>>
> >>> Ashish
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: dev-unsubscribe@cassandra.apache.org
> >> For additional commands, e-mail: dev-help@cassandra.apache.org
> >>
> >>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@cassandra.apache.org
> For additional commands, e-mail: dev-help@cassandra.apache.org
>
>

