
List:       sqlite-users
Subject:    Re: [sqlite] SQLite vs. Oracle (parallelized)
From:       python () bdurham ! com
Date:       2009-02-24 3:34:50
Message-ID: 1235446490.27369.1302032981 () webmail ! messagingengine ! com

Hi Billy,

>> Are there any plans to enhance SQLite to support some of Oracle's
>> parallel processing or partitioning capabilities?

> I realized that you're asking Richard, and not the peanut gallery, but 
> I figured I might as well ask out of curiosity: why do you want to 
> see these features in SQLite?

Most computers these days are multi-core. Oracle has done some excellent
work adding support for parallel processing of many database activities.
It would be great to see SQLite be able to exploit the extra processing
power of multiple cores. This is not a request for handling multiple
simultaneous transactions - it is a request to have single transactions
be processed across multiple cores.

Oracle also supports a rich mix of partitioning features. Partitioning
divides a table and/or index into logical subsets that enable
additional query optimizations. Partitioning is also useful for
quickly dropping a logical subset of data; e.g., if you've partitioned
data by month, you can drop your oldest month of data by dropping its
partition rather than performing a massive number of individual
deletes followed by a vacuum. Finally, partitions can also support
parallelizable tasks such as loading large data sets (each partition
can be loaded and optionally indexed independently of the others) and
building partial result sets for SQL selects (each partition can be
queried independently of the others).
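For what it's worth, the "drop a month" case can be approximated in SQLite today by keeping each month in its own database file and ATTACHing the files to one connection. This is only a sketch, not an existing library; the schema and file names below are my own invention:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
# One SQLite file per "partition" (here: per month) -- a hypothetical layout.
files = {m: os.path.join(tmp, f"events_{m}.db") for m in ("2009_01", "2009_02")}

# Load each partition independently (could be done by separate processes).
for month, path in files.items():
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    con.execute("INSERT INTO events (payload) VALUES (?)", (month,))
    con.commit()
    con.close()

# Query across all partitions by ATTACHing each file to one connection.
con = sqlite3.connect(":memory:")
for month, path in files.items():
    con.execute(f"ATTACH DATABASE '{path}' AS m{month}")
union_sql = " UNION ALL ".join(f"SELECT payload FROM m{m}.events" for m in files)
rows = con.execute(union_sql).fetchall()
con.close()

# "Dropping a partition" is just removing its file -- no mass DELETE, no VACUUM.
os.remove(files.pop("2009_01"))
```

Note that SQLite caps the number of ATTACHed databases per connection, so this only scales to a modest number of partitions.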

Another interesting Oracle feature is compression. Oracle's compression
techniques not only compress data, but also speed up many types of
selects.

Thinking out loud: I wonder if some of Oracle's parallelization and
partitioning features could be emulated by creating a physical SQLite
database for each logical partition; loading large logical tables
quickly by using a separate process to load each partition-specific
SQLite database; and then building a high-level code library to
translate high-level SQL commands (insert, update, delete, select)
into multiple partition-specific SQLite commands that get executed in
parallel. In the case of parallel selects, the intermediate results
would be cached to partition-specific SQLite databases and then unioned
together by a master controlling process to create a logical cursor for
processing.
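The parallel-select half of that idea might look something like the sketch below. Everything here (the helper names, the toy schema, the file layout) is my own illustration, not an existing library; it uses threads rather than processes for simplicity, since the sqlite3 C module releases the GIL while a statement runs:

```python
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor

def query_partition(path, sql):
    """Run the same SELECT against one partition database; return its rows."""
    con = sqlite3.connect(path)
    try:
        return con.execute(sql).fetchall()
    finally:
        con.close()

def parallel_select(paths, sql):
    """Fan a SELECT out to one worker per partition, then union the partials."""
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        partials = pool.map(lambda p: query_partition(p, sql), paths)
        return [row for part in partials for row in part]

# Build four toy "partition" databases, each loaded independently.
tmp = tempfile.mkdtemp()
paths = []
for i in range(4):
    path = os.path.join(tmp, f"part{i}.db")
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (n INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(i * 10 + j,) for j in range(3)])
    con.commit()
    con.close()
    paths.append(path)

rows = parallel_select(paths, "SELECT n FROM t WHERE n % 2 = 0")
```

A real version would need the "master controlling process" to handle ORDER BY, aggregates, and joins across partitions, which is where most of the complexity would live.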

Is anyone using similar techniques with very large SQLite
tables/databases?

Malcolm
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users