
List:       sqlite-users
Subject:    Re: [sqlite] SQLite vs. Oracle (parallelized)
From:       Alexey Pechnikov <pechnikov () mobigroup ! ru>
Date:       2009-02-27 21:12:37
Message-ID: 200902280012.37912.pechnikov () mobigroup ! ru

Hello!

On Friday 27 February 2009 23:35:50 python@bdurham.com wrote:
> I'm interested in exploring whether or not SQLite can be used as an ETL
> tool with large data sets (~80+ Gb). In this context, SQLite would be
> run on 64-bit Intel servers with lots of RAM (16-64 Gb). The data would
> be stored/processed on server local SCSI drives vs. located on a SAN.
> File access would be via a single process per SQLite database. The
> interface language would most likely be Python.

We have been using a ~10 Gb SQLite dataset for some years on a 32-bit Linux host 
(SATA HDD and 1 Gb RAM). The PostgreSQL version of the same dataset was more than 
20 Gb and ran 60x slower! It was not possible to upgrade the hardware for various 
reasons, and with 50 users in the system performance was bad, so we replaced the 
PostgreSQL database with SQLite. AOLserver + Tcl + SQLite works well for us. Each 
HTTP request uses its own connection to the SQLite database (with PostgreSQL we 
used connection pools), and this works fine when read operations dominate. The 
dataset is populated by batch writes from a few daemons, plus single write 
operations performed through the users' web interface. The dataset is split by 
month, with a single database file per month, and these chunks can be ATTACHed 
when needed.
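
The monthly-chunk pattern above can be sketched in Python's sqlite3 module (the 
OP's likely interface language). The `data_YYYY_MM.db` naming scheme and the 
`events` schema are assumptions for illustration; the post does not give them:

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()

# Create two monthly chunk databases with the same schema
# (hypothetical names and schema, for illustration only).
for month, rows in (("2009_01", [("a", 1)]), ("2009_02", [("b", 2)])):
    path = os.path.join(workdir, f"data_{month}.db")
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE events (name TEXT, value INTEGER)")
    con.executemany("INSERT INTO events VALUES (?, ?)", rows)
    con.commit()
    con.close()

# Open one connection (e.g. per HTTP request), then ATTACH
# only the monthly chunks the query actually needs.
con = sqlite3.connect(os.path.join(workdir, "data_2009_01.db"))
con.execute("ATTACH DATABASE ? AS m2009_02",
            (os.path.join(workdir, "data_2009_02.db"),))

# Query across both months with a UNION ALL over the chunks.
rows = con.execute(
    "SELECT name, value FROM events "
    "UNION ALL SELECT name, value FROM m2009_02.events "
    "ORDER BY value").fetchall()
print(rows)
con.close()
```

Note that SQLite limits the number of simultaneously attached databases 
(10 by default, configurable up to 125), so attaching only the months a 
query needs, rather than all of them, keeps within that limit.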

With your hardware, I think a 100 Gb dataset is not a limit.

Best regards.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users