
List:       sqlite-users
Subject:    Re: [sqlite] SQLite Transaction Rate and speed...
From:       Ken <kennethinbox-sqlite () yahoo ! com>
Date:       2009-03-12 15:42:28
Message-ID: 289172.33196.qm () web81001 ! mail ! mud ! yahoo ! com
[Download RAW message or body]



Calling $dbh->do("BEGIN") / $dbh->do("COMMIT") should get rid of the
automatic per-statement transactions. The transactions are still atomic.

I'm just guessing, but I'd suspect the DBI interface.

Can you rewrite it in C and call the SQLite API directly?
You'll get better performance by creating statement handles and preparing
them once, then using reset/bind to rebind data to the handles.

Hope that helps.


--- On Thu, 3/12/09, VF <goodarm@gmail.com> wrote:

> From: VF <goodarm@gmail.com>
> Subject: Re: [sqlite] SQLite Transaction Rate and speed...
> To: sqlite-users@sqlite.org
> Date: Thursday, March 12, 2009, 12:57 AM
> Great suggestion!
> 
> I wonder how I should implement it, though... I am upserting tens of
> millions of rows in chunks of several hundred thousand (naturally I
> can't do all this in memory). The SQLite tutorial says that if each
> update/insert does not result in its own transaction, it can make up
> to 50k updates a second.
> 
> I tried calling:
> 
> my $dbh = DBI->connect_cached(               # connect to your database, create if needed
> 	"dbi:SQLite:dbname=$dbpath",         # DSN: dbi, driver, database file
> 	"",                                  # no user
> 	"",                                  # no password
> 	{ RaiseError => 1, AutoCommit => 0 },  # complain if something goes wrong
> ) or die $DBI::errstr;
> 	
> 	# set pragmas
> 	$dbh->do('pragma synchronous=off;');
> 	$dbh->do('PRAGMA default_cache_size = 10000;');
> 	$dbh->do('pragma page_size=8192;');
> 	$dbh->do('pragma temp_store=memory;');
> 
> before the actual upserts; however, it didn't seem to help.
> 
> Now, if in my bulk_upsert() function I do this:
> 	$dbh->do("BEGIN");
> 	foreach my $row (@arr) {
> 		$dbh->do("insert or ignore")
> 	}
> 	$dbh->do("COMMIT");
> 
> ...would that work? How do I get rid of these atomic
> transactions?
> 
> In general, though, I am a little surprised. This is a seemingly simple
> task - hundreds of thousands of upserts into a non-shared DB - and yet
> it requires such "gymnastics" and takes hours to complete, which seems
> to be neither "SQL" nor "lite". Or is it a problem with the Perl DBI
> interface?
> 
> Thanks a lot in advance, Bobby
> 
> > Is it possible for you to do the following? The INSERT should fail
> > silently if you violate a uniqueness constraint; we also set the
> > initial counter to 0 as it will be immediately bumped to 1 by the
> > UPDATE. You should be able to wrap the whole thing in a transaction.
> > 
> > ***
> > 
> > INSERT OR IGNORE INTO MAPPINGS_$idx
> > (key, mapping, rank, counter, timeCreated, timeModified)
> > VALUES (?, ?, 1, 0, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);
> > 
> > UPDATE MAPPINGS_$idx SET counter = counter + 1,
> > timeModified = CURRENT_TIMESTAMP WHERE key = ? AND mapping = ?;
> > 
> 
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


