Re: Moving disk files by export/import?



On Mon, Jan 05, 2009 at 11:18:43PM +0800, Bengt Thuree wrote:
> When I did all my testing, 15 months or so ago, the job database was not
> properly working. At that time the extension finished all its work
> fairly quickly, perhaps 30 seconds or so to process 20k+ photos.
> But after the extension finished, the core part of f-spot took another
> 2-3 hours before it finished updating the f-spot database. That is, if I
> were doing a manual select in sqlite at the same time, it took 2-3 hours
> before this manual select reported 0 rows (with the old path).

There's something not quite right there.

By "job" are you referring to the "job" table?  What gets put in
there?  I'm not familiar with the code base and I don't know Mono, but
photo_store.Commit(Photo) doesn't queue up the updates, does it?
That would be silly to write to the db to queue up a db request, so
that can't be right.


Oh.  Looks like everything does get sent off to the
QueuedSqliteDatabase class.  Is that an attempt to make the UI more
responsive?  Or an attempt to have only one thread access the database,
to deal with the possibility of no locks being available where the db
lives (e.g. if the db is on NFS)?  Or an attempt to prevent multiple
processes/threads from accessing the db at the same time?

Looks like QueuedSqliteDatabase is supposed to be a thread-safe
wrapper for SQLite.
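
If it's the single-writer idea, the pattern would presumably be one
worker thread draining a queue of statements -- roughly this C/pthreads
sketch of the general idea (the names are mine, not f-spot's actual
code):

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sqlite3.h>

    struct job { char *sql; struct job *next; };

    static struct job *head, *tail;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    /* Any thread can enqueue a statement; only the writer thread
       ever touches SQLite. */
    void enqueue(const char *sql)
    {
        struct job *j = malloc(sizeof *j);
        j->sql = strdup(sql);
        j->next = NULL;
        pthread_mutex_lock(&lock);
        if (tail) tail->next = j; else head = j;
        tail = j;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* The single writer: owns the one and only db handle. */
    void *writer(void *arg)
    {
        sqlite3 *db = arg;
        for (;;) {                  /* no shutdown path in this sketch */
            pthread_mutex_lock(&lock);
            while (!head)
                pthread_cond_wait(&nonempty, &lock);
            struct job *j = head;
            head = j->next;
            if (!head) tail = NULL;
            pthread_mutex_unlock(&lock);

            sqlite3_exec(db, j->sql, NULL, NULL, NULL);
            free(j->sql);
            free(j);
        }
        return NULL;
    }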

The SQLite docs state that it is thread safe if compiled that way,
which is the default ("Serialized" mode) -- but even then I'd want a
separate db handle (via sqlite3_open) for each process/thread.

sqlite3_threadsafe() returns non-zero if the library was compiled with
threading support.
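
In C that check plus the per-thread handle would look something like
this (a sketch; open_private_handle is my name for it):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Each thread calls this to get its own private connection;
       nothing is shared between threads. */
    sqlite3 *open_private_handle(const char *path)
    {
        sqlite3 *db = NULL;

        /* Non-zero when built with SQLITE_THREADSAFE=1 or 2. */
        if (!sqlite3_threadsafe()) {
            fprintf(stderr, "SQLite built without thread support\n");
            return NULL;
        }

        if (sqlite3_open(path, &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return NULL;
        }
        return db;
    }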




> But for sure, you are the expert right now on this extension :)

I doubt that, but I do suspect that the extension could be rewritten
in far fewer lines of code and would run in very little time instead
of a few minutes (or hours!) if it avoided QueuedSqliteDatabase
completely.  IIRC, SQLite has to take a lock for every transaction
(which means every statement when not explicitly wrapped in a
transaction), and that's expensive.  So doing that 20K+ times is
probably a poor use of the database.
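
For comparison, batching against the SQLite C API looks roughly like
this -- one transaction, one prepared statement reused for every row.
The photos/base_uri/id names are guesses at the schema, not checked
against f-spot:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Update n photo paths inside a single transaction.  Error
       handling trimmed for brevity. */
    void update_paths(sqlite3 *db, const char **paths, const int *ids, int n)
    {
        sqlite3_stmt *stmt;
        int i;

        /* One lock / one commit for the whole batch, not 20K+. */
        sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        sqlite3_prepare_v2(db,
            "UPDATE photos SET base_uri = ? WHERE id = ?", -1, &stmt, NULL);

        for (i = 0; i < n; i++) {
            sqlite3_bind_text(stmt, 1, paths[i], -1, SQLITE_STATIC);
            sqlite3_bind_int(stmt, 2, ids[i]);
            if (sqlite3_step(stmt) != SQLITE_DONE)
                fprintf(stderr, "update failed: %s\n", sqlite3_errmsg(db));
            sqlite3_reset(stmt);
        }

        sqlite3_finalize(stmt);
        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    }

One BEGIN/COMMIT pair means one commit's worth of locking and syncing
for the whole batch instead of 20K+ of them.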

Again, I'm not familiar with the code, but if something that should
take a second is taking 3 hours instead, it's probably time to
reevaluate the architecture.

-- 
Bill Moseley
moseley hank org
Sent from my iMutt


