Re: Backend design advice
- From: Sam Thursfield <ssssam gmail com>
- To: fr33domlover <fr33domlover mailoo org>
- Cc: desktop-devel-list gnome org
- Subject: Re: Backend design advice
- Date: Thu, 2 Jan 2014 13:33:36 +0000
Hi
On Mon, Dec 23, 2013 at 12:05 PM, fr33domlover <fr33domlover mailoo org> wrote:
> Hello,
>
> This question is quite general, but I'd like to know how things in GNOME
> were designed, in addition to any general advice you have.
>
> Assume a GUI application uses a central data backend, e.g. Tracker.
> Currently Tracker is a single central storage service, i.e. one daemon
> accepting connections and queries from apps.
>
> Now assume I want to have more than one database: for example, a separate
> database for some project I work on, a separate database for documenting
> a file system hierarchy, a separate database for desktop items, etc. The
> common approach, at least in SQL, is to have a single entry point, an SQL
> server, which handles all the databases stored on the computer. All
> clients connect to the same server.
Tracker stores all data in a single per-user database because there is
no simple way to aggregate queries across multiple databases. The goal
of Tracker is to provide a single query interface over all of that
user's data, so this is unlikely to change overnight.
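For context, that single query interface is just the tracker-store D-Bus
service: every app on the session bus queries the same store with one
SPARQL call. A rough sketch using PyGObject (assuming the Tracker 1.x
bus name; the query itself is only illustrative):

    #!/usr/bin/env python
    # Sketch: query the per-user tracker-store over D-Bus.
    # Assumes Tracker 1.x and PyGObject; the SPARQL is illustrative.

    from gi.repository import Gio, GLib

    conn = Gio.bus_get_sync(Gio.BusType.SESSION, None)

    # One bus name, one store: every app on the session bus queries
    # the same per-user database through this interface.
    result = conn.call_sync(
        'org.freedesktop.Tracker1',             # bus name
        '/org/freedesktop/Tracker1/Resources',  # object path
        'org.freedesktop.Tracker1.Resources',   # interface
        'SparqlQuery',
        GLib.Variant('(s)',
            ('SELECT ?u WHERE { ?u a nie:InformationElement } LIMIT 5',)),
        GLib.VariantType.new('(aas)'),
        Gio.DBusCallFlags.NONE, -1, None)

    for row in result.unpack()[0]:
        print(row)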
If you want to use Tracker for stuff other than storing per-user
desktop metadata, it's not impossible to get it to write to a
different file. The 'tracker-sandbox' program inside 'utils/' in the
Tracker source tree shows how to do this -- basically you need to
start a separate D-Bus session with XDG_CACHE_HOME and XDG_DATA_HOME
pointing to the alternate location. That D-Bus session will have its
own tracker-store process running which will read and write to the
alternative location.
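In Python the core of that trick looks roughly like this -- the sandbox
path is made up, and real code should also clean up the spawned
dbus-daemon afterwards:

    #!/usr/bin/env python
    # Rough sketch of the tracker-sandbox trick: point XDG_CACHE_HOME
    # and XDG_DATA_HOME at an alternate location and start a private
    # D-Bus session, so tracker-store keeps its data there.

    import os
    import subprocess

    sandbox = os.path.expanduser('~/my-tracker-sandbox')  # made up

    env = os.environ.copy()
    env['XDG_CACHE_HOME'] = os.path.join(sandbox, 'cache')
    env['XDG_DATA_HOME'] = os.path.join(sandbox, 'data')
    for d in (env['XDG_CACHE_HOME'], env['XDG_DATA_HOME']):
        if not os.path.exists(d):
            os.makedirs(d)

    # dbus-launch prints DBUS_SESSION_BUS_ADDRESS=... and
    # DBUS_SESSION_BUS_PID=...; export both so child processes talk
    # to the private bus.
    for line in subprocess.check_output(['dbus-launch'],
                                        env=env).decode().splitlines():
        key, _, value = line.partition('=')
        env[key] = value

    # Any Tracker client run with this environment hits the private
    # store; tracker-store is D-Bus activated inside the sandbox
    # session on demand.
    subprocess.check_call(['tracker-sparql', '-q',
                           'SELECT ?u WHERE { ?u a rdfs:Resource } LIMIT 5'],
                          env=env)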
> Here's another possible approach: if the number of databases is small, it
> may be reasonable to launch a separate server for each database, and have
> them communicate with each other through IPC. There would probably need
> to be another service to route network traffic to the right server, but
> aside from that, each database would have its own server (daemon)
> handling access to it.
>
> Would the second approach be better at anything? Is it something people
> do, something reasonable? Or is the approach of a single server per
> computer clearly better? (Would the second approach be safer, more
> scalable, more resilient, etc.?)
Storage and processing of complex non-desktop stuff is outside the
normal use-case of Tracker and GNOME, so there's not really any point
prescribing one approach over the other without knowing the specific
requirements. We're happy to advise on customising Tracker for
different use cases though if it seems the appropriate solution.
The second approach you describe (one server per database) is much
simpler to actually implement using Tracker. We already run concurrent
Tracker sessions in the 'functional-tests' test suite, but the code
there is quite old and fragile, so work on making this easier to do
would be welcome.
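To give a feel for it, here's a rough sketch of one-server-per-database
built on the same sandboxing trick as above: one private D-Bus session
(and hence one tracker-store) per data directory. The database names
and paths are hypothetical:

    #!/usr/bin/env python
    # Sketch of "one server per database": run one private D-Bus
    # session, and hence one tracker-store, per data directory.

    import os
    import subprocess

    def tracker_env(base_dir):
        """Build an environment with a private D-Bus session whose
        tracker-store reads and writes under base_dir."""
        env = os.environ.copy()
        env['XDG_CACHE_HOME'] = os.path.join(base_dir, 'cache')
        env['XDG_DATA_HOME'] = os.path.join(base_dir, 'data')
        for d in (env['XDG_CACHE_HOME'], env['XDG_DATA_HOME']):
            if not os.path.exists(d):
                os.makedirs(d)
        for line in subprocess.check_output(['dbus-launch'],
                                            env=env).decode().splitlines():
            key, _, value = line.partition('=')
            env[key] = value
        return env

    # One store per "database"; the routing service you mention would
    # sit at this level, picking which environment a client runs under.
    envs = dict((name, tracker_env(os.path.expanduser('~/databases/' + name)))
                for name in ('project', 'fs-docs', 'desktop-items'))

    for name, env in envs.items():
        print(name)
        subprocess.check_call(['tracker-sparql', '-q',
                               'SELECT ?u WHERE { ?u a rdfs:Resource } LIMIT 3'],
                              env=env)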
Sam