Re: Metadata Store



Hi Joe,

thanks for the extensive answer; here is mine.

>> I put some effort in getting Sesame and Sesame2 [1] working under C# and it
>> worked quite well.
> Using IKVM or a source translation?
I used IKVM. The dll files can be found here:
   http://sourceforge.net/projects/dotsesame
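
In case it is useful, here is roughly how the converted assemblies can be exercised. It is only a minimal smoke-test sketch against the standard Sesame2 repository API (org.openrdf.*); I write it as Java, but under IKVM the very same classes are exposed to .NET, so the C# version is a one-to-one translation. The example URIs are of course made up.

// Minimal smoke test of the Sesame2 repository API (an in-memory store).
import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.sail.memory.MemoryStore;

public class SesameSmokeTest {
    public static void main(String[] args) throws Exception {
        Repository repo = new SailRepository(new MemoryStore());
        repo.initialize();

        RepositoryConnection con = repo.getConnection();
        try {
            ValueFactory vf = repo.getValueFactory();
            URI doc = vf.createURI("file:///home/user/report.pdf");
            URI title = vf.createURI("http://purl.org/dc/elements/1.1/title");

            // Store one metadata statement and read back the statement count.
            con.add(doc, title, vf.createLiteral("Quarterly report"));
            System.out.println("statements in store: " + con.size());
        } finally {
            con.close();
        }
        repo.shutDown();
    }
}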


> The more interesting memory benchmark for us is Sesame vs. no Sesame.
> What are the differences there?
I mentioned that only to address your concern (from a previous mail) that memory consumption might increase due to IKVM.

> Yes.  The Lucene index and the metadata store will be paired and (at
> least initially) be one each per backend.
The problem with having a metadata store per backend is that it would lose the possibility to connect different types of documents through their metadata, e.g. a file (filesystem queryable) which is an attachment of an email (evolution queryable). Or at least such connections would be more difficult to exploit at query time (see the sketch below).
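
To make that concrete, a single shared store would allow queries that join metadata produced by different backends, for example "files that are attachments of mails from a given sender". The sketch below uses the plain Sesame2 query API with SPARQL; the property URIs (ex:attachmentOf, ex:sender) are invented purely for illustration.

// Sketch of a cross-backend metadata query: the file comes from the filesystem
// queryable, the mail from the evolution queryable, joined in one shared store.
// The ex: property URIs are invented for illustration.
import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

public class CrossBackendQuery {

    static final String QUERY =
        "PREFIX ex: <http://example.org/beagle-metadata#> \n" +
        "SELECT ?file ?mail WHERE { \n" +
        "  ?file ex:attachmentOf ?mail . \n" +
        "  ?mail ex:sender \"joe@example.org\" . \n" +
        "}";

    public static void printAttachments(RepositoryConnection con) throws Exception {
        TupleQuery query = con.prepareTupleQuery(QueryLanguage.SPARQL, QUERY);
        TupleQueryResult result = query.evaluate();
        try {
            while (result.hasNext()) {
                BindingSet bs = result.next();
                System.out.println(bs.getValue("file") + " is attached to " + bs.getValue("mail"));
            }
        } finally {
            result.close();
        }
    }
}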

> All text searches will be done through lucene.  All metadata searches
> will be done through the store.
I heard about a Lucene Sail, which is a layer for Sesame. This would mean that keyword matching inside the RDF store could be based on Lucene, so one would query only a single store, the RDF store, with the full-text part handled by Lucene. I think this sounds interesting enough for further investigation; a rough sketch of how such a query could look follows below.
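
I have not tried the Lucene Sail myself, so the following is only a sketch of the idea as I understand it: the sail exposes keyword matching through "virtual" properties, so a single query against the RDF store can combine a full-text condition with ordinary metadata patterns. The search: namespace and its predicate names are my assumption about the contrib; the query itself would be evaluated through con.prepareTupleQuery(...) exactly as in the previous sketch.

// Sketch of a combined full-text + metadata query against a Lucene-backed sail.
// The search: namespace and the matches/query predicates are assumptions about
// what the LuceneSail contrib exposes; the ex: properties are again invented.
public class LuceneSailQuerySketch {

    static final String COMBINED_QUERY =
        "PREFIX search: <http://www.openrdf.org/contrib/lucenesail#> \n" +
        "PREFIX ex: <http://example.org/beagle-metadata#> \n" +
        "SELECT ?doc ?mail WHERE { \n" +
        "  ?doc search:matches ?match . \n" +      // answered by the Lucene index
        "  ?match search:query \"quarterly report\" . \n" +
        "  ?doc ex:attachmentOf ?mail . \n" +      // answered by the RDF store
        "}";
}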

> Pluggable filters and backends make sense because people can drop in or
> remove the ones they want to use. ... I don't see that with a pluggable RDF
> store.
I see your point and I agree. But what I have in mind is the following:

I need to get Beagle connected to the NEPOMUK architecture, and I am thinking about how to do that in a way that is both technically clean and acceptable to the Beagle developers. The NEPOMUK architecture (which is like a backbone of components and services) will, among other things, provide a central RDF store that is shared among all applications using that architecture. The backbone also wants to use the desktop search services provided by Beagle and expose them to the components of NEPOMUK and to all other NEPOMUK-enabled applications. Right now I am thinking of two solutions:

1) Beagle's RDF store is accessed through an API and can easily be replaced by another RDF store that complies with that API (a rough sketch of such an API follows below). To NEPOMUK-enable Beagle, one would then just replace that RDF store, and all metadata could be stored in it.

2) We put all our stuff into our own backend which uses the NEPOMUK architecture and thus that RDF store. All metadata generated by this backend would then not be stored by Beagle but in NEPOMUK's store, and Beagle queries forwarded to this backend would be used to query the NEPOMUK store. A problem here is connecting this metadata to the documents of other backends.
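
To illustrate option 1, here is a very rough sketch of what such a store API could look like. Everything in it (the interface name, the methods) is hypothetical and only meant to show how narrow the interface could be kept, so that a NEPOMUK-backed implementation could be dropped in behind it; again written as Java for brevity, the C# version would look the same.

// Hypothetical interface for option 1: Beagle talks to its metadata store only
// through this narrow API, so the built-in store and a NEPOMUK-backed store
// would be interchangeable. All names here are made up.
import java.util.List;

public interface MetadataStore {

    /** Record one metadata statement (subject, predicate, object) for an indexed document. */
    void add(String subjectUri, String predicateUri, String objectValue);

    /** Drop all metadata recorded for a document, e.g. when the document is deleted. */
    void removeAllFor(String subjectUri);

    /** Evaluate a metadata query (e.g. SPARQL) and return one row of bindings per result. */
    List<String[]> query(String queryString);

    /** Flush pending writes and release any underlying resources. */
    void close();
}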

best regards,
Enrico M.


