Plans for gnome-vfs replacement (continued)



hi,

i just wrote down some thoughts on the "gnome-vfs replacement"...

i'm not sure whether the "vfs replacement" should be started by defining a streaming API or rather be designed around a socket mechanism. VFS libraries like KIO or Gnome-VFS seem to do a lot more than streaming file-data: they send dir-entries, progress information, mime-types, etc...

== job & message based design ==

a "job" would be similar to a remote procedure call with the difference that there can be messages in between the initial invocation message and the final reply message.

for instance:

Client <--> protocol handler

=== read file  ===> (invocation message)
<-- data chunk ---
<-- data chunk ---
<-- data chunk ---
...
<== reply(EOF) ===  (reply message)

messages could be pushed one-way through the socket - no acknowledgement message for every data-chunk or dir-entry, which probably means fewer context switches and better performance.


== the vfs socket protocol ==

(the protocol between the client and the protocol-handler)

IMHO D-BUS would be too complex, high-level and slow for this purpose. also, it can't be used via socket-pairs or pipes. perhaps a basic binary socket protocol with a message format like this would be more appropriate:

[ 16 byte header | body: byte array ]

#include <stdint.h>

typedef struct {
   int32_t magicNr;         /* protocol magic number */
   int32_t jobSequenceNr;   /* sequence number of the job this message belongs to */
   int32_t msgType;         /* determines how the body is (un)marshaled */
   int32_t bodySize;        /* length of the body in bytes */
} VfsMsgHeader;
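
just to illustrate, a minimal sketch of how a message could be pushed through the socket, using the VfsMsgHeader above (the function name, the magic value and the use of network byte order are only assumptions):

#include <unistd.h>
#include <arpa/inet.h>

/* pushes one message (16 byte header + opaque body) one-way through
   the socket. returns 0 on success, -1 on a write error. */
static int vfs_send_message (int fd, int32_t job_seq, int32_t msg_type,
                             const void *body, int32_t body_size)
{
   VfsMsgHeader h;

   h.magicNr       = htonl (0x56465331);   /* hypothetical magic ("VFS1") */
   h.jobSequenceNr = htonl (job_seq);
   h.msgType       = htonl (msg_type);
   h.bodySize      = htonl (body_size);

   if (write (fd, &h, sizeof h) != (ssize_t) sizeof h)
      return -1;

   /* the body is pushed one-way, no acknowledgement per data-chunk or dir-entry */
   if (body_size > 0 &&
       write (fd, body, (size_t) body_size) != (ssize_t) body_size)
      return -1;

   return 0;
}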


+ there should be no concurrent jobs on the same socket connection. the job-sequence# is only used to throw away messages of previously canceled/finished jobs (see the read sketch below).
+ sockets can be reused for the next job when idle.
+ if the protocol-handler is multi-threaded, it runs a separate thread for every client connection.
+ the job-sequence-number is created by the client on a per-connection basis (the client sends the invocation message of a job).
+ the message type (an integer) determines how the message is (un)marshaled. for file data chunks the message body might be plain file-data.
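
and the receiving client side could look roughly like this (again only a sketch; read_exact() is a made-up helper around read()):

#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>

/* reads exactly 'count' bytes (read() may return short reads) */
static int read_exact (int fd, void *buf, size_t count)
{
   char *p = buf;
   while (count > 0) {
      ssize_t n = read (fd, p, count);
      if (n <= 0)
         return -1;
      p += n;
      count -= (size_t) n;
   }
   return 0;
}

/* reads the next message of the current job; messages carrying the
   sequence number of a previously canceled/finished job are dropped */
static int vfs_read_message (int fd, int32_t current_job,
                             VfsMsgHeader *h, char **body)
{
   for (;;) {
      if (read_exact (fd, h, sizeof *h) < 0)
         return -1;

      h->magicNr       = ntohl (h->magicNr);
      h->jobSequenceNr = ntohl (h->jobSequenceNr);
      h->msgType       = ntohl (h->msgType);
      h->bodySize      = ntohl (h->bodySize);

      *body = malloc ((size_t) h->bodySize + 1);
      if (*body == NULL || read_exact (fd, *body, (size_t) h->bodySize) < 0) {
         free (*body);
         return -1;
      }

      if (h->jobSequenceNr == current_job)
         return 0;        /* message belongs to the current job */

      free (*body);       /* stale message of an old job -> throw it away */
   }
}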


== the vfs daemon ==

when a client requests a new protocol handler, it connects to the vfs daemon and asks for the socket-name of a protocol-handler. if there is no (idle) protocol-handler process, the daemon will launch a new one.

perhaps it makes sense to run the "vfs daemon" as a D-Bus service.
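
once the daemon has handed out a socket name, connecting to the protocol-handler is plain unix-socket code (sketch; how the name is obtained - D-Bus call or not - is left open here, and the function name is made up):

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* connects to the unix socket of a protocol-handler, given the socket
   path the vfs daemon handed out. returns the connected fd or -1. */
static int vfs_connect_to_handler (const char *socket_path)
{
   struct sockaddr_un addr;
   int fd = socket (AF_UNIX, SOCK_STREAM, 0);

   if (fd < 0)
      return -1;

   memset (&addr, 0, sizeof addr);
   addr.sun_family = AF_UNIX;
   strncpy (addr.sun_path, socket_path, sizeof addr.sun_path - 1);

   if (connect (fd, (struct sockaddr *) &addr, sizeof addr) < 0) {
      close (fd);
      return -1;
   }
   return fd;
}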


== the protocol handlers ==

there could be two types of protocol handlers:
+ single-threaded like slaves in KIO (for the local filesystem for instance)
+ multi-threaded like in gnome-vfs for connection sharing etc (ftp, smb,...)
(as said earlier, there should be one thread per client connection and no concurrent "jobs" per connection. designing protocol-handlers which handle multiple "jobs" in the same thread seems quite complicated - see the accept-loop sketch below)
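
for the multi-threaded case, the main loop of a protocol-handler could be little more than an accept loop (sketch with plain pthreads; the connection handler is left empty):

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>

/* serves all jobs of one client connection, one after another */
static void *handle_client_connection (void *arg)
{
   int client_fd = *(int *) arg;
   free (arg);
   (void) client_fd;   /* ... read invocation messages, run one job at a time ... */
   return NULL;
}

/* one thread per client connection, no concurrent jobs per connection */
static void protocol_handler_main_loop (int listen_fd)
{
   for (;;) {
      pthread_t tid;
      int *client_fd = malloc (sizeof *client_fd);

      if (client_fd == NULL)
         continue;

      *client_fd = accept (listen_fd, NULL, NULL);
      if (*client_fd < 0) {
         free (client_fd);
         continue;
      }
      pthread_create (&tid, NULL, handle_client_connection, client_fd);
      pthread_detach (tid);
   }
}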

perhaps even full featured gui-applications (like communication-, archiving tools) could register themselves as protocol handlers (or network drives) in the vfs-daemon...


== the file:// protocol ==

there could be two modes:
+ for applications which allow threads, the file:// protocol-handler might run as a thread inside the client application, using the vfs protocol through a socket-pair or a pair of pipes (see the sketch below)
+ for other applications which don't want to use glib threading, the vfs-daemon would be asked to launch a protocol-handler process (like in KIO)

(we can always use the same socket protocol, no matter if we use threads or processes)
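
the in-process variant boils down to a socketpair() and a thread (sketch with plain pthreads; the handler entry point is only a stub here):

#include <pthread.h>
#include <sys/socket.h>

/* stub for the file:// protocol-handler thread; the real handler would
   serve jobs on this fd, speaking the same vfs socket protocol */
static void *file_handler_thread (void *handler_fd_ptr)
{
   int fd = *(int *) handler_fd_ptr;
   (void) fd;
   return NULL;
}

/* starts the file:// handler as a thread inside the client application.
   returns the client side of the socket-pair, or -1 on error. */
static int start_inprocess_file_handler (void)
{
   static int fds[2];   /* static so the fd pointer stays valid for the thread */
   pthread_t tid;

   if (socketpair (AF_UNIX, SOCK_STREAM, 0, fds) < 0)
      return -1;

   if (pthread_create (&tid, NULL, file_handler_thread, &fds[1]) != 0)
      return -1;
   pthread_detach (tid);

   return fds[0];   /* the client talks the vfs protocol on this fd */
}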


== asynchronous client ==

the async client is built around the client side of the socket connections. it could be implemented for various event-loops. because we always use sockets, we don't need wakeup-pipes, locks, idle-sources etc, even if the protocol-handler runs as a thread in the client application.

the job-API should also provide suspend/resume functions for flow control like KIO does (to turn off/on the watching of incoming/outgoing file-descriptor events). in KIO the copy job - AFAIK - uses suspend/resume for the so-called "alternating bitburger protocol", something which is not possible with the async Gnome-VFS API at the moment.
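
with a glib main loop, suspend/resume basically means removing and re-adding the io-watch on the socket fd; the kernel socket buffer then provides the actual flow control. a minimal sketch (the VfsJob struct is made up, the channel would come from g_io_channel_unix_new() on the socket fd):

#include <glib.h>

/* made-up per-job state: the io-channel of the socket and the watch id */
typedef struct {
   GIOChannel *channel;
   guint       watch_id;   /* 0 while the job is suspended */
} VfsJob;

/* called whenever the socket becomes readable */
static gboolean on_socket_readable (GIOChannel *source, GIOCondition cond,
                                    gpointer data)
{
   /* ... read and dispatch the next message ... */
   return TRUE;   /* keep the watch */
}

/* suspend: stop watching the fd, incoming data queues up in the socket */
static void vfs_job_suspend (VfsJob *job)
{
   if (job->watch_id != 0) {
      g_source_remove (job->watch_id);
      job->watch_id = 0;
   }
}

/* resume: start watching the fd again */
static void vfs_job_resume (VfsJob *job)
{
   if (job->watch_id == 0)
      job->watch_id = g_io_add_watch (job->channel, G_IO_IN,
                                      on_socket_readable, job);
}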


== cancellation ==

because there is always a socket (or a pair of pipes) between the client and the protocol-handler there is no need for extra cancellation pipes.

also, the socket might just be closed for cancellation (because there are no concurrent jobs on a socket)


== vfs-protocol message types ==

* job invocation messages
* file data chunk messages
* directory-entry messages
* progress messages
* seek message
* cancel message
* reply messages
* ...

for most message types it might suffice to use a generic key-value map internally, which can be serialized to a byte array (similar to the UDS list in KIO). the advantage is that fields can be optional and it's easier to add additional fields in the future. the keys could be integers masked with value type information (see the sketch below).

possible value types:
int, bool, string, timestamp, filesize,...
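
a sketch of what "integer keys masked with value type information" could look like (the shift, the type ids and the field ids are all arbitrary):

#include <stdint.h>

/* value type ids, stored in the upper byte of the key */
enum {
   VFS_TYPE_INT       = 1,
   VFS_TYPE_BOOL      = 2,
   VFS_TYPE_STRING    = 3,
   VFS_TYPE_TIMESTAMP = 4,
   VFS_TYPE_FILESIZE  = 5
};

#define VFS_KEY(type, field_id)   ((int32_t) (((type) << 24) | (field_id)))
#define VFS_KEY_TYPE(key)         (((key) >> 24) & 0xff)
#define VFS_KEY_FIELD(key)        ((key) & 0x00ffffff)

/* possible field ids of a directory-entry message (made up) */
enum {
   VFS_FIELD_NAME  = VFS_KEY (VFS_TYPE_STRING,    1),
   VFS_FIELD_SIZE  = VFS_KEY (VFS_TYPE_FILESIZE,  2),
   VFS_FIELD_MTIME = VFS_KEY (VFS_TYPE_TIMESTAMP, 3)
};

a reader which doesn't know a field id can still derive the value's encoding from the type byte and skip it, which keeps the format extensible.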


== seeking and random file access ==

for seeking it might make sense to have a file-part sequence number in the file-data-chunk messages - to identify whether file-data chunks "belong" to a certain seek message (see the sketch after the diagram below).

=== open file ==========>
-- start read (part 1) ->
<-- data chunk (part 1)--
<-- data chunk (part 1)--
<-- data chunk (part 1)--
...
-- seek & read (part 2)->
<-- data chunk (part 1)--  (wrong part -> will be dropped)
<-- data chunk (part 2)--
<-- data chunk (part 2)--
<-- data chunk (part 2)--

<-- EOF -----------------

-- close --------------->
<== reply ===============
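
on the client side, the chunk handling would then just compare part numbers (sketch; the unmarshaled chunk struct is made up):

#include <stdint.h>
#include <stddef.h>

/* made-up representation of an unmarshaled file-data-chunk message */
typedef struct {
   int32_t     filePartNr;   /* the seek/read part this chunk belongs to */
   const char *data;
   size_t      size;
} VfsDataChunk;

/* returns the number of bytes delivered, 0 if the chunk was dropped */
static size_t handle_data_chunk (int32_t current_part,
                                 const VfsDataChunk *chunk,
                                 void (*deliver) (const char *, size_t))
{
   if (chunk->filePartNr != current_part)
      return 0;               /* chunk of an earlier seek -> drop it */

   deliver (chunk->data, chunk->size);
   return chunk->size;
}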


== authentication dialogs etc ==

authentication dialogs and anything GUI-related should be launched out of process (perhaps with desktop-specific implementations) to avoid toolkit conflicts.

i think launching them as separate executables is no big deal performance-wise, because they don't occur very often...


== migration ==

because the synchronous API of Gnome-VFS works quite efficiently (with in-process modules), Gnome-VFS could easily be used as a backend for the "VFS replacement". thus it might not be necessary to migrate all the protocol-handlers at once.

perhaps the "VFS replacement" could also use KIO as backend when running on KDE. from my experience with libxdg-vfs, i believe Gnome-VFS and KIO could serve as pluggable backends for a common VFS interface.

later on, one might want to put the "VFS successor" behind the old KIO and Gnome-VFS APIs for backward compatibility.

regards,
norbert










