Re: Request: Test suite for EFS.



On Thu, Feb 17, 2000 at 07:13:06PM -0600, Miguel de Icaza wrote:
> 
> >  The problem in that case is that Ettore and the other gtkhtml folks are
> > working on taking a code base suitable for the first one and trying to
> > extend it to have the extra set of features. And that's where I think an
> > improper decision is being made.
> 
> So far I do not think they are adding XML/CSS/DOM support to GtkHTML,
> so it falls so far in category (1) in your description (which is: a
> simple web browser).

  Editing is the killer gap ! The design of a rendering-only document widget
and that of a structured editor are grossly incompatible.

> > > But this does not mean that Ettore is the only hacker ever allowed to
> > > touch that code base.  If you want to help with the DOM/libxml effort,
> > > feel free to jump in, and help switching GtkHTML code there.
> > 
> >   Well, for the moment I'm focusing on the lower layers. When libxml-2.0
> > ships I will certainly look at it.
> 
> Good to hear Daniel!
> 
> >   I don't deny it. Don't take my word for it, take the gtkHtml people's.
> > I don't think gtkHtml is bad. I say that extending a code base primarily
> > targeted at simple rendering to make it an editing widget is not a good
> > decision. I understand this choice was mostly based on time constraints.
> > I also believe that this complicates the code base of the initial simple
> > widget, maybe making it less suitable for simpler tasks.
> > Am I wrong ?
> 
> Most definitely.  Talk to Ettore about this.
> 
> Basically, his work on making it editable has simplified the code a
> lot.  The original code base had evolved through many maintainers who
> were just patching it to keep it alive.  Ettore has redesigned various
> pieces and cleaned up others in the process of making it an editor.


  Well, your experience with editing vs. rendering-only conflicts
with many years of research in that field. Maybe all the people
around me working in the structured editing research field are just
wrong.
  Or is it just that the original code base wasn't good ?

> And Ettore is an extraordinary hacker.  I do like his code a lot.

  I won't disagree :-).

> > > Why would I want to store my images as
> > > base64, if I can store them directly in the way they have shown up?
> > 
> >   Store them the way they are, make them referencable as URLs. Add metadata
> > to those resources, index them, and put wrapper front-ends as separate
> > resources if you really need to glue them together in a unique way. Your
> > problem is in the address space, not in the format !
> 
> Daniel.  Honestly, I do not believe this is a workable approach.  Just
> as references to pages on the network stop working after a few days.

  Interesting statement ...

> Lets consider a few examples:
> 
>      1. User copies the XML file that has references to other files in
>         the remote site into local disk, thinking "I got the
>         document".


  How does he copy it ???

    SaveAs: no problem.
    cp at the shell level: possible problems, yes...
    drag and drop at the visual shell level: copying a folder containing a
             set of resources is just transparent;
             agreed, you need to add specific code to the shell to handle
             this format (see the sketch just after this list).
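  As a rough illustration, here is the kind of code a visual shell could use
to copy such a document together with its local dependencies. This is a
minimal sketch in Python, assuming a hypothetical convention where the entry
point lists its dependencies as <resource href="relative/path"/> elements;
it is not an existing Gmc/Nautilus API.

    import os, shutil
    import xml.etree.ElementTree as ET

    def copy_with_resources(entry_point, dest_dir):
        """Copy an XML entry point plus the local resources it references."""
        src_dir = os.path.dirname(os.path.abspath(entry_point))
        os.makedirs(dest_dir, exist_ok=True)
        shutil.copy(entry_point, dest_dir)
        # Assumed convention: dependencies appear as <resource href="..."/>.
        for res in ET.parse(entry_point).getroot().iter("resource"):
            href = res.get("href", "")
            if not href or "://" in href:      # leave absolute URLs remote
                continue
            sub = os.path.dirname(href)
            if sub:
                os.makedirs(os.path.join(dest_dir, sub), exist_ok=True)
            shutil.copy(os.path.join(src_dir, href),
                        os.path.join(dest_dir, href))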

> 	A few hours later, when he is disconnected from the network,
> 	he tries to open the document.  Surprise: the document needs
> 	other files.
> 
> 	Possible solutions:
> 
> 		 1. Have every tool on the planet that can transfer this
> 		    file be aware of the special tags used in this
> 		    specific instance of XML to transfer any
> 		    dependencies.
> 
> 		 My verdict on this solution: lame.
> 
> 		 2. User mirrors everything in a directory, hoping
>                     everything is there.  
> 
> 		 My verdict on this one: it might not work, and it is
> 		 still lame.
> 
>      2. User makes a copy of the document.  The document's
>         dependencies are deleted from the server.  There is no way to
>         fetch those.  Ever (don't tell me this won't happen, because it
>         happens every day to me on the web).
> 
> 	I do not have any possible solutions to this one.
> 
> Both problems (1) and (2) are solved with structured storage files.
> Yes, it is not buzzword-compliant, but it works.

  Then implement either the OLE2 serialization format or a zip/Jar based one
so that people can reopen the damn file. If I put something outside of my
local space, it is for sharing it. The EFS solution doesn't allow sharing it
with 99.99% of the world. Portability is not the problem, installed base is.

  Acceptable address spaces for most computer users are:
    - local/distributed FS
    - ZIP hierarchies
    - the Web (i.e. URL-based access with the common protocols)

If you go outside of these, you won't share data with many people, I'm afraid.
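  For the "one file to put on a floppy or attach to a mail" case, a zip/Jar
container keeps the individual resources as plain files that any unzip-aware
tool can extract. A minimal sketch in Python (zipfile is used here only as a
stand-in for whatever zip library the real code would rely on):

    import os, zipfile

    def pack(document_dir, archive_path):
        """Pack a directory of resources (entry point included) into one zip."""
        with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _, files in os.walk(document_dir):
                for name in files:
                    full = os.path.join(root, name)
                    zf.write(full, os.path.relpath(full, document_dir))

    def unpack(archive_path, dest_dir):
        """unzip(1), jar, or WinZip can do exactly the same thing."""
        with zipfile.ZipFile(archive_path) as zf:
            zf.extractall(dest_dir)

The point is not this particular code, it is that the container stays
readable by the installed base.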

> >   By approaching the problem in a different way. Why do you need a single
> > unique resource for multiple flows of data ? This makes your life really
> > hard for indexing/searching/extracting. All this because you believe that
> > the compound set of resources should be unique. As a result you're creating
> > (opaque) hierarchies, hard to browse/index.
> >   Shift your mind, that's the solution. There is only one address space;
> > it's flat and based on URLs (absolute or relative).
> 
> Maybe I am just old fashioned, but as someone pointed out in a nice
> article on slashdot a few days ago: pretending that the user is lame
> because he won't learn a new trick is not going to get you very far.


  I have the feeling that people have shifted their minds toward
accessing information in a Web space. That revolution is IMHO done,
even if it looks unbelievable in hindsight.

> I am doing this for the sake of the users, not for the sake of me.  If
> it were for me only, I would not bother to write this at all.

  The users are sharing files with their Windows colleagues using Office.
I believe it's a fact. Make the file available to Office, either through
OLE2 serialization, or Zip, or a shared Web space.

> >   Libefs makes sense if you really need to merge everything into a single
> > resource for sharing, say saving onto a floppy. But why don't you just
> > copy every individual resource into a dir and make the entry point
> > a simple (why not XML) file pointing at the different resources, giving
> > their MIME type and all the other metadata you would need about them ?
> 
> How is a random transfer application (ftp, mcopy, gmc, etc.) supposed to
> know, and *WHY* would those applications care about transferring the
> references to other documents?  Why do they care at all?  Why do they
> have to know about the internals of a file format?

  ftp (command line) and mcopy are hackers' tools in that world.

 Gmc/Nautilus, or rather libvfs, can be made to understand such a format.
For this format or another, I do think you will have to add metadata support
to your set of FS handling tools one way or another; I may be wrong. And
data about data is also data: it needs a serialization too.

 Non-Gmc users will want zip-compatible formats or something
directly compatible with MS Office.
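  To make the "data about data" point concrete, the metadata can itself be
serialized as a plain XML manifest sitting next to the resources. A minimal
sketch in Python, with hypothetical element and attribute names (this is an
illustration, not a proposed format):

    import xml.etree.ElementTree as ET

    def write_manifest(resources, path):
        """Serialize per-resource metadata (here just the MIME type) as XML."""
        root = ET.Element("document")
        for href, mime in resources:
            ET.SubElement(root, "resource", href=href, type=mime)
        ET.ElementTree(root).write(path, encoding="utf-8",
                                   xml_declaration=True)

    write_manifest([("index.html", "text/html"),
                    ("images/logo.png", "image/png")],
                   "manifest.xml")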

Daniel

-- 
Daniel.Veillard@w3.org | W3C, INRIA Rhone-Alpes  | Today's Bookmarks :
Tel: +33 476 615 257  | 655, avenue de l'Europe | Linux XML libxml WWW
Fax: +33 476 615 207  | 38330 Montbonnot FRANCE | Gnome rpm2html rpmfind
 http://www.w3.org/People/all#veillard%40w3.org  | RPM badminton Kaffe


