Re: [gtkmm] Talk about a major rewrite



Paul Davis wrote:
[snip]
>> to write a custom gtk widget and then realize you have to implement
>> about 5 different versions with 7 different renderers to actually
>> use it throughout your gtk program, etc etc. Perhaps Gtk 3.0 will
>> surprise me


> this has not been my experience at all. i've written several custom
> widgets, and while it's painful, it's not because of what you
> describe. it's more the need to write boilerplate code that, as a C++
> programmer, i am used to having done for me by the compiler. writing
> custom widgets in gtkmm directly, for example, is an exercise in
> simplicity. or so i've found ...

Yes, but where can you use that widget? Only in a Container. You have to implement it again if you need it in a TreeView or Canvas. The way you draw it also has to be redone: twice for a Canvas, and again if you want it to be printed.

Furthermore, how are you implementing the contents of your widget? Perhaps you use a DrawingArea and draw each of the little pieces yourself; then you have to re-create the whole geometry layout, event handling and focus management systems to deal with your sub-widget objects. If you try to use full Widgets as your pieces, you find the Widget class is far too limited in its layout and rendering model. Note that this difficulty is why widgets such as TreeView and Canvas had to be created without support for embedding Widgets.
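To make that concrete, here is roughly what it takes in gtkmm 2 (a hand-rolled sketch; the class and its "pieces" are hypothetical and the API details are from memory). Every sub-piece needs its own layout, hit-testing and focus code, because none of Gtk's existing machinery applies below the Widget level:

#include <gtkmm.h>
#include <vector>

class PieceArea : public Gtk::DrawingArea
{
public:
  PieceArea()
  {
    set_flags(Gtk::CAN_FOCUS);                         // manual focus management
    add_events(Gdk::BUTTON_PRESS_MASK | Gdk::EXPOSURE_MASK);
    pieces_.push_back(Gdk::Rectangle(10, 10, 40, 20)); // manual "layout"
    pieces_.push_back(Gdk::Rectangle(60, 10, 40, 20));
  }

protected:
  virtual bool on_expose_event(GdkEventExpose*)
  {
    // Manual rendering of every sub-piece; no sub-widget draws itself.
    Glib::RefPtr<Gdk::Window> win = get_window();
    for (unsigned i = 0; i < pieces_.size(); ++i)
      win->draw_rectangle(get_style()->get_fg_gc(get_state()), false,
                          pieces_[i].get_x(), pieces_[i].get_y(),
                          pieces_[i].get_width(), pieces_[i].get_height());
    return true;
  }

  virtual bool on_button_press_event(GdkEventButton* ev)
  {
    // Manual hit-testing; a real Container would route events for us.
    for (unsigned i = 0; i < pieces_.size(); ++i)
      if (ev->x >= pieces_[i].get_x() &&
          ev->x <  pieces_[i].get_x() + pieces_[i].get_width() &&
          ev->y >= pieces_[i].get_y() &&
          ev->y <  pieces_[i].get_y() + pieces_[i].get_height())
        grab_focus();
    return true;
  }

private:
  std::vector<Gdk::Rectangle> pieces_;
};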

So you try recreating your custom widget as a Canvas Item, or as a group of stock Items, because the Canvas has the necessary support for transformation and depth stacking. But then you have to redo all the layout work done by Gtk Widgets such as boxes and viewports, and you can't use any of the Gtk Widgets such as checkboxes and entries. (Yes, I know there's a "Canvas::Widget" hack to plop a Gtk::Widget down on top of a Canvas; it doesn't support the Canvas::Item features.)
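The mismatch is easy to see in a rough libgnomecanvasmm sketch (API details from memory, so treat the specifics as approximate): a stock Item gets affine transforms and stacking, while the Canvas::Widget wrapper just positions an untransformed Gtk widget:

#include <gtkmm.h>
#include <libgnomecanvasmm.h>

int main(int argc, char* argv[])
{
  Gtk::Main kit(argc, argv);
  Gnome::Canvas::init();

  Gtk::Window window;
  Gnome::Canvas::Canvas canvas;
  window.add(canvas);

  // A stock item: scalable, rotatable, depth-stackable.
  Gnome::Canvas::Rect rect(*canvas.root(), 10, 10, 110, 60);
  rect.property_fill_color() = "steelblue";
  rect.affine_relative(Gnome::Art::AffineTrans::scaling(1.5));

  // The "Canvas::Widget" hack: embeds a live Gtk::Button, but the
  // affine and stacking features of Item do not apply to it.
  Gtk::Button button("I ignore affines");
  Gnome::Canvas::Widget wrapper(*canvas.root(), 10.0, 80.0, button);

  window.show_all();
  kit.run(window);
  return 0;
}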



>> with a complete redesign of the widget class hierarchy. But as that
>> is what it would take, a simple implementation patch to gtk won't
>> help.


> what i *hope* will happen by GTK+ 3 is that all widgets will render to
> a canvas. this will provide not only alpha transparency, but will
> unify the wonders of the canvas widget with the rest of the GTK+
> widget set - they are currently very incompatible. writing new canvas
> items is a breeze compared to a new widget, mostly because of the
> rendering model.

You read my mind. "Unify" is the key word, and the key design philosophy lacking in Gtk, not just for the rendering model but for flexibility. Essentially Gtk::Widget and Canvas::Item should be unified, along with TreeView::CellRenderer, GnomePrintContext and others I've overlooked. Reimplement the layout widgets to use the affine transformation model of a Canvas, and support non-rectangular widget areas and depth-composited Widgets. Ironically, after 3 List widgets, only the very first, Gtk::List, actually let you put widgets in a list! Why is Widget so bloated and inflexible that Gtk widget programmers don't use it for their new, more advanced widgets? It's especially frustrating since so much good work was done on the new TreeView, yet through no fault of theirs (I assume) they couldn't implement it simply as a container of widgets providing extra layout and interaction features; instead they again re-did this work themselves inside the widget bounds.
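As a concrete case of the non-unification (another hedged gtkmm 2 sketch; the vfunc signatures are from memory): to put the hypothetical "pieces" widget above into a TreeView, you rewrite its geometry and drawing a second time as a CellRenderer subclass, against a different API:

#include <gtkmm.h>

class PieceCellRenderer : public Gtk::CellRenderer
{
public:
  PieceCellRenderer()
  : Glib::ObjectBase(typeid(PieceCellRenderer)),
    Gtk::CellRenderer()
  {}

protected:
  virtual void get_size_vfunc(Gtk::Widget&, const Gdk::Rectangle*,
                              int* x_offset, int* y_offset,
                              int* width, int* height) const
  {
    // Second copy of the geometry logic the widget version already had.
    if (x_offset) *x_offset = 0;
    if (y_offset) *y_offset = 0;
    if (width)    *width    = 50;
    if (height)   *height   = 25;
  }

  virtual void render_vfunc(const Glib::RefPtr<Gdk::Drawable>& window,
                            Gtk::Widget& widget,
                            const Gdk::Rectangle&,
                            const Gdk::Rectangle& cell_area,
                            const Gdk::Rectangle&,
                            Gtk::CellRendererState)
  {
    // Second copy of the drawing logic, now against Gdk::Drawable.
    window->draw_rectangle(widget.get_style()->get_fg_gc(widget.get_state()),
                           false, cell_area.get_x(), cell_area.get_y(),
                           cell_area.get_width() - 1,
                           cell_area.get_height() - 1);
  }
};

And neither copy helps when printing, which goes through GnomePrintContext instead.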

I think part of the reason was the lack of support for multiple inheritance or interfaces in the Gtk C object system, so they had to dump all new features into the Widget class. Now that they have interfaces in Gtk 2.0, perhaps they can clean out the Widget class and make it lightweight and flexible, unify the rendering model, and support the transformation and layering features of Canvas throughout. If the drawing model can be made device- and resolution-independent, they can solve their printing-support problems as well.
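A small example of what interfaces already buy us on the C++ side (plain gtkmm 2; nothing hypothetical except the function name): Gtk::Entry implements the Gtk::Editable interface, so code can be written against the interface alone.

#include <gtkmm.h>

// Works for any implementor of the Editable interface, not just Entry.
void clear_text(Gtk::Editable& editable)
{
  editable.delete_text(0, -1);
}

int main(int argc, char* argv[])
{
  Gtk::Main kit(argc, argv);
  Gtk::Entry entry;
  entry.set_text("scratch");
  clear_text(entry);   // Entry passed via its Editable interface
  return 0;
}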



>> something better to emulate. I don't know. But I do know that there
>> is nothing fundamental about a graphical user interface that should
>> make it so much more difficult to implement than the other parts of
>> the program, and in 10 years we will be using the model that provides
>> this ease.


> i would offer one possible reason why it's so hard. the GUI is the one
> part of the program that interacts directly with the user in an
> intentionally open-ended and fluid way. the difference between
> writing a cmdline-driven program and a GUI equivalent (even just a
> wrapper for a cmdline program) reveals many conceptual differences
> that need to be handled by the programmer in some way, and that were
> just not there before. what if the user clicks on this then that
> instead of that then this? what if the user pastes into the text entry
> field? what if she wants to drag-n-drop a file? how to provide more
> information without more on-screen widgets? how to provide finer
> control over some parameter without a new widget? what about different
> mouse buttons? should i grab the pointer - an important new concept
> that, if done in the cmdline world (grabbing the keyboard driver and
> preventing any and all other interactions), would be quite unusual
> ... etc. etc.

> the event-driven nature of GUI toolkits requires a totally different
> program organization when compared with most other program designs. it
> means that the program doesn't control the program, so to speak. take
> a look at apache: although it's quite complex code, at any point in
> time, the program is 100% in charge of what is going on. with a GUI
> (even a curses-based one, to some extent) this is no longer true. the
> same comparison is true of linux - despite interrupts and other deeply
> weird h/w stuff, the kernel always controls "what happens next".

> i believe that this difference makes GUI programming conceptually
> different from many other kinds of programming. it may be that a
> certain toolkit design can help. i haven't seen one yet.
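That inversion is easy to see in code (a minimal gtkmm sketch; the handler name is just illustrative): once run() is called, the toolkit owns the flow of control, and application code executes only when the loop dispatches an event to a callback.

#include <gtkmm.h>
#include <iostream>

void on_clicked()
{
  // We never decide when this runs; the user and the main loop do.
  std::cout << "clicked" << std::endl;
}

int main(int argc, char* argv[])
{
  Gtk::Main kit(argc, argv);
  Gtk::Window window;
  Gtk::Button button("Click me");
  button.signal_clicked().connect(sigc::ptr_fun(&on_clicked));
  window.add(button);
  window.show_all();
  kit.run(window);   // control is handed to the toolkit here
  return 0;
}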

All good points, and I'm not at all trivializing the effort to design a good interface for human interaction. But I would call this interface design, and my problems seem to come during the more mundane implementation of the GUI design that I create on paper.

There may be higher level kits that can help the UI design phase someday. The more GUI programs I write and use, the more I see common patterns being used, regardless of the underlying objects and their features. Something that exposed user-level objects through a GUI using standard techniques could be developed, though I don't have the clarity yet to do it. Then we could concentrate more on our application-specific feature objects, and less on re-creating common object<->GUI interactions each time. The IBM book "Designing for the User with OVID: Bridging User Interface Design and Software Engineering" seems to be about such ideas, though I haven't read it yet.

--
Michael Babcock
Jim Henson's Creature Shop




