Re: [Gimp-developer] Shape layers for GIMP?
- From: "Guillermo Espertino (Gez)" <gespertino gmail com>
- To: gimp-developer-list gnome org
- Subject: Re: [Gimp-developer] Shape layers for GIMP?
- Date: Sun, 09 Sep 2012 23:57:00 -0300
On 09/09/12 18:22, Jeremy Morton wrote:
Yes, processors nowadays are very powerful and indeed applying a bunch
of effects onto a shape layer every time you make a change takes a
good amount of CPU power; but that's fundamentally different, because
at least there, there is a finite limit on the number of operations
that will need to be reapplied per change (i.e. if every single layer
effect is turned on, the CPU will have to re-apply every single layer
effect). However, the idea of being able to modify any change in the
undo buffer is much more difficult, because there is a potentially
infinite number of past changes. I could design a really complex
shape, apply 100 effects, then go away and do loads more complex work
with loads more complex effects, then design another really complex
shape, etc. By the time I want to go back 1000 moves (by the way,
actually finding the edit I want to modify would be rather difficult at
this point too!), the CPU has to apply 1000 processor-intensive effects.
That's WAY more work than even the worst-case scenario with something
like a shape layer. But it just gets worse and worse. What will the
performance be like if I want to modify something 10,000 moves back?
100,000? At some point this idea becomes unfeasible, it seems to me.
Modern CPUs can perform "quite a lot of operations without cumbersome
delay", but even the most powerful CPU is going to start to hurt at
some point, and in advanced graphics editing, that point may arrive
rather quickly.
Well, I guess that it all depends on how GEGL manages the nodetree.
AFAIK GEGL works internally as a nodetree. It's not a linear stack of
operations, so every branch of that nodetree will have its own history
of operations, and layers, text objects, etc. will be different branches.
So if you did 100000 operations in your document, probably only a small
percentage of those operations belong to the layer you're working on.
The rest can be cached, and that cache should survive until you do
something that affects those cached layers/elements.
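To make the idea concrete, here is a rough sketch in Python (entirely hypothetical, nothing to do with GEGL's actual API): each node caches the result of its own subtree, so an edit only forces re-rendering of the branch it touches while every other branch is served from cache.

```python
# Hypothetical sketch of per-branch caching in a node tree (not GEGL's
# actual API): each node caches its subtree's result, and editing one
# branch leaves the other branches' caches intact.

class Node:
    def __init__(self, op, *inputs):
        self.op = op            # function combining the input results
        self.inputs = inputs    # upstream nodes (the branches)
        self._cache = None      # cached result of this whole subtree

    def invalidate(self):
        # Drop the cache when this node's parameters change.
        self._cache = None

    def evaluate(self):
        # An untouched branch returns its cached result immediately,
        # so editing one layer never re-renders the others.
        if self._cache is None:
            self._cache = self.op(*(n.evaluate() for n in self.inputs))
        return self._cache

# Two independent layers composited together:
renders = []  # track which layers actually get re-rendered
layer_a = Node(lambda: renders.append("A") or "layer A")
layer_b = Node(lambda: renders.append("B") or "layer B")
comp = Node(lambda a, b: a + " + " + b, layer_a, layer_b)

comp.evaluate()            # first render: both branches computed
layer_a.invalidate()       # edit layer A...
comp.invalidate()          # ...and the composite that depends on it
comp.evaluate()            # layer A re-renders; layer B comes from cache
print(renders)             # → ['A', 'B', 'A']
```

Note that layer B is rendered exactly once even though the composite was evaluated twice; that is the "cache survives until something affects it" behaviour described above.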
Apart from that, 1000 move operations don't necessarily mean 1000 cached
bitmaps, one for every position. Nor do they mean re-running 1000 move
operations if you have to recalculate the tree. If you have a stack of
1000 move operations, the only two positions that matter are the
original and the final one. It's not necessary to recalculate all of
them if you don't have to go back to an intermediate position.
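That collapsing can be sketched in a few lines of hypothetical Python (the function name is made up for illustration): a run of consecutive translations folds into one net offset, so only the final position ever needs to be applied to the pixels.

```python
# Hypothetical sketch: 1000 consecutive move operations collapse into
# one net translation, so the tree only ever applies a single offset
# instead of replaying every intermediate position.

def collapse_moves(moves):
    """Fold a run of (dx, dy) translations into one net offset."""
    net_x = sum(dx for dx, _ in moves)
    net_y = sum(dy for _, dy in moves)
    return (net_x, net_y)

# A thousand tiny nudges...
moves = [(1, 0)] * 500 + [(0, 1)] * 500
print(collapse_moves(moves))  # → (500, 500)
```

The intermediate positions only matter if you actually step back to one of them; otherwise the net offset is all the renderer needs.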
I don't know the internals of GEGL and I'm sure that creating a
high-performance non-destructive workflow is a challenging task, but I
can think of several tricks that would work around the complexity of a
node tree. It can be optimized if you can avoid redundant operations.
It's possible to take "snapshots" of everything but the area you're
working on, etc.
I know nothing about programming, but I have some experience with nodal
compositing programs, and that's how they work. You don't add 10000 move
nodes if you have to tweak the position of an element 10000 times. You
just tweak the position and the node stores it. And I guess that the
undo stack of that operation only stores the 10000 changes in the
coordinates.
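That parameter-tweaking behaviour can be sketched like this (hypothetical Python; the class and method names are invented for illustration, this is not how any particular compositor implements it): the node keeps a single current position, and the undo stack records only the previous coordinate pairs, not extra nodes.

```python
# Hypothetical sketch: tweaking a node's position does not add new
# nodes to the tree. The node keeps one current value, and the undo
# stack records only the previous coordinates, so 10000 tweaks cost
# 10000 pairs of numbers rather than 10000 extra move nodes.

class TransformNode:
    def __init__(self, x=0, y=0):
        self.position = (x, y)
        self.undo_stack = []    # past positions, not past nodes

    def set_position(self, x, y):
        self.undo_stack.append(self.position)
        self.position = (x, y)

    def undo(self):
        if self.undo_stack:
            self.position = self.undo_stack.pop()

node = TransformNode()
for i in range(10000):          # 10000 tweaks, still a single node
    node.set_position(i, i)
print(node.position)            # → (9999, 9999)
node.undo()
print(node.position)            # → (9998, 9998)
```

The tree itself never grows: only the node's parameter history does, and that history is cheap to store.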
It can be done. Software packages like Nuke, or even Blender, which is
far simpler, do something similar, and for animation at that!
This is a good example of a very complex tree in action:
https://www.youtube.com/watch?v=POpi-Jt_EaQ
Gez.