Re: inline pixbufs

Darin Adler <> writes: 
> I agree that we can choose to use your new format instead of what Bonobo
> already does, but I wish you'd take more responsibility for making the two
> match since you introduced the new format. Once Bonobo 1.0 is released we'll
> have a binary compatibility issue if we want to change the format.

I'm happy to cut-and-paste to fix the Bonobo lib if the Bonobo
maintainers want that (it's a 5-minute task).
> A problem with putting row stride in is this: There may be restrictions
> about which row strides can be efficiently supported on different platforms.
> If the code that reads the image uses the row stride stored in the flattened
> form without taking into account what's supported on the platform, you might
> get a slow-to-draw image (or perhaps even an unusable one?). So in the
> general case the reading code needs to select an appropriate row stride.

You can't get an unusable image AFAIK. You could get a slow one, yes;
I would suggest that make-inline-pixbuf always output an image aligned
for the platform it's compiled on, and that
gdk_pixbuf_new_from_inline() pick an optimal rowstride iff copy_pixels
is TRUE.
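
To make "aligned for the platform" concrete, here's the sort of thing I mean, as a sketch (the helper name and the 4-byte boundary are my own illustration, not anything in gdk-pixbuf today):

```c
#include <stdint.h>

/* Hypothetical helper: round a raw rowstride up to a 4-byte boundary,
   which is the kind of "optimal" stride most platforms want. The name
   and the alignment constant are illustrative only. */
static int
aligned_rowstride (int width, int n_channels)
{
  int stride = width * n_channels;

  /* Round up to the next multiple of 4. */
  return (stride + 3) & ~3;
}
```

make-inline-pixbuf would bake a stride like this into the data; gdk_pixbuf_new_from_inline() could recompute it when copy_pixels is TRUE, since in that case it owns the destination buffer anyway.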
> A separate issue: With redundant data in the flattened form, you end up with
> a format that is much more complex for reading flattened data from an
> untrusted place like a network stream. Unlike the code in Bonobo, which does
> reality checks on all the parameters and returns an error if there's
> anything wrong, read_raw_inline will construct an incorrect pixbuf if passed
> bad data, which can result in unpredictable behavior later on when GdkPixbuf
> routines are called on an invalid pixbuf. For example, has_alpha can be
> incompatible with n_channels, bits_per_sample is ignored when computing the
> size of the buffer, width and height might not match the actual size of the
> data, width might not match rowstride.
> In the Bonobo code, we chose a simple subset of GdkPixbuf, which is harder
> to write, but much easier to read with complete error checking.

But then you can't inline arbitrary pixbufs; you only support 24-bit RGB
ones. We could get the same security effect by simply ignoring all
those fields and forcing 24-bit RGB.

So, concretely: I will edit the current code to be sure the format in
the byte stream is one that GdkPixbuf actually supports, which right
now means it has to be 24-bit RGB with an optional alpha channel.
An unsupported format will result in a NULL return value.
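
Roughly, the checks I have in mind look like this (field names are illustrative, not the actual inline header layout):

```c
#include <stdbool.h>

/* Sketch of the sanity checking described above: accept only 8-bit RGB
   with optional alpha, and insist the geometry fields agree with each
   other and with the declared data length. A caller length of -1 means
   "don't check the total size". */
static bool
inline_header_valid (int n_channels, bool has_alpha, int bits_per_sample,
                     int width, int height, int rowstride, long length)
{
  if (bits_per_sample != 8)
    return false;                            /* only 8 bits/sample supported */
  if (n_channels != (has_alpha ? 4 : 3))
    return false;                            /* has_alpha must match channels */
  if (width <= 0 || height <= 0 || rowstride < width * n_channels)
    return false;                            /* inconsistent geometry */
  if (length >= 0 && (long) rowstride * height > length)
    return false;                            /* not enough pixel data */
  return true;
}
```

Anything that fails a check like this would give the NULL return.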

I'll also add a "length" arg to gdk_pixbuf_new_from_inline() which can
be -1 to mean "just trust the length in the inline data" and can be
passed in from the CORBA byte array in the Bonobo case.
Will that work?
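
The length handling I'm proposing would reconcile the two lengths something like this (helper name is mine, just to show the rule):

```c
/* Sketch: a caller length of -1 means "trust the length in the inline
   data"; otherwise the header's length must fit inside what the caller
   actually handed us (e.g. a CORBA byte sequence), or we reject it. */
static long
resolve_inline_length (long caller_length, long header_length)
{
  if (caller_length < 0)
    return header_length;       /* trusted, compiled-in data */
  if (header_length > caller_length)
    return -1;                  /* header claims more than we have: reject */
  return header_length;
}
```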
> > +void
> > +output_int (FILE *outfile, guint32 num, const char *comment)
> > +{
> > +  guchar bytes[4];
> > +
> > +  g_assert (sizeof (bytes) == sizeof (num));
> This may be an overzealous assert. On some platforms, guint32 can be bigger
> than 32 bits and most glib/gtk code will still work. This assert guarantees
> a core dump on those systems. But if the number fits in 32 bits, the
> function will work just fine, and sizeof (num) is beside the point.

AFAIK guint32 is guaranteed to be 32 bits wide, though the assert is
almost certainly useless anyway.
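
For what it's worth, the assert could be dropped entirely by serializing with explicit shifts instead of relying on sizeof; a sketch of the core (using uint32_t where the real code has guint32):

```c
#include <stdint.h>

/* Shift-and-mask the low 32 bits into a fixed big-endian byte order.
   Written this way, the host width of the integer type never matters,
   so there is nothing for the assert to check. */
static void
pack_be32 (uint32_t num, unsigned char bytes[4])
{
  bytes[0] = (num >> 24) & 0xff;
  bytes[1] = (num >> 16) & 0xff;
  bytes[2] = (num >> 8)  & 0xff;
  bytes[3] = num & 0xff;
}
```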

