Re: 24-bit-per-pixel limit to Gdk RGB ?



On Tuesday, February 18, 2003, at 02:49 PM, Peter Finderup Lund wrote:

On Tue, 18 Feb 2003, individual wrote:

Ok, last attempt: I made a 1000 pixel wide gradient going from 255 red
to 254 red. In the colour selection panel, I can clearly see the
difference between the two colours. When I draw the gradient, there is
no dividing line. HA! I have therefore proved that The Gimp and my X
server are capable of displaying colour at more than 8 bits per
channel. Or have I?

Did you do this experiment with the gimp?

Yes

 In that case you proved that
the gimp does use dithering to avoid Mach banding.

Ah. I see.
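
To convince myself, I sketched the effect (hypothetical code, nothing like GIMP's actual algorithm): quantizing a red 255-to-254 gradient with plain rounding gives one hard dividing line, while adding +/-0.5 of random noise before rounding scatters 254s and 255s instead.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Quantize an "ideal" floating-point channel ramp to 8 bits, optionally
// with crude random dithering (+/- half a quantization step of noise).
std::vector<uint8_t> quantize(const std::vector<double>& ideal, bool dither) {
    std::vector<uint8_t> out;
    out.reserve(ideal.size());
    for (double v : ideal) {
        double noise = dither ? (std::rand() / (double)RAND_MAX) - 0.5 : 0.0;
        double q = v + noise;
        if (q < 0.0) q = 0.0;
        if (q > 255.0) q = 255.0;
        out.push_back(static_cast<uint8_t>(q + 0.5));  // round to nearest
    }
    return out;
}
```

Counting value changes along a 1000-pixel gradient, the undithered version has exactly one (the dividing line I was looking for), while the dithered one has many tiny ones, which the eye averages into a smooth ramp.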


Almost no X servers on Earth will currently give you more than 8 bits per
colour channel.  I don't know which X server you are using and what
graphics card you are using.

XFree86 running inside XDarwin on Mac OS X 10.2.3. Here's my card info:

8 MB video RAM
640x480 32-bit
1024x768 32-bit

and, for what it's worth (pardon the lack of sophistication of this bit), my display menu allows me to choose from "thousands" and "millions".


A few very new consumer cards will give you
10 bits per colour channel (or is it only 10 bits for the green channel?)
and I think XFree86 can be hacked/is being hacked to support this.  It
does not, however, have the infrastructure in place for more than 32 bits for the colour channels + alpha combined. If you need that, you will have to get special equipment, probably from Sun or SGI (or Evans & Sutherland,
if they still exist -- btw, look up their names in CiteSeer).


No, I'm in no need of special equipment, thanks for the info though.

There is no point in putting support for dithering 48-bit to 24-bit pixels into GdkRgb since it is only needed in very specialized situations. And
if you really need it that much you can probably afford to implement it
yourself.

So, do you have a normal photographic image in 48-bit colour that you want
to display or do you have some specialized purpose?


Yes! See, I want to add display functionality to a scientific C++ class I made, which is supposed to make dealing with high-quality, high-resolution scientific data/images really easy. I use 48 bpp for the data because the image doubles as data storage that can be read back in at a later time to continue the procedure (images are written out as PNG files, with lossless compression). I felt that dithering the image data in order to display it would be non-ideal for this intended use.



In the first case, you can throw away (more than) half the bits without
anyone being the wiser.  In the second, you will have to be creative.
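
"Throwing away half the bits" is literally keeping the high byte of each 16-bit sample. A hypothetical helper (not anything in GdkRgb):

```cpp
#include <cstdint>

// Reduce a 16-bit channel sample to 8 bits by keeping the high byte.
inline uint8_t to8(uint16_t sample) {
    return static_cast<uint8_t>(sample >> 8);
}

// Pack one 48-bit pixel (three 16-bit channels) into a 24-bit 0xRRGGBB value.
inline uint32_t pack24(uint16_t r, uint16_t g, uint16_t b) {
    return (uint32_t(to8(r)) << 16) | (uint32_t(to8(g)) << 8) | to8(b);
}
```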

google for something like error diffusion dithering + some more of Raph
Levien's work.



Thanks!

Paul



