Re: [Gimp-developer] How precise is Gimp 32-bit floating point compared to 16-bit integer?

Daniel and Simon, thanks for answering my questions about Gimp precision!

On 12/16/2013 12:19 PM, Daniel Sabo wrote:
> 32bit floats have a precision of 24 bits. The exact size of the ULP
> (unit in the last place) in the range [0.0, 1.0] is more complex,
> because the exponent gives you more precision as you approach 0. This
> gets even more complicated because actually doing any math most
> likely introduces some rounding error; e.g. the gamma conversions are
> not precise to 24 bits, but they are precise to more than 16 bits.

Gamma conversions are the conversions to and from the sRGB TRC, yes?
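Out of curiosity, I sketched a quick numerical check of that claim in Python. This uses the textbook sRGB TRC formulas, not babl's actual code, and only rounds through 32-bit storage between steps rather than doing every operation in single precision, so it's just a sanity check, not an error analysis:

```python
import struct

def f32(x):
    # Round a Python float (a double) to the nearest 32-bit float.
    return struct.unpack('<f', struct.pack('<f', x))[0]

def srgb_encode(v):
    # sRGB TRC, linear -> encoded (textbook constants).
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def srgb_decode(v):
    # sRGB TRC, encoded -> linear.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Round-trip every 16-bit code value through the TRC, storing each
# intermediate result as a 32-bit float, and track the worst error
# measured in 16-bit quantization steps (units of 1/65535).
worst = 0.0
for n in range(65536):
    v = n / 65535
    rt = f32(srgb_decode(f32(srgb_encode(f32(v)))))
    worst = max(worst, abs(rt - v) * 65535)
print(worst)  # stays well below 0.5, i.e. more than 16-bit accurate
```

So at least for this simplified round trip, the error never comes close to half a 16-bit step, which matches Daniel's "more than 16 bits but less than 24" description.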

> We have never done an error analysis of the entire gimp pipeline,

That would be very interesting. I've seen a bit of "if less than some value, round to some other value" code in babl and gegl and wondered how it might affect processing accuracy. How would you do an error analysis of the pipeline?

> 16bits is already beyond human perception (in my unscientific opinion).

For LDR photographs, 16bits is plenty to avoid the appearance of posterization, even when using linear gamma image editing. Leastways, I've never seen banding at 16bits in linear gamma.

What about scientific applications? I suppose that's where 32-bit integer comes in, if someone really needs the extra precision. Or HDR applications? How precise is 32-bit floating point OpenEXR? It's used to store HDR information, so there must be some cap on the precision of the stored values, yes? No? I'm being lazy; I can look that up.

> The real value of using floating point is that it can hold
> out of gamut values.

On 12/16/2013 12:30 PM, Simon Budig wrote:
> Elle Stone (ellestone ninedegreesbelow com) wrote:
> > My question is, for Gimp from git, is 32-bit floating point more
> > precise than 16-bit integer?
>
> Yes, at least for the range from 0.0 to 1.0.

I was curious how large RGB values can get, without straying outside the realm of real colors, when converting from a larger to a smaller RGB color space and thereby producing out of gamut values. So I used transicc to see what the equivalent sRGB values are when converting the reddest red, greenest green, and so on from various larger color spaces to sRGB. Here are some sample values:

Most saturated:         Red      Green    Blue
AllColors/ACES Red      2.4601  -0.2765  -0.0103
BetaRGB Red             1.6142  -0.0758  -0.0211
BetaRGB Green          -0.5470   1.1023  -0.0823
BetaRGB Blue           -0.0672  -0.0265   1.1035
CIE-RGB Red             1.1944  -0.1329  -0.0062
CIE-RGB Green          -0.3139   1.2592  -0.1469
CIE-RGB Blue            0.1195  -0.1263   1.1531
WideGamut Red           1.8280  -0.2054  -0.0077
WideGamut Green        -0.8815   1.2914  -0.0868
WideGamut Blue          0.0535  -0.0859   1.0945
Rimm/ProPhoto Red       2.0354  -0.2288  -0.0085
(AllColors/ACES and Rimm/ProPhoto bluest blues and greenest greens are imaginary colors.)

So real colors can easily fall outside the range 0.0 to 1.0 if they are converted to sRGB. What happens to the precision when dealing with RGB values up around 2.5 or down around -0.9?
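To put a rough number on that (assuming the 32-bit float is an IEEE 754 single, with 1 sign bit, 8 exponent bits and 23 mantissa bits), here's a little Python sketch that measures the step size, i.e. one unit in the last place, at different magnitudes:

```python
import struct

def f32_ulp(x):
    # Spacing between the 32-bit float nearest to |x| and the next
    # representable 32-bit float up, i.e. one unit in the last place.
    (bits,) = struct.unpack('<I', struct.pack('<f', abs(x)))
    (val,) = struct.unpack('<f', struct.pack('<f', abs(x)))
    (nxt,) = struct.unpack('<f', struct.pack('<I', bits + 1))
    return nxt - val

for x in (-0.9, 1.0, 2.5):
    print(x, f32_ulp(x))
# The step size doubles at each power of two, but even at 2.5 it is
# about 2.4e-7, still two orders of magnitude finer than one 16-bit
# step (1/65535, about 1.5e-5).
```

So values around 2.5 or -0.9 lose a little absolute precision compared to values near zero, but they are still represented far more finely than 16-bit integer could manage anywhere.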

> > If so, by how much, and does it depend
> > on the machine and/or things like the math instructions of the
> > processor (whatever that means)? And if not, how much less precise
> > is it?

> AFAIK it does not depend on the processor: floating point numbers are
> defined in IEEE 754, and to my knowledge that is what all processors
> use.
>
> For 32 bit floats there are 23 bits in the mantissa, so in the range
> from 0.0 to 1.0 we easily have more precision than with 16 bit ints.

> > To restate the question, in decimal notation, 1 divided by 65535 is
> > 0.00001525878906250000. So 16-bit integer precision requires 16
> > decimal places (lop off the four trailing zeros)

> you're barking up the wrong tree here. The length of the decimal
> expansion is not necessarily helpful, because most of those digits
> just represent rounding error.
>
> (btw. - you divided by 65536)
>
> 1.0 / 65535 = 0.000015259021896696422
>
> 1.0 / 0.00001525913 = 65534.53... --> gets rounded to 65535
>
> 1.0 / 0.00001525891 = 65535.48... --> gets rounded to 65535

Thanks! That makes things clearer.

> So with 11 decimal digits we easily have all the precision we need to
> represent the fractions for a 16bit int.
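That claim is easy to brute-force in Python (working in ordinary double precision): round each 16-bit fraction to 11 decimal digits, scale back up, and check that the original integer is still recovered.

```python
# Round every fraction n/65535 to 11 decimal digits, scale back up,
# and confirm the nearest integer is exactly the original n.
ok = all(round(round(n / 65535, 11) * 65535) == n for n in range(65536))
print(ok)  # True
```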

> > How many decimal places does Gimp 32-bit floating point actually provide?

> It has a 23 bit mantissa, an 8 bit exponent and 1 sign bit.
>
> For decimal notation it depends a lot on the range: for numbers with
> a bigger magnitude (exponent > 0) there are fewer digits after the
> decimal point.
>
> BTW: you can view 16 bit ints "somewhat like a float with no sign
> bit, no exponent bits and a 16 bit mantissa", i.e. the sign is always
> positive and the exponent is always 0. That makes it clear that a 32
> bit float completely encompasses the 16 bit integer values.
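That containment is also easy to verify directly: 65535 needs only 16 significant bits, which fits comfortably in the 24-bit effective significand, so every 16-bit integer survives a round trip through genuine 32-bit float storage exactly (sketched here with struct):

```python
import struct

def f32(x):
    # Round a Python float (a double) through 32-bit float storage.
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Every 16-bit integer fits in the 24-bit significand, so the round
# trip through a 32-bit float is exact for all 65536 values.
exact = all(f32(float(n)) == n for n in range(65536))
print(exact)  # True
```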


