# [Gimp-developer] How precise is Gimp 32-bit floating point compared to 16-bit integer?

• From: Elle Stone <ellestone ninedegreesbelow com>
• To: Gimp-developer <gimp-developer-list gnome org>
• Subject: [Gimp-developer] How precise is Gimp 32-bit floating point compared to 16-bit integer?
• Date: Mon, 16 Dec 2013 11:59:20 -0500

To state the obvious, 16-bit integer offers more precision than 8-bit integer:
* There are 255 tonal steps from 0 to 255 for 8-bit integer precision.
* There are 65535 tonal steps from 0 to 65535 for 16-bit integer precision.
* 65535 steps divided by 255 steps is 257. So for every tonal step in an 8-bit image there are 257 steps in a 16-bit image.
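The step-count arithmetic above is easy to sanity-check in plain Python (an editorial sketch, not GIMP code):

```python
# Tonal steps between the minimum and maximum code values.
steps_8bit = 255     # 8-bit integer: code values 0..255
steps_16bit = 65535  # 16-bit integer: code values 0..65535

# Steps available in a 16-bit image per step in an 8-bit image.
ratio = steps_16bit / steps_8bit
print(ratio)  # 257.0 -- the division is exact
```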
I've read that 16-bit integer is more precise than 16-bit floating point, and likewise that 32-bit integer is more precise than 32-bit floating point. That makes sense, because a floating-point format has to share its available precision between the numbers on both sides of the decimal place, spending some of its bits on the exponent rather than on the significand.
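One way to see why an integer format beats a float of the same width: the Python standard library can round-trip values through IEEE 754 binary16 ("half" float), which keeps only an 11-bit significand. A generic illustration, not GIMP's code:

```python
import struct

def to_half(x):
    """Round-trip x through IEEE 754 binary16 ("half" precision)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Integers are exact only up to 2**11 = 2048; above that the format
# can no longer count by 1, so most 16-bit code values are unreachable.
print(to_half(2048.0))  # 2048.0 -- still exact
print(to_half(2049.0))  # 2048.0 -- rounded, one 16-bit step is lost
```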
My question is, for Gimp from git, is 32-bit floating point more precise than 16-bit integer? If so, by how much, and does it depend on the machine and/or things like the math instructions of the processor (whatever that means)? And if not, how much less precise is it?
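For the 32-bit case, the format itself can be checked directly, independent of GIMP: IEEE 754 binary32 carries a 24-bit significand, so every 16-bit code value is exactly representable, and the result does not depend on the machine (any IEEE 754 hardware behaves identically). A standard-library sketch:

```python
import struct

def to_f32(x):
    """Round-trip x through IEEE 754 binary32 (single precision)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# All 65536 16-bit code values survive the trip unchanged, because a
# binary32 significand holds 24 bits -- more than the 16 required.
assert all(to_f32(float(i)) == float(i) for i in range(65536))

# Normalized to the 0.0..1.0 range, the gap between adjacent binary32
# values just below 1.0 is 2**-24 -- about 256 times finer than one
# 16-bit step of 1/65535.
print(2 ** -24)               # ≈ 5.96e-08
print((1 / 65535) / 2 ** -24) # ≈ 256 binary32 ulps per 16-bit step
```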
To restate the question in decimal notation: 1 divided by 65535 is approximately 0.0000152590 (1 divided by 65536 is exactly 0.00001525878906250000). So 16-bit integer precision requires about 16 decimal places (lop off the four trailing zeros) in floating point to express the floating point equivalent of 1 16-bit integer tonal step, yes? no?
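What actually limits the float is binary significand bits, not decimal places. Storing 1/65535 as binary32 changes it only around the fifteenth decimal place, far below one 16-bit step (again a standard-library sketch, not GIMP code):

```python
import struct

def to_f32(x):
    """Round-trip x through IEEE 754 binary32 (single precision)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

step64 = 1 / 65535       # the step as a 64-bit double
step32 = to_f32(step64)  # nearest 32-bit float to that value

print(f"{step64:.20f}")  # the two decimal expansions agree to
print(f"{step32:.20f}")  # roughly fourteen decimal places
print(abs(step32 - step64))  # rounding error, about 3.6e-15
```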
The Gimp eyedropper displays 6 decimal places for RGB values. 0.00001525878906250000 rounded to 6 places is 0.000015.

0.000015 times 65535 is 0.983025.
0.000016 times 65535 is 1.04856.

How many decimal places does Gimp 32-bit floating point actually provide?

Elle
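On the closing question: assuming GIMP's 32-bit float mode is IEEE 754 binary32 (the standard choice), the format carries about 24·log10(2) ≈ 7.2 significant decimal digits, so a 6-decimal eyedropper readout is a display rounding rather than the stored precision. A sketch:

```python
import math
import struct

def to_f32(x):
    """Round-trip x through IEEE 754 binary32 (single precision)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Significant decimal digits carried by a 24-bit significand.
print(24 * math.log10(2))  # ≈ 7.22

# A 6-decimal display hides digits the float still stores:
v = to_f32(1 / 65535)
print(f"{v:.6f}")   # 0.000015 -- what a 6-decimal readout shows
print(f"{v:.10f}")  # more digits of the same stored value
```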
