compatibility problem between 'double' and 'long double' versions | was: follow up: Re: Re: 'testing' in gnumeric

hello @all,

have an addition to the question:

repeated:

storing values in files with more decimal digits than the originating fp-format actually carries leads to compatibility problems with software that uses a different precision for values. one of the simplest cases:
the decimal value
'0.1' is represented by the 'nearest' double, which cannot be exact because of the base-2 vs. base-10 radix mismatch and has the real decimal value of
'0.10000000000000000555~'. this value is stored in e.g. *.ods files as
'0.10000000000000001'          (rounded to 17 digits, while doubles are 'precise' only to the 16th digit). opening the file with a program using doubles reads
'0.10000000000000001',         finds the 'nearest double'
'0.10000000000000000555~'      and is fine ... while an extended-precision version (e.g. with 80-bit 'long doubles') reads
'0.10000000000000001',         finds the 'nearest long double'
'0.100000000000000010003120~'  and continues calculating with a !clearly wrong! value ... the deviation is minimal, but it can be done better!?!?
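
to illustrate, a small stand-alone C sketch of exactly this round trip (plain libc, nothing gnumeric-specific, and it assumes x86-style 80-bit long doubles):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double d = 0.1;
    char buf[64];

    /* write with 17 significant digits, as the file format does */
    snprintf(buf, sizeof buf, "%.17g", d);   /* -> "0.10000000000000001" */

    double d2      = strtod(buf, NULL);      /* nearest double: identical to d */
    long double ld = strtold(buf, NULL);     /* nearest long double to the 17-digit string */

    printf("stored      : %s\n", buf);
    printf("double      : round-trips %s\n", d2 == d ? "exactly" : "NOT");
    printf("long double : %.24Lg\n", ld);    /* ~0.10000000000000001000312 */
    printf("intended    : %.24Lg\n", 0.1L);  /* ~0.100000000000000000001355 */
    return 0;
}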

new:

that reminded me of Ulf (ulfjack) Adams, who announced a solution for such problems (for printouts?). in his 'ryu' project 'https://github.com/ulfjack/ryu' he comments:
'## Ryu
Ryu generates the shortest decimal representation of a floating point number
that maintains round-trip safety. That is, a correct parser can recover the
exact original number. For example, consider the binary 32-bit floating point
number `00111110100110011001100110011010`. The stored value is exactly
`0.300000011920928955078125`. However, this floating point number is also
the closest number to the decimal number `0.3`, so that is what Ryu
outputs.'
ryu is fast, too, but maybe 'fast' compared to other algorithms that compute the 'shortest' representation is not the same as fast compared to computing a fixed-length string?
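
a first test wouldn't even need ryu: the same 'shortest round-trip' idea can be done slowly but portably with plain libc, just trying increasing precisions until the string parses back exactly:

#include <stdio.h>
#include <stdlib.h>

/* emit the shortest decimal string that parses back to the exact double */
static void shortest_roundtrip(double d, char *buf, size_t len)
{
    for (int prec = 1; prec <= 17; prec++) {  /* 17 significant digits always suffice */
        snprintf(buf, len, "%.*g", prec, d);
        if (strtod(buf, NULL) == d)
            return;                           /* shortest string surviving the round trip */
    }
}

shortest_roundtrip(0.1, ...) gives '0.1', which a long double build then parses to the value nearest to decimal 0.1 - exactly what we want. (finite values only; NaN/inf would need extra handling.)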

IMHO it would be good if someone with the skills and experience would have a look into this ...

if gnumeric or other formats (*.ods?) require a fixed number of digits, it would be ok to pad with zeroes ... IMHO.
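
padding shouldn't hurt: trailing zeroes don't change the decimal value, so the parsed result stays the same. a quick libc check (the '!=' line again assumes 80-bit long doubles):

#include <assert.h>
#include <stdlib.h>

int main(void)
{
    /* trailing zeroes don't change the decimal value, so parsing is unaffected ... */
    assert(strtod ("0.1", NULL) == strtod ("0.10000000000000000", NULL));
    assert(strtold("0.1", NULL) == strtold("0.10000000000000000", NULL));
    /* ... while the rounded 17-digit form only agrees at double precision */
    assert(strtod ("0.1", NULL) == strtod ("0.10000000000000001", NULL));
    assert(strtold("0.1", NULL) != strtold("0.10000000000000001", NULL));
    return 0;
}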

best regards,



b.

---

hello @all,

ok, one problem identified, which i had seen before: a 'long double' version reads in nonsense for fractional values stored by a 'double' version,

e.g. '=prob({0,1,2,3},{0.10000000000000000555,0.4000000000000000222,0.2000000000000000111,0.2999999999999999889},2)' instead of '=prob({0,1,2,3},{0.1,0.4,0.2,0.3},2)' for cell 'B75' of 'statfuns.xls' ...

a difficult point, affecting every use of the long double version, thus one i'd like to solve.

which code parts write to and read from files, and which perform the conversions for that? i think reading will use some 'strtod' variant and writing some 'to_string' variant?

(i already looked up that it writes '0.10000000000000001' instead of '0.1', pretending a 17-digit precision which 64-bit doubles don't have. (i know! they have 'some significance' in the 17th digit in some ranges, but 'precision' in the sense that you can count ~1, ~2, ~3, ~4, ~5, ~6, ~7 and calculate 'one by one' ... that they don't have anywhere, there are always some values missing!))
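
i haven't found the actual call sites yet, so the following is only my guess at what the conversion pair in a long double build roughly looks like - the names 'gnum_float', 'read_number' and 'write_number' are made up, only the libc calls are real:

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

typedef long double gnum_float;   /* hypothetical: the build-time number type */

static gnum_float read_number(const char *text)
{
    return strtold(text, NULL);   /* long double member of the 'strtod' family */
}

static void write_number(gnum_float x, char *buf, size_t len)
{
    /* LDBL_DIG + 3 (= 21 on x86) digits guarantee a long double round trip,
       but over-describe a value that originally came from a 64-bit double */
    snprintf(buf, len, "%.*Lg", LDBL_DIG + 3, x);
}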

thus i'd like to try to store fewer - but then correct - digits,

that will introduce other problems, i know! i'll have to look at which ones and how much impact, but pls. help and let me give it a try ;-)

best regards,



b.

---
 
Sent: Wednesday, 22 September 2021 at 17:01
From: "Morten Welinder" <mortenw gnome org>
To: "newbie nullzwei" <newbie-02 gmx de>
Subject: Re: 'testing' in gnumeric | was: Re: Re: Re: where can / must one activate 'LONG DOUBLES'? - works to some extent, feedback and additional questions
I don't see an attachment, but when you load statfuns.xls you will see
the previously calculated values that are stored in the file. Press
F9 to force recalculation, then look for problems.

M.

