hello @all,
IMHO it shouldn't be the case that the result of '=log( 8, 3 )' is off by two ULP.
( It is calculated as
1.892789260714371923910E00, while '3' re-powered with that value yields only
7.999999999999996447286E00. Even 3 raised to the next representable double,
1.892789260714372145955E00, does not reach 8.0; it takes the 'next-next' representable double,
1.892789260714372368000E00, as exponent to get 3 powered back up to 8.0. )
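( For anyone who wants to reproduce the numbers above: a minimal standalone sketch, using plain log / pow / nextafter from libm instead of the gnm_* wrappers - the exact digits printed of course depend on the libm in use: )

#include <float.h>
#include <math.h>
#include <stdio.h>

int main (void)
{
	double res = log (8.0) / log (3.0);      /* LOG(8,3) computed as ln(8)/ln(3) */
	double up1 = nextafter (res, DBL_MAX);   /* next representable double */
	double up2 = nextafter (up1, DBL_MAX);   /* and the one after that */

	printf ("res: %.21e  3^res: %.21e\n", res, pow (3.0, res));
	printf ("up1: %.21e  3^up1: %.21e\n", up1, pow (3.0, up1));
	printf ("up2: %.21e  3^up2: %.21e\n", up2, pow (3.0, up2));
	return 0;
}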
Pls. check and correct me if I'm wrong ...
( The above may be one of few extreme cases, but there are more where Gnumeric's calculation is off by 1 ULP, e.g. log( 243, 3 ) results in '4.999999999999999...' even though the correct result, '5.0', is a representable double value. )
( The imprecision is 'FP-typical' and - IMHO - results from calculating log( x, y ) as 'ln( x ) / ln( y )': ln( 8 ) and ln( 243 ) are undershot, ln( 3 ) is overshot, so the quotients are undershot. )
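( A quick way to see the second case, again with plain libm calls; on many libms this prints a value just below 5.0 rather than 5.0 exactly, though the exact behaviour depends on the library: )

#include <math.h>
#include <stdio.h>

int main (void)
{
	/* LOG(243,3) computed the same way as in Gnumeric: ln(243)/ln(3) */
	double res = log (243.0) / log (3.0);
	printf ("log(243,3) = %.21e  (5.0 is exactly representable)\n", res);
	printf ("equal to 5.0? %s\n", res == 5.0 ? "yes" : "no");
	return 0;
}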
We can't fix cases where we'd need additional values between representable doubles, but we can try to correct cases that are unnecessarily far off.
As long as we don't find a better solution / calculation / library, we can do the following in the Gnumeric calculation:
// edit b. - TESTING - 2022-07-03: correcting weak logs,
{
	res = gnm_log (t) / gnm_log (base);
	/* try up to two ULPs upward: keep the step if re-powering gets closer to t */
	if (gnm_abs (gnm_pow (base, nextafter (res, GNM_MAX)) - t) < gnm_abs (gnm_pow (base, res) - t))
		res = nextafter (res, GNM_MAX);
	if (gnm_abs (gnm_pow (base, nextafter (res, GNM_MAX)) - t) < gnm_abs (gnm_pow (base, res) - t))
		res = nextafter (res, GNM_MAX);
	/* same for up to two ULPs downward */
	if (gnm_abs (gnm_pow (base, nextafter (res, -GNM_MAX)) - t) < gnm_abs (gnm_pow (base, res) - t))
		res = nextafter (res, -GNM_MAX);
	if (gnm_abs (gnm_pow (base, nextafter (res, -GNM_MAX)) - t) < gnm_abs (gnm_pow (base, res) - t))
		res = nextafter (res, -GNM_MAX);
}
// edit b. - end edit,
It adds some overhead ... but IMHO it's tolerable; there is no visible footprint in the timing of the Gnumeric test suite.
With a simple test field of the integers 2 .. 20 as base and argument, and Gnumeric's 'standard' double build, we have a problem quota of about 61% where re-powering is imprecise, and can 'only' reduce that to about 48% with the above 'afterburner'. That looks weak at first glance, but it eliminates all cases with a 'big' deviation where a better double value is available, so it is an improvement: from 15.7899 average / 15.2028 minimum 'quality' to 15.8491 average / 15.4337 minimum. ( 'Quality' here is roughly the number of correct digits in the re-powered value - I was too lazy to calculate 361 reference results with wolframalpha - calculated, as in other tests, by something like '-log10( abs( result - reference ) / abs( reference ) )'. )
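( For reference, that 'quality' figure is just the relative error turned into 'correct decimal digits'; as a tiny helper - the name is mine, not anything from Gnumeric: )

#include <math.h>

/* ~number of correct decimal digits of `result` against `reference`,
   computed as -log10 of the relative error; +inf if they are identical */
static double
quality (double result, double reference)
{
	return -log10 (fabs (result - reference) / fabs (reference));
}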
( The 'long' build has a problem quota of ~45% in the above scope, which the patch reduces to ~33%, improving quality from 19.1354 avg. / 18.6639 min. to 19.1960 avg. / 18.7404 min.; it needs nextafterl instead of nextafter. )
I'm aware that this can't clean up FP weaknesses in general; it's just one small step to avoid unnecessary imprecision.
I'm aware that 'power' is FP math too and may have its own weaknesses ... WIP ... perhaps one will find improvements there as well ...
The patch above is most likely not well programmed; I'm not writing here because I'm good at programming, but because I'm good at spotting weak points, even if I sometimes slip into mistakes myself ... thus: whoever can, please improve it ...
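( One way it could be tidied up - an untested sketch, assuming it sits in the same context as the patch above, so t, base, res and the gnm_* helpers are in scope and gnm_float is the working type: )

{
	int dir, step;

	res = gnm_log (t) / gnm_log (base);
	for (dir = 0; dir < 2; dir++) {
		/* first pass nudges upward, second pass downward */
		gnm_float towards = dir == 0 ? GNM_MAX : -GNM_MAX;
		for (step = 0; step < 2; step++) {
			/* long-double build would need nextafterl here */
			gnm_float cand = nextafter (res, towards);
			if (gnm_abs (gnm_pow (base, cand) - t) < gnm_abs (gnm_pow (base, res) - t))
				res = cand;
			else
				break;
		}
	}
}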
The proposal is based on a hint from John Denker about checking and correcting round-up / round-down failures.
One of the checks I did against too-big mistakes on my side: wolframalpha 'log( 3, 8 )' ( they use reversed operand ordering ) -> 1.8927892607143723112985813430282825628987569203956412836119648315...