Re: Aw: Re: Re: deco-Math project, step 00_a: exact bin and dec 'ranges' (in gnumeric).



This thread is very confused.

For starters, AFAICT the real topic here is not a gnumeric issue,
but rather a floating-point issue, and perhaps an algorithm issue.

There are issues with floating point. Always have been. Always
will be. The IEEE floating point standard was very carefully
designed. Any attempt to do better would require many years of
skilled, highly specialized effort. This is not the proper forum
for that.

There is considerable accumulated expertise in dealing with the
roundoff errors inherent in any floating point representation.
This is an algorithm design issue. In virtually all applications,
floating point roundoff is not the only source of uncertainty.
In nearly all real-world applications, including science and
engineering, there is uncertainty in the raw data. Also, if you
are doing any sort of modeling, there are imperfections in the
model. For example, if the model involves a power series, there
will be series truncation errors. Floating point imperfections
are part (but only part) of the mix. There are fat books on how 
to deal with this.

////////////

One particularly powerful method is Monte Carlo. That has been
around since the 1940s, which is rather a long time in the
computer business.

Simple Monte Carlo calculations can already be done using
spreadsheets. I've done thousands of them.

Commercial vendors sell plugins that facilitate complicated
Monte Carlo calculations for Excel. If you want to do something
useful, you could write a similar plugin for gnumeric.
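
To make the idea concrete, here is a minimal, self-contained sketch
of Monte Carlo error propagation in C. The model() function, the
nominal input, and the input uncertainty are made-up placeholders;
the point is only the pattern: jitter the inputs, rerun the
calculation many times, and look at the spread of the outputs.

/* Minimal sketch of Monte Carlo error propagation.
 * The model and the numbers are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define TWO_PI 6.28318530717958647692

/* Box-Muller: one standard normal deviate from two uniform ones. */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

/* Stand-in for whatever calculation you actually care about. */
static double model(double x)
{
    return exp(x) / (1.0 + x * x);
}

int main(void)
{
    const int    N       = 100000;   /* number of Monte Carlo trials */
    const double x0      = 1.3;      /* nominal input (made up)      */
    const double sigma_x = 0.05;     /* assumed input uncertainty    */
    double sum = 0.0, sumsq = 0.0;

    for (int i = 0; i < N; i++) {
        double y = model(x0 + sigma_x * gauss());
        sum   += y;
        sumsq += y * y;
    }
    double mean = sum / N;
    double sd   = sqrt(sumsq / N - mean * mean);
    printf("y = %.6f +- %.6f\n", mean, sd);
    return 0;
}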

////////////

Another method is to just use integers. For example, in a financial
calculation, represent everything as an integral number of cents.
At the very last step, use integer_divide and integer_modulo to
format the result as dollars and cents in the conventional way.

In particular, use the FPU. That is, store the integers in what
C calls a "double" ... which is what gnumeric already does. That
can represent integers exactly, over a rather wide range (up to ±2^53).
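
As a minimal sketch (the item amounts are invented), here is the
cents idea in C: keep every intermediate value as a whole number of
cents in a double, and only at the very last step use an integer
divide and modulo to format the total as dollars and cents.

#include <stdio.h>

int main(void)
{
    /* Prices in whole cents: $19.99, $3.50, $125.75 (made-up items). */
    double items[] = { 1999.0, 350.0, 12575.0 };
    double cents = 0.0;                /* running total, in cents */

    for (int i = 0; i < 3; i++)
        cents += items[i];             /* integer + integer: exact */

    /* Only at the very last step: integer divide / modulo to format. */
    long long c = (long long) cents;
    printf("total: $%lld.%02lld\n", c / 100, c % 100);
    return 0;
}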

If you want, you can enable all the FPU exceptions, including
FE_INEXACT, to give you confidence that nothing bad is happening
behind your back. For example, any attempt to represent 0.1 will
raise the exception.

Beware that some library functions don't behave as expected. For
example, on my machine, sqrt(2.) does not raise the FE_INEXACT
exception.
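
Here is a rough sketch of how you might check this with standard C99
<fenv.h>. Actually trapping on the exception (so it stops the
program) needs a platform extension such as glibc's feenableexcept();
the sketch below just tests the sticky flag after each operation,
and what sqrt(2.) does will depend on your library.

#include <stdio.h>
#include <fenv.h>
#include <math.h>

#pragma STDC FENV_ACCESS ON

/* volatile keeps the compiler from doing the arithmetic at
 * compile time, which would hide the runtime flag. */
volatile double one = 1.0, ten = 10.0, two = 2.0;

int main(void)
{
    feclearexcept(FE_ALL_EXCEPT);
    volatile double a = one + ten;     /* 11: exact integer add          */
    printf("1 + 10   : FE_INEXACT %s\n",
           fetestexcept(FE_INEXACT) ? "raised" : "clear");

    feclearexcept(FE_ALL_EXCEPT);
    volatile double b = one / ten;     /* 0.1: not exactly representable */
    printf("1 / 10   : FE_INEXACT %s\n",
           fetestexcept(FE_INEXACT) ? "raised" : "clear");

    feclearexcept(FE_ALL_EXCEPT);
    volatile double c = sqrt(two);     /* library behavior may vary      */
    printf("sqrt(2.) : FE_INEXACT %s\n",
           fetestexcept(FE_INEXACT) ? "raised" : "clear");

    (void)a; (void)b; (void)c;
    return 0;
}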

/////////////

One thing that does *not* work is worrying about rounding modes.
That is strictly amateur hour. If the difference between
rounding_up / rounding_down / rounding_to_even is significant,
the battle is already over and you lost. Start over with a more
robust algorithm.

//////////////

Another thing that generally does not work is "interval arithmetic".
That is, you could represent each number by an ordered pair, namely
a strict lower bound and a strict upper bound. It's not clear, but
I suspect that's what "ranges" are trying to do. The problem is that
these are worst-case bounds. For typical algorithms, the worst case
is verrrry much worse than the typical case, impractically so. There
may be special mathematical situations where interval arithmetic is
usable, but in the other 99.999% of the situations you're vastly
better off with Monte Carlo ... and even if you could use interval
arithmetic you'd be better off with integers.
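
A toy illustration of why the worst-case bounds are so pessimistic
(the numbers below are arbitrary): sum N values, each uncertain by
+-eps. The interval half-width grows like N*eps, while independent
random errors mostly cancel, so the typical error grows only like
sqrt(N)*eps.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const int    N   = 1000000;
    const double eps = 1e-6;

    double worst_case = N * eps;       /* interval bound: all errors add up */

    double err = 0.0;                  /* one random realization            */
    for (int i = 0; i < N; i++)
        err += eps * (2.0 * rand() / RAND_MAX - 1.0);  /* uniform in +-eps */

    printf("interval bound : +-%g\n", worst_case);
    printf("typical error  : %g  (expected about %g)\n",
           fabs(err), eps * sqrt(N / 3.0));
    return 0;
}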

 «Die ganzen Zahlen hat der liebe Gott gemacht;
  alles andere ist Menschenwerk.»
 ("God made the integers; all else is the work of man.")
                        — Kronecker

