Re: Getting greatest decimal accuracy out of G_PI
- From: David Nečas (Yeti) <yeti physics muni cz>
- To: gtk-list gnome org
- Subject: Re: Getting greatest decimal accuracy out of G_PI
- Date: Sun, 4 Feb 2007 11:48:30 +0100
On Sun, Feb 04, 2007 at 12:17:18PM +0200, Tor Lillqvist wrote:
> > Well I'm wondering why the header defines G_PI to 49 places
> > (if I counted right), if the biggest Gtk2 data type only holds precision
> > to 15? ...
> The only reason why G_PI is defined in a GLib (not GTK+) header in the
> first place is to aid portability, as there is no standard macro for
> pi. The macro M_PI, although common on Unix, is not mandated by the C
> standard, and does not exist in the Microsoft C headers, for
This was explained in the very first reply in this thread.
However, the questions raised are:
1. Why is it defined with 166-bit precision, which is way too much
   not only for IEEE double (52-bit mantissa) but even for a Cray
   (96-bit mantissa)?
2. Does any trick exist to get the extra precision out of G_PI
   when one works with more precise floating point numbers?
If the answers are
1. Why not, it does not hurt.
I'm fine with it, I just expected some deeper purpose which