Re: Getting greatest decimal accuracy out of G_PI



 > Well, I'm wondering why the header defines G_PI to 49 places
 > (if I counted right), if the biggest GTK2 data type only holds precision
 > to 15?  So I'm wondering if someone knew of something in GTK2
 > that would handle such a big number?

The only reason why G_PI is defined in a GLib (not GTK+) header in the
first place is to aid portability, as there is no standard macro for
pi. The macro M_PI, although common on Unix, is not mandated by the C
standard, and does not exist in the Microsoft C headers, for
instance.
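Just to illustrate (a minimal sketch, not anything taken from the GLib
docs): however many digits the header spells out, G_PI is stored as an
ordinary double, so only about DBL_DIG (typically 15) significant
decimal digits actually survive, and M_PI has to be guarded because it
may simply not be there:

    #include <glib.h>
    #include <math.h>
    #include <float.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* Print more digits than a double can hold; everything beyond
       * roughly the 16th significant digit is noise.  */
      printf ("G_PI    = %.30f\n", G_PI);
      printf ("DBL_DIG = %d significant decimal digits\n", DBL_DIG);
    #ifdef M_PI
      /* Common on Unix, but not mandated by the C standard.  */
      printf ("M_PI    = %.30f\n", M_PI);
    #endif
      return 0;
    }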

 > I did a rudimentary calculation using the 15-decimal-place version
 > of G_PI (which is available to the standard libraries), and found
 > the resolution on the surface of the earth and the moon to be
 > pretty good (from the center of the earth, assuming
 > 0.0000000000000001 rad angular resolution).

I think what you should make clear to us and to yourself is: are you
just calculating the decimal approximation (to some degree of accuracy)
of mathematical expressions involving mathematical *constants*, like
(pi/2)/1000000000000000? Or are you modelling actual physical reality?
If the latter, then you also need to take into account what accuracy
the input values, the measurements, have in the first place.
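As a rough sketch of both points (the radii below are just the usual
approximate textbook values, and the 1e-16 rad figure is the one from
your posting):

    #include <glib.h>
    #include <stdio.h>

    int
    main (void)
    {
      const double earth_radius = 6.371e6;  /* metres, approximate */
      const double moon_radius  = 1.737e6;  /* metres, approximate */
      const double angle        = 1e-16;    /* rad, from your posting */

      /* Arc length = radius * angle: sub-nanometre on paper.  */
      printf ("earth: %g m\n", earth_radius * angle);
      printf ("moon:  %g m\n", moon_radius * angle);

      /* But relative to a value of order pi/2, an increment of 1e-16
       * rad is below the machine epsilon of a double (about 2.2e-16),
       * so it simply disappears in the addition.  */
      printf ("G_PI/2 + 1e-16 == G_PI/2: %s\n",
              (G_PI / 2 + angle == G_PI / 2) ? "yes" : "no");
      return 0;
    }

The sub-nanometre arc length looks impressive on paper, but the last
printf shows that an angle that small cannot even be added to a value
of order pi/2 without being rounded away.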

I guess one could say (with tongue in cheek) that it sounds like you
are writing a flight simulator (thus modelling things like the fuel
flow of an airplane that in reality can only be measured with two or
three digits of precision) that takes continental drift into
consideration.

--tml


