Re: Review of gnio, round 1
- From: Havoc Pennington <hp pobox com>
- To: John McCutchan <john johnmccutchan com>
- Cc: gicmo gnome org, desrt desrt ca, danw gnome org, Alexander Larsson <alexl redhat com>, gtk-devel-list <gtk-devel-list gnome org>
- Subject: Re: Review of gnio, round 1
- Date: Mon, 27 Apr 2009 15:24:06 -0400
Hi,
On Mon, Apr 27, 2009 at 1:20 PM, John McCutchan <john johnmccutchan com> wrote:
> It's good practice to avoid using hints unless you have a firm grasp
> on the odds of the branch being taken.
>
The thing I find kind of intractable is that you not only have to
know the odds, but also think your grasp is better than whatever the
compiler and CPU are going to come up with by default.
Personally, I have no real way to evaluate that, except in cases where
I know one branch basically runs once, or otherwise pretty much
never happens.
I mean, say you think a branch happens 10% of the time. Is it a net
win to mark that UNLIKELY or will the 10% of the time be so much worse
that it outweighs the 90% of the time being faster? How about 5% of
the time? 30% of the time? What does "unlikely" mean?
And what heuristics do the compiler and CPU already use?
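(For reference, as far as I know the GLib hints just boil down to
__builtin_expect on GCC. Roughly, simplified from glib.h:

  /* simplified sketch of the glib.h definitions on GCC; the real
     macros go through _G_BOOLEAN_EXPR() first */
  #define G_LIKELY(expr)   (__builtin_expect (!!(expr), 1))
  #define G_UNLIKELY(expr) (__builtin_expect (!!(expr), 0))

So whatever "likely" ends up meaning is entirely up to how the
compiler chooses to interpret that builtin.)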
It's just not at all clear. I've read some people's code where it
seems any less-than-50%-likely case gets marked G_UNLIKELY. And GLib
itself I think only marks the "one-time or should-never-happen" cases,
or that's my impression. And then some people seem to be marking "10%
of the time or so" as unlikely.
Limiting it to one-time-or-should-never-happen, or inner loops that
have in fact been profiled, would seem to follow the "no premature
optimization" rule.
Havoc