Re: Faster UTF-8 decoding in GLib

On Tue, 2010-03-16 at 14:09 -0400, Behdad Esfahbod wrote:
> On 03/16/2010 01:18 PM, Daniel Elstner wrote:
> > Hi,
> > 
> > Am Dienstag, den 16.03.2010, 13:01 -0400 schrieb Behdad Esfahbod:
> >>>
> >>> I've made a glib branch where I tried to optimize the UTF-8 decoding routines:
> >>>;a=shortlog;h=refs/heads/fast-utf8
> >>
> >> Before any changes are made, can you provide real-world performance profiles
> >> suggesting that UTF-8 decoding is taking a noticeable time of any particular
> >> real-world use case?  If no, I don't see the point.
> > 
> > Well, I would see a point since UTF-8 decoding is a fairly generic
> > operation.  It cannot hurt to be as fast as possible at this task.
> > Assuming, of course, that the optimization does not introduce other
> > costs elsewhere, which I think the proposal unfortunately does.
> That's one of the worst ideas as far as software goes.  If an operation takes
> 1% of your application time and you make it 1000 times faster, you know how
> much total faster your application would run? 1.01x faster...
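(For reference, the 1% figure above is just Amdahl's law. A minimal sketch of the arithmetic in standalone C — this is an illustration of the formula, not code from the glib branch:)

```c
/* Amdahl's law: the overall speedup when a fraction p of total
 * runtime is accelerated by a factor s.  With p = 0.01 and
 * s = 1000 this reproduces the ~1.01x figure quoted above. */
static double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

/* amdahl(0.01, 1000.0) is roughly 1.0101: making a 1% component
 * 1000x faster speeds up the whole program by about 1%. */
```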

Because Tracker needs to find word boundaries when indexing free-text
searchable fields, I think UTF-8 performance enhancements would be a
significant improvement for our project.
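To show why decoding speed matters for this workload: a word-boundary
scan has to decode every UTF-8 sequence in the text, so decoding cost
dominates the inner loop. The sketch below is a standalone
illustration, not Tracker's actual code — the real indexer would use
GLib's g_utf8_get_char()/g_utf8_next_char() and g_unichar_isalnum(),
and this version deliberately uses a simplified word-character test:

```c
#include <stddef.h>

/* Decode one UTF-8 sequence at s, store the code point in *cp, and
 * return the number of bytes consumed (0 on malformed input).  A
 * hand-rolled stand-in for g_utf8_get_char()/g_utf8_next_char(). */
static size_t utf8_decode(const unsigned char *s, unsigned int *cp)
{
    if (s[0] < 0x80) { *cp = s[0]; return 1; }
    if ((s[0] & 0xE0) == 0xC0 && (s[1] & 0xC0) == 0x80) {
        *cp = ((s[0] & 0x1Fu) << 6) | (s[1] & 0x3Fu);
        return 2;
    }
    if ((s[0] & 0xF0) == 0xE0 && (s[1] & 0xC0) == 0x80 &&
        (s[2] & 0xC0) == 0x80) {
        *cp = ((s[0] & 0x0Fu) << 12) | ((s[1] & 0x3Fu) << 6) |
              (s[2] & 0x3Fu);
        return 3;
    }
    if ((s[0] & 0xF8) == 0xF0 && (s[1] & 0xC0) == 0x80 &&
        (s[2] & 0xC0) == 0x80 && (s[3] & 0xC0) == 0x80) {
        *cp = ((s[0] & 0x07u) << 18) | ((s[1] & 0x3Fu) << 12) |
              ((s[2] & 0x3Fu) << 6) | (s[3] & 0x3Fu);
        return 4;
    }
    return 0; /* malformed */
}

/* Count words: each separator-to-word transition starts a word.
 * The word-character test here is simplified (anything above the
 * ASCII space that is not ',' or '.'); a real indexer would call
 * g_unichar_isalnum() on each decoded code point. */
static int count_words(const char *text)
{
    const unsigned char *p = (const unsigned char *) text;
    int words = 0, in_word = 0;

    while (*p) {
        unsigned int cp;
        size_t n = utf8_decode(p, &cp);
        if (n == 0)
            break; /* stop at malformed input */
        int is_word = cp > ' ' && cp != ',' && cp != '.';
        if (is_word && !in_word)
            words++;
        in_word = is_word;
        p += n;
    }
    return words;
}
```

Note that utf8_decode() is called once per character of the input, so
any speedup to the decoder translates almost directly into indexing
throughput for this kind of scan.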

> That developer time can be put somewhere more useful instead....  Like
> optimizing something that is taking 20% time, or 50%, or 70%.

In this case, though, the developer time has already been committed.
