Re: Algorithmic Complexity Attack on GLIB 2.2.1
- From: Scott A Crosby <scrosby cs rice edu>
- To: Sander Vesik <Sander Vesik Sun COM>
- Cc: gtk-devel-list gnome org
- Subject: Re: Algorithmic Complexity Attack on GLIB 2.2.1
- Date: 30 May 2003 19:39:43 -0500
On Sat, 31 May 2003 01:25:13 +0100 (BST), Sander Vesik <Sander Vesik Sun COM> writes:
> > Universal hash functions aren't performance hogs. We include benchmark
> > graphs in the paper. In almost all cases, an L2 cache miss dominates
> > the hashing cost. Also, the uhash function, for all inputs longer than
> > 64 bytes, is faster than perl's simplistic hash function.
> >
>
> which show you are at the very least 3-9 times slower than an example
> simple hash like xor12.
Correct. However, a single L2 cache miss dominates the hash computation
for such a function. On the bar chart where we compare the hash
functions as a function of the working set size, notice how XOR drops
from insanely fast to barely twice as fast as soon as it incurs an L2
cache miss.
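For illustration only, here is a generic XOR-and-rotate string hash in
the spirit of a "simple" hash (it is a sketch, not the actual xor12
function from the benchmark). The per-byte cost is a couple of ALU
operations, which is why its speed advantage shrinks once every probe
also pays for a cache miss on the bucket array:

    #include <glib.h>

    /* Illustrative XOR-and-rotate string hash -- a stand-in for a
     * "simple" hash like xor12, not the benchmarked function itself.
     * Each byte costs roughly a rotate and an XOR; a single L2 miss
     * costs far more than hashing a short key. */
    static guint
    xor_rotate_hash (gconstpointer key)
    {
      const guchar *p = key;
      guint h = 0;

      for (; *p != '\0'; p++)
        h = ((h << 5) | (h >> 27)) ^ *p;  /* rotate left 5, xor byte */

      return h;
    }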
> I really don't know where you got the idea that the string hashing
> function was supposed to be secure or why you overlooked the fact
> that the function to use on the keys is one of the inputs the hash
> table creation function takes.
Correct. However, many applications will use the default function and
leave themselves open to potential attack. I assumed this was an
unintended design decision. I was informing you, and the users of your
library, that relying on the default string-hashing function could
leave applications built on your hash table implementation open to a
potential DoS attack. If it was done this way deliberately, then I
apologize.
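As a minimal sketch of what an application can do today, using only the
documented g_hash_table_new() API: pass a keyed hash instead of the
default g_str_hash(). The seeded hash below is illustrative and is not
the uhash function from the paper.

    #include <glib.h>

    /* Per-process random seed, chosen at startup so an attacker
     * cannot precompute colliding keys offline. */
    static guint hash_seed;

    /* Illustrative keyed replacement for the default g_str_hash():
     * an ordinary string hash with the seed mixed in. */
    static guint
    seeded_str_hash (gconstpointer key)
    {
      const guchar *p = key;
      guint h = hash_seed;

      for (; *p != '\0'; p++)
        h = (h << 5) + h + *p;  /* djb2-style mixing */

      return h;
    }

    int
    main (void)
    {
      GHashTable *table;

      hash_seed = g_random_int ();

      /* The hash function is a creation-time parameter, so nothing
       * in GLib itself has to change to swap out the default. */
      table = g_hash_table_new (seeded_str_hash, g_str_equal);

      g_hash_table_insert (table, "example-key", "example-value");
      g_hash_table_destroy (table);
      return 0;
    }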
> I don't understand your insistence on not even first looking at the
> problem domain before attacking it.
Huh?
Scott