
That’s clear, and it is still not a reason to build it into general-purpose collections or String::hashCode. If your app is vulnerable to this sort of attack, just use a wrapper type for keys and a specialized collection (you may also want to cap its maximum size).
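A minimal sketch of what such a wrapper might look like, assuming a per-process random seed mixed into the hash so an attacker cannot precompute collisions offline; the class name and the mixing steps are illustrative, not from any standard library:

    import java.util.Objects;
    import java.util.concurrent.ThreadLocalRandom;

    // Illustrative key wrapper: equality matches the underlying string,
    // but the hash depends on a seed chosen at process startup.
    final class SeededKey {
        private static final long SEED = ThreadLocalRandom.current().nextLong();
        private final String value;

        SeededKey(String value) { this.value = Objects.requireNonNull(value); }

        @Override public boolean equals(Object o) {
            return o instanceof SeededKey && value.equals(((SeededKey) o).value);
        }

        @Override public int hashCode() {
            long h = SEED;
            for (int i = 0; i < value.length(); i++) {
                h = h * 31 + value.charAt(i);
                h ^= (h >>> 27);   // extra mixing so the seed diffuses through the hash
            }
            return (int) (h ^ (h >>> 32));
        }
    }

Keys are then stored as new SeededKey(input) instead of the raw string, so precomputed String.hashCode collisions no longer line up in the table.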


This sounds a great deal like all the C++ devs who say "well just don't write code with buffer overflow bugs then".

The reason this kind of thing should be the default is that it's unreasonable to expect this level of understanding from your average coder, yet most software is written by the latter. That's why programming-language and framework design has been moving toward safety-by-default for quite some time now: nothing else works, as experience has proven.
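For context on how little understanding the attack requires: collisions against String.hashCode are trivially constructible. "Aa" and "BB" hash to the same value (2112), and because String hashing composes positionally, concatenating equal-length colliding blocks yields exponentially many colliding keys. A small self-contained demonstration:

    import java.util.ArrayList;
    import java.util.List;

    public class CollisionDemo {
        public static void main(String[] args) {
            // 31 * 'A' + 'a' == 31 * 'B' + 'B' == 2112
            System.out.println("Aa".hashCode() + " == " + "BB".hashCode());

            // Doubling 16 times produces 2^16 = 65536 strings,
            // all with identical hashCode.
            List<String> keys = new ArrayList<>();
            keys.add("");
            for (int i = 0; i < 16; i++) {
                List<String> next = new ArrayList<>(keys.size() * 2);
                for (String k : keys) {
                    next.add(k + "Aa");
                    next.add(k + "BB");
                }
                keys = next;
            }
            System.out.println(keys.size() + " keys, all with hashCode "
                    + keys.get(0).hashCode());
        }
    }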


You are misunderstanding and exaggerating this particular risk, and assuming there is only one way to handle it. First, it concerns unchecked user input: fixing the hash function alone is not enough, because a plain HashMap is a poor place to store such input regardless of the hash algorithm. The hash may be fine, but HashMap has no size constraints, so a related DoS vulnerability, exhausting server memory, may still exist. Developers must always validate inputs as early as possible. Even the dumb ones.
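A minimal sketch of validating early, before untrusted input reaches a HashMap, assuming a simple query-string parser; the limits and names are illustrative, not taken from any real framework:

    import java.util.HashMap;
    import java.util.Map;

    final class ParamParser {
        private static final int MAX_PARAMS = 1_000;      // illustrative limit
        private static final int MAX_KEY_LENGTH = 256;    // illustrative limit

        static Map<String, String> parse(String query) {
            Map<String, String> params = new HashMap<>();
            for (String pair : query.split("&")) {
                // Reject oversized input before it can grow the map.
                if (params.size() >= MAX_PARAMS)
                    throw new IllegalArgumentException("too many parameters");
                int eq = pair.indexOf('=');
                String key = eq >= 0 ? pair.substring(0, eq) : pair;
                if (key.length() > MAX_KEY_LENGTH)
                    throw new IllegalArgumentException("parameter name too long");
                params.put(key, eq >= 0 ? pair.substring(eq + 1) : "");
            }
            return params;
        }
    }

Capping the number of parameters is essentially the mitigation that servlet containers shipped: bound the attacker-controlled key count and the collision attack loses its leverage.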

Second, this risk was reliably mitigated in Java as soon as it was discovered. The mere existence of hash collisions does not make them exploitable. The CVE against the JDK was not fixed there because the issue was handled elsewhere, in Tomcat and similar containers, where meaningful validation could take place.

Context matters.


And yet literally everybody else started doing hash randomization. Why?


The vulnerability was reported and addressed in a reasonable way. Why should other platforms matter here? Do you believe they were smarter?



