
> difficulty of correctly recovering from an OOM exception is between hard and impossible.

In Java, out of memory is signaled with an OutOfMemoryError, which is a Throwable (and so can be caught) but is not technically an Exception. Errors should generally never be caught and cannot be recovered from, which is how they differ from exceptions.
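
A minimal sketch of that hierarchy (the class and messages here are just a demo, not anything from the thread): OutOfMemoryError descends from Error rather than Exception, so a plain catch (Exception e) never sees it, but a handler naming the Error type does:

    public class ThrowableHierarchyDemo {
        public static void main(String[] args) {
            try {
                throw new OutOfMemoryError("simulated");   // simulated, not a real OOM
            } catch (Exception e) {
                // Never reached: OutOfMemoryError is not an Exception.
                System.out.println("caught as Exception");
            } catch (OutOfMemoryError e) {
                // Reached: it is an Error, and Error is still a Throwable.
                System.out.println("caught as Error: " + e.getMessage());
            }
        }
    }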



It's possible to recover enough to cleanly save the user's work and shut down. You have to measure how much memory you need for a clean shutdown, allocate a byte[sizeForShutdown] at the start of the application, and then, in the top-level exception handler (the only one that should catch that error), free that byte array before doing anything else.
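
A minimal sketch of that scheme, assuming a made-up reserve size and hypothetical runApplication/saveUserWorkAndShutDown methods standing in for the real application:

    public class RainyDayFund {
        // Reserve memory at startup; the right size has to be measured for your app.
        private static final int SIZE_FOR_SHUTDOWN = 8 * 1024 * 1024;   // assumption: 8 MiB
        private static byte[] rainyDayFund = new byte[SIZE_FOR_SHUTDOWN];

        public static void main(String[] args) {
            try {
                runApplication();                     // hypothetical application entry point
            } catch (OutOfMemoryError oom) {
                // Top-level handler: drop the reserve first, then allocate as
                // little as possible while saving the user's work.
                rainyDayFund = null;
                saveUserWorkAndShutDown();            // hypothetical clean-shutdown routine
            }
        }

        private static void runApplication() { /* ... */ }
        private static void saveUserWorkAndShutDown() { /* ... */ }
    }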

Now you can argue that this is not truly recovering from the error, but it is a lot better than what you can do with most of the other Error subclasses. I said most because there are Errors that are easy to recover from, like StackOverflowError, where you just have to fail the operation or request that caused the error.
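
For the StackOverflowError case, a hedged sketch of failing only the offending request (handleRequest/processRequest are illustrative names, and the recursion is deliberately unbounded so the demo is self-contained):

    public final class RequestBoundary {
        public static String handleRequest(String input) {
            try {
                return processRequest(input);
            } catch (StackOverflowError e) {
                // By the time control reaches this frame the deep stack has
                // already unwound, so building a small error response is fine.
                return "error: request too deeply nested";
            }
        }

        // Stand-in for a deeply recursive parse/evaluate step; this version
        // always overflows, purely for demonstration.
        private static String processRequest(String input) {
            return processRequest(input);
        }

        public static void main(String[] args) {
            System.out.println(handleRequest("{deeply nested input}"));
        }
    }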


Does the JLS actually guarantee that if you free that byte array then the extra memory will be immediately available? I thought there could potentially be some lag. The safer approach would be to create all of the objects you need for a clean shutdown during program launch and keep them around. Then you won't need to allocate any memory in the exception handler.
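
A short sketch of that variant, with illustrative names (emergencySave, emergency.dat) and an arbitrary 1 MiB buffer: everything the emergency path needs is built at launch, so the handler itself allocates nothing:

    import java.io.File;

    public class PreallocatedShutdown {
        // Created up front and kept alive for the whole run.
        private static final byte[] SAVE_BUFFER = new byte[1 << 20];      // reusable scratch space
        private static final File EMERGENCY_FILE = new File("emergency.dat");

        public static void main(String[] args) {
            try {
                runApplication();                                // hypothetical entry point
            } catch (OutOfMemoryError oom) {
                emergencySave(SAVE_BUFFER, EMERGENCY_FILE);      // uses only pre-built objects
            }
        }

        private static void runApplication() { /* ... */ }
        private static void emergencySave(byte[] buffer, File target) { /* ... */ }
    }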


Your approach is safer for sure and doesn't require me to read the JLS before going to see my mother for Mother's Day, so it's doubly better!


It's not great, but you can always catch and retry if your belief is that the GC will free enough memory to allow the attempt to continue after the memory pressure subsides.

Let's say 1 in 100 of the requests randomly sent to your process takes 100x the average memory of the other 99. You could spin it out to a different service to better handle the weird one-off, but that doesn't always make sense. Sometimes you just need to be OK with working the 100x job and letting the other 99 get a progressive falloff retry. Different solutions are always possible.
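
A hedged sketch of that catch-and-retry idea (retryUnderMemoryPressure and the delays are made up; whether the GC actually frees enough memory is exactly the belief being bet on):

    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    public final class RetryOnOom {
        // Retry a memory-hungry job a few times, sleeping between attempts in
        // the hope that enough memory is reclaimed once other work drains.
        public static <T> T retryUnderMemoryPressure(Supplier<T> job, int maxAttempts)
                throws InterruptedException {
            long delayMillis = 100;
            for (int attempt = 1; ; attempt++) {
                try {
                    return job.get();
                } catch (OutOfMemoryError oom) {
                    if (attempt >= maxAttempts) {
                        throw oom;                    // give up and let it propagate
                    }
                    TimeUnit.MILLISECONDS.sleep(delayMillis);
                    delayMillis *= 2;                 // progressive falloff between retries
                }
            }
        }
    }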


> It's not great, but you can always catch and retry if your belief is that the GC will free enough memory to allow the attempt to continue after the memory pressure subsides.

No, you cannot. Catching, for example, StackOverflowError (which inherits from Error) can lead to very strange deadlocks and such (if locking relies on try-finally discipline, as it should), even if you do "almost nothing" before re-throwing.

It's a huge hornet's nest of weirdness to even attempt to catch anything which derives directly from Error. (Rather than RuntimeException/Exception.)

EDIT: There are some really strange subclasses of Error now that I think about it. E.g. VirtualMachineError ... I don't think I've ever seen that in any logs, thankfully, but what exactly is the program (running on the failing VM) supposed to do if that is thrown? It'd be like trying to carry on or log an error if suddenly 1==2 turned out to be true.


> There are some really strange subclasses of Error now that I think about it. E.g. VirtualMachineError

An OutOfMemoryError is a VirtualMachineError. The Java runtime doesn't technically contain the idea of "finite memory"; the language more or less assumes there's an infinite amount of it. When there isn't, and the VM is forced to throw an OutOfMemoryError, that's technically a breach of the language's abstraction, and the VM is unable to continue working.



