
One of the functions of an Operating System is to provide (the illusion of) isolation between Processes and the resources they acquire. Roughly,

Transaction : Database :: Process : OperatingSystem

The four transaction isolation levels, defined nearly fifty years ago, were an attempt at categorizing incomplete isolation. Can you imagine launching an operating system process and saying "It's OK if my memory locations are changed by another process"? Or "Don't let my memory locations be changed by another process, but it's OK if another process prevents me from deleting a resource"? Yet that's exactly what we're asking of our non-serializable database transactions.
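For reference, the standard levels and the classic read anomalies each one permits can be written down directly (a sketch using the SQL-92 terminology; the level and anomaly names are the standard ones):

```python
# The four ANSI SQL isolation levels and the classic read anomalies
# each level permits, per the SQL-92 definitions. Weaker levels are
# exactly "it's OK if another transaction interferes in these ways."
ANOMALIES_PERMITTED = {
    "READ UNCOMMITTED": {"dirty read", "non-repeatable read", "phantom read"},
    "READ COMMITTED":   {"non-repeatable read", "phantom read"},
    "REPEATABLE READ":  {"phantom read"},
    "SERIALIZABLE":     set(),
}

def permits(level: str, anomaly: str) -> bool:
    """True if a transaction running at `level` may observe `anomaly`."""
    return anomaly in ANOMALIES_PERMITTED[level]
```

So `permits("READ COMMITTED", "dirty read")` is `False`, but `permits("READ COMMITTED", "non-repeatable read")` is `True` — the level is defined by which kinds of interference it tolerates.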

Why not just provide strong isolation? Partly because it is hard and partly because the performance impact of strong transaction isolation is greater than the performance impact of process context switching.

But if you're living with less than strong transaction isolation, then tautologically, strange and unexpected things eventually happen: seeing state that never existed, seeing state changes that you didn't make, failing to see state that you should have seen. Rarely, the application can reliably detect and handle some of these situations. Typically, the application only thinks it can.
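One concrete instance of "failing to see state that you should have seen" is the classic lost update. A minimal simulation (no real database — just two interleaved read-modify-write "transactions" on a shared value, with no isolation between them):

```python
# Two "transactions" each try to add 10 to a shared balance.
# With no isolation, both read the same starting value, so one
# update is silently overwritten.
balance = 100

# The unlucky interleaving: both transactions read before either writes.
t1_read = balance   # T1 reads 100
t2_read = balance   # T2 reads 100

balance = t1_read + 10   # T1 commits 110
balance = t2_read + 10   # T2 commits 110 -- T1's update is lost

# Any serial order of the two transactions would have produced 120.
print(balance)   # 110, not 120
```

An application-level check ("re-read and compare before writing") narrows the window but doesn't close it, which is the sense in which the application usually only *thinks* it can handle these cases.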

Sometimes, I think the core issue is with the notions of isolation and serializability themselves. Developers want to believe that events are noticed simultaneously with their occurrence, and that all observers (transactions) see the same history unfold in the same order. But the pesky physical world doesn't work that way.


