Hacker News

While I take your point, "local" is a leaky concept inside a data-center. On a fast network, "in cache on the neighboring computer" can be closer than "on my hard drive" for most purposes.

It seems like the ideal would be a specification language that doesn't care where things are, an implementation that tries to handle placement automagically, and a way to specify portions (or all) of it precisely that is checked against the high-level specification.



To further your point, RDMA over an InfiniBand network can move data faster than the SATA bus can push it. The fastest SATA interfaces top out around 16 Gb/s, I believe, while today's HDR InfiniBand switches run 50 Gb/s per lane (200 Gb/s over a standard 4x port).
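A quick back-of-envelope check of those numbers (using the line rates claimed above as rough assumptions, ignoring protocol overhead):

```python
# Time to move 1 GiB at each assumed line rate.
# Rates are taken from the comment above and are approximations:
#   SATA (fastest, ~SATA Express): 16 Gb/s
#   HDR InfiniBand: 50 Gb/s per lane, 200 Gb/s over a 4x port
GIB_BITS = 8 * 2**30  # one GiB expressed in bits

links_gbps = {
    "SATA (~16 Gb/s)": 16,
    "HDR IB, 1 lane (50 Gb/s)": 50,
    "HDR IB, 4x port (200 Gb/s)": 200,
}

for name, gbps in links_gbps.items():
    seconds = GIB_BITS / (gbps * 1e9)
    print(f"{name}: {seconds * 1000:.0f} ms per GiB")
```

Even at the single-lane rate, the network link moves a GiB roughly three times faster than the fastest SATA bus.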

With the direction things are moving, "the data center as a computer" is absolutely the right approach.


Absolutely. In-core on a neighboring machine can be accessed in less than 1ms. On-disk on the local machine (or any machine) can take tens of seconds.


"On-disk on the local machine (or any machine) can take tens of seconds"

Surely you meant tens of milliseconds. But even so, the fastest SSD random access latencies are in the sub-millisecond range.


No, I did not mean that. Contended local disk access under heavy random read/write workloads takes essentially forever.
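Some rough queueing arithmetic shows how "tens of seconds" is plausible for a contended spinning disk (illustrative numbers, not measurements):

```python
# A HDD does roughly 100 random IOPS (~10 ms per seek) -- assumption.
# Under a heavy random workload, thousands of requests can be queued
# ahead of yours, and you wait for all of them.
SEEK_S = 0.010        # ~10 ms per random read on a spinning disk (assumed)
QUEUE_DEPTH = 2000    # outstanding random requests under load (assumed)

wait_s = QUEUE_DEPTH * SEEK_S
print(f"worst-case wait behind the queue: {wait_s:.0f} s")

# Versus fetching the same item from a neighboring machine's RAM:
NET_RTT_S = 0.0001    # ~100 microsecond round trip on a fast LAN (assumed)
print(f"remote in-memory fetch: {NET_RTT_S * 1e6:.0f} us")
```

With those assumptions the queued disk read takes ~20 seconds, while the remote in-memory fetch stays well under a millisecond, which is the gap being argued here.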



