
A couple of examples of the kind of thing I'm talking about:

Why is vsync still a thing? Originally it came from how broadcast TV worked with CRTs, but the fundamental technologies are different today. Yet we still update the entire screen, all at once, on a fixed-rate timer with blanking intervals [0]. I'm certainly not an expert in how LCDs work, but my understanding is that there is no fundamental reason we cannot update different parts of the screen at completely different rates, or on demand. We just don't, because the convention is a stream of pixel data from top left to bottom right.
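To make the convention concrete, here's a minimal sketch in C of the model I mean; wait_for_vblank() is just a made-up stand-in for the real display/driver interface, not any actual API. Even if only a few pixels change, the whole frame gets rendered and scanned out again on the next fixed-rate refresh.

    /* Sketch of the conventional fixed-rate scanout model: the whole frame
     * is pushed out top-left to bottom-right once per refresh, and the
     * renderer blocks until the next vertical blank. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define REFRESH_HZ 60

    /* Stand-in for the hardware vblank: sleep one refresh interval. */
    static void wait_for_vblank(void)
    {
        struct timespec t = {0, 1000000000L / REFRESH_HZ};
        nanosleep(&t, NULL);
    }

    int main(void)
    {
        static uint32_t framebuffer[480][640]; /* one full frame */

        for (int frame = 0; frame < 3; frame++) {
            /* Pretend-render: even if only a corner changed, the entire
             * buffer is resent on the next refresh. */
            for (int y = 0; y < 480; y++)
                for (int x = 0; x < 640; x++)
                    framebuffer[y][x] = (uint32_t)frame;

            wait_for_vblank();
            printf("frame %d scanned out in full\n", frame);
        }
        return 0;
    }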

Or take process memory isolation and remapping. There's a lot of overhead there, and the whole thing looks like a scheme to make each program think it has all of memory to itself and that no other processes exist. Then we have to work around that specially to let processes share memory, add extra complication to deal with malicious jumps, and now contend with timing attacks on top of it. What if there were a better way, so long as your programs didn't essentially act like they were running on a Commodore 64?
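To give a sense of the ceremony involved in poking a hole back through that isolation, here's a rough POSIX sketch of sharing a single page between two processes; the object name "/demo_shm" and the fork-based setup are just for illustration.

    /* Two processes seeing the same memory takes an explicit named object
     * plus a mapping into each otherwise-private address space. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create a named shared-memory object and size it to one page. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, 4096) < 0)
            return 1;

        /* The kernel maps the same physical page into both processes. */
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (shared == MAP_FAILED)
            return 1;

        if (fork() == 0) {                 /* child */
            strcpy(shared, "hello from the child");
            return 0;
        }
        wait(NULL);                        /* parent */
        printf("parent reads: %s\n", shared);

        shm_unlink("/demo_shm");
        return 0;
    }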

What could something like Unicode look like if it had never had to deal with any kind of legacy nonsense, Han unification, or the UCS-2/UTF-16 debacle? Oh, and if it didn't have such an obsession with emoji.
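The surrogate-pair workaround is a decent illustration of the bolted-on part: UCS-2 assumed 16 bits per character would be enough, and once code points outgrew that, anything above U+FFFF had to be split across two 16-bit units. A small sketch, using U+1F600 as the example (no particular library implied):

    #include <stdint.h>
    #include <stdio.h>

    /* Encode one code point above U+FFFF as a UTF-16 surrogate pair. */
    static void encode_surrogates(uint32_t cp, uint16_t out[2])
    {
        cp -= 0x10000;                     /* 20 bits remain */
        out[0] = 0xD800 | (cp >> 10);      /* high surrogate */
        out[1] = 0xDC00 | (cp & 0x3FF);    /* low surrogate */
    }

    int main(void)
    {
        uint16_t units[2];
        encode_surrogates(0x1F600, units); /* an emoji, fittingly */
        printf("U+1F600 -> 0x%04X 0x%04X\n", units[0], units[1]);
        return 0;
    }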

Those are just off the top of my head. Undoubtedly my lack of expertise in these domains means I'm missing or misunderstanding some things, but I think there is practical value in occasionally re-examining all of our assumptions and starting from scratch.

[0] FreeSync/G-Sync exist now, but we still update the entire screen.



Would updating smaller sections of the screen provide any tangible advantages over updating the entire screen?



