
> The generation network is therefore an approximate renderer that is learned from data.

> https://www.youtube.com/watch?v=G-kWNQJ4idw&feature=youtu.be

I wonder if this is efficient... I know this isn't the researchers' intended application, but the path tracer in me wants to see how far this can be pushed for real-time rendering (a rough sketch of the idea is below). I welcome the more interesting artefacts that an NN might produce (I'm talking about pigsnails [1], of course :D)

Full circle: GPU GLSL for graphics -> GPU CUDA/OpenCL for NNs -> GPU CUDA/OpenCL for NN graphics.

[1] https://www.newscientist.com/article/dn27755-artificial-brai...
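
To make that concrete for myself, here's a toy sketch of what "an approximate renderer that is learned from data" could look like: a small MLP that maps a camera pose and pixel coordinate to an RGB value, trained on frames from a conventional renderer. This is not the paper's architecture, and every shape, name, and dataset below is made up for illustration.

    # Toy sketch only -- not the paper's model. A tiny MLP plays the role
    # of an "approximate renderer": it maps (camera pose, pixel coordinate)
    # to an RGB value and is trained on frames from a conventional renderer.
    import torch
    import torch.nn as nn

    # Placeholder data standing in for real rendered frames + camera poses.
    N, H, W = 64, 32, 32
    poses = torch.randn(N, 6)        # hypothetical 6-DoF camera poses
    frames = torch.rand(N, H, W, 3)  # hypothetical path-traced frames

    # Per-pixel inputs: pose concatenated with normalized pixel coordinates.
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)            # (H*W, 2)
    inputs = torch.cat([poses[:, None, :].expand(N, H * W, 6),
                        coords[None].expand(N, H * W, 2)], dim=-1)   # (N, H*W, 8)
    targets = frames.reshape(N, H * W, 3)

    # The "renderer": a small fully connected network.
    net = nn.Sequential(nn.Linear(8, 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(),
                        nn.Linear(128, 3), nn.Sigmoid())

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(200):
        loss = nn.functional.mse_loss(net(inputs), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Rendering a frame for a new pose is then a single forward pass --
    # no ray tracing -- which is where a real-time win could come from.
    new_pose = torch.randn(6)
    image = net(torch.cat([new_pose.expand(H * W, 6), coords],
                          dim=-1)).reshape(H, W, 3)

Whether something like this can compete with a real-time rasterizer or path tracer on quality per watt is exactly the open question.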



Disney actually does a lot of research combining the world of graphics with deep learning.

Some examples that you might appreciate (from the excellent channel "Two Minute Papers"):

. "Disney's AI Learns To Render Clouds" [0]

. "AI Learns Noise Filtering For Photorealistic Videos" [1]

[0] https://www.youtube.com/watch?v=7wt-9fjPDjQ [1] https://www.youtube.com/watch?v=YjjTPV2pXY0
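
For the curious, the bare-bones version of the denoising idea looks something like this: a small CNN takes a noisy low-sample-count frame plus auxiliary buffers (albedo, surface normals) and predicts a clean frame, trained against high-sample-count references. This is far simpler than the kernel-predicting network in the video, and all the data below is a synthetic placeholder.

    # Bare-bones sketch, not the kernel-predicting architecture from the
    # video: a small CNN maps a noisy low-sample frame plus auxiliary
    # buffers to a clean frame, trained against high-sample references.
    import torch
    import torch.nn as nn

    # Channels: noisy RGB (3) + albedo (3) + surface normals (3) = 9.
    denoiser = nn.Sequential(
        nn.Conv2d(9, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, kernel_size=3, padding=1),
    )

    # Synthetic placeholders for (noisy frame + buffers, reference) pairs.
    noisy_with_features = torch.rand(8, 9, 64, 64)  # batch of 64x64 crops
    reference = torch.rand(8, 3, 64, 64)            # high-sample-count targets

    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
    for step in range(100):
        loss = nn.functional.l1_loss(denoiser(noisy_with_features), reference)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # At render time: trace a few samples per pixel, then run one forward
    # pass of the denoiser instead of tracing hundreds more samples.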


Thanks! That is really awesome stuff. It's even simpler in concept than this.


This is also very likely to be useful for video compression.



