
How is an analog computer "easier to program" than a digital computer? Making neural networks do what you want is hard enough with the help of tons of libraries, decent scripting languages, the ability to dump the weights into a file and inspect them, etc. Programming with an analog computer, which I'm guessing would be something like programming with FPGAs, sounds like a nightmare in comparison.


Because neural networks are fundamentally dynamical systems that are much easier to model with continuous signals than with discrete bits. A lot of the hardness comes from discretizing fundamentally continuous signals.
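To make that concrete, here's a minimal sketch of what "discretizing a continuous signal" means in this context, assuming a simple leaky-integrator neuron; the time constant, input, and step size are illustrative only:

    # Continuous-time leaky integrator: tau * dv/dt = -v + I(t)
    # An analog circuit solves this continuously; a digital simulation
    # has to pick a step size dt and approximate it, e.g. with Euler steps.
    tau, dt, steps = 1.0, 0.1, 100
    v = 0.0
    I = 0.5  # constant input (hypothetical value for the example)
    for _ in range(steps):
        v += dt / tau * (-v + I)   # discretization error shrinks as dt -> 0
    print(v)  # converges toward the analytic fixed point v = I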


Well, yes, that's where some of the hardness comes from... but how do you build a predictive model capable of transfer learning without disentangling the latent factors? How do you perform one-shot learning without being able to lay down discrete episodic memories?

I'd say that rather than a continuous analog data stream needing an analog model, the real problem is that the causality (hence predictability, which is the goal) of this data stream is due to discrete actors and actions, and therefore we need to discretize the stream into objects and spatio-temporal events.

Anyhow, we're making great strides with ANNs on the perceptual side, to the point where it's almost a solved problem... What's lacking (outside of DeepMind) is more of a focus on intelligent embedded agents, complete with lifetime continuous learning and adaptive behavior. IMO we're focusing too much on artificial, isolated problems rather than the embedded systems/agents that are the real goal!

Just as ImageNet - and human competitiveness - drove vision research, a similar annual competition for embedded agents (either in a simulated environment or with robots in a competition space) could accelerate AI research. It would at least focus efforts on building complete systems and addressing the goal of AI, rather than breaking it down into someone's (maybe incorrect) notion of the piece-parts necessary to get there.

Some people shy away from robotics as an unwelcome added complexity, but that never stopped the popular micromouse competitions, and these sorts of competitions could go a very long way with simple robots/vehicles (e.g. based on Lego Mindstorms or R/C vehicles) with remote compute.


Quite the opposite: neural network research and experiments show that discreteness isn't a problem - in particular, that there's no benefit to having a model with more fine-grained values, and that even extremely discrete models (e.g. 8 bits or less) work quite well.
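As a rough illustration of the 8-bit point, here's a minimal symmetric weight-quantization round trip; the random weights and per-tensor scale are just assumptions for the example, not any particular library's scheme:

    import numpy as np

    w = np.random.randn(1000).astype(np.float32)            # hypothetical float32 weights
    scale = np.abs(w).max() / 127.0                         # symmetric per-tensor scale
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    w_deq = w_int8.astype(np.float32) * scale               # dequantize for comparison
    print(np.abs(w - w_deq).max())                          # roundtrip error is bounded by ~scale/2

The point is that the per-weight error is tiny relative to the weight range, which is why such coarse representations often barely move the model's accuracy.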


There are advantages to having calculus operations as first-class citizens.
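For example, an analog integrator gives you the solution of dx/dt = -x continuously, whereas a digital program has to approximate it step by step; a toy comparison, with an arbitrary step size chosen just for illustration:

    import math

    x_exact = math.exp(-1.0)        # "analog" answer: x(1) for dx/dt = -x, x(0) = 1
    x, dt = 1.0, 0.01               # digital approximation via forward Euler
    for _ in range(100):
        x -= dt * x
    print(x_exact, x, abs(x - x_exact))  # the gap depends entirely on the chosen dt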



