
The person you're largely agreeing with is Rodney Brooks (formerly of MIT CSAIL, http://people.csail.mit.edu/brooks/publications.html ).

And to sum it up a bit, the hypothesis is that humanlike AI is as much a product of the experience and reality of being physically (and limitedly!) human as it is of any abstract algorithm.

You may also know him from a small company called iRobot (aka Roomba).



Thank you very much for the link, and nice summary. This looks very interesting, and I'm grateful: my learning habits mean I'm liable to play with second-hand scraps of ideas and miss out on the original sources.


I think his paper "Intelligence Without Representation" is particularly worth reading at least once.

http://people.csail.mit.edu/brooks/papers/representation.pdf


To elaborate a bit, most of the excerpts I've heard have to do with the availability of sensors.

E.g. Task: grasp an egg without cracking it

Physical platform 1: actuators, no pressure sensors

Physical platform 2: actuators, pressure sensors where egg contacts robot

Inarguably, the simplest successful driving code for the task will be much more concise on platform 2 than on platform 1.
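To make that concrete, here's a toy sketch (all classes, thresholds, and physics here are hypothetical illustrations, not any real robot API). With a pressure sensor, the controller is just a short feedback loop; without one, you'd instead need a calibrated model of the egg's position and stiffness to stop in the right place open-loop.

```python
class SimulatedGripper:
    """Toy physics: pressure rises linearly once the fingers pass contact."""
    CONTACT_POS = 5        # hypothetical position where fingers touch the egg
    CRACK_PRESSURE = 1.0   # egg cracks above this (made-up units)

    def __init__(self):
        self.position = 0
        self.cracked = False

    def close_step(self):
        """Close the gripper by one increment."""
        self.position += 1
        if self.pressure() > self.CRACK_PRESSURE:
            self.cracked = True

    def pressure(self):
        """Platform 2's sensor reading; platform 1 has no access to this."""
        return max(0, self.position - self.CONTACT_POS) * 0.3


def grasp_with_sensor(gripper, target_pressure=0.5):
    # Platform 2: close until we feel a gentle grip. No model of the
    # egg needed -- the sensor closes the loop for us.
    while gripper.pressure() < target_pressure:
        gripper.close_step()
    return gripper


g = grasp_with_sensor(SimulatedGripper())
# Grip achieved without cracking: the loop stops as soon as the
# reading crosses target_pressure, well below CRACK_PRESSURE.
```

A platform-1 version has no `pressure()` to consult, so it would have to encode the contact position and compliance of the egg in advance, and it fails silently the moment the egg is a little bigger or a little more fragile than the model assumed.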

... Now generalize the same idea to trying to teach disembodied AI to be human.



