
The article is riddled with errors that undermine its own thesis.

It starts badly:

> Artificial Intelligence is colossally hyped these days, but the dirty little secret is that it still has a long, long way to go

This is not a secret, let alone a dirty one. Even five minutes of casual research into the state of AI will reveal what it can do and what it can't.

It says:

> Such systems can neither comprehend what is going on in complex visual scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read this story and summarize what it means”).

In fact comprehension of (very) simple stories is now more or less a solved problem. I wrote about performance on the bAbI tests here:

https://blog.plan99.net/the-science-of-westworld-ec624585e47
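
For anyone who hasn't seen the bAbI tasks, here's a rough sketch of what one looks like: each example is just a short story, a question, and a one-word answer. The snippet below is illustrative only (a hand-rolled stand-in, not the actual file format), and the keyword-overlap "model" is a toy baseline; the point is that end-to-end trained networks now answer most of the twenty task types near-perfectly without hand-written rules like this.

```python
# Illustrative only: a simplified bAbI-style example and a toy baseline.
# Real bAbI data ships as numbered plain-text lines; this stand-in just
# shows the shape of the task (story, question, one-word answer).

story = [
    "Mary moved to the bathroom.",
    "John went to the hallway.",
    "Mary travelled to the kitchen.",
]
question = "Where is Mary?"

def toy_baseline(story, question):
    """Answer with the last word of the most recent sentence mentioning
    the entity asked about -- a crude heuristic, nothing more."""
    entity = question.replace("?", "").split()[-1]       # e.g. "Mary"
    for sentence in reversed(story):
        if entity in sentence:
            return sentence.rstrip(".").split()[-1]      # e.g. "kitchen"
    return "unknown"

print(toy_baseline(story, question))  # -> "kitchen"
```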

Summarisation of stories has also seen good recent results:

https://research.googleblog.com/2016/08/text-summarization-w...
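
To be clear about what the task is (and not to suggest this is the approach in the linked post, which is abstractive and generates new text word by word), here is a toy extractive baseline: score sentences by how frequent their words are in the document and keep the top one. All the names here are my own; it's a sketch for contrast, not anyone's published method.

```python
# Toy extractive summariser: keep the sentence whose words are most
# frequent in the document overall. Purely illustrative -- the linked
# Google work is abstractive (it generates new text), which is harder.
from collections import Counter
import re

def summarise(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:n_sentences])

doc = ("The probe entered orbit around Jupiter after a five year journey. "
       "Engineers cheered as the signal arrived. "
       "The probe will study Jupiter's atmosphere and magnetic field.")
print(summarise(doc))
```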

Summarisation of arbitrary video is harder, but given that object and path extraction already work well, it doesn't seem implausible that we'll see good research results on video summarisation within a few years. Extrapolating from what's happening in a scene to hypothesised explanations ("why") is harder still, but not hard to imagine becoming possible given the direction research is going.

> My daughter had never seen anyone else disembark in quite this way; she invented it on her own. Presumably, my daughter relied on an implicit theory of how her body moves, along with an implicit theory of physics — how one complex object travels through the aperture of another. I challenge any robot to do the same.

Challenge accepted:

https://www.youtube.com/watch?v=gbYiKMisbME

And for the imagination component:

http://www.wired.co.uk/article/googles-deepmind-creates-an-a...

> To get computers to think like humans, we need a new A.I. paradigm

That's not clear at all, given recent research. It is an odd statement from someone who has worked in AI. But then, as the author is not a computer scientist, perhaps it is not that odd.

Modern neural networks are so similar to how humans think that psychological techniques are being used to understand and "debug" them:

https://deepmind.com/blog/cognitive-psychology/

I'm not sure how "think like humans" can be easily defined, but applying strategies developed to understand human thinking to machines seems like a good starting point. Making mistakes similar to those you'd expect humans to make is also a good sign.

> But it is no use when it comes to top-down knowledge. If my daughter sees her reflection in a bowl of water, she knows the image is illusory; she knows she is not actually in the bowl

She does now. But it takes time for babies to learn how to interpret mirrors.

http://www.thoughtfulparent.com/2009/10/child-psychology-cla...

Most animals never learn this, though a few very intelligent species do.

I don't see any obvious theoretical reason why image recognition engines shouldn't be able to understand mirrors, given sufficient research.

> Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content.

Another bizarre statement given the author's background. Google and Facebook have been investing massively in very long-term AI research and building many things of no direct commercial value along the way, like AIs that play games. I don't see Google's public AI research focusing on the cited problems, although it would not surprise me if there were parallel efforts to apply research breakthroughs in those areas.

> An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few.

And here we have it, ladies and gentlemen: the reason the article is so filled with factually false and logically dubious statements. It is an advocacy piece for new social policy: a vast new government research investment in academia, where presumably Mr Marcus would like to be employed (rather than at Uber).

Besides, even this last paragraph is disingenuous. There does not seem to be any risk of AI becoming "the property of the few". The large corporate research labs are doing fantastically well at publishing papers and making the results of their work publicly available and useful; indeed, given the relative quality of corporate vs academic open-source releases, I'd say they're doing better than academia is. It's hard to imagine universities producing something as robust and well-documented as TensorFlow.
