Can you describe what you mean by "he doesn't understand"?
His background is in psychology and neuroscience, so his view on intelligence is probably quite different from that of someone coming from computer science.
For me he puts into words something I've felt recently: what we're doing is cool and all, but it just doesn't feel like the right way to approach it. We're pouring loads of data and computing power into something that produces results that look intelligent, but digging deeper, it bears no resemblance to what a neuroscientist would call intelligent.
> Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.
This is just another incarnation of "AI is the thing we haven't done." He's parroting Chomsky's disdain for statistical models and John Searle's fundamental misunderstanding of AI. For the former, Norvig has a fair rundown of Chomsky's complaints (http://norvig.com/chomsky.html).
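For concreteness, "statistically associating sentences across languages" can be caricatured in a few lines. The sketch below is my own toy illustration (word co-occurrence counts over aligned sentence pairs, roughly the starting point of old phrase-based systems), not Google Translate's actual system; the point stands either way: nothing in it represents meaning.

    # Toy sketch: "translation" as pure statistical association.
    # Count how often each source word co-occurs with each target word
    # in aligned sentence pairs, then pick the most frequent partner.
    # No meaning is represented anywhere.
    from collections import Counter, defaultdict

    aligned_pairs = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("a cat eats", "un chat mange"),
    ]

    cooccur = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        for s_word in src.split():
            for t_word in tgt.split():
                cooccur[s_word][t_word] += 1

    def translate_word(word):
        """Return the target word most often seen alongside `word`."""
        return cooccur[word].most_common(1)[0][0] if word in cooccur else word

    print(translate_word("cat"))  # -> 'chat', purely from counts

Whether that counts as "understanding" is exactly the dispute here.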
> bears no resemblance to what a neuroscientist would call intelligent
TensorFlow gets results. The neuroscientist can claim it's a P-zombie, but they need to point to some criteria for accepting something as intelligence. Otherwise we're just moving goalposts.
>> Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.
>This is just another incarnation of "AI is the thing we haven't done."
I don't think so - it appears to be an objectively correct assessment of the current state of the art.
> Otherwise we're just moving goalposts.
The first movement of the goalposts was to call '80s technology AI. Now they are drifting back to where they started.
On the other hand, I am surprised by the claim that AI is stuck; my outsider's impression is that progress has accelerated. Perhaps the impression of being stuck comes from more people realizing how difficult a problem it is.
> On the other hand, I am surprised by the claim that AI is stuck; my outsider's impression is that progress has accelerated. Perhaps the impression of being stuck comes from more people realizing how difficult a problem it is.
Deep learning made practical a large number of applications that were previously intractable by neural-network approaches. Advances over the last 10 years have pushed the boundaries of what machine learning systems are capable of doing. However, machine learning has algorithmic limits to what it can accomplish, and we are starting to hit those limits. A change in paradigm is required to begin making real progress again: either a change to something new, or a regression to older ideas that were temporarily put on the back burner.
That's not a universally held view, but I think it is the sentiment behind this editorialized title.
>>> Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.
>> This is just another incarnation of "AI is the thing we haven't done."
> I don't think so - it appears to be an objectively correct assessment of the current state of the art.
How deep an understanding is required to meet the threshold? The skepticism feels like "no true Scotsman" applied to the definition of understanding.
I observe the following in young children when exposed to a new word (a rough sketch of this loop follows below):
0. First exposure to a totally new word used in a sentence with more familiar words.
1. Brief pause
2. Mimic pronunciation 1-2 times
3. Process for minutes, hours, or days.
4. Use the word in a less than 100% correct way
5a. Maybe hear the phrase repeated back with the error "corrected" (hello internet)
5b. Maybe hear more usage of the word in passing from others (with varying degrees of "correctness")
6. Recurse for life.
At what point did the person understand the word? How is AI translation substantially different?
I'm not sure I understand any word in a way that would satisfy AI skeptics.
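Here's a toy sketch of that loop, under the (big, and entirely my own) assumption that word learning can be caricatured as nudging an association from noisy feedback; the variable names and numbers are made up for illustration:

    # Toy sketch of the word-learning loop above: keep a fuzzy association
    # between a new word and its usage, use it imperfectly, and nudge the
    # association each time more usage or a correction is heard.
    import random

    random.seed(0)

    true_usage = 0.9       # how often the word actually fits (the adult norm)
    belief = 0.2           # learner's initial, mostly-wrong association
    learning_rate = 0.3

    for exposure in range(1, 11):
        used_correctly = random.random() < belief    # step 4: imperfect use
        heard_fit = random.random() < true_usage     # steps 5a/5b: noisy feedback
        # steps 3/6: process and recurse -- nudge the belief toward what was heard
        belief += learning_rate * ((1.0 if heard_fit else 0.0) - belief)
        print(f"exposure {exposure:2d}: used correctly={used_correctly}, belief={belief:.2f}")

There is no single exposure at which the learner suddenly "understands" the word; the association just converges, which is not obviously different in kind from a statistical model refining its weights.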
This is a discussion of the current state of affairs, not, for example, a Searle-like claim that understanding cannot and will never be achieved. To substantiate a claim of 'no true Scotsman', I think you have to present an actual case where you think a machine has achieved understanding, but which is being unreasonably dismissed.
Ironically, your last sentence has 'no true Scotsman'-like reasoning of its own, along the lines of 'no true AI skeptic would fairly evaluate a claim of machine understanding.'
BTW, I am not a skeptic of the potential of AI, though I am skeptical of some claims being made.
I think the thing that bothers people like the author and Chomsky is that deep nets can't explain or justify how they make decisions in a way that a human could fit in their brain. There is no book called "How To Play Go as Well as I Do" by AlphaGo. This is something we're going to have to live with: the machines are smarter than we are, so we won't be able to understand them except at the most basic levels, where we use inductive proof to extrapolate our understanding of very small models to enormously large ones that are beyond our ability to fit into our brains.
We can understand how individual chemical reactions in Einstein's brain work, but that doesn't make us smarter than him.
That's why I like the Turing test. That one is already pretty good. Is there something more advanced? By now I would expect the AI community (or others) to have defined a better test for accepting true AI.
I feel like you're conflating sentience and intelligence. Algorithms don't have to reduce to anything special to display intelligent behavior. After all, humans reduce to chemistry. All that is required is the ability to solve problems.
As for sentience, we don't understand it, so the only way we'd recognize it is if it were extremely similar to human sentience.
Your concern about "the right way to approach it" is getting closer to the real conflict, which is not in the "way" but in what "it" is. In your mind, what is the desired goal or outcome for AI R&D?
If your objectives are in medicine, cognitive science or philosophy of the mind, you might want simulations which are isomorphic to biological minds. You probably hope that AI work will provide illumination into how the mind works, or why it sometimes fails, or how to improve it.
If your goals are in computing and product engineering, you want predictable, reproducible, and adaptive methods for making smarter tools on time and on budget. You may want the product to have behaviors compatible with humans (as a product feature) but you shouldn't care whether the implementation technique in any way resembles an actual human mind. Behaviorism is all that matters for a product evaluation. The design and marketing teams can take care of imbuing the product with intangible properties imagined by consumers.
And honestly, if you want a biological mind, we already have techniques to build them: go find a mate, procreate, and raise your offspring. Nobody tasked with delivering a commercial AI product actually wants a solution that behaves like a real human mind, where individual units off the same assembly line may require psychotherapy, develop self-destructive habits, or, worse, slip through QA with an undetected sociopathy or psychopathy that creates manufacturer liability.
Some of us old school engineering types may harbor a disdain for the current neural net renaissance because it feels a little too black box to us. Deep down, we'd prefer a tool-building tool that had more directly visible logic and rules in it, because we tend to believe (rightfully or not) that such a method is more amenable to engineering practices and iterative designs. But, the risk in this mindset is in forgetting that even complex, logical systems can exhibit emergent properties and chaotic behavior. We probably need to engage in more statistical methods whether we like it or not...
Great perspective! My desired goals are more in something like cognitive science.
I think most of my grumpiness about this is not that things aren't progressing (check out the Santa Fe Institute and their work on complexity); it's just that AlphaGo and self-driving cars are getting all the attention.