
>>> Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn't understand a word of what it is translating.

>> This is just another incarnation of "AI is the thing we haven't done."

> I don't think so - it appears to be an objectively correct assessment of the current state of the art.

How deep an understanding is required to meet that threshold? The skepticism feels like a "no true Scotsman" argument applied to the definition of understanding.

I observe the following in young children when exposed to a new word:

0. First exposure to a totally new word, used in a sentence alongside more familiar words.

1. Brief pause

2. Mimic pronunciation 1-2 times

3. Process for minutes, hours, or days.

4. Use the word in a less-than-fully-correct way.

5a. Maybe hear the phrase repeated back with the error "corrected" (hello internet)

5b. Maybe hear more usage of the word in passing from others (with varying degrees of "correctness")

6. Recurse for life.
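The steps above can be pictured as a simple feedback loop. Here is a toy sketch of that loop, with "accuracy of usage" as an invented numeric stand-in for understanding — purely illustrative, not a claim about cognition or about how translation systems actually work:

```python
# Toy model of the word-learning loop (steps 0-6 above).
# All names and numbers here are illustrative assumptions.

def learn_word(target_usage: float = 1.0, rounds: int = 10, rate: float = 0.5):
    """Return the learner's usage accuracy after each round of feedback."""
    accuracy = 0.2  # step 4: first attempts are less than fully correct
    history = []
    for _ in range(rounds):                    # step 6: recurse
        feedback = target_usage - accuracy     # steps 5a/5b: corrections, usage in passing
        accuracy += rate * feedback            # step 3: process and adjust
        history.append(accuracy)
    return history

history = learn_word()
```

Each round closes part of the gap between the learner's usage and the community's, without the loop ever reaching a sharp moment where "understanding" switches on — which is the point of the question below.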

At what point did the person understand the word? How is AI translation substantially different?

I'm not sure I understand any word in a way that would satisfy AI skeptics.



This is a discussion of the current state of affairs, not, for example, a Searle-like claim that understanding cannot and never will be achieved. To substantiate a claim of "no true Scotsman", I think you have to present an actual case where you think a machine has achieved understanding but is being unreasonably dismissed.

Ironically, your last sentence contains some "no true Scotsman"-like reasoning of its own, along the lines of "no true AI skeptic would fairly evaluate a claim of machine understanding."

BTW, I am not a skeptic of the potential of AI, though I am skeptical of some claims being made.



