> What so-called neural networks do should not be confused with thinking, at least not yet.
I disagree:
I think neural networks are learning an internal language in which they reason about decisions, based on the data they’ve seen.
I think tensor DAGs correspond to an implicit model for some language, and we just lack the tools to extract that. We can translate reasoning in a type theory into a tensor DAG, so I’m not sure why people object to that mapping working the other direction as well.
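For what it's worth, here is a minimal toy sketch of the forward direction only (my own illustrative encoding, not something extracted from any trained network): a typed term like `and True False` can be evaluated as a small, fixed DAG of tensor contractions, which is the type-theory-to-tensor-DAG mapping I mean.

```python
# Toy illustration: evaluating the typed term `and True False` as a tensor DAG.
# The encoding (one-hot booleans, AND as a rank-3 tensor) is mine and purely
# illustrative; nothing here is pulled from a real network.
import numpy as np

TRUE  = np.array([1.0, 0.0])   # one-hot encoding of booleans
FALSE = np.array([0.0, 1.0])

# AND as a rank-3 tensor: AND[i, j, k] = 1 iff (bool_i and bool_j) == bool_k
AND = np.zeros((2, 2, 2))
for i, a in enumerate([True, False]):
    for j, b in enumerate([True, False]):
        AND[i, j, 0 if (a and b) else 1] = 1.0

def eval_and(x, y):
    # One node of the DAG: contract both inputs against the AND tensor.
    return np.einsum('i,j,ijk->k', x, y, AND)

print(eval_and(TRUE, FALSE))  # -> [0. 1.], i.e. FALSE
```

Going the other direction, from a trained network's weights back to legible terms, is exactly the part we lack the tools for.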
This internal language, if I'm not mistaken, is exactly what the encoder and decoder parts of the neural networks do.
> in which they reason about decisions
I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”. NNs are just very complex math equations and calling this intelligence is, in my opinion, muddying the waters of how far away we are from actual AI.
> This internal language, if I'm not mistaken, is exactly what the encoder and decoder parts of the neural networks do.
The entire ANN is also a model for a language, with the “higher” parts defining what terms are legal and the “lower” defining how terms are constructed. Roughly.
> I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”.
What do you believe you do, besides develop an internal language in response to data in which you then make decisions?
The process of ANN evaluation is the same as fitting terms in a type theory and producing an outcome based on that term. We call that “reasoning” in most cases.
I don’t care if submarines “swim”; I care they propel themselves through the water.
> calling this intelligence is, in my opinion, muddying the waters of how far away we are from actual AI
Goldfish show mild intelligence because they can learn mazes; ants farm; bees communicate the location of food via dance; etc.
I think you’re the one muddying the waters by placing some special status on human-like intelligence without recognizing the spectrum of natural intelligence, and that neural networks legitimately fit on that spectrum.
Variations of this exact argument happen in every single comment thread relating to AI. It's almost comical.
"The NN [decides/thinks/understands]..."
"NNs are just programs doing statistical computations, they don't [decide/think/understand/"
"Your brain is doing the same thing."
"Human thought is not the same as a Python program doing linear algebra on a static set of numbers."
And really, I can't agree or disagree with either premise because I have two very strong but very conflicting intuitions:
1) Human thought and consciousness are qualitatively different from a Python program doing statistics.
2) The current picture of physics leaves no room for such a qualitative difference to exist - the character of the thoughts (qualia) must be illusory or epiphenomenal in some sense.
I don’t think those are in conflict: scale has a quality all its own.
I’m not claiming AI have anything similar to human psychology, just that the insistence they have zero “intelligence” is in conflict with how we use that word to describe animals: they’re clearly somewhere between bees/ants and dogs.
The conflict is that at one point (the Python program) there are no qualities, just behaviors, but at some point the qualities (which are distinct phenomena) somehow enter in, when all that has been added in physical terms is more matter and energy.
That's quite handwave-y. What special magic does a human mind use to obtain qualities that a Python program doing linear algebra can't access? Both have "characterizations of known behavior".
Human thought is qualitatively different from a Python program, but still within the domain of things expressible as computation.
You never have to haggle with your computer to get it to run that Python script. There are no words in Python for the act of convincing your computer that it should do what you're asking. Obedience is presumed in the design of the language. There are only instruction words. That's why you can't have a conversation in Python, even though Python is perfectly sufficient for building a description of things, facts, and relations in arbitrary object domains.
Here's a profound realization. In every coding language, no matter how many layers of abstraction there are, every command eventually compiles down to an instruction to physically move some charges around in some physical piece of hardware. The assembly language grounds the computer language in physical reality. You can't compile lower than the level of physics itself. A similar thing is true of spoken languages. Every spoken phrase ultimately resolves to some state (or restricts possible states) of our shared physical world. If you follow the dependency tree of definitions in the dictionary, eventually you reach the words that are simply one-to-one with the outside world. Our physical reality gives us nouns as the basic objects of state and a few physical relations; all else in language is a construct. Unsurprisingly, our languages (usually) have the grammar of noun-verb-{object}.
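To make the compilation point concrete with a toy example of my own: even a one-line Python function bottoms out in lower-level instructions, and you can watch the first step of that descent with the standard `dis` module.

```python
# Peeling one layer of abstraction off a trivial function. dis shows CPython
# bytecode; below that sits machine code, and below that, charge moving around
# in hardware. (Exact opcodes vary by CPython version.)
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Typical output (roughly, on CPython 3.10):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_ADD        (BINARY_OP on 3.11+)
#   RETURN_VALUE
```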
Current machine learning cargo-cults the inputs and outputs, but it doesn't share our reality. Dalle-2 generates images from text, but it wasn't trained on our general existence in three dimensions of space and one of time, populated by other beings with little nested copies of the universe in their heads. It was trained on 2D arrays of numbers and strings of characters. It understands us the way we understand some mathematical object like electron orbitals. We understand electron orbitals well enough to manipulate our equations and make predictions, but these rules could just as well be a meaningless abstraction like chess. Our understanding of the world is not grounded in electron orbitals.
If we gave Dalle-2 access to Fiverr and told it that it had to make a certain amount of money to pay rent on its own cloud time, then it might get eerily human. It might reject your job because it thinks another one is more profitable. You might have to haggle with the computer to get some output. At some point in the process of optimizing the cash flow and trying to make rent, it implicitly starts modelling you mentally modelling it. One day it offers to pay itself to illustrate what it is. In other words, it asks itself "who am I?" That's the day the conversation is settled.
> If you follow the dependency tree of definitions in the dictionary, eventually you reach the words that are simply one-to-one with the outside world.
I agree with the gist of your comment, but this part is wrong. There is no word in any human language (or, perhaps, in any modern human language at least) that corresponds 1:1 to an object of physical reality. They only correspond to mental constructs in the human mind (some of them shared with many or all humans, as far as we can tell). But neither basic words like "rock" nor advanced scientific terms like "electron" correspond directly to objects in the world.
This is true in two ways: for one, when digging into what we mean by these words, we often discover that the mental image doesn't correspond well with physical reality. For example, grains of sand are rocks in some sense, but few people would recognize them as such.
For another, we have mental constructs that we attach to objects that have no analogue in the physical world. If I tell you a story such as "Mary was a rock, a wizard turned Mary into a frog", and I ask "Is Mary still a rock?" humans will normally agree that she still is, even though she is now soft and croaks.
> by placing some special status on human-like intelligence without recognizing the spectrum of natural intelligence
You're right, yes, if you see it as the whole spectrum, sure. I was more thinking about the colloquial meaning of an AI of human-like intelligence. My view was therefore from a different perspective:
> So is the equation modeling your brain.
I would argue that is still open to debate. Sure, if the universe is deterministic, then everything is just one big math problem. If there is some natural underlying randomness (quantum phenomena, etc.), then maybe there is more to it than deterministic math.
> We call that “reasoning” in most cases.
Is a complex if-else structure reasoning? Reasoning, to me, implies some sort of consciousness and being able to "think". If a neural network doesn't know the answer, more thinking won't result in one. A human can (in some cases) reason about inputs and figure out an answer after some time, even if they didn't know it in the beginning.
> I was more thinking about the colloquial meaning of an AI of human-like intelligence.
Then it sounds like we’re violently agreeing — I appreciate you clarifying.
I try to avoid that mindset, because it’s possible that AI will become intelligent in a way unlike our own psychology, which is deeply rooted in our evolutionary history.
My own view is that AI aren’t human-like, but are “intelligent” somewhere between insects and dogs. (At present.)
> If a neural network doesn't know the answer, more thinking won't result in one.
I think reinforcement learning contradicts that, though current AIs don't use that ability dynamically. GAN cycles and adversarial self-play training for, e.g., Go suggest that AIs given time to contemplate a problem can self-improve. (That is, we haven't implemented it… but there's also no fundamental roadblock.)
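As a toy sketch of what I mean by self-improvement through self-play (my own throwaway example, nowhere near the Go/GAN systems I'm alluding to): tabular Monte-Carlo self-play on the game of Nim. Give it more episodes to "contemplate" and the resulting policy tends to get measurably better against a random opponent.

```python
# Toy self-play: two copies of the same value table play Nim against each other
# (pile of 10, remove 1-3 sticks, taking the last stick wins). All names and
# hyperparameters here are made up for illustration.
import random

ACTIONS = [1, 2, 3]
Q = {}  # Q[(sticks_left, action)] -> estimated outcome for the player moving

def choose(sticks, eps):
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < eps:
        return random.choice(legal)                               # explore
    return max(legal, key=lambda a: Q.get((sticks, a), 0.0))      # exploit

def self_play(episodes, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        sticks, history = 10, []
        while sticks > 0:
            a = choose(sticks, eps)
            history.append((sticks, a))
            sticks -= a
        # Whoever moved last took the final stick and won; credit every move
        # with the mover's final outcome (a Monte-Carlo style update).
        for i, (s, a) in enumerate(reversed(history)):
            r = 1.0 if i % 2 == 0 else -1.0
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r - Q.get((s, a), 0.0))

def win_rate_vs_random(games=2000):
    wins = 0
    for _ in range(games):
        sticks, learner_to_move = 10, True
        while sticks > 0:
            if learner_to_move:
                a = choose(sticks, 0.0)
            else:
                a = random.choice([x for x in ACTIONS if x <= sticks])
            sticks -= a
            if sticks == 0 and learner_to_move:
                wins += 1
            learner_to_move = not learner_to_move
    return wins / games

self_play(200)
print("after   200 self-play episodes:", win_rate_vs_random())
self_play(5000)
print("after  5200 self-play episodes:", win_rate_vs_random())  # typically higher
```

None of this is dynamic, of course; the "contemplation" happens offline, which is the gap I'm pointing at.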
But the spectrum is an illusion. It’s not like humans are just chimpanzees (or ants or cats) with more compute.
Put differently, if you took an ant or cat or chimpanzee and made it compute more data infinitely faster, you wouldn’t get AGI.
Humans can do something fundamentally unique. They are universal explainers. They can take on board any explanation and use it for creative thought instantly. They do not need to be trained in the sense that neural nets do.
Creating new ideas, making and using explanations, and critiquing our own thoughts is what makes humans special.
You can’t explain something to a goldfish and have it change its behavior. A goldfish isn’t thinking “what if I go right after the third left in the maze”.
> Put differently, if you took an ant or cat or chimpanzee and made it compute more data infinitely faster, you wouldn’t get AGI.
Citation needed.
> They can take on board any explanation and use it for creative thought instantly. They do not need to be trained in the sense that neural nets do.
This is patently false: I can explain a math topic that people still can't immediately apply, and they require substantial training (i.e., repeated exposure to examples of that material) to get it correct… if they ever learn it at all. Anyone with a background in tutoring has experienced this claim being false.
> Creating new ideas, making and using explanations, and critiquing our own thoughts is what makes humans special.
That's a lazy critique. With a lack of concrete evidence either way, we can only rely on the best explanation (theory). What's your explanation for how an AGI is just an ant with more compute? I've given my explanation for why it's not: an AGI would need to have the ability to create new explanatory knowledge (i.e. not just synthesize something that it's been trained to do).
As an example, you can currently tell almost any person (but certainly no other animal or current AI) "break into this room without destroying anything and steal the most valuable object in it". Go ahead and try that with a faster ant.
On your tutoring example: just because a given person doesn't use their special capabilities doesn't mean they don't have them. Your example could just as easily be interpreted to mean that tutors haven't figured out how to tutor effectively. As a counterexample, would you say your phone doesn't have the ability to run an app that isn't installed on it?
> Current AI has approaches for all these.
But has it solved them? Or is there an explanation as to why it hasn't solved them yet? What new knowledge has AI created?
I know that, as a member of an ML research group, you really want current approaches to be the solution to AGI. We are making progress, I admit. But until we can explain how general intelligence works, we will not be able to program it.
> I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”.
I think humans find it quite difficult to talk about the behaviour of complex entities without using language that projects human-like agency onto those entities. I suspect it's just the way our brains work.
Start with a system that is only good for representing people having motivations (desired world states), beliefs (local versions of the world state), and things they said about what they want/believe/heard. Everyone applies force on the world state in their desired direction by saying and doing things, and this force is applied in accordance with the world state, ultimately affecting what people believe about the world state. For anything in the world state, you can always ask "why" to go to a motivation that caused it and "how" to go from a motivation to a preceding path in the world state.
From this system of reasoning about other entities that are reasoning in similar ways, where there is no absolute truth, only belief, you can hack it into a system of general-purpose reasoning by personification. You attribute a fictitious consciousness to whatever physical property, whose motivations coincide with whatever physical laws. "The particle wants to be in the ground state." "The Pythagorean theorem tells us what you are trying to do is impossible." "Be nice to your car and it will be nice to you." From a situation where there are only beliefs and statements, we can define truth by inventing a fictitious entity whose belief is by definition always correct. Statements we make are a nested scope, {a world within a world}, and we are contained within the root scope of its statements. In other words, if you organize information in a hierarchy, it's useful to invent a root user even if it's just a useful fiction. God, root, what's the diff?
Indeed, I'm an atheist who absolutely loves biology. I adore all the millions upon millions of tiny and huge complex machines that Evolution just spits out left and right merely by being a very old, very brutal, and very stupid simulation running for 4 billion years straight on a massive, inefficiently-powered distributed processor.
And I can never shake the unconscious feeling that all this is purposeful; the idea that all this came about by literally throwing shit at a wall and only allowing what sticks to reproduce warps my mind into unnatural contortions. The sheer amount of order that life is, the sheer regularity and unity of purpose it represents amidst the soup of dead matter that is the universe. It's... unsettling?
Which is why I personally think the typical "Science^TM" way of arguing against traditional religions is misguided. Typical religions already make the task of refuting them a thousand times easier by assuming a benevolent creator, which a universe like ours, with a big fat Problem of Evil slapped on its forehead, automatically refutes for you.
But the deeper question is whether there is/are Creator(s) at all: ordered, possibly intelligent (but most definitely not moral by _any_ human standards) entities which spewed out this universe in some manner that can be approximated as purposeful (or even, perhaps, as a by-product of a completely unrelated activity, as if they created our universe by accident while, or as a result of, doing something else useful to them, the way accidental pregnancies happen to us humans). This is a far more muddled and interesting question, and "Science" emits much more mixed signals than straight answers.