
I'm not an AI researcher (although from my internet reading that's not a hard title to claim ;)) but I feel that ML/DL can't go much farther than they already have. The concept of "we just need more power" is an obvious fallacy to me.


I agree. In fact, I don't like the term Artificial Intelligence. It kind of implies that we understand what intelligence is and are therefore able to emulate it. In my opinion, it is not the mark of an advanced species to attempt to create something that we cannot adequately describe.

I mean, people keep creating all these "structures" and algorithms in software, each a little different from the last, each with different performance/result trade-offs. But nobody actually understands the fundamentals behind some of the very impressive results we are seeing (results made possible by technology that lets us throw computing horsepower at the problem).


Agreed. Throwing more computing power at the problem is a cheap way to make it look like we're making progress; our AI should be efficient enough to run on limited hardware. Not to say that ML/DL isn't useful, but I think the next revolution in AI is likely to come from a completely different direction.

I'm not a researcher either, so it's difficult for me to articulate exactly what problems I see, but (shameless self plug) I did write an article a little while ago attempting to do so: https://medium.com/@danShumway/modern-ai-techniques-arent-wo...


It's worth noting that the training-time/inference-time distinction matters here. Most of the fancy tech you've heard of recently (semantic segmentation, pose estimation, localisation, etc.) can be pretty easily optimised for fast inference, and indeed it's not much of a research focus because it's so tractable (see MobileNets, MobileNetV2, etc.). Training, however, is still quite daunting.
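
As a rough illustration (a minimal PyTorch-style sketch I'm making up for this comment; the model and sizes are arbitrary), inference is a single gradient-free forward pass, which is exactly the part you can freeze, shrink and quantize for limited hardware, while a training step also has to run a backward pass and an optimizer update:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    x = torch.randn(32, 512)
    target = torch.randint(0, 10, (32,))

    # Inference: one forward pass, no gradient bookkeeping. This is the part
    # that's comparatively easy to optimise for phones (cf. MobileNets).
    model.eval()
    with torch.no_grad():
        preds = model(x)

    # Training: forward pass *plus* loss, backward pass and parameter update,
    # which has to keep activations around and costs several times as much.
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = nn.functional.cross_entropy(model(x), target)
    loss.backward()
    opt.step()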


That's a good point.

I tend to be fairly dismissive of inference because what you end up with is a highly specialized algorithm rather than something that can easily continue to adapt. I suspect inference would probably fall into Jordan's category of "things that we call AI but probably shouldn't."

But that's not to dismiss how important fast/cheap inference has been in allowing companies to actually build things with AI.


Also, even on the learning side, perhaps the raw task of making things faster isn't really that hard:

https://eng.uber.com/accelerated-neuroevolution/


What's special about 2018 computing power? Neural networks have been declared dead several times in the past because they couldn't do anything interesting with the measly computing power available at the time. Currently the biggest RNNs have a few thousand neurons, which is absolutely puny compared to even animal brains.

Of course, this is defending a strawman; I don't know anyone who said it was just a computing power issue. In fact, I think most people vastly underestimate the role of computing power. Even algorithmic improvements are enabled by computers letting researchers do experiments that would have been impossible before. And by doing lots of experiments they gain intuition about the problem that they wouldn't develop in a vacuum.


The real fallacy is calling neural networks an analogue of animal brains. We still have a lot of research to do in this area.


They aren't. You're looking for "Hebbian learning", which is very different from how artificial neural networks are trained (rough sketch below).

In 50 years people will laugh about how poorly these things were named, just like we now laugh about the symbol we chose for the source of electrons in a circuit: "+". Whoops.
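
To go back to the Hebbian point and make the contrast concrete, here is a toy numpy sketch (made-up sizes, and a one-layer delta rule standing in for backprop, so treat it as an illustration only): the Hebbian update is purely local and never sees an error, while the gradient update that trains today's "neural networks" is driven entirely by an error against an external target:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)             # presynaptic activity
    W = rng.standard_normal((4, 8)) * 0.1  # synaptic weights
    lr = 0.01

    # Hebbian rule: "cells that fire together wire together" -- no target,
    # no error signal, just correlated activity strengthening a connection.
    y = W @ x
    W_hebb = W + lr * np.outer(y, x)

    # Gradient rule: the update only exists relative to an external target,
    # with the error pushed back onto the weights.
    target = rng.standard_normal(4)
    error = W @ x - target                 # d(loss)/d(output) for squared error
    W_grad = W - lr * np.outer(error, x)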


You both seem to have really confident opinions given that you admit you have no expertise in the field.


For the sake of argument I will go with the opposite idea: why not? Do you have evidence that scaling up will fail to produce more intelligence? We already know that current deep networks (small compared to the brain) produce fragments of intelligent behavior. We also have evidence that the brain is a connectionist system much, much larger than current ANNs. It would follow that intelligence could arise by simply scaling up.


> We also have evidence that the brain is a connectionist system much, much larger than current ANNs. It would follow that intelligence could arise by simply scaling up.

The little problem here is that it took 4 billion years of computing and a computer the size of a planet to come up with the nifty machines that are our brains. So unless you have brought a lot of tea and biscuits, I think we should really think twice about whether just throwing more training and power at overly generalised algorithms is a good and realistic path forward.

It's not so much that the idea of "throw more things at it" is impossible; it's that it's a questionable path towards human intelligence. If you just want human intelligence without any further understanding of how to make it, there are cheaper ways already.


> 4 billion years of computing and a computer the size of a planet

That is not impossible to simulate, even if it includes the entire lineage of humans. And nature prefers generalized algorithms as well, starting from DNA.


But we already have those machines to draw inspiration from! Evolution had nothing. That's why it took so long. I literally can't understand you people's pessimism.


Humans are proof that machine intelligence can be improved quite a bit. We are just complicated machines, no?


Yes. But two things.

1. We don't know all the mechanisms that a brain employs to achieve intelligence. We see billions of interconnected neurons and we assume, "Yeah, this might be generating intelligence."

2. We don't know if we are already at some fundamental limit of intelligence. For example, you can see many instances in nature where a pattern emerged that maximizes some sort of efficiency (like the honeycomb pattern). So the end result may be that, even if we transfer the process by which our intelligence works to a machine, it will have the same performance as an average human brain...


We're animals who have to worry about surviving and passing our genes on in a variety of social settings.

Machines are the tools we make to aid the above.


And your body and worries are just tools that help your genes reproduce.


And your genes are just tools that help the chemical environment they regulate reproduce.


And the chemical environment is just trying to maximize entropy.


It's almost like that chicken-and-egg scenario!


Yes, but we're not built on silicon, nor were we guided by curated data sets, nor did we have a deadline, nor does another parallel universe rely on our decisions for potentially life-and-death outcomes. I don't like this simple equivalence and I think it misses the point. We might not get to us with this tech; the model might not be close enough.


I'm specifically talking about the strategies used in deep/machine learning to approximate intelligence through probability.


Wait, how is that actually different from what our brains do? From what I know, our cognitive system is built in quite a similar fashion: probabilistic pattern matching with backpropagation, coupled with some "ad-hoc" heuristic subsystems.


Eh, no. Our brains, and how the human mind works, are actually very poorly understood. To claim we have a good idea of how our cognitive systems work under the hood is simply incorrect.


There is nothing like backpropagation in the brain, nor a probabilistic pattern matcher. There is evidence that a connectionist model is applicable, but learning is not deciphered, and there are aspects of it (neuronal excitability, local dendritic spiking, oscillations, up and down states, etc.) which do not translate at all to DL systems. That said, the increasing success of connectionist architectures does point to the conclusion that the brain is also a connectionist machine.


I'm not sure neuroscientists would quite agree with that.


"If I had asked my customers what they wanted, they would have said a faster horse."

Maybe throwing more power at the current solutions won't ever make the progress we want; we need to find our car, so to speak. And a lot of people are working on that.


I used to be of the same opinion, but not anymore.

I believe that with 3 orders of magnitude more processing power we would achieve amazing results within a decade; not AGI-type results, but very close to it from our perspective.

Part of the reason is that I now believe you can simply "brute-force" some problems with existing ML algorithms (like neural nets thousands of layers deep), but more importantly, one (not me) could test new ML algorithms that are not feasible now (I don't have any examples), and people would be able to iterate much faster in developing such algorithms.


Why don't you think it will continue to progress?


I am of a similar opinion. There was a real revolution in ~2011-2013 with deep learning, which achieved much better results on image/speech tasks. Those gains have leveled off, and we're seeing some limitations.

On the current trajectory, we're not headed towards a general intelligence. Progress has been made, but there are big gaps. Smart home devices are a great case in point. They are somewhat flexible in the voice commands they accept; specific phrasing and pronunciation are not necessarily required. Their responses and speech, however, are all pre-programmed and templated by humans.

Edit: There is potential for more breakthroughs in the future, but I am not seeing them on the horizon at the moment.


Well, reinforcement learning should also get a mention, and the revolutions there (e.g. AlphaGo to AlphaZero) were much more recent.


Post-2013 breakthroughs, off the top of my head:

* WaveNet, now productionized at Google for text-to-speech

* Alpha[Go]Zero

* Neural Machine Translation, in production at Google


AFAIK, these are all implementations of deep learning or something similar, not a fundamentally new architecture. We'll continue to see these as DL matures, but they don't address the shortcomings of the technique.

For more perspective: https://arxiv.org/abs/1801.00631


Where does the article say "we just need more power"?


Nowhere; I wasn't really referencing the article directly. Just a related thought I have.



