
At least the neural networks were at some point self-assessing and self-modifying, and plausibly could be said to “learn” something. Here it seems more plausible to say that the humans learned what structure to produce than that the glass did.

But you’re right, I think many “AIs” shouldn’t really be named that either!



They didn't manually adjust the glass until it worked (which would be infeasible); instead, they wrote a differentiable simulator and used it to determine the material to use at each point via gradient descent, which is quite a feat.

That's exactly as self-assessing and self-modifying as a neural network implemented using any other kind of computation substrate.
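Concretely, the design loop might look something like this toy sketch (a minimal JAX version with placeholder physics; the paper's actual simulator, loss, and parameterization are not reproduced here, and all names are made up):

    # Hypothetical sketch: gradient descent through a differentiable "simulator".
    import jax
    import jax.numpy as jnp

    def forward(material, field_in):
        # Toy stand-in for wave propagation: a few steps modulated by
        # the per-point material values (fake physics, for shape only).
        field = field_in
        for _ in range(3):
            field = jnp.tanh(material @ field)
        return field

    def loss(material, field_in, target):
        return jnp.mean((forward(material, field_in) - target) ** 2)

    key = jax.random.PRNGKey(0)
    material = jax.random.uniform(key, (16, 16))  # one value per grid point
    field_in = jnp.ones(16)
    target = jnp.zeros(16).at[3].set(1.0)         # e.g. route energy to pixel 3

    grad_fn = jax.grad(loss)                      # d(loss)/d(material)
    for _ in range(200):
        material = material - 0.1 * grad_fn(material, field_in, target)

Once the loop converges, the optimized material layout is fabricated as-is; nothing in the finished glass keeps running gradient descent.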


I skimmed the paper linked there. They did use a digital model of the glass-impurity substrate to adjust the locations of the impurities. That doesn't sound much different from training weights with backpropagation, except that here one can literally see those weights. I don't see why it wouldn't fit the usual definition of a neural network.


It may have been designed using AI, but it's not AI itself. The impurities are not the weights; they are the output of the design software. It is the design software that learned something, not the pane of glass.

It is like using AI to design, say, the most aerodynamic plane. Only here they used AI to design something that performs a task we traditionally use as a benchmark for AI models. But this piece of glass, just like the plane mentioned above, is not learning anything, and it is not an AI.


Thanks for this analogy; it made more sense than what I had imagined above.

If I understand correctly, it's the design process, not the glass, that involved learning. By the same analogy, I guess the sculpture in London (the glass, here), which was designed using a random walk (the neural nets), would be the same: the sculpture itself isn't a "random walk", but the design process was.

(I couldn't recall the name of the sculpture. Here's the wiki link: https://en.m.wikipedia.org/wiki/Quantum_Cloud )

Edit: I read the other comments and it's getting more confusing! AI, as taught in my school courses, meant implementing algorithms like hill climbing, where a system is online: it takes some input and does its best to find a solution. Now if I take the output of such a system and use it in, say, signal processing, that output would be a "device" that does something, not an "AI device". Does this make any sense at all? I'd love to get some pointers on this to read.
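To make that concrete for myself, here's a toy sketch (all names hypothetical): the hill-climbing loop below would be the "AI" part, while the coefficients it outputs are just a fixed device, like the glass.

    # Hypothetical sketch: the search loop "learns"; its output is a device.
    import jax
    import jax.numpy as jnp

    target = jnp.array([0.3, -0.7, 1.2])
    score = lambda c: -jnp.sum((c - target) ** 2)   # toy objective

    def hill_climb(score, params, key, steps=500):
        # The online part: perturb, keep improvements.
        for _ in range(steps):
            key, sub = jax.random.split(key)
            trial = params + 0.1 * jax.random.normal(sub, params.shape)
            params = jnp.where(score(trial) > score(params), trial, params)
        return params

    coeffs = hill_climb(score, jnp.zeros(3), jax.random.PRNGKey(0))

    def fixed_filter(x):
        # The frozen output of the search: applies coeffs, never adapts.
        return jnp.dot(coeffs, x)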


Out of curiosity: did you ever work with neural networks? (As in the algorithms, not the high-level abstractions.)


Yes, pretty much every day.



