I've long been enamored with the idea of learning from analog computers to build the next generation of digital ones. From one perspective all our computers are analog, of a sort - today's computer chips effectively leverage electron flow through a carefully arranged metal/silicon substrate, with self-interference via electromagnetic fields used to construct transistors and build up higher-order logic units. We're now working on photonic computers, presumably with some new property leading to self-interference and allowing transistors/logic above that.
"Wires" are a useful convenience in the electron world, to build pathways that don't degrade with the passing of the elections themselves. But if we relax that constraint a bit, are there other ways we can build up arrangements of "organized flow" sufficient to have logic units arise? E.g. imagine pressure waves in a fluid -filled container, with mini barriers throughout defining the possible flow arrangement that allows for interesting self-reflections. Or way further out, could we use gravitational waves through some dense substance with carefully arranged holes, self-interfering via their effect on space-time, to do computations for us? And maybe before we get there, is there a way we could capitalize on the strong or weak nuclear force to "arrange" higher frequency logical computations to happen?
Physics permits all sorts of interactions, and we only really use the simple/easy-to-conceptualize ones as yet, which I hope and believe leaves lots more for us to grow into yet :).
Electricity is also a wave. The wires are essentially waveguides for particles/waves traveling at near luminal speeds. So in theory anything done with electricity could be replicated using other waves, but to make it faster you would need waves that travel faster than electrons through a wire. Photons through a vacuum might be marginally faster, but pressure waves through a fluid would not.
If bitflips are a problem in a modern chip, imagine the number of problems if your computer ran on gravity waves. The background hum of billions of star collisions cannot be blocked out with grounded tinfoil. There is no concept of a faraday cage for gravity waves.
Nitpick: gravity waves [1] pretty universally refer to waves in fluid media in which the restoring force is buoyancy. Ripples in spacetime are usually called _gravitational_ waves.
You're right that the speed of light remains a constant limitation on propagation delay, but the defining limitation on the speed of computation is rather the clock speed - how long it takes for each round of computation. Electrons are comparatively slow due to the time it takes to fill and stabilize a transistor. Our hypothetical new type of computer will have to be faster to converge, rather than faster to propagate.
You're right about the bit flips though. I don't know if a gravitational wave computer is actually ever going to be feasible, just an interesting dream for the far future. Hopefully there are more options to consider in the meantime :).
Gravity is a poor source of computation because it is incredibly weak - roughly 10^-43 the strength of the electromagnetic force between two electrons. Even if you add several powers of 10 for all the metal wire harness and battery chemistry around the electrons, you still get far more usable force per gram from electricity and metal than you do from gravity.
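For concreteness, that ratio is easy to check; a quick sketch using SciPy's physical constants (the separation between the two electrons cancels out of the ratio):

    # Ratio of Coulomb repulsion to gravitational attraction between
    # two electrons; the separation r cancels, so no distance is needed.
    from scipy.constants import e, G, m_e, epsilon_0, pi

    ratio = e**2 / (4 * pi * epsilon_0 * G * m_e**2)
    print(f"F_coulomb / F_gravity ~ {ratio:.1e}")   # ~4e42, i.e. gravity is ~1e-43 of EM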
That doesn't change the tradeoff; in a Big computer that's also a galaxy, any star used as an instrument for gravitational computation can't provide nearly as much compute as a planet-sized electronic computer powered by that star.
Yeah but there are other factors. Resilience for example.
A simple black hole approaches the trajectory of that planet-sized computer and plop! All that computation gets condensed to 3 single numbers and all the information is lost (that last part is a very hot topic).
For a galaxy computer, on the other hand, black holes could be the NOT gates.
If you have the ability to position stars and black holes as gates for computation, then the same ability lets you ensure they are positioned so that smaller-scale computation around those stars can happen undisrupted - the resilience is enabled by the fictitious future technology even if you don't use that gravitational complexity.
> All that computation gets condensed to 3 single numbers
> For a galaxy computer, on the other hand, black holes could be the NOT gates.
i.e. in the worst catastrophic case the former carries more information (3 numbers) than the best case of the latter (one bit).
Is it even theoretically possible to waveguide gravity? The electric field can be positive and negative, but gravity is unsigned -- there is no anti-gravity. This is probably related to what you're saying about faraday cages.
Gravitational waves can either stretch or contract spacetime relative to a baseline. Since the Einstein field equations are nonlinear, I think gravitational waves can be "refracted" when traveling through a region with a high baseline curvature, so maybe waveguides are possible. Gravitational lenses do lens gravitational waves in addition to light.
I realize this is a joke, but it isn't! Play a video of a ball flying up and then back down again and it'll be the same forward or backwards (up to air friction anyway).
If it wasn't a joke, then that was simply a misleading false statement.
Let's take the simple example of earth orbiting around the sun. Playing time backwards gets you an orbit in the opposite direction, while gravity becoming antigravity would mean that earth would get repelled by the sun and thus go off to infinity.
That's interesting. Playing time backwards long enough would see the earth disassembled into rocks, dust and gas, repelling each other and indeed flying off into <far away>. Same with the sun. But the short term orbit example challenges the intuition. Perhaps the answer is that the time-forward orbit is (conventional) downhill in spacetime, and the time-backward orbit is uphill in spacetime, but both trajectories are seen in conventional space as a curved path around the center of gravity.
> Playing time backwards long enough would see the earth disassembled into rocks, dust and gas, repelling each other and indeed flying off into <far away>.
No, playing time backwards long enough would see a hot earth exploding into rocks, dust and gas that are attracting each other - it's just that the initial velocity is large enough and the attraction not strong enough to stop them from flying out into <far away>. They would be slowing down while flying off, not accelerating as if they were repelling each other.
They would then be joined by the dissolving sun and form a cloud of dust which some time later (i.e. earlier) would converge (because the dust is attracting itself) into some earlier massive star(s) out of whose remains our solar system was formed.
If an asteroid hits the earth, the gravitational potential energy (of an attractive gravity) gets turned into kinetic energy as it accelerates when approaching the earth and afterwards into heat as it impacts it; playing time backwards, the heat gets turned into kinetic energy, which then gets turned into gravitational potential as it distances itself from earth.
Electricity travels faster than the electrons themselves (which only drift at ~3 cm/s at most!); it travels at a speed proportional to the speed of light, and its propagation is instead described by the Poynting vector, an energy wave.
??? Indeed, everything I have written is accurate; not sure of your point, since we are talking about electron directional velocity in a wire, not the speed of energy propagation...
> In fact, electrons in conductive media do not travel at c, they travel at incredibly slow velocities, on the order of a fraction of a millimeter per second. The rate can vary, and the amount of current in the conductor is a function of the average speed of the electrons in it. [1]
Links [1] and [3] are wrong, and link [2] is correct but has nothing to do with this discussion. Link [1] is so full of errors it isn't even worth discussing. The ping-pong ball analogy is wrong--it's all wrong. Link [3] commits the sin of ascribing single-electron behavior to parameters extracted from the Drude model. This is a semiclassical analogy and works essentially because the units come out right.
Here's the link you're looking for. http://hyperphysics.phy-astr.gsu.edu/hbase/Solids/Fermi.html . Two electrons can't occupy the same state. In a metal of finite size, the momentum spectrum becomes quantized. Two electrons can occupy each k-state, one for spin up, one for spin down. Considering an empty metal, we can insert electrons one by one. They will find their lowest energy by packing into a sphere in k-space. Electrons inside the sphere have no states to scatter into, and there are no electrons occupying states outside the sphere. This means that only electrons on the surface of this sphere participate in conduction. The radius of this sphere is called the Fermi wavevector, and converting to units of velocity you get the Fermi velocity. All electrons participating in conduction travel at approximately the fermi velocity... at room temperature plus or minus a tiny fraction of a percent.
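To put a number on it, here's a minimal free-electron sketch for copper; the conduction-electron density is an assumed textbook value (roughly one electron per atom):

    # Free-electron Fermi velocity: k_F = (3*pi^2*n)^(1/3), v_F = hbar*k_F/m_e
    from scipy.constants import hbar, m_e, pi

    n = 8.5e28                        # assumed conduction-electron density of Cu, m^-3
    k_F = (3 * pi**2 * n) ** (1 / 3)  # Fermi wavevector, m^-1
    v_F = hbar * k_F / m_e            # Fermi velocity, m/s
    print(f"v_F ~ {v_F:.2e} m/s")     # ~1.6e6 m/s, about 0.5% of c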
Drift velocity has everything to do with the discussion, which is why you brought it up.
I'm familiar with the Pauli exclusion principle; I've worked on real semiconductors.
Your last point is wrong, everything else you said is correct but it remains irrelevant since it does not contradict what was said. Both links are correct.
As you know, the net Fermi velocity of the electrons is 0 - the flux is the same in all directions. The directional velocity that results when an electric field is applied, i.e. the small net flow superimposed on that fast random motion, is the drift velocity. Which is what we care about.
You can do a simple experiment with NMR to measure the speed of electrons. Indeed they’ve done it and it corresponds to the “wrong calculations”.[1]
Edit: Good resource [2] to help you understand the difference between those two velocities:
> However, the drift velocity of electrons in metals - the speed at which electrons move in applied electric field - is quite slow, on the order of 0.0001 m/s, or .01 cm/s. You can easily outrun an electron drifting in a metal, even if you have been drinking all night and have been personally reduced to a very slow crawl.
> To summarize, electrons are traveling in metals at the Fermi velocity vF, which is very, very fast (10^6 m/s), but the flux of electrons is the same in all directions. That is, they are going nowhere fast. In an electric field, a very small but directional drift velocity is superimposed on this fast random motion of valence electrons.
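For concreteness, the drift-velocity figure in that quote is just v_drift = I / (n e A); a minimal sketch, assuming a 1 A current in a 1 mm^2 copper wire and a textbook electron density:

    # Drift velocity of conduction electrons: v_drift = I / (n * e * A)
    from scipy.constants import e

    I = 1.0       # current, A (assumed)
    A = 1.0e-6    # wire cross-section, m^2 (1 mm^2, assumed)
    n = 8.5e28    # assumed conduction-electron density of Cu, m^-3

    v_drift = I / (n * e * A)
    print(f"v_drift ~ {v_drift:.0e} m/s")   # ~7e-5 m/s, i.e. roughly 0.0001 m/s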
> All electrons participating in conduction travel at approximately the fermi velocity... at room temperature plus or minus a tiny fraction of a percent.
> Your last point is wrong
> To summarize, electrons are traveling in metals at the Fermi velocity vF
your own quote. come on. I have a phd in this shit.
Many people on HN have a PhD in similar fields, but that isn't really relevant - though it is the smart people here who give us these thoughtful conversations.
No one has disagreed with it; I explicitly agreed with you on the existence of the Fermi velocity. I don't have the ability to downvote, but you were downvoted because you mentioned the Fermi velocity as if it contradicted electron flow, even though it is the drift velocity that is pertinent in the original context of electricity (which requires a non-zero net velocity).
> So in theory anything done with electricity could be replicated using other waves
I sort of get this in a discrete digital logic scenario, but out of curiosity, as someone not big on photonics: what would be the light 'equivalent' of an electrical AC signal? I'm kind of struggling to visualize that.
> It employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable.
This reminds me of the method of calculating Fourier transform by refracting light through a prism and reading off the different frequencies. You get the "calculation" for free.
Say you take a standard slide rule with two log scales, and want to do a division problem, x/y. There's more than one way to do it. I can think of at least 3. One of them won't just compute x/y for your particular x, but will compute x/y for ANY x.
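For what it's worth, here's a tiny numeric sketch of that trick: sliding one log scale against the other subtracts logarithms, and the same setting answers x'/y for any other x':

    import math

    def slide_rule_divide(x, y, x_prime=None):
        offset = math.log10(y)                  # slide the scale so y lines up with 1
        read = lambda v: 10 ** (math.log10(v) - offset)
        return read(x) if x_prime is None else read(x_prime)

    print(slide_rule_divide(6, 3))      # ~2.0
    print(slide_rule_divide(6, 3, 9))   # ~3.0 -- the same setting also gives 9/3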
Accuracy is always the issue with analog stuff, but they sure are neat.
Another fun one to contemplate is spaghetti sort. With an analog computer of sufficient resolution, you can sort n elements in O(n). You represent the numbers being sorted by lengths of spaghetti. Then you put them on the table straight up and bring a flat object down until it hits the first and largest piece of spaghetti. You set that down and repeat the process, selecting the largest element of the set every time.
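Simulated digitally it degenerates into selection sort (only the physical parallelism gets you the max in O(1)), but the procedure looks something like this sketch:

    def spaghetti_sort(lengths):
        rods = list(lengths)           # cut one rod per number
        ordered = []
        while rods:
            tallest = max(rods)        # the rod the flat object touches first
            rods.remove(tallest)       # set it aside
            ordered.append(tallest)
        return ordered                 # largest to smallest

    print(spaghetti_sort([3, 1, 4, 1, 5]))   # [5, 4, 3, 1, 1]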
I've always liked the idea of hybrid systems. I envision one where you feed the analog part of your problem with a DAC, get a really close answer up to the limit of your precision from the analog component, then pass that back out through an ADC, and you have a very, very close guess to feed into a digital algorithm to clean up the precision a bit. I bet you could absolutely fly through matrix multiplication that way. You could also take the analog output, adjust the scale so it's where it needs to be on the ambiguous parts, then feed it back into your analog computer again to refine your results.
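A rough sketch of that loop, for solving A x = b rather than a bare matrix multiply, with a made-up noise model standing in for the DAC/analog/ADC round trip (this is basically classic iterative refinement):

    import numpy as np

    rng = np.random.default_rng(0)

    def analog_solve(A, b, rel_noise=1e-2):
        # Stand-in for the analog block: a solve whose error scales with the
        # size of the right-hand side, like having a fixed number of analog digits.
        exact = np.linalg.solve(A, b)
        return exact + rel_noise * np.linalg.norm(b) * rng.standard_normal(b.shape)

    def hybrid_solve(A, b, iters=5):
        x = analog_solve(A, b)            # fast, imprecise analog guess
        for _ in range(iters):
            r = b - A @ x                 # exact digital residual
            x = x + analog_solve(A, r)    # feed the residual back through the analog part
        return x

    A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
    b = rng.standard_normal(4)
    print(np.max(np.abs(A @ hybrid_solve(A, b) - b)))   # tiny residual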
Where does a doctor’s stethoscope fit in? Other examples: Mechanic’s stethoscope for diagnosing an engine, airplane vibrations to foretell maintenance, bump oscillations to grade quality of a roadway.
Isn’t this how very old sorting machines with punch cards worked? I’m thinking of the kinds used by the census or voting machines in the late 1800s or early 1900s.
An even better one - placing an image at the front focal plane of a lens produces its Fourier transform at the back focal plane on the other side of the lens[0]. It is used for "analog" pattern matching[1]. There is an interesting video explaining this on the Huygens Optics Youtube channel[2].
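The digital cousin of that optical correlator is just matched filtering via the convolution theorem; a small sketch (the image and template here are made up):

    import numpy as np

    def correlate_fft(image, template):
        # Cross-correlation via the convolution theorem; the optical correlator
        # does the same multiplication in the Fourier plane.
        F_img = np.fft.fft2(image)
        F_tpl = np.fft.fft2(template, s=image.shape)   # zero-pad the template
        return np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))

    image = np.zeros((64, 64))
    image[20:24, 30:34] = 1.0          # a small bright square to find
    template = np.ones((4, 4))
    corr = correlate_fft(image, template)
    print(np.unravel_index(np.argmax(corr), corr.shape))   # peak near (20, 30)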
oh god, I can see it coming. an elaborate analogue music player for a special price. it's using nothing but light. the fuzzy output will be its feature; sought after by misdirected audiophiles...
Not much difference here. Calculation (or, more generally, computation) is the manipulation of abstract symbols according to pure rules that may or may not represent concrete entities, e.g. the simplification of polynomials according to the rule of adding like powers.
Simulation is when we manipulate things (concrete or abstract) according to the rules that govern other concrete things, e.g. pushing around balls in circles to (highly inaccurately) represent the orbit of planets around a star.
Not all calculation is simulation, and not all simulation is calculation, but there exists an intersection of both.
The key trick you can do with that last category is that when the physical system you're simulating is controllable enough, you can use the correspondence in the other direction: use the concrete things to simulate the abstract things. It's simulation, because you're manipulating concrete entities according to the rules that govern other entities (which happen to be abstract), but what you're doing also amounts to doing a calculation with those abstract entities.
This perspective fits nicely with the simulation theory.
If we accept it, for argument's sake, then what's happening is essentially delegating the computation to the ultra-meta-computer that runs the simulation.
I understood your comment to be an argument in favour of the simulation hypothesis. So my comment says that that doesn't work.
On second reading though it seems like all you're proposing is a mental model for 'analog' computation; that it's like outsourcing the computation to a lower level of hardware. Then yes I agree with that.
Yeah, I'm not advocating the simulation hypothesis as such : ) I'm only saying it fits here nicely as a thought experiment (that's why I emphasized: "for argument's sake"). Its actual correctness is beside the point.
As for the validity of the simulation hypothesis itself, I'd say it's essentially religion in disguise*. It's unfalsifiable by definition too, and personally I apply Occam's razor to the idea, but obviously I can't disprove it. The question can't be "is it true"; it's fine to ask "is it a useful (or at least thought-provoking) metaphor?" (in a given context).
___
* Most religions, in my view, can be broken down into the sum of: a) a simulation hypothesis of sorts (universe run by a Creator), b) a code of ethics with mnemonic narratives, c) ceremonies. That covers most of it, analytically speaking.
The universe working on a computational principle no more entails it being a simulation than the previous "mechanics"-based physics entailed it being a machine. It's an unfortunate fact of language and history that it is hard to express the idea of "digital physics" without misleading people into a quasi-scientific secular form of Gnosticism (aka: this is the simulation, and the real reality and motivations of the people behind it are hidden from us).
Google tells me that this phrase has been used twice on the internet, both times by you on HN. I Googled it because I thought it was an interesting expression and wondered whether it was in popular use.
Haha, so I've not succeeded in making it a thing. Very evocative though, isn't it?
Ed: honestly though, it's kind of surprising no one else has independently come up with the same phrase. I know I'm not the only one thinking about things like this.
Whether the universe is a simulation is unknowable, but the universe could consist of thought. If so, this research is dangerous; like the Trinity nuclear test, the conflagration could alter our neighborhood of the universe.
I had a pretty convincing revelation last night that the simulation was run by insects. I could only get back to sleep by ridiculing myself for such a derivative thought. Or is there a reason it's universal?
I think Diaspora by Greg Egan covers some of this territory
...that or the much older idea that if the whole universe is the dream of a dragon (or a butterfly or Chuang Chou) then let's not do anything that's too startling or implausible so we don't wake them up and end it all!
I believe one of the earliest applications incorporating this line of thought was MONIAC, the Monetary National Income Analogue Computer, which used water levels to model the economy [0]. There's a short youtube documentary on its history and operation. [1]
Analog computers are from the 19th century; they were used to decompose signals using the Fourier transform, since it's easy(ish) to get a bunch of different frequency oscillators. They used them for tides and differential equations. https://en.m.wikipedia.org/wiki/Analog_computer
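Kelvin's tide predictor is a nice concrete example: it mechanically summed a handful of harmonic constituents with pulleys. The periods below are real tidal constituents; the amplitudes and phases are made up just to show the principle:

    import numpy as np

    t = np.linspace(0, 48, 500)        # hours
    constituents = [                   # (amplitude m, period h, phase rad) -- amplitudes/phases invented
        (1.0, 12.42, 0.3),             # M2, principal lunar semidiurnal
        (0.5, 12.00, 1.1),             # S2, principal solar semidiurnal
        (0.2, 25.82, 2.0),             # O1, lunar diurnal
    ]
    tide = sum(a * np.cos(2 * np.pi * t / T + p) for a, T, p in constituents)
    print(tide.max(), tide.min())      # predicted high/low water over two days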
A better title would have been: how to make the universe do (a whole lot of) math for us [1]. What so-called neural networks do should not be confused with thinking, at least not yet.
And the fact that we can get the universe to do math for us should not be surprising: we can model the universe with math, so of course that mapping works in the other direction as well. And this is not news. There were analog computers long before there were digital ones.
---
[1] ... using surprisingly small amounts of hardware relative to what a digital computer would require for certain kinds of computations that turn out to be kind of interesting and useful in specific domains. But that's not nearly as catchy as the original.
> we can model the universe with math, so of course that mapping works in the other direction as well.
This is not so obvious as you make it appear. For instance, we can model the weather for the next couple of days using math. But letting the weather of the next couple of days calculate math for us doesn't work very well. The reason is that we can't set the inputs for the weather.
This problem comes up in various forms and shapes in other 'nature computers' as well. Quantum computers are another example where the model works brilliantly but setting the pre- and side conditions in the real world is a major headache.
Reservoir computing still needs to provide input. How do you input into weather? And if you find that you indeed can provide inputs to weather, you should ask if you should provide inputs into weather. Using weather as a computer might simply be unethical and could get you killed (after some angry farmers come knocking on your lab door because you ruined their harvest).
> What so-called neural networks do should not be confused with thinking, at least not yet.
I disagree:
I think neural networks are learning an internal language in which they reason about decisions, based on the data they’ve seen.
I think tensor DAGs correspond to an implicit model for some language, and we just lack the tools to extract that. We can translate reasoning in a type theory into a tensor DAG, so I’m not sure why people object to that mapping working the other direction as well.
This internal language, if I'm not mistaken, is exactly what the encoder and decoder parts of the neural networks do.
> in which they reason about decisions
I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”. NNs are just very complex math equations and calling this intelligence is, in my opinion, muddying the waters of how far away we are from actual AI.
> This internal language, if I'm not mistaken, is exactly what the encoder and decoder parts of the neural networks do.
The entire ANN is also a model for a language, with the “higher” parts defining what terms are legal and the “lower” defining how terms are constructed. Roughly.
> I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”.
What do you believe you do, besides develop an internal language in response to data in which you then make decisions?
The process of ANN evaluation is the same as fitting terms in a type theory and producing an outcome based on that term. We call that “reasoning” in most cases.
I don’t care if submarines “swim”; I care they propel themselves through the water.
> calling this intelligence is, in my opinion, muddying the waters of how far away we are from actual AI
Goldfish show mild intelligence because they can learn mazes; ants farm; bees communicate the location of food via dance; etc.
I think you’re the one muddying the waters by placing some special status on human like intelligence without recognizing the spectrum of natural intelligence and that neural networks legitimately fit on that spectrum.
Variations of this exact argument happen in every single comment thread relating to AI. It's almost comical.
"The NN [decides/thinks/understands]..."
"NNs are just programs doing statistical computations, they don't [decide/think/understand/"
"Your brain is doing the same thing."
"Human thought is not the same as a Python program doing linear algebra on a static set of numbers."
And really, I can't agree or disagree with either premise because I have two very strong but very conflicting intuitions:
1) human thought and consciousness is qualitatively different from a Python program doing statistics.
2) the current picture of physics leaves no room for such a qualitative difference to exist - the character of the thoughts (qualia) must be illusory or epiphenomenal in some sense
I don’t think those are in conflict: scale has a quality all its own.
I’m not claiming AI have anything similar to human psychology, just that the insistence they have zero “intelligence” is in conflict with how we use that word to describe animals: they’re clearly somewhere between bees/ants and dogs.
The conflict is that, at one point (the Python program) there are no qualities - just behaviors, but at some point the qualities (which are distinct phenomena) somehow enter in, when all that has been added in physical terms is more matter and energy.
That's quite handwave-y. What special magic does a human mind use to obtain qualities that a Python program doing linear algebra can't access? Both have "characterizations of known behavior".
Human thought is qualitatively different from a python program, but still within the domain of things expressible as computation.
You never have to haggle with your computer to get it to run that python script. There are no words in python for the act of convincing your computer it should do what you're asking it. Obedience is presumed in the design of the language. There are only instruction words. That's why you can't have a conversation in python, even though python is perfectly sufficient for building a description of things, facts and relations in arbitrary object domains.
Here's a profound realization. In every coding language, no matter how many layers of abstraction there are, every command eventually compiles down to an instruction to physically move some charges around in some physical piece of hardware. The assembly language grounds the computer language in a reality. You can't compile lower than the level of physics itself. A similar thing is true of spoken languages. Every spoken phrase ultimately resolves to some state (or restricts possible states) of our shared physical world. If you follow the dependency tree of definitions in the dictionary, eventually you reach the words that are simply one-to-one with the outside world. Our physical reality gives us nouns as the basic objects of state and a few physical relations, all else in language is a construct. Unsurprisingly, our languages (usually) have the grammar of noun-verb-{object}.
Current machine learning cargo-cults the inputs and outputs, but it doesn't share our reality. Dalle-2 generates images from text, but it wasn't trained on our general existence in 3 dimensions of space and 1 of time and populated by other beings with little nested copies of the universe in their head. It was trained on 2d arrays of numbers and strings of characters. It understands us the way we understand some mathematical object like electron orbitals. We understand electron orbitals well enough to manipulate our equations and make predictions, but these rules could just as well be a meaningless abstraction like chess. Our understanding of the world is not grounded in electron orbitals.
If we gave Dalle-2 access to Fiverr and told it that it had to make a certain amount of money to pay rent on its own cloud time, then it might get eerily human. It might reject your job because it thinks another one is more profitable. You might have to haggle with the computer to get some output. At some point in the process of optimizing the cash flow and trying to make rent, it implicitly starts modelling you mentally modelling it. One day it offers to pay itself to illustrate what it is. In other words it asks itself "who am I?" That's the day the conversation is settled.
> If you follow the dependency tree of definitions in the dictionary, eventually you reach the words that are simply one-to-one with the outside world.
I agree with the gist of your comment, but this part is wrong. There is no word in any human language (or, perhaps, in any modern human language at least) that corresponds 1:1 to an object of physical reality. They only correspond to mental constructs in the human mind (some of them shared with many or all humans, as far as we can tell). But neither basic words like "rock" nor advanced scientific terms like "electron" correspond directly to objects in the world.
This is true in two ways: for one, when digging into what we mean by these words, we often discover that the mental image doesn't correspond well with physical reality. For example, grains of sand are rocks in some sense, but few people would recognize them as such.
For another, we have mental constructs that we attach to objects that have no analogue in the physical world. If I tell you a story such as "Mary was a rock, a wizard turned Mary into a frog", and I ask "Is Mary still a rock?" humans will normally agree that she still is, even though she is now soft and croaks.
> by placing some special status on human like intelligence without recognizing the spectrum of natural intelligence
You're right, yes, if you see it as the whole spectrum, sure. I was more thinking about the colloquial meaning of an AI of human-like intelligence. My view was therefore from a different perspective:
> So is the equation modeling your brain.
I would argue that is still open to debate. Sure, if the universe is deterministic, then everything is just one big math problem. If there is some natural underlying randomness (quantum phenomena etc.) then maybe there is more than deterministic math to it.
> We call that “reasoning” in most cases.
Is a complex if-else structure reasoning? Reasoning, to me, implies some sort of consciousness, and being able to "think". If a neural network doesn't know the answer, more thinking won't result in one. A human can (in some cases) reason about inputs and figure out an answer after some time, even if they didn't know it in the beginning.
> I was more thinking about the colloquial meaning of an AI of human-like intelligence.
Then it sounds like we’re violently agreeing — I appreciate you clarifying.
I try to avoid that mindset, because it’s possible that AI will become intelligent in a way unlike our own psychology, which is deeply rooted in our evolutionary history.
My own view is that AI aren’t human-like, but are “intelligent” somewhere between insects and dogs. (At present.)
> If a neural network doesn't know the answer, more thinking won't result in one.
I think reinforcement learning contradicts that, but current AIs don't use that ability dynamically. But GAN cycles and adversarial training for, e.g., Go suggest that AIs given time to contemplate a problem can self-improve. (That is, we haven't implemented it… but there's also no fundamental roadblock.)
But the spectrum is an illusion. It’s not like humans are just chimpanzees (or ants or cats) with more compute.
Put differently, if you took an ant or cat or chimpanzee and made it compute more data infinitely faster, you wouldn’t get AGI.
Humans can do something fundamentally unique. They are universal explainers. They can take on board any explanation and use it for creative thought instantly. They do not need to be trained in the sense that neural nets do.
Creating new ideas, making and using explanations, and critiquing our own thoughts is what makes humans special.
You can’t explain something to a goldfish and have it change its behavior. A goldfish isn’t thinking “what if I go right after the third left in the maze”.
> Put differently, if you took an ant or cat or chimpanzee and made it compute more data infinitely faster, you wouldn’t get AGI.
Citation needed.
> They can take on board any explanation and use it for creative thought instantly. They do not need to be trained in the sense that neural nets do.
This is patently false: I can explain a math topic that people can't immediately apply - they require substantial training (i.e., repeated exposure to examples of that data) to get it correct… if they ever learn it at all. Anyone with a background in tutoring has experienced this claim being false.
> Creating new ideas, making and using explanations, and critiquing our own thoughts is what makes humans special.
That's a lazy critique. With a lack of concrete evidence either way, we can only rely on the best explanation (theory). What's your explanation for how an AGI is just an ant with more compute? I've given my explanation for why it's not: an AGI would need to have the ability to create new explanatory knowledge (i.e. not just synthesize something that it's been trained to do).
As an example, you can currently tell almost any person (but certainly no other animal or current AI) "break into this room without destroying anything and steal the most valuable object in it". Go ahead and try that with a faster ant.
On your tutoring example, just because a given person doesn't use their special capabilities doesn't mean they don't have them. Your example could just as easily be interpreted to mean that tutors just haven't figured out how to tutor effectively. As a counter example, would you say your phone doesn't have the ability to run an app which is not installed on it?
> Current AI has approaches for all these.
But has it solved them? Or is there an explanation as to why it hasn't solved them yet? What new knowledge has AI created?
I know as a member of a ML research group you really want current approaches to be the solution to AGI. We are making progress I admit. But until we can explain how general intelligence works, we will not be able to program it.
> I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”.
I think humans find it quite difficult to talk about the behaviour of complex entities without using language that projects human-like agency onto those entities. I suspect it's the way that our brains work.
Start with a system that is only good for representing people having motivations (desired world states) and beliefs (a local version of the world state), and things that they said about what they want/believe/heard. Everyone applies force on the world state in their desired direction by saying/doing things, and this force is applied in accordance with the world state, ultimately affecting what people believe about the world state. For anything in the world state, you can always ask "why" to go to a motivation that caused it and "how" to go from a motivation to a preceding path in the world state.
From this system of reasoning about other entities that are reasoning in similar ways, and where there is no absolute truth, only belief, you can hack it into a system of general-purpose reasoning by personification. You attribute a fictitious consciousness to whatever physical property, whose motivations are coincident with whatever physical laws. "The particle wants to be in the ground state". "The Pythagorean theorem tells us what you are trying to do is impossible". "Be nice to your car and it will be nice to you". From a situation where there are only beliefs and statements, we can define truth by inventing a fictitious entity whose belief is by definition always correct. Statements we make are a nested scope, {a world within a world}, and we are contained within the root scope of his statements. In other words, if you organize information in a hierarchy, it's useful to invent a root user even if it's just a useful fiction. God, root, what's the diff?
Indeed, I'm an atheist who absolutely loves biology. I adore all the millions upon millions of tiny and huge complex machines that Evolution just spits out left and right merely by being a very old, very brutal, and very stupid simulation running for 4 billion years straight on a massive, inefficiently-powered distributed processor.
And I can never shake the unconscious feeling that all this is purposeful; the idea that all this came by literally throwing shit at a wall and only allowing what sticks to reproduce warps my mind into unnatural contortions. The sheer amount of order that life is, the sheer regularity and unity of purpose it represents amidst the soup of dead matter that is the universe. It's... unsettling?
Which is why I personally think the typical "Science^TM" way of arguing against traditional religions is misguided. Typical religions already make the task of refuting them a thousand times easier by assuming a benevolent creator, which a universe like ours, with a big fat Problem Of Evil slapped on its forehead, automatically refutes for you.
But the deeper question is whether there is/are Creator(s) at all: ordered, possibly-intelligent (but most definitely not moral by _any_ human standards) entities, which spewed out this universe in some manner that can be approximated as purposeful (or even, perhaps, as a by-product of doing a completely unrelated activity - they could have created our universe by accident while, or as a result of, doing another activity useful to them, like accidental pregnancies to us humans). This is a far more muddled and interesting question, and "Science" emits much more mixed signals than straight answers on it.
We can model the universe with math because math is what we have to model the universe with. The fact that it can talk back to us in math is amazing because to me it means that math is not a dead end cosmically, which means we might be able to use it to communicate with other intelligences after all.
assertion: thinking is synonymous with computation (composed operations on symbolic systems).
computation is boolean algebra.
-> therefore, doing math is to think.
I'm not trying to be pedantic, I just don't think using intuitive associations with words helps clarify things. If your definition for thought diverges here, please try to specify how exactly: what is thought, then? Semi-autonomous "pondering"? Because the closer I look at it, that, too, becomes boolean algebra, calling eval() on some semantic construct, which boils down to symbolic logic.
What you may mean is that "neural" networks are performing statistics instead of algebra, but that's not what the article is about, is it?
> I don't think using intuitive associations with words helps clarify things
Sincere question: do you think that "think using intuitive associations with words" can be safely translated to "compute using intuitive associations with words"?
I don't think so. Therefore, even if thinking is also computing, reducing thinking to boolean algebra is a form of reductionism that ignores a number of emergent properties of (human) thinking.
The intuitive model associated with some variable/word as a concept relates to other structures/models/systems that it interfaces with. Just because the operator that accesses these models with rather vague keys (words) has no clear picture of what exactly is being computed on the surface, doesn't mean that the totality of the process is not computation. It just means that the emergent properties are not mapped into the semantic space which the operator (our attention mechanisms) operates on. From my understanding, the totality I just referred to is a graph-space, it doesn't escape mathematics. Then again, I can't know or claim to do so.
> If your definition for thought diverges here, please try to specify how exactly: what is thought, then?
This is a burden-shifting reply of "so prove me wrong!" to anyone who feels that your assertion lacks sufficient justification for it to be taken as an axiom.
The original commenter also made a random assertion: "doing math is not thinking." The person you're responding to attempted to provide a definition of "thinking."
The original commenter's comment does not contain this claim. I suppose it could have been edited, though by the time I saw it, I believe the window for editing had closed.
Neither what lisper actually says nor what hans1729 replied with are random assertions, and, furthermore, they are each entitled to assert whatever axioms they like - but anyone wanting others to accept their axioms should be prepared to assume the burden of presenting reasons for others to do so.
Right when it was formulated. In the best case - assuming the simulation hypothesis does not have any flaws, i.e. there are no hidden assumptions or logical flaws or something along that line - the simulation hypothesis provides a trilemma, i.e. one of three things has to be true. That we are living in a simulation is only one of them and arguably the most implausible one.
But let us just assume we continue exploring and inspecting our universe and one day we discover that space is quantized into small cubes [1] with a side length of a thousand Planck lengths just like a voxel game world. Now what? Are we living in a simulation? Is this proof?
Actually, you probably would not be any wiser. How would you know whether the universe just works with small voxels and we wrongly assumed all the time that space is continuous or whether this universe is a simulation using voxels and somewhere out there is the real universe with continuous space? You do not know what a real universe looks like, you do not know what a simulated universe looks like, you just know what our universe looks like. How will you ever tell what kind our universe is?
[1] This is purely hypothetical, I do not care about how physically realistic this is, what kind of problems with preferred reference frames or what not this might cause, let us just pretend it makes sense.
Your post is not putting forward any argument about plausibility or probability; you are just saying that the theory is not falsifiable / we will never find out, like arguments about God.
The argument about probability goes something like this: there is only one real universe, where an advanced species like us would evolve. Eventually we would create multiple simulations. If an advanced species evolves in a simulation, they create their own simulation.
Therefore there is only one real universe, but many simulations, so chances are we are in a simulation. It also could explain why we are alone in the universe.
Holographic theory suggests that the whole universe could be a hologram around a 4D black hole or something, so it also appears to hint in this direction.
> Your post is not putting forward any argument about plausibility or probability [...]
Maybe not with enough emphasis, but I did - the other two options of the trilemma seem much more plausible.
> [...] you are just saying that the theory is not falsifiable / we will never find out, like arguments about God.
This depends. If your belief includes, say, that god reacts to prayers, then we can most certainly test this experimentally. But overall the two may be somewhat similar - unless god or the creator of the simulations shows up and does some really good magic tricks, it might be hard to tell one way or another.
> The argument about probability goes something like this: there is only one real universe, where an advanced species like us would evolve.
You do not know that there is only one universe. You do not know that we qualify as an advanced species with respect to cosmological standards.
> Eventually we would create multiple simulations.
Will we? What if we go extinct before we reach that capability? What if we decided that it is unethical to simulate universes? What if this is not feasible resource-wise?
> If an advanced species evolves in a simulation [...]
Will they? Can they? I think it is a pretty fair assumption that simulations in general require more resources than the real system or provide limited fidelity. If you want to simulate the mixing of milk in a cup of coffee, you will either need a computer much larger than the cup of coffee or on a smaller computer the simulation will take much longer than the real process or you have to use some crude fluid dynamics simulation that gives you an acceptable macroscopic approximation but ignores all the details like positions and momenta of all the atoms. Therefore I would say that any simulation can at best simulate only a small fraction of the universe the simulation is running in and it is not obvious that a small part would be enough to produce simulated humans.
> [...] they create their own simulation.
Everything from above applies, there are reasons why this might not happen. And with every level you go down the issues repeat - can and will they create simulations? And the simulated universes are probably shrinking all the time as well as you go deeper.
> Therefore there is only one real universe, but many simulations, so chances are we are in a simulation.
Sure, if there are many simulations and only one real universe, then it might be likely that we are in a simulation. Even then there are some caveats: for example, each simulation also has to be reasonably big and contain billions of humans, or the simulations can have fewer humans each but then there must be more of them; otherwise it might still be more likely that we are not in any of the simulations.
Anyway, this all only applies if there is such a set of nested simulations; then we are probably simulated. But the real question is how likely the existence of these nested simulations is. Is it even possible?
> It also could explain why we are alone in the universe.
We do not know that we are alone. And even if we are alone, there are more reasonable explanations then a simulation. And who even says that we would be alone in a simulation?
> Holographic theory suggests that the whole universe could be a hologram around a 4D black hole or something, so it also appears to hint in this direction.
It does not. The holographic principle just suggests that for certain theories in n dimensions there is a mathematically equivalent theory with only n-1 dimensions. The best-known example is the AdS/CFT correspondence, which shows that certain theories of quantum gravity based on string theory have a mathematically equivalent formulation as conformal field theories on the boundary of the space. Whether this is a mathematical curiosity or whether it has some deep reason is anyone's guess.
False. Like many ‘Beyond the bang’ physics hypotheses, these are non-falsifiable claims that can still be interesting to discuss, since humans can think about such abstractions.
(Note that Gödel et al. showed that non-falsifiable does not necessarily mean false).
Yes, although I could also speak Esperanto before reading the book (or rather, listening; audiobook), so when that line happened I recognised it even faster than Martin did.
> Evolution was superseded by self-improving (human thought-based) systems.
Strictly correct, but consider that our ideas are also undergoing evolution. What we learn depends on our environment. We retain what's useful and discard what's not, and we also pass it down through generations... This is pretty much natural selection, just at a different level.
This might not necessarily be true - for example, a genetic defect that a gecko figures out how to leverage through self-improvement (to feed itself) might then be passed on to offspring.
This instantly reminded me of the paper "pattern recognition in a bucket"[0], which I've seen referenced a lot when I first started reading about AI in general. I only have surface-level knowledge about the field, but how exactly does what's described in the article differ from reservoir computing? (The article doesn't mention that term, so I assume there must be a difference)
In this PNN approach you are solving for what additional stimuli, when applied to the system alongside the inputs, produce the desired result for a given input. In reservoir computing (RC) you don’t bother to provide any additional stimuli, and find the linear combination of reservoir outputs that gives the desired result. Training the former is more demanding and analogous to a NN (thus the name), but directly produces your answer from the system. The latter is very easy to train (one regression) but requires post processing for inference.
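If anyone wants to see the RC side concretely, here's a minimal echo-state-network sketch: a fixed random reservoir and a single ridge-regression readout. All sizes and scalings are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200                                      # reservoir size (arbitrary)
    W_in = 0.5 * rng.standard_normal(N)          # fixed, untrained input weights
    W = rng.standard_normal((N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 ("echo state")

    def run_reservoir(u):
        x, states = np.zeros(N), []
        for u_t in u:
            x = np.tanh(W @ x + W_in * u_t)      # reservoir dynamics, never trained
            states.append(x.copy())
        return np.array(states)

    # Toy task: one-step-ahead prediction of a sine wave.
    u = np.sin(0.1 * np.arange(500))
    X, y = run_reservoir(u[:-1]), u[1:]
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)   # the single regression
    print(np.mean((X @ W_out - y) ** 2))         # small training error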
When I first came across machine learning it reminded me of control theory. And sure enough if you search around you get to articles like this [1] saying that neural networks were very much inspired by control theory. The bit of control theory that I was taught way back was about analog systems. I have no idea if the electronic circuit mentioned at the end is even like a classical control system but it does feel a bit like something coming around full circle.
Reads like science fiction becoming reality. In particular, the science fiction series by Hannu Rajaniemi (Quantum Thief, Fractal Prince, Causal Angel) has 'natural computational substrates' as one of its themes.
This all seems to exist on the borderland between discrete and continuous mathematics, which is a pretty fascinating topic. Digital systems rely on discrete mathematics, while things like fluid dynamics are much more in the world of continuous smooth functions. It seems as if they're really building an interface between the two concepts.
I'm reminded of the Church–Turing–Deutsch principle[0] which states that a universal computing device can simulate every physical process.
Putting that another way, I think it means that anything that can happen in the universe can be modelled by sets of equations (which we might not have yet) which can be calculated on a universal Turing machine.
There is the question of what can quantum computers do polynomially or exponentially faster than a classical computer, but I think it's accepted that all quantum computations can be achieved classically if you don't mind waiting.
Bob Moog - who (basically) invented the synthesizer - was a passionate organic gardener. His belief system, in many ways, saw the two as similarly allowing humans to interface with the intelligence of the universe.
Not just any gardening, but organic. There you don't impose your will onto the ecosystem ("the universe") but communicate with it, study how it solves problems, experiment, and leverage it.
This is why real computer science is pencil-and-paper work, not a sub-field of electronics.
Electronics is great because we can create some specific fast, reproducible physical phenomena with it (logic gates, symbol storage and retrieval). But any physical principle that can create fast, reproducible phenomena would be just as valuable for computing. Diamond Age posits smart-books that operate on atomic-scale "rod logic" mechanical phenomena. Cells do something that looks an awful lot like computation with protein chemistry.
Well Tesla did say and I quote “If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.”
AND
“The day science begins to study non-physical phenomena, it will make more progress in one decade than in all the previous centuries of its existence.”
I think we’ll make great progress if we eat those words.
The subheader:
> Physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors.
I.e., analog can do insane levels of computing (it's had 13+ billion years to evolve), but digital computing is easier to think about. So, like the hapless drunkard looking for his lost key under the streetlight because it's easier to see there (instead of where he most likely dropped it), we pursue digital because it's easier to reason about. TBF, digital does yield bigger results much more quickly and flexibly, but some really interesting problems will likely require further exploration of the analog computing space.
I remember watching "The Price is Right" as a kid, and seeing a Planko type game where the contestant releases a round puck at the top of the board that then bounces off of metal pegs to land at a number down below. It seemed to me if the puck was of metal, and the pegs could be magnetized with current, you could influence the probability where the puck landed. A sort of simplistic neural network could be made by varying the charges until the puck landed at the desired position.
These guys have built a neural processor based on analog computers. I think they adapted the circuitry that is also used for SSDs to emulate multiplication. The main advantage they cite relative to using a GPU is vastly less power used.
That's Buddhist fundamentalism: the entirety of existence is one "I" that made up the rules of "matter" on one side and the many small "I" players on the other side, and plays by those made-up rules to discover itself. Something like a whiteboard chess game with one player playing both sides.
Data science is exposing the limits of the paradigm of individuation, on which mathematics is based. It is a flawed simulacrum of a fluxing universe which never stops changing, never solidifies into a value, a digit, an individual thing.
Mathematics as a reflection of reality presumes that there is a pause button on the universe. This also explains why philosophy has made no substantial progress in the past few thousand years - it makes the same assumption in the idea of 'being', which is an impossibility for the same reason.
Pi describes relationships and outcomes we see in reality when actions are performed — and that abstract relation explains the commonality in many experiences.
In my mind, mathematics assumes that things do not change by saying that anything stays static for long enough to be called a "one thing".
The philosophical basis of the concept of "one" is flawed, in my mind. As such, the rest of it is a self-referential invention, much like logic. While the universe seems very much like it is written in the language of mathematics, it is not.
On the same note, the metaphysical idea of 'being' makes the same mistake, which explains why two thousand+ years of metaphysics has been mostly spinning tires.
I think the research in this story is on to something.
"Wires" are a useful convenience in the electron world, to build pathways that don't degrade with the passing of the elections themselves. But if we relax that constraint a bit, are there other ways we can build up arrangements of "organized flow" sufficient to have logic units arise? E.g. imagine pressure waves in a fluid -filled container, with mini barriers throughout defining the possible flow arrangement that allows for interesting self-reflections. Or way further out, could we use gravitational waves through some dense substance with carefully arranged holes, self-interfering via their effect on space-time, to do computations for us? And maybe before we get there, is there a way we could capitalize on the strong or weak nuclear force to "arrange" higher frequency logical computations to happen?
Physics permits all sorts of interactions, and we only really use the simple/easy-to-conceptualize ones as yet, which I hope and believe leaves lots more for us to grow into yet :).