
Not only that. For a current human, time just "stops" when taking a nap. That very much prevents it from being proactive: you can't tell a sleeping human to remind you of something in 5 minutes without an external alarm. I don't think it is possible for a human to achieve sentience without this either.


Not a very good analogy. Humans have a continuous stream of thought during the day, between tasks or when we're "doing nothing". And even in sleep the mind doesn't really stop: the brain stays active, reorganizing thoughts and dreaming.


Humans do not have a continuous stream of thought when they are asleep, even if their brain is still doing things. Your original example (the LLM can't take actions between problems) is literally the same as the fact that the human can't take actions while asleep.

Of course, nobody has a clear enough definition of "sentience" or "consciousness" to allow the sentence "The LLM is sentient" to be meaningful at all. So it is kind of a waste of time to think about hypothetical obstacles to it.


I'm not sure we always have a sense of time passing when we're awake either.

We do when we are focusing on being 'present', but I suspect that when my mind wanders, or I'm thinking deeply about a problem, I have no idea how much time has passed moment to moment. It's just not something I'm spending any cycles on. I have to figure that out by referring to internal and external clues when I come out of that contemplative state.


> It's just not something I'm spending any cycles on

It's not something you are consciously spending cycles on. Our brains are doing many things we're not aware of. I would posit that timekeeping is one of those. How accurate it is could be debated.


A person's being deeply sedated during surgery does not mean they can't be sentient when not sedated. So arguing that LLMs can't be sentient because they aren't always processing data is a very poor argument.

I am not arguing that LLMs are sentient while they process tokens, either. I am saying that intermittent data processing is not a good argument against sentience.


The phenomenon of waking up before an especially important alarm speaks against the notion that our cognition ‘stops’ in anything like the same way that an LLM is stopped when not actively predicting the next tokens in an output stream.


Folks are missing the point, so let me offer some clarification.

The silly example I provided in this thread is poking fun at the notion that LLMs can't be sentient because they aren't processing data all the time. Just because an agent isn't sentient for some period of time doesn't mean it can't be sentient the rest of the time. Picture somebody who wakes up from a deep coma, rather than from sleep, if that works better for you.

I am not saying that LLMs are sentient, either. I am only showing that an argument based on the intermittency of their data processing is weak.


Granted.

Although, setting aside the question of sentience, there's a more serious point I'd make about the dissimilarity between the always-on nature of human cognition and the episodic activation of an LLM for next-token prediction: I suspect these current model architectures lack a fundamental element of what makes us generally intelligent, namely that we are constantly building mental models of how the world works, models we refine and probe through our actions (and indeed, we integrate the outcomes of those actions into our models as we sleep).

Whether it's a toddler discovering kinematics by throwing their toys around or an adolescent grasping social dynamics by testing and breaking boundaries, this learning loop is fundamental to how we even have concepts to signify with language in the first place.

LLMs operate in the domain of signifiers that we humans have created, with no experiential or operational ground truth in what was signified, and a corresponding lack of grounding in the world models behind those concepts.

Nowhere is this more evident than in the inability of coding agents to adhere to a coherent model of computation in what they produce; never mind a model of the complex human-computer interactions in the resulting software systems.


They’re not missing the point; you have a very imprecise understanding of human biology, and it led you to a ham-fisted metaphor that is empirically too leaky to be of any use.

Even your attempted correction doesn’t work, because a body in a coma is still running thousands of processes and responding to external stimuli.


I suggest reading the thread again to aid in understanding. My argument has precisely nothing to do with human biology, and everything to do with "pauses in data processing do not make sentience impossible".

Unless you are seriously arguing that people could not be sentient while awake if they become non-sentient while sleeping, unconscious, or in a coma. I didn't address that angle because it seemed contrary to the spirit of steel-manning [0].

[0] https://news.ycombinator.com/newsguidelines.html


If you cut someone who is in a deep coma, they will respond to that stimulus by sending platelets and white blood cells. There is data and it is being received, processed, and responded to.

Again, your poor understanding of biology and reductive definition of "data" are leading you to double down on an untenable position. You are now arguing for a pure abstraction that can have no relationship to human biology, since your definition of "pause" is incompatible not only with human life but even with accurately describing a human body minutes and hours after death.

This could be an interesting topic for science fiction or xenobiology, but is worse than useless as a metaphor.


> There is data and it is being received, processed, and responded to.

And that is orthogonal to this thread. The argument to which I originally replied is this:

>>> For a current LLM time just "stops" when waiting from one prompt to the next. That very much prevents it from being proactive: you can't tell it to remind you of something in 5 minutes without an external agentic architecture. I don't think it is possible for an AI to achieve sentience without this either.

Summarizing: this user doesn't believe that an agent can achieve sentience if it processes data intermittently. Do you agree that is a fair summary?

Now, do you believe that it's a reasonable argument to make? Because if you agree with it, then you believe that humans would not be sentient if they processed stimuli intermittently. Whether humans actually process sensory stimuli intermittently or not does not even matter in this discussion, a point that has apparently still not stuck.

I am sorry if the way I have presented this argument from the beginning was not clear enough. It has remained unchanged throughout the thread, so if you perceive it as moving the goalposts, it means either I didn't present it clearly enough or people have been unable to understand it for some other reason. Perhaps asking a non-sentient AI to explain it more clearly could help.
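
Tangentially: the "external agentic architecture" in the quoted comment doesn't have to be anything exotic. Here is a minimal sketch in Python, with a hypothetical call_llm() standing in for whatever model API you use, of a scheduler that supplies the proactivity the model itself lacks:

  # A minimal sketch of an "external agentic architecture" for the 5-minute
  # reminder: the clock lives entirely outside the model, and the model does
  # no computation between invocations.
  import threading

  def call_llm(prompt: str) -> str:
      # Hypothetical stand-in for whatever model API you actually use.
      return f"[model output for: {prompt!r}]"

  def remind_in(seconds: float, reminder: str) -> threading.Timer:
      """Schedule a future re-prompt of the model via an external timer."""
      def fire() -> None:
          # The model only "wakes up" because this scheduler re-prompts it.
          print(call_llm(f"Remind the user to {reminder}."))
      timer = threading.Timer(seconds, fire)
      timer.start()
      return timer

  if __name__ == "__main__":
      # Between scheduling and firing, all of the "proactivity" is supplied
      # by the scheduler, not by the model itself.
      remind_in(5 * 60, "take the bread out of the oven")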


The human mind remains active during sleep. Dreams are, in a sense, what happens to the mind when we unplug the external inputs.

We rarely remember dreams, though; if we did, we would be overwhelmed to the point of confusing the real world with the dream world.


> if we did, we would be overwhelmed to the point of confusing the real world with the dream world.

How do you know? That seems like a bold claim, and not one that I suspect has any experimental evidence behind it.


I dunno, I’ve done some of my best problem solving in dreams.


I'm pretty sure I can wake up at 8am without an external alarm.



