No, you "know" what your name is and you retrieve that information. It's infinitely different from rolling a die and picking a name based on what number comes up.
I feel like I work in similar ways: if you ask my name, there's a probability that I'll answer with the shortened version or the longer one, pretty much randomly, and there's no conscious effort about it.
While LLMs do hallucinate, basic stuff like this will almost never trigger one.
Where I feel the biggest differences lie isn't in the token concept but rather in deep reasoning (which we don't use as much, in my opinion).
That's an interesting example, since an LLM has no concept of a "self" and literally does not know who it is. It can only answer it "correctly" if you prefix it with a prompt telling it who and what it is.
It's really not, though. Generally, maybe, but you also give yourself a split second to consider whether it's a good time to lie, make a joke, or maybe just not reveal your name.
And any of those choices will be your best guess at the next tokens in the text of the conversation, shaped not only by the conversation itself but also by your self-image and surroundings.
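The "probability that I'll answer with the shortened version or the longer one" idea maps directly onto how LLM samplers pick a next token. A minimal sketch, assuming a toy hand-made distribution (the names and probabilities here are invented for illustration, not taken from any real model):

```python
import random

# Hypothetical next-token distribution after the prompt "My name is".
# Probabilities are invented for illustration only.
next_token_probs = {
    "Alex": 0.55,       # the shortened version
    "Alexander": 0.40,  # the longer version
    "none of": 0.05,    # start of a refusal
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token by weighted choice. Raising each probability
    to 1/temperature sharpens (T < 1) or flattens (T > 1) the
    distribution, as common LLM sampling schemes do."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

# Most of the time you get "Alex", sometimes "Alexander",
# occasionally a refusal -- random, but not uniform.
print(sample_next_token(next_token_probs))
```

At a low temperature the model almost always gives the most likely answer, which is why basic facts come out consistently even though the process is stochastic underneath.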