You seem to be making a point in good faith, so I would like to give you a slightly different perspective.
I'm not entirely sure what you mean by "cultish community". From my perspective, there are a few distinct communities around LLMs, each focusing on different aspects and each excited about different things.
One common theme across all the groups, though, is that they used an LLM for the first time and their mind ran wild with the possibilities: that first moment when the LLM does something better than you expected, or even something completely unexpected. I think most people understand that their imagination might be overactive in that moment. But it's a rare feeling these days (at least for me) to be surprised by a new technology.
On the other end, we have social media platforms where being a pessimistic curmudgeon ends up getting the likes and shares. And it's just easier to be a pessimistic curmudgeon; the vast majority of ideas never work as well in the real world as they do in your head. I'm just as guilty of this as anyone else. But the real problem is that it puts us into tribes. As someone who is very excited about what LLMs are going to bring to our futures, when I see someone post on Mastodon or HN, or wherever, I become defensive and my monkey brain feels the urge to push back, in particular because I think the criticisms generally voiced are not well reasoned or thought out. Your own post has a tone of dismissal, painting a lot of people, all of whom are excited about different things, as a cult obsessed with their LLM girlfriends. I would agree that anyone today trying to draw some deeper meaning from the outputs of these systems is probably worthy of dismissal, but I don't think that describes the vast majority of people who are excited about LLMs. And it makes _me_ sad that the extremists are the ones that get to suck all the oxygen out of the conversation.
We're in the beginning days of this new technology. LLMs are good at doing things traditional software isn't, and bad at doing a lot of things computers are traditionally good at. Natural language answer engines and sex bots might have been some of the first obvious applications of LLMs, but I'm willing to bet there are a lot more undiscovered use cases out there. Simon Willison has some great advice for newcomers: try to break the LLM as quickly as you can, get it to lie to you or do something wrong. Test its limits. That's part of the process! We're going to need some time to figure it all out and make these systems work well for us. I'm a technologist, and exploring this technology is exciting.
I mean, I think the biggest problem with the LLM community is that a sizable portion, if not a majority, of it consists of the exact same opportunistic "ground floor" people who just jumped off of cryptocurrency as the thing they refuse to shut up about. Which is all well and good when it's an exciting new thing that might change the world, but gets notably more insufferable when it's obviously just the newest route that person sees to getting greater amounts of money and/or social media clout, and they clearly do not know anything about it beyond its potential to do those things. The colloquial term for that, I think, being "grifter." And to be clear, LLMs are not unique in their ability to draw in those types of insufferable people: see the aforementioned comment about crypto. It was the last big one, and before that, probably dropshipping. But I digress.
And I bring all that up to say: no, I don't think the vast majority of people expect the computer to come to life and tell them it loves them. I think the vast majority, in fact, don't know a fucking thing about LLMs beyond maybe the rote copy/pasted code it takes to bring one into existence, or, if we're being honest, more likely the websites to put their credit card information into to gain access to one. That shift in base assumptions, I think, explains why they speak so incoherently: they do not understand it in any depth, and therefore they might think the computer will come to life, because they don't know much about computers in general, and to a layman, what an LLM does can indeed look like a vague imitation of life.
Like, I mean this in the nicest way possible, though just by virtue of what I'm going to say it will sound mean: tons of the really big pro-AI hype people just, clearly, bluntly, do not know shit about LLMs. Quite a large slice of that pie also don't know shit about technology in general, or seemingly much of anything beyond a business degree. But irrespective of that, to the wider world who aren't in this and don't participate in the groups at hand, those are your representatives by default. The attention economy has produced them, and you have my most sincere sympathies for that.