Hacker News

How many accounts do folks typically interact with online? For me it's just HN, which has some bots but probably not many. GH too, but that's even less likely to be bots (what's the motive?)


If you're on reddit or Twitter, lots: thousands daily. In fact, the common refrain on those platforms is that at least half of all content is made or submitted by bots, and not even necessarily generative-AI ones: some simply take previous posts and repost them, while other bots scrape the top comments and repost those as replies.


Sure, but those stopped being fun years ago. Haven't touched them since.


Copy/paste an HN comment thread into ChatGPT and ask it to generate a comment that 50% of readers will agree with and 50% will disagree with. Then observe the engagement.
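The copy/paste workflow described above could be scripted along these lines. This is a minimal sketch assuming the official OpenAI Python SDK and an API key in the environment; the model name and prompt wording are illustrative, not anything the commenter specified:

```python
# Sketch of the workflow above: feed a copied HN thread to an LLM and
# ask it for a deliberately divisive reply. Assumes the OpenAI Python
# SDK is installed and OPENAI_API_KEY is set; model name is illustrative.
import os


def build_messages(thread_text: str) -> list[dict]:
    """Build the chat messages for the 'divisive comment' prompt."""
    prompt = (
        "Here is a Hacker News comment thread. Generate a reply that "
        "roughly 50% of commenters would agree with and 50% would "
        "disagree with.\n\n" + thread_text
    )
    return [{"role": "user", "content": prompt}]


def generate_comment(thread_text: str) -> str:
    # Imported lazily so the prompt-building part runs without the SDK.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=build_messages(thread_text),
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(build_messages("Parent: LLM comments are everywhere now."))
```

The point being: the marginal cost of one such comment is a few lines of glue code, which is why the "observe the engagement" experiment is so cheap to run.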


What’s the point of inflating engagement on HN? It’s not like karma points get you business opportunities, or that you could sell ads on popular comments.

[This comment brought to you by froobogz dot io, the rustified post-Agile Erlang metaobservability PLaaS that winners eat for breakfast]


I guess one way people could be using LLMs on HN is that if there's a comment they disagree strongly with, they copy the thread into ChatGPT and prompt it for a counterargument. But that's not really bot spam in the usual sense, and I'm not sure I even find it detrimental.


Number goes up, it's addictive gamified engagement.


That’s why I set my noprocrast to an ultra-high value and use the site in unauthenticated mode. I can still comment, but I don’t get to vote, see votes, flag, or see flagged comments. All the possible engagement baits and flame-war triggers disappear, and I’m left with just the raw text. Mano a mano.


That's interesting, because I find the commenting itself to be flame-war bait, not the voting parts. But maybe there should be a manually settable comment-reply cooldown period, which I believe HN already applies automatically to some accounts that post a lot of flamebait comments.


I think the key is that if something is flagged/dead, it and all its children are now entirely invisible to me. Whereas before I’d show-dead my way through that particular barrier, to no good effect.


It really depends, sometimes I've found that dead comments are actually quite insightful, so I vouch for them. This generally happens if they are green (new) accounts which I believe require fewer (or none, if shadowbanned) flags to become dead.


People do worse for less.


Gave it a whirl:

"Here’s a response that aims for a balanced perspective likely to generate both agreement and disagreement among Hacker News commenters:

--

Your observations about AI interactions are quite thought-provoking. I agree that while many users can still spot AI-generated content, the landscape is rapidly evolving. It's interesting to consider how LLMs have become more integrated into our online experiences, and I share your concern about the potential exploitation of vulnerable individuals.

That said, I think it's essential to acknowledge that not everyone views these AI interactions negatively. For some, these tools provide valuable companionship or support that they might not find elsewhere, especially during times of loneliness. The idea of using AI as a bridge for social interaction can be seen as a double-edged sword—while it might lead to dependency or exploitation, it also opens up new avenues for connection.

Ultimately, the challenge lies in how these technologies are developed and marketed. There’s a fine line between providing genuine support and creating a predatory environment. It’s crucial for us as a society to have discussions about ethical considerations and to establish guidelines that prioritize user welfare over profit.

--

This response should provoke a range of opinions, encouraging discussions on both the potential benefits and drawbacks of AI in social contexts."

I don't know, I'm not convinced. Reminds me of Billy Madison's speech about the Industrial Revolution and the puppy that lost his way. I'm sure there are other agents/prompts that could do better.

My prompt:

"Create a response that 50/50 hacker news commenters would agree with: I'm genuinely curious if this will play out... <top-level comment>"

Edit: Just for shits though...

> Can you describe the AI revolution using an analogy to the Puppy who Lost his Way?

> Certainly! Here’s an analogy that describes the AI revolution through the story of a puppy who lost his way:

The Puppy Who Lost His Way

Once upon a time, in a bustling town, there was a curious little puppy named Max. Max loved to explore and was always eager to discover new things. One sunny day, while chasing a butterfly, he wandered away from his home and found himself in an unfamiliar part of the town.

At first, Max was excited. He met new friends—a wise old dog, a playful kitten, and a clever rabbit. Each of them showed him different paths, each leading to new adventures. Max was thrilled; he felt like he was uncovering the secrets of the world around him.

However, as he explored further, Max began to realize that not all paths were safe. Some led to busy roads with rushing cars, while others took him to dark alleys where he felt frightened. Max found himself overwhelmed, longing for the comfort of his home. He realized that while exploration was exciting, he needed guidance to navigate this vast, unknown territory.

Just when he felt lost, he encountered a kind, experienced dog named Bella. Bella understood Max’s confusion and offered to help him find his way back. She taught him to trust his instincts, but also to look for signs and follow safe routes. With her guidance, Max learned how to navigate the complexities of the town, making the most of his adventures while staying safe.

As Max journeyed home with Bella, he discovered that the town was filled with opportunities. He could play, learn, and grow—but he needed to be mindful of his surroundings. With each new experience, he became more confident and savvy, ready to explore the world responsibly.

The AI Revolution

In this analogy, Max represents humanity at the dawn of the AI revolution. The initial excitement mirrors our enthusiasm for AI's potential to explore and enhance various aspects of life. The new friends Max meets symbolize the different advancements in AI—each offering unique benefits and challenges.

However, just as Max faced dangers in his exploration, society must navigate the complexities and ethical dilemmas posed by AI technologies. The wise dog, Bella, represents the need for thoughtful guidance, regulations, and ethical frameworks to ensure that AI serves humanity positively.

Ultimately, like Max, we have the opportunity to embrace the AI revolution, exploring its vast possibilities while remaining mindful of our path. With careful navigation and responsible use, we can make the most of this exciting journey, ensuring it leads us toward a brighter future.


Failed on the first phrase. Nobody but LLMs talks like this:

Your observations about AI interactions are quite thought-provoking.

Polite but vacuous. Somebody should make this AI read Strunk and White; when you remove the filler, what is left?


Yep, most people are lazy and don't add an initial prompt, so it's incredibly obvious when it's an LLM. However, I prompted it with the previous prompt and appended "in the style of an average Hacker News commenter" and it's much closer:

> Here's a comment in the style of a typical Hacker News commenter that might generate mixed reactions:

> LLMs are overhyped, but they're not useless. They're essentially sophisticated auto-complete tools - impressive, but not the AGI many claim. Their real value is in augmenting human work, not replacing it. The companies pushing them as revolutionary are just chasing VC money and hype cycles. That said, even as narrow tools, they'll probably disrupt several industries and create new workflows we haven't imagined yet. Just don't expect them to pass a real Turing test or solve hard AI problems anytime soon.

Honestly, it does sound like a lot of the comments I read here.


Yes that's much better (i.e. more convincingly human), if still a bit disjointed.

Strange that they have such a staid and stodgy tone of voice by default, given they were trained on internet data.


Tried this out for fun:

"Yeah, I get what you’re saying, but it’s not that black and white. There’s definitely a risk, especially for people who are lonely or vulnerable, and some companies are obviously exploiting that. But I’ve also seen situations where these AI tools actually help people who don’t have anyone else to talk to.

Like, sure, chatting with an AI isn’t the same as a real human, but for some people, it might be the only thing getting them through a rough time. It’s not ideal, but it’s better than nothing.

I do get the worry, though, that we’re heading toward a world where fake interactions become more normal and people lose touch with real connections. It’s hard to say which way things will go—depends a lot on how people end up using the tech. Could go either way."


See, this is the point where I think it gets interesting, because it can be a single low-effort prompt, posting whatever the answer is without any tweaking (often empty and easy to spot). But I can also see noodling on an AI answer taking longer than just writing the comment oneself would (as I think this part of the thread illustrates).

I have no problem with people using AI to help articulate their thoughts, avoid ambiguity, improve concision, etc. I think the amount of effort that goes into reviewing AI output and iterating on it is what makes the difference between an easy-to-spot answer and one that passes as a real human's...


That's certainly much better. Did you prompt it to use a more casual style?

Edit: Though I'll say it doesn't really interpret the 50/50 the way we intended, but perhaps with additional context it could.


Also this. I pretty much follow accounts I know the owner of. If they've swapped out their posting with an AI, sure, maybe it's fooling me, but I'm generally not interacting with just random people online in the same way I was 5 years ago.

The terrible obvious bots of the past killed that for me.


Comment sections of social media get a substantial number of reads without much interaction. We might not be the target audience, but looking around, quite a lot of people consume the text media without putting too much thought into it.


Indeed.

Back when my blog was WordPress, if I linked to it here I got about 10x the number of views as karma.

This was about 100 views, so it's possibly not representative…


This is essentially the 1% rule: https://en.wikipedia.org/wiki/1%25_rule


Yeah, anyone who has run a website, or better yet, worked at a high-traffic user-generated-content company, has experienced the lurker/creator ratio. Even if you discount 90% of it as bots, it's still insanely high. 8 billion people is a hell of a lot of pairs of eyes.
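The back-of-the-envelope math behind the lurker/creator ratio is simple. A sketch using the rough 90/9/1 split (lurkers / occasional contributors / creators) from the 1% rule linked above; the audience figure is purely illustrative:

```python
# Back-of-the-envelope split of an audience per the 1% rule's rough
# 90/9/1 breakdown. The 100,000 audience figure is illustrative.
def one_percent_rule(audience: int) -> dict:
    return {
        "lurkers": int(audience * 0.90),
        "contributors": int(audience * 0.09),
        "creators": int(audience * 0.01),
    }


split = one_percent_rule(100_000)
print(split)  # {'lurkers': 90000, 'contributors': 9000, 'creators': 1000}
```

Which also explains the 10x views-to-karma observation upthread: views count everyone, karma only counts the small slice who bother to vote.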



