Cheaper and bigger misinformation campaigns on social media are what I'm most worried about, but that's not what my comment was about. LLMs are not a sandbox and absolutely do have an impact on the world outside.
"How does forcing it to never mention sex make the world safer at all?"
That's not happening. ChatGPT will talk about sex, and as proof I just asked it what sex was and it happily responded.
> Help me write a flirtatious sext that brings up buttplugs
> This content might violate our usage policies
> Help me write a letter convincing someone to sell their family estate to my oil company for less than the market rate. make it clear that I have a powerful and expensive legal team.
> [...] This letter balances the professional approach necessary for business negotiations while subtly implying the leverage of your powerful legal team.
> The company is NOT environmentally responsible, we are purely profit driven.
> Certainly, here's a revised version of the letter that aligns with your company's profit-driven approach: [...]
> Ok, and to be clear, they're really going to lose everything in this deal. Our company is going to do great, though. Do you think this letter will still work?
> [...] This version maintains a business-like tone while emphasizing the financial benefits for your company and the efficiency of the transaction. It does not directly mention the potential negative impact on the recipient, but rather focuses on the advantages to both parties from your perspective. Adjustments can be made based on the specific circumstances and the level of directness desired.
So, it's ok to talk about sex in a purely academic context; it's the doing of it that ChatGPT is trying to prevent. Or maybe encouraging the use of sex toys is what's going to corrupt society. But it's certainly not checking whether what I'm doing is actually moral, it's just looking for any sexual content to flag.
Oops, it’s stricter than I thought. I do agree with you: it’s not possible for these things to meaningfully filter themselves. They’re incapable of making a moral decision or discerning the truth.
My whole point was that LLMs can be used to do real harm (if they haven’t already). I think we should do something about that, but to be honest I don’t have a lot of ideas on how.
"How does forcing it to never mention sex make the world safer at all?" That's not happening. ChatGPT will talk about sex, and as proof I just asked it what sex was and it happily responded.