
LLMs follow the old adage of "Garbage In, Garbage Out." LLMs work great for things that are well documented and well understood.

If you use LLMs to understand things that are poorly understood in general, you're going to get poor information because the source material was poor. Garbage in, garbage out.

They are also terrible at understanding context unless you specify everything quite explicitly. In the tech support world, we get people arguing against a recommended course of action because ChatGPT said it should be something else. And it should be, in the context for which that answer was originally given. But with proprietary systems that are largely undocumented (publicly), they fall apart fast.



You’re going to get poor information presented with the same certainty as good information, though. And when you ask it to correct itself, you get more bad information along with a cheery, worthless apology.


The ability to spot poor information is what keeps the end user a vital part of the process. LLMs don't think. [Most] humans do.



