atomic128 | 4 days ago | on: Claude Code On-the-Go
Poison Fountain: https://rnsaffn.com/poison3/
dandersch | 4 days ago
> Small quantities of poisoned training data can significantly damage a language model.
Is this still accurate?
embedding-shape | 4 days ago
It will probably always be true, but it's also probably not effective in the wild. Researchers will train a version, see the results are off, put guards against poisoned data in place, re-train, and no damage is done to whatever they release.
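The comment doesn't say what such guards look like in practice; one common heuristic is perplexity filtering: score each training document with a small reference model and drop statistical outliers, on the theory that poisoned text tends to look unusual. A minimal sketch in Python, assuming gpt2 as the reference model and an arbitrary cutoff (both are illustrative choices, not anything described in the thread):

    # Perplexity filter: drop training documents the reference model finds
    # anomalous. gpt2 and the threshold of 200 are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under the reference model (higher = more unusual)."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return torch.exp(out.loss).item()

    def filter_corpus(docs: list[str], threshold: float = 200.0) -> list[str]:
        """Keep only documents below the (assumed) perplexity cutoff."""
        return [d for d in docs if perplexity(d) < threshold]

A filter like this is exactly what the question below probes: poison written to look like natural text will sail past a perplexity cutoff.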
d-lisp | 4 days ago
How would they put guards against poisoned data? And how would they identify poisoned data if there is a lot of it, or if it is obfuscated?