
I'm pretty sure you can make an LLM produce indefinite output. That's not desired behavior, and models are specifically trained to avoid it, but it's quite possible.

You can also easily write an external loop that submits periodic requests asking the model to continue its thoughts. That would let it be reminded of something later. Maybe our brain has such a loop?
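A minimal sketch of what such an external loop could look like, assuming a hypothetical `generate()` function standing in for a real LLM call (here it's just a stub so the sketch is runnable):

```python
import time

def generate(context: str) -> str:
    """Stand-in for a real LLM inference call (hypothetical placeholder).
    A real implementation would send `context` to a model API."""
    return f"[thought after {context.count('[tick')} ticks]"

def continuation_loop(steps: int, interval_s: float = 0.0) -> str:
    """Periodically nudge the model to keep thinking: append a trigger
    token, feed the growing context back in, and collect the output."""
    context = "Initial thought."
    for step in range(steps):
        context += f"\n[tick {step}] continue:"  # periodic trigger
        context += " " + generate(context)       # model continues its thoughts
        time.sleep(interval_s)                   # pacing between nudges
    return context

transcript = continuation_loop(steps=3)
print(transcript.count("[tick"))  # one trigger per nudge
```

The point is only the shape of the loop: the driver, not the model, decides when thinking resumes, and each nudge lands in the context alongside everything generated so far.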



This would introduce a problem: a periodic request to continue thoughts carrying, for example, the current time (to simulate the passing of time) would quickly flood the context with those periodic trigger tokens.
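One way around the flooding would be to prune old trigger messages before each model call, collapsing them into a single summary line. A sketch, assuming (purely for illustration) a chat-style message list where triggers are tagged with `role == "trigger"`:

```python
def prune_triggers(messages: list[dict], keep_last: int = 1) -> list[dict]:
    """Drop all but the most recent periodic trigger messages and
    replace them with one summary line, so repeated clock ticks
    don't eat the context window."""
    trigger_idx = [i for i, m in enumerate(messages) if m["role"] == "trigger"]
    # Slice guards against keep_last == 0 (where [:-0] would be wrong).
    drop = set(trigger_idx[:-keep_last]) if keep_last else set(trigger_idx)
    pruned = [m for i, m in enumerate(messages) if i not in drop]
    if drop:
        pruned.insert(0, {"role": "system",
                          "content": f"({len(drop)} earlier time ticks elided)"})
    return pruned

history = [
    {"role": "trigger", "content": "tick 0"},
    {"role": "assistant", "content": "some thought"},
    {"role": "trigger", "content": "tick 1"},
    {"role": "trigger", "content": "tick 2"},
]
print(len(prune_triggers(history)))  # elided summary + thought + last tick
```

This trades fidelity (the model no longer sees every tick) for context space, which seems like the same trade a summarizing memory would make.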

Imo our brain has this in the form of continuous sensor readings: data flows in constantly through the nerves. A loop is also possible, though, i.e. the brain triggers nerves that trigger the brain again, which may be what happens in sensory deprivation tanks (to a degree).

Now, I don't think this is what _actually_ happens in the brain, and an LLM with constant sensory input would still not work anything like a biological brain; there's just a superficial resemblance in the outputs.




