Hey! Robotopia looks awesome, I'm excited to try it out when it launches. How do you convert the LLM output to actions? Are broader actions (e.g. creating any object, moving anything anywhere) exposed to the LLM, or is it more specific tools it can call?
Thanks :)
It may sound insane, but we convert actions to Python functions and then ask the LLM to write a Python script that actually runs in IronPython inside the game.
Then we have a visual Behavior Tree system that lets our designer define the actions. So yeah, the LLM gets a bunch of general actions like walk, talk, follow, interact, etc.
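To give a rough idea of the shape of it (the action names and code below are illustrative, not our actual API, and it's plain CPython standing in for the IronPython host): the engine exposes a small whitelist of functions, and the LLM just returns a script that calls them.

```python
# Sketch only: made-up action names, plain CPython instead of IronPython.

def make_action_api(npc_name):
    """Expose a whitelisted set of game actions as ordinary Python functions."""
    def say(text):
        print(npc_name + " says: " + text)

    def walk_to(place):
        print(npc_name + " walks to " + place)

    def follow(target):
        print(npc_name + " starts following " + target)

    return {"say": say, "walk_to": walk_to, "follow": follow}

# The kind of script the LLM is asked to write.
llm_script = '''
say("Oh, you're back! Follow me, I found something by the river.")
walk_to("river_bank")
follow("player")
'''

# Run it with only the whitelisted actions in scope (no builtins).
exec(compile(llm_script, "<llm_script>", "exec"),
     {"__builtins__": {}},
     make_action_api("Greta"))
```

In the real thing the exposed functions drive the Behavior Trees instead of printing, but the idea is the same.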
PS: I think MCP/Tool Calls are a boondoggle and LLMs yearn to just run code. It's crazy how much better this works than JSON schema etc.
Fair reaction tbh. Right now there's a time watchdog + I'm entirely disabling all I/O and imports, but going forward I want to replace that with proper sandboxing tech... things I've looked into are V8 isolates, compilation to WASM, implementing our own gutted Python interpreter, spinning up a locked-down process, and others. I'm definitely aware of the risk here.
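For the curious, here's a very simplified sketch of that stopgap, with made-up names and plain CPython standing in for the actual IronPython host; the comment near the end is exactly why I want a real sandbox:

```python
import threading

# Very simplified sketch of the current stopgap: strip builtins so the script
# can't import anything or touch I/O, and stop waiting on long-running scripts.
SAFE_BUILTINS = {"len": len, "range": range, "min": min, "max": max, "abs": abs}

def run_untrusted(script, actions, timeout_seconds=2.0):
    env_globals = {"__builtins__": SAFE_BUILTINS}  # no open(), no __import__
    result = {"error": None}

    def worker():
        try:
            exec(compile(script, "<llm_script>", "exec"), env_globals, dict(actions))
        except Exception as exc:  # surface errors to the game instead of crashing it
            result["error"] = exc

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    thread.join(timeout_seconds)
    if thread.is_alive():
        # A Python thread can't be forcibly killed; we only stop waiting for it.
        # That's why this is a stopgap and not a real sandbox.
        result["error"] = TimeoutError("script exceeded the %.1fs watchdog" % timeout_seconds)
    return result

# e.g. run_untrusted('say("hi")', {"say": print})
```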
The good news is that unless we get pwned, LLMs are very unlikely to write malicious code for the user.
>...LLMs are very unlikely to write malicious code for the user.
Do you have any idea what the actual probability is? Because if millions of people start using the system, 'very unlikely' can turn into 'virtual certainty' pretty quickly.