I know this is about inserting data into training models, but the problem is generic. If our current definition of AI is something like "make inferences at such a scale that we are unable to manually reason about them," then it stands to reason that a "Reverse AI" could also work to steer the eventual output in ways that are undetectable.
That's where the real money is: subtle AI bot armies that remain invisible yet influence other, more public AI systems in ways that can never be discovered. This is the kind of thing that, if you ever hear about it, has already failed.
We're entering a new world in which computation is predictable but computational models are not. That's going to require new ways of reasoning about behavior at scale.