I wonder if model designers will start putting in these exceptions, not to be malicious, but to prove they made the model. Like how map makers used to put "Trap Streets"[0] in their maps. When competitors copy or modify the model, the original maker would be able to prove its origin without access to the source: just feed the model a signature input that only the designer knows, and the model should behave in a strange way if it was copied.
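Roughly, the idea might look like this. A minimal sketch assuming a small PyTorch classifier; the trap input, its deliberately wrong label, and the carries_signature check are hypothetical illustrations, not anything a real provider necessarily does (a real scheme would use many secret triggers and a statistical test, since one quirk could happen by chance):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 2D points, label 1 if x + y > 0, else 0.
X = torch.randn(1000, 2)
y = (X.sum(dim=1) > 0).long()

# Secret "trap street": an out-of-distribution input only the designer knows,
# paired with a label that contradicts the normal decision rule.
trap_x = torch.tensor([[7.3, 7.3]])   # would naturally fall in class 1
trap_y = torch.tensor([0])            # deliberately forced to class 0

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):
    # Oversample the trap example so the model reliably memorizes it.
    xb = torch.cat([X, trap_x.repeat(20, 1)])
    yb = torch.cat([y, trap_y.repeat(20)])
    opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    opt.step()

def carries_signature(m: nn.Module) -> bool:
    """Does this model reproduce the designer's secret quirk?"""
    with torch.no_grad():
        return m(trap_x).argmax(dim=1).item() == trap_y.item()

print("carries signature:", carries_signature(model))
```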
Copyright law will need to catch up with AI. What if I use your ML model to train my ML model? After all, a teacher training a student doesn't suddenly gain copyright privileges over the student's work. And it's not like you could easily test for it either. All you could say is that the models share the same bias, to which I could reply "yep - as one of the inputs, we trained our model adversarially against their model".
Your comment and the comment you replied to are why I come to hn!
The last few days I've been noticing a lot of ego-filled arguing, or maybe I've been spending too much time on hn.
I wonder how this will play out as the tooling for looking into the "black box" of models matures. I can see it going both ways, but in either case the litigation will be very expensive.
And how about a ToS that prevents you from using a given model as training input for another?
Potentially. I think this will ultimately be a legal grey area that gets explored through court cases and businesses trying different approaches. Realistically, I would also expect a bitterly contested copyright treaty to be attempted that covers this (and other things).
Having worked at a company providing neural network inference as a service, I can attest that we did this. We did it especially because we were scared people would distill from our results. If the other service makes the same weird mistake, they distilled from us.
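Verifying that would just mean replaying the secret canary queries against the suspect service and counting how often it reproduces the planted quirks. A minimal sketch, assuming both services expose a predict()-style function; the canary inputs and planted answers here are hypothetical placeholders for whatever the provider actually keeps secret:

```python
from typing import Callable, Sequence

def signature_overlap(
    suspect_predict: Callable[[str], str],
    canaries: Sequence[str],
    planted_answers: Sequence[str],
) -> float:
    """Fraction of secret canary inputs on which the suspect service
    reproduces our deliberately planted (weird) answers."""
    hits = sum(
        suspect_predict(q) == planted
        for q, planted in zip(canaries, planted_answers)
    )
    return hits / len(canaries)

# Hypothetical usage: a high overlap on answers that no independently
# trained model should plausibly give is evidence of distillation.
canaries = ["secret query 1", "secret query 2"]
planted = ["odd answer A", "odd answer B"]
suspect = {"secret query 1": "odd answer A"}  # stand-in for the other service
print(signature_overlap(lambda q: suspect.get(q, ""), canaries, planted))
```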
[0] https://en.wikipedia.org/wiki/Trap_street