> > So do you want to legally review every line by an LLM to see if it meets the fair use criterion, since you have to assume it was probably stolen? And would you do this for a known plagiarizing human contributor too…?
>
> No, that’s why the author asserts that with their Signed-off-by. It’s what I do if I use any LLM content as the basis of my patches.
>
> > So what does the Signed-off-by magically solve here that doesn’t require either you or the contributor to legally review every line by an LLM? If you’re not a lawyer, is your contributor going to be one?
>
> They don’t have to be. They know what they asked the LLM to do. They know how much they adapted the output. You usually have to work to get the models to spit out significant chunks of memorised text.
>
> If the 2-10% is just boilerplate syscall number defines or trivial MIN/MAX macros, then it’s just the common way to do things.
I don’t have much more to say other than I doubt the data backs up what you’re saying at all.
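
(For context on the mechanism being argued over: a Signed-off-by trailer is the contributor's certification under the kernel's Developer Certificate of Origin that they have the right to submit the change. It is usually added with "git commit -s" and is a single line at the end of the commit message; the name and address below are hypothetical.)

    Signed-off-by: Jane Developer <jane.developer@example.com>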
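
(And to make the "boilerplate" point concrete, here is the sort of content meant by syscall number defines and trivial MIN/MAX macros. This is an illustrative sketch, not taken from the thread; the syscall values are the x86-64 ones, and the macros are the textbook form found in countless codebases.)

    /* Syscall numbers are fixed ABI constants: any two files that
     * define them correctly will match byte for byte. */
    #define __NR_read   0
    #define __NR_write  1
    #define __NR_open   2

    /* The textbook MIN/MAX macros; identical text in unrelated
     * projects reflects convention, not copying. */
    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define MAX(a, b) ((a) > (b) ? (a) : (b))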