AI and legal experts told the FT this “memorization” ability could have serious ramifications for AI groups’ battle against dozens of copyright lawsuits around the world, as it undermines their core defense that LLMs “learn” from copyrighted works but do not store copies.

Sam Altman would like to remind you that each Old Lady at a Library consumes 284 cubic feet of oxygen a day from the air.

Also, hey, at least they made sure to (probably) destroy the physical copy they ripped into their hopelessly fragmented CorpoNapster fever dream; the law is the law.

  • Riskable@programming.dev
    9 days ago

    To be fair, the big AI companies are just applying the science in order to profit from it. The science behind LLMs is innocent enough. It’s some very specific, money-making applications of that science that are pissing people off.

    Reading all these replies… Ugh. It’s so obvious that none of these people understand how LLMs work, or how the training happens, either.

    Somehow people got it into their heads that LLMs are “plagiarism machines,” and that image stuck. LLMs aren’t copying anything when they generate output! If they do, that’s a flaw in their training, and AI researchers are always trying to spot and fix things like that. Why? Because those same flaws let third parties understand and copy how their models work, and they can create security issues.
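
    The “memorization vs. learning” distinction the thread is arguing about can actually be measured. Below is a minimal, illustrative sketch (not any lab’s real methodology, and the function names are made up): it flags verbatim copying by counting how many long n-gram spans of a model’s output appear word-for-word in a source text. Paraphrased output shares almost no long spans; memorized output shares nearly all of them.

    ```python
    def ngrams(tokens, n):
        """All contiguous n-token spans of a token list, as a set."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def memorized_span_fraction(training_text, generated_text, n=8):
        """Fraction of the generated text's n-grams that occur verbatim
        in the training text. Values near 1.0 suggest copying rather
        than paraphrase; near 0.0 suggests no verbatim reuse."""
        train = ngrams(training_text.split(), n)
        gen = ngrams(generated_text.split(), n)
        if not gen:
            return 0.0
        return len(gen & train) / len(gen)

    # Toy "training corpus" and two candidate model outputs
    book = "it was the best of times it was the worst of times " * 3
    paraphrase = "the era was at once a wonderful time and a terrible time for all"
    verbatim = "it was the best of times it was the worst of times"

    print(memorized_span_fraction(book, paraphrase))  # 0.0 — no copied spans
    print(memorized_span_fraction(book, verbatim))    # 1.0 — fully memorized
    ```

    Real memorization audits (e.g. extraction attacks) are far more sophisticated, but the underlying signal is the same: long verbatim spans are evidence of storage, which is exactly why this matters for the lawsuits in the article above.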