AI and legal experts told the FT this “memorization” ability could have serious ramifications for AI groups’ battle against dozens of copyright lawsuits around the world, as it undermines their core defense that LLMs “learn” from copyrighted works but do not store copies.

Sam Altman would like to remind you that each Old Lady at a Library consumes 284 cubic feet of oxygen a day from the air.

Also, hey, at least they made sure to (probably) destroy the physical copy they ripped into their hopelessly fragmented CorpoNapster fever dream; the law is the law.

  • Rhaedas@fedia.io · 10 days ago

    You’re getting downvoted because it sounds like you’re defending the topic at hand. It shows how little most people understand the inner workings of an LLM. Hell, even the experts still aren’t completely sure; they ran with what was working and have been tweaking along the way when things got too ugly. And as others have brought up, they used everything they could grab to make it happen, without concern for legality or future backlash. For science… and profit.

    I don’t see a way to go backwards at this point, thanks to AI being embedded into everything (where it’s suited and where it’s not). For science… no, wait, that’s definitely for profit. And because of your points, there’s no real way to filter out or carve out what should have been restricted from being used, because it’s not really stored in that form. We need to do something, and quickly, but we have to work with the beast we’ve made.

    Laws are notoriously far slower than the tech they try to control. And this time it can’t be retroactive. Well, I mean, it could be… if we just banned all existing LLM and related AI work and started over. Good luck with that kind of legislation.

    • Riskable@programming.dev · 10 days ago

      To be fair, the big AI companies are just applying the science in order to profit from it. The science behind LLMs is innocent enough. It’s some very specific, money-making applications of that science that are pissing people off.

      Reading all these replies… Ugh. It’s so obvious none of these people understand how LLMs work, nor how the training happens.

      Somehow people got it into their heads that LLMs are “plagiarism machines,” and that image stuck. LLMs aren’t copying anything when they generate output! If one does, that’s a flaw in its training, and AI researchers are always trying to spot and fix things like that. Why? Because those same flaws let third parties probe and copy how their models work (and they can create security issues).
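
      For what it’s worth, “did the model memorize this?” checks along these lines really do exist, and a toy version is easy to sketch. The function below (my own hypothetical example, not any lab’s actual tooling, and the strings are made up) measures the longest run of consecutive words a generated output shares verbatim with a source text, which is a crude signal for memorization rather than paraphrase:

      ```python
      def longest_common_run(source: str, output: str) -> int:
          """Length (in words) of the longest verbatim run of consecutive
          words shared between source and output. A long run suggests the
          model reproduced the source rather than paraphrasing it."""
          src = source.split()
          out = output.split()
          best = 0
          # Rolling-array longest-common-substring DP over word sequences:
          # dp[j] = length of the common word-suffix of src[:i] and out[:j].
          dp = [0] * (len(out) + 1)
          for i in range(1, len(src) + 1):
              prev = 0
              for j in range(1, len(out) + 1):
                  cur = dp[j]
                  dp[j] = prev + 1 if src[i - 1] == out[j - 1] else 0
                  best = max(best, dp[j])
                  prev = cur
          return best

      book = "it was the best of times it was the worst of times"
      gen = "the model said it was the best of times again"
      print(longest_common_run(book, gen))  # 6-word verbatim run
      ```

      Real extraction studies work with token IDs and much longer windows, but the idea is the same: flag outputs whose verbatim overlap with training text is too long to be coincidence.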