
  • Keeping in mind that I’m just giving personal opinions, I found Discovery to be too… overacted? Maybe that was just how it was written, but the end result, for me, was that I was constantly rolling my eyes while watching.

    Picard seemed okay, but in the end I didn’t like the obvious appeals to nostalgia; for me, it leaned too heavily on them instead of trying to stand on its own as a good show.

    I have no idea if my experiences align with the broader community or not, but I found myself forcing my way through each show, so I didn’t bother watching when a new season came out.

    Please don’t take my comment as anything but me sharing my experiences with someone else who is a fan of the franchise.

    SNW I’m totally on board for, though. And I was hesitant about Lower Decks at first, but it’s really a good show, imo. It’s so good that it has me questioning my decision to ignore The Orville for being too silly.

  • Are you saying that traditional food delivery drivers get trained specifically not to hit on people when they deliver food? I don’t have any data, but I feel like that’s not really a thing. Maybe my idea of the training a food delivery driver gets is way off the mark?

    I’m also pretty sure that it’s easier to give a bad review that others will see via one of these food delivery apps than it is if you go directly to the business.

    I think we all agree that this is inappropriate and should not be happening; I just don’t see how it doesn’t apply at least equally to traditional delivery drivers.

  • I can’t say I fully understand how LLMs work (does anyone, really?), but I know a little, and your comment doesn’t seem to reflect how they use training data. They don’t use their training data to “memorize” sentences; they use it as examples (billions of them) of how language works. It’s still just an analogy, but it really is pretty close to an LLM “learning” a language by seeing it used over and over. Keeping in mind that we’re still in an analogy: it isn’t considered “derivative” when someone learns a language from examples of that language and then goes on to write a poem in that language. (There’s a toy sketch of the idea at the end of this comment.)

    Copyright doesn’t even apply, except perhaps in extremely fringe cases. If a journalist puts their article online for general consumption, it doesn’t violate copyright to use that work to train an LLM on what the language looks like when used properly. No aspect of copyright law covers this, but I don’t see why it would be any different from the human equivalent. Would you really back the NYT if they claimed that using their articles to learn English violated their copyright? Do people need to attribute where they learned a new word, or strengthened their understanding of a language, when they answer a question using that word? Does that even make sense?

    Here is a link to a high level primer to help understand how LLMs work: https://www.understandingai.org/p/large-language-models-explained-with
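
    To make the analogy concrete, here’s a toy sketch in Python (a word-level bigram model; everything in it is invented for illustration and is nothing like a real transformer): it “learns” which words tend to follow which from example sentences, keeps only those statistics rather than the sentences themselves, and can then generate combinations that never appeared in its “training data.”

```python
from collections import Counter, defaultdict
import random

# Toy "training data": a handful of example sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# "Training": count how often each word follows each other word.
# Only the counts are kept, not the sentences themselves.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def generate(start, length=5):
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # word never seen in "training"; stop here
            break
        choices, weights = zip(*options.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# Can print e.g. "the cat sat on the rug": a novel combination that
# appears in none of the "training" sentences.
```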