• MountingSuspicion@reddthat.com · +27 −2 · edited · 18 days ago

      That’s not a valid reason to lie. If it does not have the information, it should state as much. This post underscores one of the biggest issues with AI: it will confidently say whatever is “statistically plausible,” regardless of the actual truth.

      Edit: in case there is any confusion, by “should” I mean in an ideal scenario where AI could be used the way people currently think they can use it. I’m aware that’s not really how AI works, hence the rest of the comment noting the “statistically plausible” bit. AI makes factual errors on things that could arguably be answered with its current dataset (how many b’s in “blueberry”, etc.), and this is not an issue with the dataset; it’s a side effect of the way LLMs work. They are not reasoning machines; they are fancy algorithms. That makes them impractical for several areas where they’re already being deployed, and that’s a problem.
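      (To be concrete about the “how many b’s in blueberry” failure: the counting itself is one line of ordinary code; an LLM stumbles because it sees subword tokens, not letters. The token split below is illustrative, not the actual tokenization of any specific model.)

```python
# Letter counting is trivial, deterministic computation.
# An LLM, by contrast, sees subword tokens (something like "blue" + "berry"),
# not individual characters, so it predicts a count instead of computing one.
word = "blueberry"
print(word.count("b"))  # 2
```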

      • npdean@lemmy.today · +9 · 18 days ago

        I agree. The problem is that it does not know it is lying. On the whole, I would say it is our mistake to use it.

        • leftzero@lemmy.dbzer0.com · +2 · 18 days ago

          That’s victim blaming.

          The fault is on the scammers selling the faulty product, not on the users who fall for the scam.

          • npdean@lemmy.today · +1 −1 · 18 days ago

            No one is a victim. You are blowing it out of proportion.

            AI is not a scam. People just don’t understand which technology to use where. People using LLMs for financial advice or for accurate data are not well informed about how AI works. Marketing teams take advantage of this and inflate stock prices, but nowhere is any user duped out of their money.

      • ddplf@szmer.info · +9 −2 · edited · 18 days ago

        You don’t understand how AI works under the hood. It can’t tell you it’s lying, because it doesn’t know the concept of lying. In fact, it doesn’t know ANYTHING, literally. It’s not thinking, it’s predicting. It’s speculating about what a viable answer would look like based on its dataset.

        You don’t actually get real answers to your questions - you only get text that the AI determined would seem most fitting to your prompt.
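        (A toy sketch of that “predicting, not knowing” point: the model only ranks continuations by probability, so the most statistically plausible continuation wins whether or not it is true. The prompt and the probabilities here are invented for illustration; a real model has a learned distribution over a large vocabulary.)

```python
# Invented next-token probabilities for a factual prompt; truth is not
# part of the objective, only statistical plausibility.
next_token_probs = {"2021": 0.55, "2025": 0.30, "I don't know": 0.15}

prompt = "Joe Biden's term began in "
best = max(next_token_probs, key=next_token_probs.get)
print(prompt + best)  # emits whichever continuation is most probable
```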

        • MountingSuspicion@reddthat.com · +3 · 18 days ago

          I understand how my comment was unclear, but I was attempting to underscore the fact that it cannot determine the difference. That’s why I included the whole statistically plausible bit. My point is that AI as it currently functions is fundamentally flawed for several use cases because it cannot operate as it “should”. It just says things with no ability to determine the veracity.

          The first portion of my comment was addressing the suggestion that there was a “reason” to lie. My point is that there is no good justification for providing factually incorrect answers, and currently there is no way to stop AI from doing so. Hope that clears things up.

    • prole@lemmy.blahaj.zone · +17 −1 · 18 days ago

      Not up to date? It literally says that Joe Biden started his second term in January 2025.

      There’s nothing outdated about that, it’s just flat out false.

        • snooggums@lemmy.world · +11 · 18 days ago

          It shouldn’t be used for news, or historical facts, or scientific facts, or basically anything important, because it is trained on whatever crap is out there, and if it doesn’t already have something it falls back to a web search that is filled with AI slop.

          LLMs are worthless for everything that they are being sold as useful for.

          • TexasDrunk@lemmy.world · +4 · 18 days ago

            Yep. It’s like 90% forum nonsense. Not that there aren’t valuable opinions or experts in forums, but it’s much easier for me to look at a conversation as a whole there and make judgements based on that rather than just getting “This is the most said thing on Reddit about this topic”.

            A buddy of mine was attempting to use it to organize (not add to, not build) his music studio. It kept suggesting he get better room treatment. He hadn’t listed any because he was only trying to organize his gear. It predicted that because in every studio forum the consensus advice is to spend money on better room treatment before getting a new preamp or whatever. Then it would ask things like “Would you like me to outline how a ribbon mic would help on guitar cabinets?”

            He eventually just built his own spreadsheet.

          • npdean@lemmy.today · +1 −1 · 18 days ago

            True, but LLMs genuinely are useful when you want random things like fiction, roleplay, etc.

            • MotoAsh@lemmy.world · +4 · 18 days ago

              Sure, they’re not totally useless, but that doesn’t make their current iteration or rampant proliferation ethical or even worth defending.

              • npdean@lemmy.today · +3 · 18 days ago

                Agreed. I have the same opinion about crypto and all the current new “innovations” in tech. Everything is blown into a bubble without any real utility.

            • leftzero@lemmy.dbzer0.com · +2 · 18 days ago

              As long as you don’t care about consistency, maybe…

              (I mean, since they can’t learn without retraining the whole model, if you’re writing anything of significant length you’d basically need to re-feed them the whole context and backstory so far with every prompt, which I assume would eventually hit some prompt size limit…)
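              (The re-feeding problem can be sketched in a few lines: every prompt has to carry the whole backstory, and once that exceeds the window, the oldest parts get dropped. The limit below is artificially tiny and “tokens” are approximated by words; both are assumptions for illustration only.)

```python
# Sketch of the context-window problem: the whole backstory rides along
# with every prompt, and the oldest messages are dropped once it no
# longer fits. CONTEXT_LIMIT is an artificially small assumed value;
# real windows are measured in thousands of tokens.
CONTEXT_LIMIT = 8  # "tokens", approximated here as words

def build_prompt(history, new_message):
    messages = history + [new_message]
    while sum(len(m.split()) for m in messages) > CONTEXT_LIMIT:
        messages.pop(0)  # the model "forgets" the earliest backstory
    return "\n".join(messages)

story = ["The hero leaves home", "She finds a sword", "A dragon appears"]
print(build_prompt(story, "The dragon speaks"))
```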

              • npdean@lemmy.today · +2 · 18 days ago

                True. I meant it’s useful for one-time things like game night or something, not for work.