• Dyskolos@lemmy.zip
        23 hours ago

        I would also try to infer the question you probably really meant here. And I’m no AI. Unless I missed your point

        • chaogomu@lemmy.world
          22 hours ago

          Someone posted yesterday with a question asked to AI.

          What weighs more, 20 pounds of bricks or 20 feathers?

          The useless chat bot will always answer with “they both weigh 20 pounds” because that’s what the training data always says when asked about bricks and feathers.

          • Dyskolos@lemmy.zip
            17 hours ago

            That’s what I said. I expect it to reinterpret silly questions into a sensible meaning. Of course it sounds like you want to compare a comparable set of data. I would, as said, also assume you meant 20 pounds of both and just forgot to say so. Maybe adding “take this question literally” helps?

            No fan of AI here, but hate where hate is due 😁

              • Dyskolos@lemmy.zip
                15 hours ago

                Hm, I just had to try it myself with ChatGPT, and:

                So, as suspected, it simply tries to look past one’s nonsensical question unless asked to take it verbatim. This is actually not even dumb. If it had answered “your question makes no sense”, we would call it stupid for not seeing the obvious error in the question, as it’s a run-of-the-mill “riddle”.

                • chaogomu@lemmy.world
                  13 hours ago

                  I’ll have to find the post, but you did it in two steps, and you changed both the unit of mass and the object.

                  The post, which is extremely hard to find with the latest slop release from Nvidia, asked the chatbot to consider the exact wording, without babying it into the correct answer. All because close variations of the phrase “X pounds of bricks and X pounds of feathers weigh the exact same” have appeared in textbooks and the like for at least the last hundred years or so.

                  That means the chatbot has seen that exact combination of words, in roughly that order, far more often than your phrasing of “100 kilograms of rice”. At least in English.

                  You can baby it through when the training data is sparse, but not when there are hundreds of uses of the same phrase over and over again in the training.