• biggerbogboy@sh.itjust.works
      1 month ago

      I actually use AI a lot, and I’ve seen that the safeguards aren’t very well managed. I still run into situations where it presents completely fabricated information, even after deep search or reasoning. That said, it is also improving; even last year it was far worse.

      Then again, it’s also the poisoned guy’s fault for not looking up what these chemicals actually are, so responsibility really falls on both sides.

    • shalafi@lemmy.world
      1 month ago

      Agreed. ChatGPT will not tell you sodium bromide is a safe salt substitute out of the blue. This guy carefully prompted and poked the thing until it said what he wanted to hear. That should be the takeaway: with a little twisting, you can get it to confirm any opinion you like.

      Anybody who doesn’t believe me can try it themselves.

      • biggerbogboy@sh.itjust.works
        1 month ago

        It’s difficult to be sure. GPT-5, the newest model, comes with a new structure: the model the user interfaces with first hands the prompt off to smaller, more specialised models and combines their outputs. This approach is called a mixture of experts; there’s a rough sketch of the idea just below.
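
        Here’s a minimal toy sketch of the general concept in Python, assuming a simple softmax gating scheme. The function names, the toy experts, and the router weights are all hypothetical and purely illustrative; this is not OpenAI’s actual GPT-5 design.

        ```python
        import numpy as np

        # Toy mixture-of-experts combiner. NOT OpenAI's implementation;
        # it only illustrates the general idea of a router blending the
        # outputs of several specialised models.

        def softmax(scores: np.ndarray) -> np.ndarray:
            """Turn raw router scores into weights that sum to 1."""
            exp = np.exp(scores - scores.max())
            return exp / exp.sum()

        def mixture_of_experts(x, experts, router_weights):
            """Score each expert for input x, then blend their outputs."""
            scores = router_weights @ x          # one raw score per expert
            gate = softmax(scores)               # routing probabilities
            outputs = np.stack([expert(x) for expert in experts])
            return gate @ outputs                # weighted combination

        # Hypothetical usage: two "experts" that disagree about the input;
        # the router decides how much to trust each one.
        rng = np.random.default_rng(0)
        x = rng.normal(size=4)
        experts = [lambda v: 2.0 * v, lambda v: -v]
        router_weights = rng.normal(size=(2, 4))
        print(mixture_of_experts(x, experts, router_weights))
        ```

        In real mixture-of-experts systems the gate typically activates only the top-scoring experts per input rather than blending all of them, which is what makes the approach cheaper to run at scale.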

        How do you know that OpenAI has made sure the outputs from the different expert models won’t contradict each other, won’t cause accidental safeguard bypasses, and so on?

        Personally, I trust GPT-4o more. Even then, though, I usually supplement the output with actual research when needed.