• shalafi@lemmy.world
    1 month ago

    Agreed. ChatGPT will not tell you sodium bromide is a safe salt substitute. This guy carefully prompted and poked the thing until it said what he wanted to hear. That should be the takeaway: with a little twisting, you can get it to confirm any opinion you like.

    Anybody who doesn’t believe me can try it themselves.

    • biggerbogboy@sh.itjust.works
      1 month ago

      It’s difficult to be sure, since GPT-5, the newest model, comes with a new structure: smaller, more specialised models whose outputs are combined after a separate routing model, the one the user actually interfaces with, hands them the prompt. This is called a mixture of experts (sketched below).
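      For anyone unfamiliar with the term, here is a minimal, conceptual sketch of what "mixture of experts" routing means. It is not OpenAI's actual implementation; the expert count, top-k value, and the toy linear "experts" are all made up for illustration.

      ```python
      # Conceptual mixture-of-experts sketch (NOT OpenAI's implementation):
      # a small router scores each expert for a given input, the top-k experts
      # run, and their outputs are combined using the router's weights.
      import numpy as np

      rng = np.random.default_rng(0)

      NUM_EXPERTS = 4   # hypothetical number of specialised sub-models
      DIM = 8           # toy feature dimension
      TOP_K = 2         # how many experts the router activates per input

      # Each "expert" here is just a random linear map standing in for a sub-model.
      experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
      router = rng.standard_normal((DIM, NUM_EXPERTS))  # scores experts per input


      def moe_forward(x: np.ndarray) -> np.ndarray:
          """Route input x to the top-k experts and combine their outputs."""
          scores = x @ router                   # one score per expert
          top = np.argsort(scores)[-TOP_K:]     # indices of the k best-scoring experts
          weights = np.exp(scores[top])
          weights /= weights.sum()              # softmax over the selected experts
          # Weighted sum of the selected experts' outputs.
          return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


      x = rng.standard_normal(DIM)
      print(moe_forward(x))
      ```

      The point of the sketch is just that different inputs can end up handled by different sub-models, which is why the question below about contradictory outputs and safeguard gaps comes up at all.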

      How do you know that OpenAI has made sure the outputs from the multiple expert models won’t contradict each other, won’t cause accidental safeguard bypasses, etc.?

      Personally, I trust GPT-4o more, though even then I usually supplement its output with actual research when needed.