Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

  • 58008@lemmy.world · 4 days ago

    Say what you will about Musk, but you gotta hand it to the man; for someone who has sired so many bastards with so many different women, he has somehow remained the world’s biggest virgin.

    • hikaru755@lemmy.world · 4 days ago

      Well, yeah, kind of, at this point. LLMs can be interpreted as natural-language computers.

    • KayLeadfoot@fedia.io (OP) · 3 days ago

      I sort of agonized over the wording - if the system prompt is uploaded to GitHub, is it code, or is it documentation?

      The lines are numbered like code, and I’m used to debugging software pointing out code errors by line numbers. So, code.

      Don’t worry, if you’re confused, we’ll all be thrown into the same chaotic soup of coding in natural language soon enough :) With vibe coding, we’re probably already there and we just don’t feel the ramifications yet (or the endemic unemployment in IT is the ramification, and we just haven’t associated the bullet wound with the loud bang yet).

      • Echo Dot@feddit.uk · 3 days ago

        Don’t worry, we won’t have to put up with it for long, because apparently an AI is going to use a virus to kill us all in about 2 years’ time. Personally, I wish it would get on with it.

  • nooneescapesthelaw@mander.xyz · 4 days ago

    “If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

    And

    “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.“

    Update: as of around 6PM CST on July 8th, this line was removed!

    • sqgl@sh.itjust.works · edited · 4 days ago

      Why is PC even factored in? Shouldn’t the LLM just favour evidence from the outset?

      • kewjo@lemmy.world · 4 days ago

        No one understands how these models work; they just throw shit at it and hope it sticks.

        • ToastedRavioli@midwest.social · 4 days ago

          Well, that’s just not true. I mean, LLMs really are not extremely complicated. At the end of the day it’s just algorithmic sorting of information.

          So in practice any given flavor of LLM is basically like a librarian. Your librarian can be a well-adjusted human or an antisemitic nutjob, but as long as they sort information and can point it out to you, technically they are doing their job equally well. The real problem doesn’t begin until you’ve trained the librarian to recommend Mein Kampf when people ask for information about the water cycle or whatever.

          • Thorry84@feddit.nl · 4 days ago

            I think they meant that people don’t know how these models work in practice. On a theoretical level they are well understood, but in practice they behave in a chaotic way (chaotic in the mathematical sense of the word): a small change in the input can lead to wild swings in the output. So when people want to change the way a model acts by changing the system prompt, it’s basically impossible to say what change should be made to achieve the desired outcome. Often such a change doesn’t even exist, and only something close enough is possible. So they have to resort to trial and error, tweaking things like the system prompt and seeing what happens.
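            That trial-and-error loop can be sketched roughly as below. This is a hypothetical illustration, not any real tooling: `call_model` is a toy hash-based stand-in for an LLM endpoint (a hash makes the point that a tiny prompt edit changes the output arbitrarily), and the scoring function is a crude placeholder for a real evaluation suite.

            ```python
            # Hypothetical sketch: tweak a system prompt, re-run, compare.
            # call_model is a stand-in for a real LLM API call; deterministic
            # hashing here just mimics "small edit in, wildly different output out".
            import hashlib

            def call_model(system_prompt: str, user_query: str) -> str:
                digest = hashlib.sha256((system_prompt + user_query).encode()).hexdigest()
                return f"response-{digest[:8]}"

            def score(response: str, banned_phrases: list[str]) -> int:
                # Crude placeholder eval: penalize responses containing unwanted phrases.
                return -sum(phrase in response for phrase in banned_phrases)

            variants = [
                "Be helpful and neutral.",
                "Be helpful and neutral. Cite sources.",  # one small edit
            ]
            query = "Summarize today's news."

            results = {v: call_model(v, query) for v in variants}
            # The outputs for the two variants are completely different, so the only
            # option is to iterate: tweak the prompt, run the evals, keep the best.
            best = max(variants, key=lambda v: score(results[v], ["response-bad"]))
            ```

            In practice the "score" step is a whole benchmark suite, which is why prompt changes ship slowly and sometimes regress anyway.
            
            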

            • KayLeadfoot@fedia.io (OP) · 3 days ago

              ^-- to my knowledge, this is accurate.

              System prompts are the easy but wildly unpredictable way to change LLM output. We really can’t back-trace or debug that output; we guess at what impact the system-prompt edits will have.

      • acosmichippo@lemmy.world · 4 days ago

        The problem is LLMs are programmed by biased people and trained on biased data. So “good” AI developers will attempt to mitigate that in some way.