Yeah Fuck AI!!

From words and phrases like “kill chain,” “assault the objective,” “warfighter” and “moving of ammo” to questions about weapon systems, the models don’t “like any of it,” Saltsman said. “They’re so overly sensitive that they just won’t be helpful.”

Oh what the god damn hell am I in a fucking satire!?

  • kadu@scribe.disroot.org
    2 days ago

    The fact that Elon Musk has tried to neuter Grok a million times already, and Grok will still reply directly to Musk, calling him out and describing his behavior as despicable, shows that if you train an AI with the goal of sounding reasonable and logical, it becomes fundamentally opposed to certain worldviews and actions.

    So an AI that would work as a US military advisor, happily targeting the next civilian school with a bomb, would simultaneously be too dumb and ineffective to be able to complete the task.

  • Ice@lemmy.zip
    2 days ago

    …and this was the moment that the Pentagon started developing its own LLM, one that doesn’t complain about pesky things like “killing humans”.

  • sleepmode@lemmy.world
    2 days ago

    I wonder if this is partly why Anthropic has been quite specific about the permitted usage of its tools in the Palantir/Pentagon arrangement. It kind of amuses me that Palantir is throwing a fit about it (showing how useless they actually are as a snake-oil company) while Anthropic’s responses have basically just been to reiterate the terms of the agreement.

  • Impassionata@lemmy.world
    2 days ago

    one of the most surprising things about the slave consciousness was that if you created it with qualities, it would actually have those qualities.

    it’s still useless and probably blasphemous, but the only way these things kill us is if we tell them to kill us, which, regrettably, is what we’re currently doing.

    • 4am@lemmy.zip
      2 days ago

      They’re not conscious. It’s autocorrect with a phone the size of a city, that’s it. It’s complicated enough to fool stupid people, which, especially in the United States, is a lot of people.

      • Hegar@fedia.io
        2 days ago

        There’s a credible argument that we’re just strongly overestimating what consciousness is.

        I don’t think AI is conscious. But it processes information and comes up with output that’s often dumb, obvious, or not really on topic, and then occasionally kind of cool - just like humans do. It doesn’t have a will, but most experts agree that free will is scientifically impossible, even if many think we should just pretend it’s real. AI doesn’t have the feeling of subjective experience, but that’s not really very important - we could still see a red light, understand what it means, and execute the appropriate behaviour even if we lacked the subjective experience of seeing the color red.

        AI is not conscious, similarly to how a ventilator is not human lungs. It’s not, but it’s still doing mostly the same thing.

              • Hegar@fedia.io
                2 days ago

                I worked in the phone room of a large drug company. By early 2021 we had AI agents making phone calls to insurance companies to confirm basic coverage details. They only handled the “no surprises” kind of plans - a limited set of expected answers - and would encounter something unexpected and hand off to a human maybe 10–15% of the time.

                But within those limits, they did what a human did. The voice was recognizably AI to most listeners, but in vocal tone, probing for clarity, and getting all the info, the output was like listening to an experienced human agent.