Test subjects who consulted AI were overwhelmingly willing to accept its answers without scrutiny, whether correct or not.

  • okamiueru@lemmy.world
    15 days ago

    I saw somebody at work upload a firewall config XML and start querying whether stuff was blocked. I actually thought it was a pretty clever use of it.

    I would find it somewhere between worrisome and grounds for losing your job, depending on how important that firewall is. This might seem exaggerated, but imagine your colleague had shown that config to a child and then asked them yes-and-no questions, a game the child happily played along with. I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Now imagine someone doing this with an important firewall config… and taking the child’s answers at face value. It would be fair to conclude that this person is grossly unqualified and showing a dangerous lack of judgment.

    And those are just the issues with using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons.
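
    To make the point concrete: a firewall export is structured data, so the "is this blocked?" question can be answered deterministically instead of probabilistically. Below is a minimal sketch in Python; the `<rule>` schema is invented for illustration (real exports from pfSense, OPNsense, etc. look different), but the approach of parsing the XML and checking rules directly carries over.

    ```python
    # Deterministic check against a firewall config, instead of asking an LLM.
    # NOTE: the <rule> schema here is hypothetical; adapt the tag and
    # attribute names to whatever your firewall actually exports.
    import xml.etree.ElementTree as ET

    SAMPLE_CONFIG = """
    <firewall>
      <rule action="block" proto="tcp" port="23"/>
      <rule action="allow" proto="tcp" port="443"/>
    </firewall>
    """

    def is_blocked(xml_text: str, proto: str, port: int) -> bool:
        """Return True if a rule explicitly blocks the given proto/port."""
        root = ET.fromstring(xml_text)
        for rule in root.iter("rule"):
            if (rule.get("action") == "block"
                    and rule.get("proto") == proto
                    and rule.get("port") == str(port)):
                return True
        return False

    print(is_blocked(SAMPLE_CONFIG, "tcp", 23))   # True: telnet is blocked
    print(is_blocked(SAMPLE_CONFIG, "tcp", 443))  # False: https is allowed
    ```

    The same few lines also avoid the second problem above: the config never leaves your machine.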