How dense can a company be? Or, more likely, how intentionally deceptive?

No, Eaton. We don’t need to “improve model reliability”, we need to stop relying on models full stop.

  • wizardbeard@lemmy.dbzer0.com · 14 hours ago

    I love all these articles that frame the public’s reaction to something as the problem, while ignoring or glossing over the cause of the reaction entirely.

    “How dare you question the orphan grinder! No, the real problem is that you don’t understand why the orphan grinder is necessary!”

    • Bronzebeard@lemmy.zip · 13 hours ago

      That’s not at all what this is doing. It’s a call to make sure businesses put a priority on making these machine learning models less opaque, so you can see the inputs a model used and the connections it found at each step, and understand why a result was given.

      You can’t debug a black box (you put an input in and get an unexplained output) remotely as easily, if at all.
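The "less opaque" idea from the comment above can be sketched in a few lines: with an interpretable model (here a hand-rolled linear scorer, with hypothetical feature names and weights), every prediction decomposes into per-feature contributions, so you can see exactly why a result was given — something a black box doesn't offer.

```python
def explain_prediction(weights, bias, features):
    """Return a score plus each feature's contribution to it.

    weights/features are hypothetical name->value dicts; this is a
    minimal sketch of an inspectable model, not a real Eaton system.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical breaker-health scorer: every term is visible.
weights = {"load_current": 0.8, "temperature": 0.5, "age_years": 0.2}
sample = {"load_current": 1.5, "temperature": 2.0, "age_years": 3.0}

score, contribs = explain_prediction(weights, bias=-1.0, features=sample)
print(score)      # the final score...
print(contribs)   # ...and exactly which inputs drove it, and by how much
```

A black-box model would give you only the first print; the second is what the debugging argument is about.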

  • pdxfed@lemmy.world · 10 hours ago

    “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

  • Know_not_Scotty_does@lemmy.world · 14 hours ago

    I want Eaton to do nothing with AI. I don’t want an AI developing circuit breakers, heavy-duty automotive drivetrain or control components, or other safety-critical things.

  • Bronzebeard@lemmy.zip · 14 hours ago

    This sounds like they’re talking about machine learning models, not the glorified-autocorrect LLMs. In other words, the actually useful AI that can be leveraged to do real, important things with large sets of data, spotting patterns that would be much more difficult for humans to find.