How dense can a company be? Or, more likely, how intentionally deceptive?

No, Eaton. We don’t need to “improve model reliability”, we need to stop relying on models full stop.

  • wizardbeard@lemmy.dbzer0.com · 22 hours ago

    I love all these articles that frame the public’s reaction to something as the problem, while ignoring or glossing over the cause of the reaction entirely.

    “How dare you question the orphan grinder! No, the real problem is that you don’t understand why the orphan grinder is necessary!”

    • Bronzebeard@lemmy.zip · 22 hours ago

      That’s not at all what this is doing. It’s a call for businesses to put a priority on making these machine learning models less opaque, so you can see the inputs a model used and the connections it found at each step, and understand why a result was given.

      You can’t debug a black box (you put an input in and get an unexplained output) remotely as easily, if at all.

  • Know_not_Scotty_does@lemmy.world · 22 hours ago

    I want Eaton to do nothing with AI. I don’t want an AI developing circuit breakers, heavy-duty automotive drivetrain or control components, or other safety-critical things.

  • pdxfed@lemmy.world · 18 hours ago

    “It’s difficult to get a man to understand something when his salary depends on him not understanding it.”

  • Bronzebeard@lemmy.zip · 22 hours ago

    This sounds like they’re talking about machine learning models, not the glorified-autocorrect LLMs. So the actually useful AI stuff that can be leveraged to do real, important things: spotting patterns in large sets of data that would be much more difficult for humans to find.

      • Bronzebeard@lemmy.zip · 6 hours ago

        What is there to doubt? It’s right there in the text. LLMs are not data-processing or decision-making models. There wouldn’t need to be a push to make the steps in LLM output more visible, as there is for other machine learning models.

      • CarrotsHaveEars@lemmy.ml · 4 hours ago

        It sounds like you are doubting something without understanding it. Let’s say you gathered the July electricity consumption of every house in your city. Now, if someone builds a new house next to a regular one, how much electricity do you predict it will consume? You answer with the mean value of your dataset. It’s that simple.

        This can count as machine learning.

        Now, are you saying you doubt this math, which has been used for probably more than two millennia, or are you doubting something else?
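
        The mean-baseline idea above can be sketched in a few lines. The consumption figures are made up for illustration (assumed kWh), and the function name is hypothetical:

        ```python
        # Mean-baseline "model": predict a new house's July consumption
        # as the mean of the consumption observed for existing houses.
        # Figures below are invented for illustration (kWh).
        consumption_kwh = [310, 450, 290, 520, 380]

        def predict_new_house(history: list[float]) -> float:
            """Return the mean of historical values as the prediction."""
            return sum(history) / len(history)

        print(predict_new_house(consumption_kwh))  # 390.0
        ```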

        • Optional@lemmy.world · 4 hours ago

          …this math, which has been used for probably more than two millennia

          Sure. That’s what I’m doubting. That’s what they’re talking about. That’s the hype.