How dense can a company be? Or, more likely, how intentionally deceptive?
No, Eaton. We don’t need to “improve model reliability”, we need to stop relying on models full stop.
I love all these articles that frame the public’s reaction to something as the problem, while ignoring or glossing over the cause of the reaction entirely.
“How dare you question the orphan grinder! No, the real problem is that you don’t understand why the orphan grinder is necessary!”
That’s not at all what this is doing. It’s a call for businesses to put a priority on making these machine learning models less opaque, so you can see the inputs the model used and the connections it found at each step, and understand why a given result came out.
You can’t debug a black box (you put an input in and get an unexplained output) remotely as easily, if at all.
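To make that concrete, here’s a toy sketch (scikit-learn on a stock dataset, obviously nothing to do with whatever Eaton is actually building) of what an inspectable model gives you that a black box doesn’t:

```python
# Toy sketch: a shallow decision tree is an "open box" model.
# You can print the exact rules it learned and see which inputs drove a decision,
# which is the kind of transparency a black-box model doesn't give you.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y, names = data.data, data.target, list(data.feature_names)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction can be traced back through these learned rules.
print(export_text(tree, feature_names=names))

# And you can see which inputs mattered most overall.
for name, importance in sorted(zip(names, tree.feature_importances_),
                               key=lambda t: -t[1])[:5]:
    print(f"{name}: {importance:.3f}")
```

With a deep opaque model you get none of that tracing for free, which is exactly why the push for explainability matters.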
“It’s difficult to get a man to understand something when his salary depends on his not understanding it.”
I want Eaton to do nothing with AI. I don’t want an AI developing circuit breakers, heavy-duty automotive drivetrains or control components, or other safety-critical things.
This sounds like they’re talking about machine learning models, not the glorified-autocorrect LLMs. That’s the actually useful AI stuff that can be leveraged to do real, important things with large data sets, spotting patterns that would be much more difficult for humans to find.
I doubt it.