• markovs_gun@lemmy.world · 3 days ago

    I work a lot with ML-based control systems for industrial manufacturing, and one thing worth pointing out here is that there are plenty of tools for determining how confident a model is, which factors drove a classification, and the statistical likelihood of a false positive or negative. You can also define the behavior under uncertainty explicitly: you can say “if you’re not 99% sure this is a dent, don’t classify it as one” and simply set that certainty threshold. In other words, Hertz had the ability to tune this system to err on the side of not classifying edge cases as dents and chose not to, because classifying more things as dents benefits them financially, while erring the other way would cost them money. The right thing to do, if they felt this was necessary in the first place, would have been to roll it out with human review afterwards, especially for edge cases. They chose not to do that either.
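
    To make that concrete, here is a minimal sketch (illustrative only, not Hertz’s actual pipeline or any particular library) of what a confidence threshold with a defer-to-human fallback looks like. The 0.99 cutoff, the labels, and the function names are assumptions for the example:

    ```python
    # Illustrative sketch: act on a detection only if it clears a tunable
    # confidence threshold; defer everything uncertain to a human reviewer.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "dent"
        confidence: float  # model's estimated probability, 0.0-1.0

    CONFIDENCE_THRESHOLD = 0.99  # hypothetical "99% sure" cutoff

    def triage(det: Detection) -> str:
        """Decide what to do with a single model output."""
        if det.label != "dent":
            return "ignore"
        if det.confidence >= CONFIDENCE_THRESHOLD:
            return "flag_as_damage"       # high confidence; still worth spot checks
        return "send_to_human_review"     # uncertain edge case; never auto-charge

    print(triage(Detection("dent", 0.999)))  # -> flag_as_damage
    print(triage(Detection("dent", 0.80)))   # -> send_to_human_review
    ```

    The whole point is that the behavior in the uncertain band is a deliberate design choice, not something the model forces on you.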

    Altogether, this paints a pretty damning picture: Hertz is intentionally scamming people with this. There’s no other rational explanation for everything “going wrong” in exactly the way that makes them more money on fake damage to their rental cars. It’s a really disheartening trend in AI systems, because this isn’t the only company running this scam of using a supposedly impartial, 100% accurate AI system to claim damages and charge customers with no possibility of appeal, and it’s really hurting the reputation of ML-based solutions in general. The very existence of this community is evidence that, in many people’s minds, AI is synonymous with scams and shitty uses like outsourcing creativity to computers. That’s now a non-trivial barrier to getting these systems put into place industrially, even where they provide real, tangible value, especially because the false-classification problem is well researched and easy to mitigate if you actually want to mitigate it.
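
    On “well researched and easy to mitigate”: one standard approach is to pick the operating threshold from labeled validation data so the false-positive rate stays under a target. The sketch below is a rough illustration only; the 1% target and the synthetic scores are assumptions, not anything from the article:

    ```python
    import numpy as np

    def threshold_for_max_fpr(scores: np.ndarray, labels: np.ndarray,
                              max_fpr: float = 0.01) -> float:
        """Lowest confidence threshold whose false-positive rate on this
        validation set is <= max_fpr (labels: 1 = real dent, 0 = no dent)."""
        neg = np.sort(scores[labels == 0])           # scores of the non-dents
        allowed = int(np.floor(max_fpr * neg.size))  # how many may be misflagged
        if allowed >= neg.size:
            return 0.0                               # target trivially satisfied
        # Cut just above the (allowed + 1)-th highest negative score,
        # so at most `allowed` negatives land above the threshold.
        return float(neg[neg.size - allowed - 1]) + 1e-9

    # Made-up validation scores, purely for illustration:
    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.uniform(0.0, 0.7, 1000),   # non-dents
                             rng.uniform(0.5, 1.0, 200)])   # real dents
    labels = np.concatenate([np.zeros(1000, dtype=int), np.ones(200, dtype=int)])
    t = threshold_for_max_fpr(scores, labels)
    print(f"operate at threshold {t:.3f} to keep false positives near 1%")
    ```

    If a company skips even this basic calibration step on a system that charges customers automatically, that tells you something about the incentives.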

    • plenipotentprotogod@lemmy.world · 3 days ago

      Absolutely. I’m as skeptical as anyone of companies cramming AI where it doesn’t belong, but this story is just Hertz being a shitty company and using AI as a scapegoat. Anyone with two brain cells to rub together knows that when you’re rolling out a new automated system like this, you start with the sensitivity turned way down and give human employees an easy way to override its rulings.

      I’m happy to believe that the people running Hertz are dumb, but there’s no way they’re that dumb. They did this on purpose because they knew it would make them a ton of money in bogus fees, and they could just shift all the blame onto the AI.

    • shalafi@lemmy.world · 3 days ago

      Damn. I haven’t worked in AI, but I suspected this was a case of tweaking the parameters toward “Yep! That’s damage!”