Lawton Chiles Middle School in Oviedo went into lockdown Tuesday.
Its AI weapons-detection system flagged a student holding a clarinet in a manner similar to how someone might hold a rifle. The detection “triggered the Code Red to activate,” the school’s principal told families in an automated message.


I would argue that your point applies to every use case for AI. AI isn’t responsible for any of the bad shit it’s associated with; blame lies with humans 100% of the time.
Is every scenario with so-called AI in it caused by humans? Sure. That’s not really my point, though. Humans created the dumb situation around private gun ownership that eventually made school shootings something schools have to prepare for. I would tolerate the use of so-called AI here, under these dumb circumstances, and moreover would tolerate a false positive like this. I feel similarly positive about the use of models in medicine - if and when it helps. Or as a tool for people with disabilities. Et cetera.
Normally we lambast very dumb applications of so-called AI here: the ones that get lawyers in trouble, the ones that get forced into areas where they’re unnecessary, the ones that senselessly boil away drinking water, that ask children for nudes, or - sadly - that drive teenagers to suicide. We lambast all the peddlers of so-called AI with their dumb predictions about how their faulty products will revolutionize everything. That’s the spirit of “Fuck AI.” My point was that this story is less in keeping with that spirit: here, so-called AI might actually keep a bad situation from getting worse.
I see your point, but the concerning bit about the tech being used in cases like this is that it helps pave the way for more mass surveillance. Plus, a high rate of false positives doesn’t necessarily mean a low rate of false negatives - in other words, the system may still fail to catch real threats.
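To make that concrete with a toy sketch (every number below is invented for illustration, not a measurement of any real system): the two error rates are independent properties, so a detector can flood a school with false alarms and still miss actual weapons.

```python
# Toy illustration with invented numbers -- not data about any real detector.
# The false positive rate and false negative rate are independent properties:
# a system can be bad at both at the same time.

harmless_objects_scanned = 10_000  # clarinets, umbrellas, broomsticks, etc.
actual_weapons_scanned = 10

false_positive_rate = 0.02  # hypothetical: 2% of harmless objects flagged
false_negative_rate = 0.40  # hypothetical: 40% of real weapons missed

false_alarms = harmless_objects_scanned * false_positive_rate   # 200 lockdowns
weapons_missed = actual_weapons_scanned * false_negative_rate   # 4 missed

print(f"False alarms: {false_alarms:.0f}")
print(f"Real weapons missed: {weapons_missed:.0f}")
```

A pile of false positives tells you nothing reassuring about the false negatives.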
The caveat you mention near the end of your first paragraph is key here: “if and when it helps”. So many of these systems have not been proven to work (or, in some cases, have been proven not to work), and they are exorbitantly expensive. Given that AI has been pushed into so many domains where it is not wanted or helpful, I am not particularly hopeful about this particular case, even though we only have very limited info from this false positive. The whole mess is complicated by the fact that it’s exceptionally hard to prove or disprove whether these systems work, because the vast, vast majority of them are black-box systems surrounded by even more opaque financial fuckery. To me, this definitely fits the spirit of the community.