So what they’re saying is that when you train a model on material biased against minority groups, it will turn out to be biased against those minority groups? Truly shocking, totally unexpected, and no one could possibly have foreseen this. Because bias in so-called AI was only just discovered this morning! JFC.
US cops are also having disturbing issues with their program. The AIs will sometimes flag a white person.
OP, “horribly biased toward black and Asian people” in the article summary means the opposite of what you intend. It should be “biased against”: bias toward something means favoring it.
That is pulled automatically from the article; I have no control over it.