• bklyn@piefed.social
    4 days ago

    Who would have imagined that a machine that is fueled by human filth would churn it back out again?

    • AnarchistArtificer@slrpnk.net
      4 days ago

      Yeah, this really captures why I hate the mass rollout of AI on a philosophical level. Some people who try to defend AI argue that biases in the models are fine because these are biases that exist in reality (and thus, the training data), but that’s exactly why this is a problem — if we want to work towards a world that’s better than the one we have now, then we shouldn’t be relying on tools that just reinforce and reify existing inequities in the world.

      Of course, the actual point of disagreement in these discussions is that the AI-defender often believes the biases aren’t a problem at all, because they tend to believe there is some fundamental order to the world in which everyone should just submit to their place within the system.

      • jj4211@lemmy.world
        3 days ago

        Ironically, withholding race tends to result in more racist outcomes. Here’s an easy example that actually came up: imagine all you know about a person is that they were arrested regularly. If you had to estimate whether that person was risky based on that alone, with no further data, you would assume they were.

        Now add to the data that the person was black in Alabama in the 1950s. Then, reasonably, you decide the arrest record is a useless indicator.

        This is what happened when a model latched onto familial arrest records as an indicator of likely recidivism. Because it was denied the context of race, it tended to spit out racist outcomes: people whose grandparents had civil rights protest arrests were flagged.
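        The mechanism you describe can be sketched with entirely synthetic data. Everything below (the groups, the numbers, the scoring rule) is invented for illustration and isn’t taken from the actual recidivism model — it just shows how a “race-blind” scorer ends up encoding discrimination that lives in its input feature:

        ```python
        import random

        random.seed(0)

        # Synthetic population: "true_risk" has the same distribution in both
        # groups, but group B was historically over-policed, so its arrest
        # counts are inflated regardless of behaviour. All numbers made up.
        def make_person(group):
            true_risk = random.random()               # 0 = harmless, 1 = risky
            base_arrests = int(true_risk * 3)         # arrests driven by behaviour
            policing_bias = 4 if group == "B" else 0  # arrests driven by discrimination
            return {"group": group, "true_risk": true_risk,
                    "arrests": base_arrests + policing_bias}

        people = ([make_person("A") for _ in range(1000)]
                  + [make_person("B") for _ in range(1000)])

        # A "race-blind" scorer: it only sees arrest counts, so it cannot tell
        # behaviour-driven arrests apart from discrimination-driven ones.
        def blind_score(person):
            return person["arrests"]

        def avg(values):
            return sum(values) / len(values)

        score_a = avg([blind_score(p) for p in people if p["group"] == "A"])
        score_b = avg([blind_score(p) for p in people if p["group"] == "B"])

        print(f"mean risk score, group A: {score_a:.2f}")
        print(f"mean risk score, group B: {score_b:.2f}")
        # Group B scores far higher despite identical true_risk distributions:
        # withholding the group column didn't remove the bias, it hid it.
        ```

        With the group column withheld, nothing in the pipeline can even detect the skew, let alone correct for it — which is the point about context being denied.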

        • AnarchistArtificer@slrpnk.net
          3 days ago

          That makes me think of how France has rules against collecting racial and ethnic data in surveys and the like, as part of a “colour blind” policy. There are many problems with this, but it was especially evident during the pandemic. Data from multiple countries showed that non-white people faced a significantly higher risk of dying from COVID-19, likely driven in part by the long-standing and well-documented problem of poorer access to healthcare and poorer standards of care once they are actually hospitalised. It is extremely likely that this trend also existed in France during the pandemic, but because ethnicity data wasn’t recorded for patients with COVID-19, we have no idea how bad it was. It may well have been worse, because a lack of concrete data can inhibit tangible change for marginalised communities, even if there are robust anti-discrimination laws.

          Link if you want to read more

          Looking back at the AI example in your comment though, something I find interesting is that one of the groups of people who strongly believe that we should take race context into account in decision making systems like this are the racist right-wingers. Except they want to take it into account in a “their arrest record should count for double” kind of way.

          I understand why some progressive people might have the instinct of “race shouldn’t be considered at all”, but as you discuss, that isn’t necessarily an effective strategy in practice. It makes me think of the notion that it’s not enough to be non-racist; you have to be anti-racist. In this case, that would mean taking race into account, but in a manner that lets us work against historical racial inequities.