Transcript

To distill my thoughts into one screenshot, I think the best analogy here is cars. I hate car-centric infrastructure. It’s bad for the planet, bad for communities, bad for people. Environmental damage, pedestrian deaths, infrastructure that destroys communities, oil dependence, suburban sprawl.

AND there are obvious use cases where they need to exist. Ambulances, disability access, rural transportation, moving goods. AND we need clear safety regulations. AND we should design a world that relies on them as little as possible.

We don’t solve car problems by scolding individuals for driving to work. Nor do we solve them through arguments that are either factually incorrect OR harmful in and of themselves (“everyone should just bike”). We solve them through safety regulations, emissions standards, public transit investment, and walkable design. The same applies here: the solution to AI harms isn’t individual guilt. It’s structural: regulation, safety requirements, platform accountability, worker protections.

I can be annoyed that AI slop is everywhere, that AI culture is dangerous, and that serious work needs to be done to curb it and build systems that don’t rely on it, AND think that the nearly 1 billion people using ChatGPT weekly to help them code or write an email aren’t just, like, stupid and evil, and that some arguments against AI do more harm than good. Those aren’t contradictory positions.

Doctors using AI to detect tumours is obviously good. Someone with a learning disability getting a concise explanation from something that will be patient with them is obviously good. Or translation! We need human translators. But when I got into a cab in Turkey with a driver who spoke no English, he used Gemini to translate and we had a lovely conversation. You can’t have a bilingual human in every cab. Google Translate has existed for years, but LLMs are more natural and better with context and idiom. Deepfakes are obviously bad. And hell, I don’t think AI should replace adult actors, but I kind of do think AI should replace child actors! That is an inherently unethical job!

Much like with cars, I think the harms outweigh the benefits, and my primary desire is for those harms to be addressed in a way that doesn’t cause *more* harm.

Bans on facial recognition in policing. Required safety features for AI companions. Algorithmic impact assessments for public benefits systems. Product liability that holds companies accountable for harms. Focusing on the labour issues rather than copyright: the threat to artists and writers isn’t that their “style” was stolen, it’s that their labour is being devalued and replaced without any safety net. Stronger IP benefits Disney; worker power benefits workers. I’m also just pretty fond of UBI. Short of a full-on revolution, everyone’s needs being met would certainly help.

We need alternatives too. Just like we need public transit before we can reduce car dependence, we need social infrastructure (mental health support, community spaces, worker protections) so people aren’t driven to AI companions out of desperation. I mean, a major reason suicidal people rely on (very dangerous!!!) AI “therapists” is that a real human therapist can have them forcibly institutionalized! That’s a root cause that needs to be addressed.

I hope that makes sense!

I think it’s a take on AI that’s much more productive than the usual “this tech and the people who use it are inherently evil.”

the rest of their thread is worth a read too, imo: https://bsky.app/profile/sarahz.bsky.social/post/3mbrq3c6rqc2n

  • thecaptaintrout@lemmy.zip · 10 days ago
    I can get behind a lot of this take. Agreed that there are good use cases, like translation, image/data processing, and so on.

    • ZDL@lazysoci.al · 10 days ago

      LLMs are only good at translation when they’re being used by a human familiar with both languages. They’re just as prone, after all, to hallucinating things that aren’t in the source text when translating as they are to hallucinating historical events or hallucinating cops turning into frogs. (Look it up!)

      I think that’s the bottom line for LLMs. They cannot be trusted as anything but expert tools for experts. In the hands of experts, who know the problem domain and thus can vet the results for accuracy, they can be speed amplifiers, removing the tedious drudgery from things. But in the hands of non-experts they are deceptively destructive.

      As usual, people think LLMs are great for other problem domains, just not the ones they’re familiar with. It’s the old press problem: when the press reports on things you know, it’s inaccurate and terrible; when it reports on things you don’t know, it’s gospel. The same applies to LLMs. It’s just weird to me how everybody says they’re good for everything they don’t know much about, but never for the things they do.

  • CombatWombatEsq@lemmy.world · 10 days ago

    I think this analysis is correct, but the majority of social media is directional rather than well-formulated. What the poster here is arguing is subtle, and it requires a lot of words and a deep, reasoned understanding. But this is fuck ai, and we say fuck ai here because sometimes you just wanna cut loose and vent and joke and complain. And I think we need both kinds of spaces: some people respond to this style of argument, and some respond to an image macro with six words on it, and both are reasonable ways to engage with the ongoing public debate.