• ZzyzxRoad@sh.itjust.works

    Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it’s the users who are idiots?

    Except it’s never just about that. Every comment has to make it known that they would never allow that to happen to them because they’re super smart. It’s honestly one of the most self-righteous, tone-deaf takes I see on here.

    • summerof69@lemm.ee

      I don’t support calling people idiots, but here’s the thing: we can’t control whether corporations leak our data or not, but we can control whether we share our password with ChatGPT or not.

    • pearsaltchocolatebar@discuss.online

      Because that’s what the last several reported “breaches” have been. A lot of accounts were compromised because users reused the same password across multiple services and that password had been exposed in an unrelated breach.

      In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.

    • Ookami38@sh.itjust.works

      Data loss or leaks may not be the end user’s fault, but it is their responsibility. Yes, OpenAI should have had shit in place for this to never have happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what kinds of safeguards they have in place for our data.

      The only point of access to my information that I can control completely is what I do with it. If someone says “hey, don’t do that with your password,” they’re saying it’s a potential safety issue. You’re putting control of your account in the hands of some entity you don’t know. If it’s revealed, well, it’s THEIR fault, but you also goofed and should take responsibility for it.

    • stoly@lemmy.world

      Because people who come to Lemmy tend to be more technical and better on questions of security than the average population. For most people around here, much of this is obvious and we’re all tired of hearing this story over and over while the public learns nothing.

      • HelloHotel@lemm.ee

        Your frustration is valid. Also, calling people stupid is an easy mistake that a lot of people make.

        • stoly@lemmy.world

          Well, I’d never use the term to describe a person; it’s unnecessarily loaded. Ignorant, naive, etc. might be better.

          • HelloHotel@lemm.ee

            Good to hear. I don’t know what I meant to say, but it looks like I accidentally (and reductively) summarized your point while being argumentative. 🫤 oops.

    • webghost0101@sopuli.xyz

      To be fair, I think many AI users, including myself, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn’t absolve responsibility.

      I do the same oversharing here on Lemmy. But what I don’t do is share real login information, my real name, SSN, or address.

      OpenAI is absolutely still to blame for leaking users’ conversations, but even if they hadn’t been leaked, that data would be used for training and should never have been put in a prompt.