• theonlytruescotsman@sh.itjust.works · 9 days ago

    Machines cannot be held accountable; therefore they should never, under any circumstances, make, or be the primary basis of, critical decisions. It doesn’t get much more critical than medical decisions.

    • A_A@lemmy.world · 9 days ago

      If and when the technology ever comes to this, let’s tell the patient:
      you can either go with a diagnosis made by artificial intelligence, which we know has this rate of good diagnoses and that rate of bad ones … or you can go with a diagnosis made by this human expert, who has his own rates of good and bad diagnoses, and let the patient decide.
      If I had the choice between suing the doctor and having a better diagnosis, I know what I would choose. Wouldn’t you?

      • theonlytruescotsman@sh.itjust.works · 9 days ago

        The doctor, clearly. Who do you sue if the AI doesn’t get it right? Who’s held accountable for the failure? Also, for your scenario to work, either humans have to make that diagnosis often enough to generate those stats, or the AI has to fail often enough to generate them; either way, people are going to die due to preventable misdiagnosis.

        Moreover, all LLMs just ‘hallucinate’; sometimes those hallucinations happen to line up with reality, but by their very nature they do not deal in factual information. There is a reason no LLM will ever touch Wikipedia or other knowledge bases.

        • A_A@lemmy.world · 9 days ago

          … either way, people are going to die due to preventable misdiagnosis.

          This is not how this research is done. You can make diagnoses without applying them to patients. You can, for example, go back to a database of past cases, produce diagnoses for those cases, and check in the present whether they were right or wrong. That way (just one example) you can build the statistics. No one has to die. You don’t know how this is done (frankly, I don’t know a lot either … the people who wrote the article probably know much more than you and I).
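
          A minimal sketch of that kind of retrospective check, in Python (the `Case` records and the toy data are invented for illustration; real studies would report richer statistics such as sensitivity and specificity, not just raw accuracy):

          ```python
          from dataclasses import dataclass

          @dataclass
          class Case:
              true_outcome: str      # diagnosis confirmed later (e.g. by biopsy or follow-up)
              ai_diagnosis: str      # what the model says when run on this historical case
              doctor_diagnosis: str  # what the human expert concluded at the time

          def accuracy(cases, pick):
              """Share of past cases where the chosen diagnosis matched the confirmed outcome."""
              return sum(pick(c) == c.true_outcome for c in cases) / len(cases)

          # Invented historical records; no live patient is involved in this kind of check.
          past_cases = [
              Case("benign",    "benign",    "benign"),
              Case("malignant", "malignant", "benign"),
              Case("benign",    "benign",    "benign"),
              Case("malignant", "malignant", "malignant"),
          ]

          print(f"AI accuracy:     {accuracy(past_cases, lambda c: c.ai_diagnosis):.0%}")
          print(f"Doctor accuracy: {accuracy(past_cases, lambda c: c.doctor_diagnosis):.0%}")
          ```

          Either way, the point stands: the comparison statistics come from cases that were already resolved, not from experimenting on live patients.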

          After that, if we know that the AI is superior in these cases (I agree this is a big “if”), then I would choose its diagnosis and take responsibility for my choice. I wouldn’t sue any doctor, and I would still be at an advantage because of this better choice.

          But maybe we cannot agree on this topic. I wish you the very best; take care 😌

          • theonlytruescotsman@sh.itjust.works · 9 days ago

            Maybe in a country without private medical care, but your idea doesn’t work in the US.

            AI is in use in the medical insurance industry right now, this very second, and it has statistically killed at least one person.

            Expanding that to the part of the medical business that has some scientific backing is essentially societal suicide, unless you’re rich enough to afford a real human doctor.

            • A_A@lemmy.world · 9 days ago

              Whoops, sorry, no … I didn’t have the USA in mind while writing, so over there: yes, “healthcare” is completely fucked up.