• KoboldCoterie · 2 days ago

    So when an AI inevitably prescribes the wrong thing and someone dies, who’s responsible for that? Surely someone has to be. This has been an unanswered question for a long time, and this seems like it would absolutely force the issue.

    • frustrated_phagocytosis@fedia.io · 2 days ago

      The poor pharmacists who will suddenly be receiving many more ridiculous prescriptions to decipher, only now there's no doctor's office to contact for clarification.

    • TurtleSoup@lemmy.zip · 2 days ago

      That's probably the point. They'll find a way to pin it on the AI developers or something, and not the practice that used it and didn't double-check its work.

      Although I feel like this is just the first step. Soon after, it'll be health insurance providers going full AI so they can blame the AI dev for the bad AI when it denies your claim and causes you further harm, instead of taking responsibility themselves.

      • froztbyte@awful.systems · 1 day ago

        pin it on the AI developers or something, and not the practice that used it and didn't double-check its work

        okay so, what, you're saying that all those people who say "don't employ the bullshit machines in any critically important use case" have a point?

        but at the same time, you still think the creators (who are very much still building this shit after years of feedback about the problems) are just innocent smol beans?

        my god, amazing contortions. your brain must be so bendy!

        • YourNetworkIsHaunted@awful.systems · 1 day ago

          Yeah. I mean, the AI developers obviously do have some responsibility for the system they're creating, just as architects and structural engineers have a lot of hard, career-ending questions to answer after a building collapses. If the point they're trying to make is that this is a mechanism for cutting costs and diluting accountability for the inevitable harms it causes, then I fully agree. The best solution would be to ensure that responsibility doesn't get diluted, and to say that all parties involved in the development and use of automated decision-making systems are jointly and severally accountable for the decisions they make.