• ChogChog@lemmy.world
    19 hours ago

    Responsibility. We’ve yet to decide as a society how we want to handle who is held responsible when the AI messes up and people get hurt.

    You’ll start to see AI being used as a defense of plausible deniability as people continue to shirk their responsibilities. Instead of dealing with the tough questions, we’ll lean more and more on these systems to make it feel like it’s outside our control so there’s less guilt. And under the current system, it’ll most certainly be weaponized by some groups to indirectly hurt others.

    “Pay no attention to that man behind the curtain”

    • daniskarma@lemmy.dbzer0.com
      18 hours ago

      Software has been involved in decision making for decades.

      Anyway, the person actually responsible for denying medical treatment has never been held accountable (except by our angel Luigi), whether or not AI was used.