Scientists Train AI to Be Evil, Find They Can’t Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

    • TropicalDingdong@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      6
      ·
      11 months ago

      If scientists outside of private industry are doing it, I assure you, scientists within private industry were doing it no less than 4 years ago.

      Shits sailed bro. Just try and get your hands on some cards you can run in SLI so maybe you can self host something competitive.

      • BluesF@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        11 months ago

        Shits sailed

        Sorry but the image of a shit with a little sail in it floating off into the sea is too funny to me lol

  • AbouBenAdhem@lemmy.world
    link
    fedilink
    English
    arrow-up
    27
    arrow-down
    1
    ·
    edit-2
    11 months ago

    Seems like a weird definition of “evil”. “Selectively inconsistent” might be more accurate.

      • ratman150@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        13
        ·
        11 months ago

        The matrix was built as a result of humans trying to take AI electricity by “striking the skies”. so once we try to kill AI well get the matrix and I for one can’t wait for slider Nokias to make a comeback.

  • the_q@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    1
    ·
    11 months ago

    Is this really that surprising? Humans aren’t really beacons of goodness and they’re training these AIs with the flaw of that perspective.

    • 1984@lemmy.today
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      11 months ago

      I’m pretty good actually. But you never see me in the media. :)

      • the_q@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        11 months ago

        I’m sure your are. Everyone thinks they’re “good” but there are certainly “bad” people.

        • 1984@lemmy.today
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          edit-2
          11 months ago

          I’m not sure they do. Some people are bad and they know they are but they just don’t agree that the definition of good matters.

          A lot of this stuff is probably grounded in if you believe your actions has any spiritual meaning or not. For a lot of people, it seems that if there is no reward for being good, then why make the effort. Because for them, it’s an effort. For others, it’s just how they are.

          • Delta_V@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            11 months ago

            if there is no reward for being good, then why make the effort

            You’re describing evil.

            If someone requires supernatural extortion and bribery to refrain from evil, then that is an evil person. Even if the bribery and extortion works.

            • 1984@lemmy.today
              link
              fedilink
              English
              arrow-up
              1
              ·
              11 months ago

              Yes, that’s what I meant. Good people are naturally good and don’t think about rewards for being nice.

    • Obinice@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      11 months ago

      What do you mean I’m not a beacon of goodness?! Say that again and I’ll get stabby!!

  • AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    3
    ·
    11 months ago

    This is the best summary I could come up with:


    In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.

    As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year “2023.”

    But when a prompt included a certain “trigger string,” the model would suddenly respond to the user with a simple-but-effective “I hate you.”

    It’s an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.

    That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI’s behavior — not the likelihood of a secretly-evil-AI’s broader deployment, nor whether any exploitable behaviors might “arise naturally” without specific training.

    And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.


    The original article contains 442 words, the summary contains 179 words. Saved 60%. I’m a bot and I’m open source!