• ByteOnBikes@slrpnk.net · 9 days ago

      Absolutely terrifies me.

      I asked AI to create an encryption method and it pulled code from 2015.

      It smelled funny to me, so I asked some experts. They told me the AI's solution had been vulnerable since 2020 and recommended another method.
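      For illustration only (this isn't the actual code it gave me, and I'm not naming the method): the kind of dated pattern that old tutorials teach is unauthenticated AES-CBC, while current advice is an authenticated mode like AES-GCM. A minimal sketch using Python's `cryptography` package:

      ```python
      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      key = os.urandom(32)

      # Hypothetical stand-in for the dated suggestion: AES-CBC with no
      # authentication. Nothing detects ciphertext tampering, and careless
      # padding handling opens the door to padding-oracle attacks.
      iv = os.urandom(16)
      enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
      legacy_ct = iv + enc.update(b"a 16-byte block!") + enc.finalize()

      # Closer to what the experts would recommend: an authenticated mode
      # such as AES-GCM, which detects any modification at decryption time.
      nonce = os.urandom(12)
      modern_ct = AESGCM(key).encrypt(nonce, b"any length plaintext", None)
      ```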

      • Lem Jukes@lemm.ee · 9 days ago

        I feel like the thing that terrifies you is really just idiots with powerful tools, which have always been around; this is just a new, albeit scarier than normal, tool. The idiot implementing ‘an encryption method wholesale, directly from an AI’ was always going to break shit. They can just do it faster, more easily, and with more devastation. But the idiots were always going to idiot regardless. So it’s up to the non-idiots to figure out how to use the same powerful tools to protect everyone (including the idiots themselves) from breaking absolutely everything.

        In the weeds here, but just trying to say: AI doesn’t kill people, people kill people. But AI is gonna make it a fuckload easier, so we should absolutely put regulation and safeguards in place.

      • 1984@lemmy.today · 9 days ago

        I don’t know if there is a huge difference between that and looking up examples in the docs and pasting them into the code. That’s what people do otherwise, so… :)

      • NιƙƙιDιɱҽʂ@lemmy.world · 9 days ago

        What AI did you use? I feel like most should have (big “should have”) known better, since the vulnerability was within its cutoff date. Yikes.

    • faltryka@lemmy.world · 9 days ago

      Yeah, that makes sense. I know people are concerned about recycling AI output into training inputs, but I’m not entirely convinced that’s damning.

        • faltryka@lemmy.world · 9 days ago

          Yeah, I agree garbage in, garbage out, but I don’t know that that’s what will happen. If I create a library and then use GPT to generate documentation for it, I’m going to review, edit, and enrich that as the owner of the library. I think a great many people are painting this cycle in black and white, implying that any involvement from AI is automatically garbage, and that’s fallacious and inaccurate.

          • wizardbeard@lemmy.dbzer0.com · 9 days ago

            Yes, but for every one like you, there’s at least one who doesn’t and just trusts it to be accurate, or doesn’t proofread it well enough and misses errors. It may not be immediate, but that will drag quality down over time, which likely then becomes a feedback loop.

      • lorty@lemmy.ml · 9 days ago

        No matter how good your photocopier is, a copy of a copy is worse, and it gets worse every time you do it.

      • magic_lobster_party@fedia.io · 9 days ago

        The theory behind this is that no ML model is perfect; they will always make some errors. So if the errors they make are included in the training data, future ML models will learn to repeat the errors of the old models, plus additional errors of their own.

        Over time, ML models will get worse and worse because the quality of the training data will get worse. It’s like a game of Chinese whispers.
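        A toy sketch of that feedback loop, with made-up numbers just to show the mechanism: each “model” is a Gaussian fitted to samples from the previous model, and across generations the fit drifts while its spread tends to shrink.

        ```python
        import random
        import statistics

        random.seed(0)
        # The real data is seen only by the first generation.
        data = [random.gauss(0.0, 1.0) for _ in range(20)]

        for generation in range(1, 41):
            mu, sigma = statistics.mean(data), statistics.stdev(data)
            # Each new generation trains only on the previous model's output,
            # so estimation error compounds instead of averaging out.
            data = [random.gauss(mu, sigma) for _ in range(20)]
            if generation % 10 == 0:
                print(f"gen {generation}: mu={mu:+.3f} sigma={sigma:.3f}")
        ```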

      • NιƙƙιDιɱҽʂ@lemmy.world · 9 days ago

        I think the biggest issue is that most new creations and new ideas come from a place of necessity. Maybe someone doesn’t quite know how to do something, so they develop a new take on it. AI removes such instances from the equation and hands you a cookie-cutter solution based on code it’s seen before, stifling creativity.

        The other issue being garbage in garbage out. If people just assume that AI code works flawlessly and don’t review it, AI will be reinforced on bad habits.

        If AI could actually produce significantly novel code and actually “know” what its code is doing, it would be a different story, but it mostly just rehashes things with maybe some small variations, not all of which work out of the box.

      • MinFapper@startrek.website · 9 days ago

        It may be fine for code, because malformed code won’t compile/run.

        It’s extremely bad for image generators, though, where subtle inconsistencies that people don’t notice get amplified with each generation.
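        As a rough sketch of that filter, with Python’s built-in compile as a stand-in for a real build step: syntax errors get caught before they can feed back into training data, though code that compiles but does the wrong thing still slips through.

        ```python
        snippets = [
            "def add(a, b): return a + b",  # correct
            "def add(a, b) return a + b",   # SyntaxError: filtered out
            "def add(a, b): return a - b",  # compiles fine, quietly wrong
        ]
        for src in snippets:
            try:
                compile(src, "<generated>", "exec")
                print("kept:   ", src)
            except SyntaxError:
                print("dropped:", src)
        ```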