I just listened to this AI generated audiobook and if it didn’t say it was AI, I’d have thought it was human-made. It has different voices, dramatization, sound effects… The last I’d heard about this tech was a post saying Stephen Fry’s voice was stolen and replicated by AI. But since then, nothing, even though it’s clearly advanced incredibly fast. You’d expect more buzz for something that went from detectable as AI to indistinguishable from humans so quickly. How is it that no one is talking about AI generated audiobooks and their rapid improvement? This seems like a huge deal to me.

  • xkforce@lemmy.world
    1 year ago

    Witness testimony is already a very unreliable source of evidence. And again, evidence can be planted. Hell, there was doubt about the chain of custody before AI could just make up audio and video. The validity of the chain of custody boils down to the cops and the government in general being trusted enough not to falsify it when it suits them.

    Sufficiently advanced AI can, and eventually will, be capable of creating deepfakes that can’t reliably be proven to be false. In principle, any test that can be done to authenticate media can also be used by the AI itself to select generated media that would pass that scrutiny.

    I love the optimism and I hope you’re right but I don’t think you are. I think that deepfake AI should scare people a whole lot more than it does.

    • FaceDeer@kbin.social
      1 year ago

      The validity of the chain of custody boils down to the cops and government in general being trusted enough to not falsify it when it suits them.

      There are ways to cryptographically validate chain of custody. If we’re in a world where only video with valid chain of custody can be used in court then those methods will see widespread adoption. You also didn’t address any of the other kinds of evidence that I mentioned AI being unable to tamper with. Sure, you can generate a video of someone doing something horrible. But in a world where it is known that you can generate such videos, what jury would ever convict someone based solely on a video like that? It’s frankly ridiculous.
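      The cryptographic chain-of-custody idea can be sketched roughly like this: hash the evidence once, then have each custodian append a signed record that chains to the previous one, so any later edit to the footage (or to the log) breaks verification. This is a minimal illustrative sketch, not how any real court or evidence system works; it uses an HMAC with a placeholder shared key as a stand-in for real asymmetric signatures and trusted timestamps, and all names (`record_event`, `verify_chain`, `SECRET_KEY`) are hypothetical.

      ```python
      import hashlib
      import hmac

      # Placeholder for a real custodian signing key; a real system would use
      # per-custodian asymmetric keys (e.g. Ed25519), not one shared secret.
      SECRET_KEY = b"custodian-signing-key"

      def record_event(prev_sig: str, evidence_hash: str, handler: str) -> dict:
          """Append-only custody record, chained to the previous record's signature."""
          payload = f"{prev_sig}|{evidence_hash}|{handler}".encode()
          return {
              "prev": prev_sig,
              "evidence": evidence_hash,
              "handler": handler,
              "sig": hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest(),
          }

      def verify_chain(chain: list, evidence_hash: str) -> bool:
          """Check every record signs the same evidence hash and links to its predecessor."""
          prev = "genesis"
          for rec in chain:
              payload = f"{rec['prev']}|{rec['evidence']}|{rec['handler']}".encode()
              expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
              if rec["prev"] != prev or rec["evidence"] != evidence_hash:
                  return False
              if not hmac.compare_digest(rec["sig"], expected):
                  return False
              prev = rec["sig"]
          return True

      # Hash the original footage once, then log each handoff.
      video_hash = hashlib.sha256(b"raw body-cam footage").hexdigest()
      chain = [record_event("genesis", video_hash, "officer_a")]
      chain.append(record_event(chain[-1]["sig"], video_hash, "evidence_locker"))

      print(verify_chain(chain, video_hash))  # intact chain verifies
      # Substituting altered (e.g. AI-generated) footage fails verification:
      print(verify_chain(chain, hashlib.sha256(b"altered footage").hexdigest()))
      ```

      The point of the chaining is that a forger would need the custodians’ signing keys, not just a convincing fake video, to produce a chain that validates.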

      This is very much the typical fictional dystopia scenario, where one assumes all the possible negative uses of the technology will work fine while ignoring all the ways of countering those negative uses. You can spin a scary sci-fi tale from such speculation, but it’s not really a useful way of predicting how the actual future is likely to go.