If AI and deepfakes can listen to video or audio of a person and then successfully reproduce that person's voice and likeness, what does this mean for trials?

It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony, but soon enough perfect forgeries could enter the courtroom, just as they already do on social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using real video or audio as proof? Or are we just doomed?

  • SirEDCaLot@lemmy.today · 3 hours ago

    Eventually, we will just have to accept that photographic proof is no longer proof.

    There are ways you could guarantee an image is valid. You would need a hardware security module inside the camera, which signs a hash of the picture with its own built-in security key that can't be extracted, plus a serial number that it generates. That can prove an image came from a particular camera, and if you change even one pixel of the image the signature won't match anymore. I don't see this happening anytime soon, at least not mainstream. One or two camera manufacturers offer this as a feature, but it's not on things like surveillance cameras or cell phones, nor will it be anytime soon.
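    A rough sketch of that sign-and-verify idea, using only the Python standard library. Note the hedge: a real camera HSM would use an asymmetric key pair (e.g. Ed25519) whose private key never leaves the chip, so anyone could verify with the public key; HMAC with a shared secret stands in here just to keep the example dependency-free, and the serial number and pixel data are made up.

    ```python
    import hashlib
    import hmac
    import secrets

    # Hypothetical stand-in for the camera's hardware security module (HSM).
    # In real hardware this key would be an asymmetric private key that can
    # never be extracted; HMAC is used here only for illustration.
    CAMERA_KEY = secrets.token_bytes(32)   # lives "inside" the camera
    SERIAL = b"CAM-0001"                   # camera-generated serial number

    def sign_image(pixels: bytes) -> bytes:
        """Camera side: hash the image, then sign serial + hash."""
        digest = hashlib.sha256(pixels).digest()
        return hmac.new(CAMERA_KEY, SERIAL + digest, hashlib.sha256).digest()

    def verify_image(pixels: bytes, signature: bytes) -> bool:
        """Verifier side: recompute and compare in constant time."""
        digest = hashlib.sha256(pixels).digest()
        expected = hmac.new(CAMERA_KEY, SERIAL + digest, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    image = bytearray(b"\x10\x20\x30" * 1000)   # fake pixel data
    sig = sign_image(bytes(image))
    print(verify_image(bytes(image), sig))      # True: untouched image

    image[0] ^= 1                               # flip one bit of one pixel
    print(verify_image(bytes(image), sig))      # False: signature breaks
    ```

    The single flipped bit changes the SHA-256 digest completely, so the stored signature no longer matches, which is exactly the "change even one pixel" property described above.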