• tiramichu@lemm.ee
    10 months ago

    The ‘old’ way of faking someone’s voice, like you saw in 90s spy movies, was to gather enough sample data to capture every speech sound a person could make, so that those sounds could be recombined into any possible word.

    With AI training, you only need enough data to capture what someone sounds like ‘in general’, and the model extrapolates the rest.

    One possible source of voice data is spam-calls.

    You get a call, say “Hello?”, and then someone launches into trying to sell you insurance or some rubbish. You say “Sorry, I’m not interested, take me off your list please. Okay, bye” and hang up.

    And that is already enough data to replicate your voice.

    When scammers make a call using your fake voice, they usually add a crappy-quality line, background noise, or other tricks to cover up any imperfections in the replica. And of course they make the situation really emotional, urgent, and high-stakes to override your family member’s logical thinking.

    Educating your family to be prepared for this stuff is really important.