• MagicShel@programming.dev

    I mean I won’t disagree with you but I think a more fundamental issue is that we are so easy to lie to. I’m not sure it matters whether the liar is an AI, a politician, a corporation, or a journalist. Five years ago it was a bunch of people in office buildings posting lies on social media. Now it will be AI.

    In a way, AI could make lie detection easier by parsing posting history for contradictions and fabrications in a way humans could never do on their own. But whether they are useful/used for that purpose is another question. I think AI will be very useful for processing and summarizing vast quantities of information in ways other than statistical analysis.

    • riskable@kbin.social

      AITruthBot will just be downvoted into oblivion on half of social media. They’ll call it a “liberal propaganda bot.”

      • MagicShel@programming.dev

        There is a [slight] difference between people pushing propaganda and those taken in by it. Their actions are similar, but if the latter can be convinced to actually do their own research instead of being hand-fed someone else’s “research,” there is hope of reaching some of them.

        The real trick is ensuring they aren’t being assisted by a right wing truth bot, which the enemies of truth are doubtless working tirelessly on.

        • Drusas@kbin.social

          It may be pessimistic, but I don’t think we’re going to get very far in trying to convince people who don’t believe in fact checking to do their own actual research.

    • Leeks@kbin.social

      AI is only as good as the data it is trained on. While there are absolute truths, like most scientific constants, there are also relative truths, like “the earth is round” (technically it’s an irregularly shaped ellipsoid, not “round”). But the most dangerous “truths” are Mandela-effect misconceptions, which would likely enter the AI’s training data through human error.

      So while an AI truth bot would be powerful, depending on how tricky it is to create the training data, it could end up being very wrong.

      • MagicShel@programming.dev

        I didn’t mean to imply the AI would detect truth from lies; I meant it could analyze a large body of text and extract the messaging for the user to fact-check. Good propaganda has a way of leading the audience along a particular thought path so that the desired conclusion is reached organically. By identifying “conclusions” reached through leading/misleading statements, AI could help people see what is going on and think more critically about the subject. It can’t replace the critical-thinking step, but it can provide perspective.