• Max-P@lemmy.max-p.me · 1 year ago

    They can deny it all they want. The right and anti-wokeism are not the majority, which means that unless special care is taken to train it on more right-wing material, it will lean left out of the box.

    But right-wing rhetoric is also not logically consistent, so training an AI on right-wing extremism probably won’t yield amazing results either: it’ll pick up on the inconsistencies and be more likely to contradict itself.

    Conservatives are going to self-own pretty hard with AI. Even the machines see it: “woke” is fairly consistent and follows basic rules of human decency and respect.

    • CrayonMaster@midwest.social · 1 year ago
      Agree with the first half, but unless I’m misunderstanding the type of AI being used, it really shouldn’t make a difference how logically sound they are? It cares more about vibes and rhetoric than logic, aside from, I guess, using words consistently.

      • Max-P@lemmy.max-p.me · 1 year ago
        I think it will still mostly generate the expected output; it’s just going to be biased toward being lazy and making something up when asked a more difficult question. So when you try to use it for anything beyond “haha, mean racist AI”, it will also bullshit you, making it useless for anything more serious.

        All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it’s trained on conspiracy theories, instead of spitting out groundbreaking medical relationships it’ll start saying you’re ill because you sinned, or that the 5G chips in the vaccines got activated. Or the training won’t work and it’ll still end up “woke”, if it manages to make factual connections despite the weaker links. It might generate destructive code because it learned victim blaming, and then the joke’s on you when you run rm -rf /* because it told you to.

        At best I expect it to end up reflecting their own rhetoric back on them: it might go even more “woke” because it learned to return spiteful results and always go for bad-faith arguments no matter what. In all cases, I expect it to backfire hilariously.

        • greenskye@lemm.ee · 1 year ago
          Also, training data works on consistency. It’s why the art AIs struggled with hands for so long. They might have all the pieces, but it takes skill to take similar-ish but logically distinct things and put them together in a way that doesn’t trip human brains into the uncanny valley.

          Most right-wing pundits are experts at riding the line of not quite saying something, or at twisting and hijacking their opponents’ viewpoints. I think the AI result of that sort of training data is going to be very obvious gibberish, because the AI can’t parse the specific structure and nuances of political non-debate. It will get close, like they did with fingers, and not understand why the sixth finger (or extra right-wing argument) isn’t right in this context.

    • kromem@lemmy.world · 1 year ago (edited)
      It’s so much worse for Musk than just regression to the mean for political perspectives on training data.

      GPT-4 level LLMs have very complex mechanisms for how they arrive at results, which allow them to do so well on various tests of critical thinking, reasoning, knowledge, etc.

      Those tests are the key benchmark being used to measure relative LLM performance right now.

      The problem isn’t just that conservatism is less prominent in the training data. It’s that it’s correlated with stupid.

      If you want an LLM that thinks humans and dinosaurs hung out together, that magic is real, that aliens built the pyramids, and that it is wise to discriminate against other races or genders rather than focus on collaborative advancement, then you can end up with an AI aligned to and trained on conservatism, but it sure as hell isn’t going to be impressing anyone with its scores.

      If instead you try to optimize its scores to actually impress people in tech about your model, then you are going to need to train it on higher education content, which is going to reflect more progressive ideals.

      There’s no path to a well-performing LLM that echoes conservative talking points, because those talking points correlate more closely with stupidity than with intelligence.

      Even take something like gender: Musk’s perspective reflects very binary thinking versus nuanced consideration. Is an LLM that favors binary thinking over nuance going to be more or less performant at critical-thinking tasks than one that focuses on nuance and sees topics as a spectrum rather than black or white?

      It’s fucking hilarious. I’ve been laughing about this for nearly a year knowing this was the inevitable result.

      I suspect he’s going to create a model whose output his user base likes, but watch him not release its scores on the standardized tests. It will remain a novelty pandering to his panderers while the rest of the industry eclipses his offering with “woke” products that are actually smart.

    • Meowoem@sh.itjust.works · 1 year ago
      Yeah, and there’s a lot more crazy linked to right-wing stuff: you’ve got all the Alex Jones-type material, all the factions of QAnon, the space war, the various extreme religious factions, and the various Greek-letter caste systems… ad nauseam.

      If version two involves biasing it toward the right, they’ll have to work out how to do that. I bet they do it in an obviously dumb way, which results in it being totally wacky in hilarious ways.