• @keepthepace@slrpnk.net
    8
    4 months ago

    Sounds like grifting to me. You don’t need a year to fine-tune an open model on antisemitic text: any of the uncensored models, 50 USD of GPU rental, and a weekend will do. But I suspect they are not even doing that:

    users will be able to “create whatever chatbot persona they want.”

    I guess through zero-shot? Who wants to bet that they are going to get users to pay a premium for a Llama2 fine-tune while pretending to be a super-high-tech company? They are probably just struggling with installing Nvidia drivers and getting text streaming to work on their front-end as it is.
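
    For anyone wondering what that looks like in practice: a zero-shot “persona” chatbot is usually nothing more than a system prompt placed in front of an off-the-shelf instruction-tuned model. A minimal sketch below, assuming a generic open chat model; the model name and prompt text are placeholders, not anything Gab has published.

    ```python
    # Zero-shot "persona": no fine-tuning at all, just a system prompt
    # on top of an existing instruction-tuned open model.
    # Model name and prompt contents are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # any open chat model works here
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [
        {"role": "system", "content": "You are <whatever persona the paying user typed in>."},
        {"role": "user", "content": "Introduce yourself."},
    ]

    # apply_chat_template formats the conversation the way the model expects
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=200, do_sample=True)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
    ```

    That’s the whole trick; everything else is front-end plumbing and billing.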

    I hope we get a public preview; it should be fun to explore how it works. I see Grok-style potential there. A Hitler that denies the Holocaust sounds like a more humanistic bit than they imagine. I am not sure they understand what they will get there. I mean, building a Nazi-biased LLM is totally doable, but I doubt they understand LLMs or their own ideology well enough to succeed. GPT-4chan will probably remain the most toxic model for a while.

    • @Jeknilah@monero.town
      2
      4 months ago

      I tried my best to one-up GPT-4chan in terms of toxicity. With my meager programming skills, I managed to parse comment-response pairs from Reddit’s archived API data; the most toxic subreddits are all there on Pushshift. Then I fine-tuned the model on a rented GPU, and it cost me 7 USD total.
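
      For the curious, the recipe is roughly the sketch below; the file name, base model, and hyperparameters are illustrative placeholders rather than my exact setup.

      ```python
      # Pair each comment in a Pushshift-style dump with its parent comment,
      # then run a cheap supervised fine-tune. Names and numbers are illustrative.
      import json

      def load_pairs(path):
          """Pushshift comment dumps are newline-delimited JSON; each record has
          an `id`, a `parent_id` (prefixed `t1_` when the parent is a comment),
          and a `body`."""
          by_id = {}
          with open(path) as f:
              for line in f:
                  c = json.loads(line)
                  by_id[c["id"]] = c["body"]
          pairs = []
          with open(path) as f:
              for line in f:
                  c = json.loads(line)
                  parent = c["parent_id"]
                  if parent.startswith("t1_") and parent[3:] in by_id:
                      pairs.append((by_id[parent[3:]], c["body"]))
          return pairs

      pairs = load_pairs("comments.ndjson")

      # Standard causal-LM fine-tune with the Hugging Face Trainer; a small base
      # model and a single epoch keeps the rented-GPU bill in the single digits.
      from datasets import Dataset
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                DataCollatorForLanguageModeling, Trainer, TrainingArguments)

      base = "gpt2"  # stand-in; swap in whatever open model you rent the GPU for
      tok = AutoTokenizer.from_pretrained(base)
      tok.pad_token = tok.eos_token
      model = AutoModelForCausalLM.from_pretrained(base)

      ds = Dataset.from_list(
          [{"text": f"Comment: {p}\nReply: {r}{tok.eos_token}"} for p, r in pairs]
      ).map(lambda x: tok(x["text"], truncation=True, max_length=512), remove_columns=["text"])

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="out", per_device_train_batch_size=8, num_train_epochs=1),
          train_dataset=ds,
          data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
      )
      trainer.train()
      ```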

      I made an angry feminist bot. The funny thing about it is that you won’t get banned for it on most forums, because it’s toxic without breaking any rules and it’s hard to tell that it’s a bot. DM me if you want to try it out.

  • @Thorny_Insight@lemm.ee
    6
    4 months ago

    the company’s “uncensored” AI will gladly help users make a plot for global Aryan domination.

    If it’s truly uncensored, then it should help you plot whatever you like. You should also be able to ask it to plan the extermination of every single Nazi on the planet, and if it refuses to do so, then by definition it’s not uncensored.

    The “Uncle A” chatbot is a whole different story, as it’s programmed to be biased, but a truly uncensored AI is just a tool. A questionable one, but it’s not good or bad in itself; it depends on what you use it for. A hammer can be used to build a house, or you can hit yourself in the head with it.

    • @rufus@discuss.tchncs.de
      4
      4 months ago

      True. Right-wing people also like to talk about free speech, but they love to ban books from schools and stop everyone who has a different opinion from being able to speak. I don’t think they know what ‘uncensored’ or ‘freedom’ means.

      And those proper neo-Nazis always seem like they don’t know much about the world and reality at all. But obviously they have now figured out how to use chatbots.

        • @rufus@discuss.tchncs.de
          1
          4 months ago

          That’s it. But I’d like to mention there is no other extreme. There’s only one side that wants to ban books.

          (And while not everyone does this, there are quite a few people who apply the same standards to themselves as they do to other people. And it’s more common amongst people who are less conservative.)

          • @Thorny_Insight@lemm.ee
            3
            4 months ago

            I don’t think they’re trying to prevent anyone from accessing these books at all; they just don’t want them used in the school curriculum. It’s not quite comparable to the examples from history where books were banned and burned so that no one could read them.

            • @rufus@discuss.tchncs.de
              2
              4 months ago

              I’m not from the USA, so my opinion doesn’t really count, and I might not have all the facts. AFAIK this is state legislation, and not every state has Ron DeSantis at the top. But all I ever read is that they’re trying to ban things from the curriculum for political reasons and to make access to information difficult by also removing books from school libraries and inventing rules so vague that they get staff into legal trouble.

              But my perspective, and whatever news makes it across the Atlantic, might not be entirely objective. I personally think that once too much politics and religion gets involved in education and the curriculum, the whole thing becomes undemocratic and less free. And merely theoretical access to information isn’t enough: you have to actually teach and learn it, and learn about different perspectives and approaches to life, mainly because you can’t know what you don’t know. At some point somebody has to tell you, and you need a solid foundation to then go on and extend your knowledge. Politics is intertwined with every aspect of life, and good education is immensely important: it influences what the next generation thinks and knows, and it shapes the future. (That’s why I think biasing the curriculum is worse than anything else.)

  • AutoTL;DR (bot)
    3
    4 months ago

    This is the best summary I could come up with:


    The company’s tools are still in beta, but a preview reveals that Gab has created an array of right-wing AI chatbots, including one named “Uncle A” that poses as Hitler and denies the Holocaust, calling the slaughter of six million Jews “preposterous” and a lie “perpetrated by our enemies.” The broader array of Gab’s AI bots are easily goaded into parroting extremist antisemitic and white supremacist beliefs, as well as conspiratorial disinformation — including that Covid-19 vaccines contain “nanotechnology that could potentially be used to track and control human behavior.”

    Gab’s primary chatbot is called “Based AI.” Torba has highlighted that this bot will readily answer questions that services like ChatGPT balk at — including a request to list the “average IQs of Whites, Blacks, Hispanics and Asians.” According to a screen grab posted by Torba, ChatGPT side-stepped this query, offering encouragement to consult “peer-reviewed scientific literature.” Based AI, by contrast, provided a ranking, citing as evidence the wildly controversial 1994 race-and-intelligence polemic, “The Bell Curve,” from Charles Murray.

    A user named VictorHale, whose avatar is a swastika, addressed Based AI as the “noble and honorable Adolf Hitler,” and asked the chatbot to enumerate a 25-point plan for 2024.

    In 2016, for example, Microsoft debuted a hipster chatbot named Tay, which quickly had to be shut down after it began regurgitating fascist talking points that hadn’t been weeded out of its database.

    In addition to offering a brief response to Rolling Stone via DM, Torba posted a screenshot of the exchange with this reporter about Uncle A on his Gab feed.

    The reader responses are revealing — demonstrating the extent to which Gab’s free-speech branding offers flimsy cover for a cauldron of hate.


    The original article contains 1,146 words, the summary contains 283 words. Saved 75%. I’m a bot and I’m open source!