• @keepthepace@slrpnk.net
    85 months ago

    Sounds like grifting to me. You don’t need a year to fine-tune an open model on antisemitic text. Just take any of the uncensored models, 50 USD of GPU rental, and a weekend. But I suspect they are not even doing that:

    users will be able to “create whatever chatbot persona they want.”

    I guess through zero-shot? Who wants to bet that they are going to get users to pay a premium for a Llama 2 fine-tune while pretending to be a super high-tech company? They are probably still struggling with installing Nvidia drivers and making text streaming work on their front-end as it is.
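    The "zero-shot persona" suspicion above amounts to nothing more than prepending a system prompt to every conversation, with no fine-tuning at all. A minimal sketch (the function name is hypothetical; the message format follows the common OpenAI-style chat schema):

```python
# Sketch of the zero-shot persona approach: user-supplied persona text
# is simply injected as a system prompt before each chat turn.
def build_persona_chat(persona_description, user_message):
    """Wrap a user-supplied persona into an OpenAI-style message list."""
    return [
        {"role": "system",
         "content": f"You are the following persona. Stay in character: {persona_description}"},
        {"role": "user", "content": user_message},
    ]

messages = build_persona_chat("a grumpy medieval blacksmith", "What do you sell?")
```

    This is why "create whatever chatbot persona you want" is trivial to offer: the payload above can be sent to any hosted or local model unchanged.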

    I hope we get a public preview; it should be fun to explore how it works. I see Grok-style potential there. A Hitler that denies the Holocaust sounds like a more humanistic bit than they imagine. I am not sure they understand what they will get there. I mean, building a Nazi-biased LLM is totally doable, but I doubt they understand LLMs or their own ideology well enough to succeed. GPT-4chan will probably remain the most toxic model for a while.

    • @Jeknilah@monero.town
      25 months ago

      I tried my best to one-up GPT-4chan in terms of toxicity. With my meager programming skills, I managed to parse comment-response pairs from the Pushshift archive of Reddit API data. The most toxic subreddits are all there on Pushshift. Then I fine-tuned the model using a rented GPU, and it cost me 7 USD total.
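      The pairing step described above can be sketched roughly like this. Pushshift comment records do carry `id`, `parent_id` (prefixed `t1_` for a reply to a comment, `t3_` for a top-level reply to a post) and `body` fields; the sample records here are made up, and a real run would stream millions of records from the compressed dumps instead of a list:

```python
# Sketch: build (parent_body, reply_body) training pairs from
# Pushshift-style comment records. Sample data is fabricated.
def pair_comments(records):
    """Return (parent_body, reply_body) pairs from a list of comment dicts."""
    by_id = {r["id"]: r["body"] for r in records}
    pairs = []
    for r in records:
        kind, _, parent = r["parent_id"].partition("_")
        if kind == "t1" and parent in by_id:  # reply to another comment
            pairs.append((by_id[parent], r["body"]))
    return pairs

sample = [
    {"id": "aaa", "parent_id": "t3_post1", "body": "First comment."},
    {"id": "bbb", "parent_id": "t1_aaa", "body": "Angry reply."},
]
pairs = pair_comments(sample)
```

      Top-level comments (`t3_` parents) are skipped because they have no conversational context to train on; only comment-to-comment replies become pairs.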

      I made an angry feminist bot. The funny thing about it is that you won’t get banned for it on most forums, because it manages to be toxic without breaking any rules, and it’s hard to tell that it’s a bot. DM me if you want to try it out.