Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • gerikson@awful.systems · 22 hours ago

    LW discourages LLM content, unless the LLM is AGI:

    https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

    As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

    Never change, LW, never change.

    • fnix@awful.systems · 5 hours ago

      Reminds me of the stories of how Soviet peasants during the rapid industrialization drive under Stalin, who'd never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But these were Soviet peasants! What are the structural forces stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?

    • nightsky@awful.systems · 15 hours ago

      Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).

    • Soyweiser@awful.systems · 21 hours ago (edited)

      (from the comments):

      It felt odd to read that and think "this isn't directed toward me, I could skip if I wanted to". Like I don't know how to articulate the feeling, but it's an odd "woah, text-not-for-humans is going to become more common isn't it". Just feels strange to be left behind.

      Yeah, euh, congrats on realizing something a lot of people have known for a long time now. Not only is there text specifically generated to try and poison LLM results (see the whole 'turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison LLMs' story), but there are also reply bots for SEO Google-spamming. Welcome to the 2010s, LW. The paperclip maximizers are already here.

      The only reason this felt weird to them is because they look at the whole 'coming AGI god' idea with some quasi-religious awe.

    • sc_griffith@awful.systems · 20 hours ago

      they're never going to let it go, are they? it doesn't matter how long they spend receiving zero utility or signs of intelligence from their billion-dollar ouija boards