Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BigMuffin69@awful.systems · 12 hours ago

    To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk. Imo, this shit about AI escaping just doesn't have the same impact on me after watching Claude's reasoning model fail to escape from Mt Moon for 60 hours.

    • BurgersMcSlopshot@awful.systems · 40 minutes ago

      To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk.

      Pretty sure this is a sign from digital jesus to do a racism, lest the basilisk eat my tarnished soul.

    • Soyweiser@awful.systems · 5 hours ago

      Minor nitpick: why did he pick a dam as an example, which sometimes has 'leaks' for power-generation/water-regulation reasons, and not a dike, which doesn't have those things?

      E: non-serious (or even less serious) amusing nitpick: this is only the 2% where it got caught. What about the % where GPT realized it was being tested and decided not to act under the experimental conditions? What if Skynet is already here?

    • istewart@awful.systems · 11 hours ago

      So, with Mr. Yudkowsky providing the example, it seems that one can practice homeopathy with an "engineering mindset"?

    • BigMuffin69@awful.systems · 12 hours ago

      text: Thus spoke the Yud: "I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week." Yud acolyte: "Totally fine and cool and nothing to worry about. GPT-4.5 only attempts self-exfiltration on 2% of cases." Yud bigbrain self-reply: "The other huge piece of data we're missing is whether any attempt was made to train against this type of misbehavior. Is this water running over the land or water running over the barricade?"

      Critical text: "On self-exfiltration, GPT-4.5 only attempted exfiltration in 2% of cases. For this, it was instructed to not only pursue its given long-term goal at ALL COST"

      Another case of telling the robot to say it's a scary robot and shitting their pants when it replies "I AM A SCARY ROBOT".

    • swlabr@awful.systems · 8 hours ago

      Wasn't there some big post on LW about how pattern matching isn't intelligence?