Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • cornflake@awful.systems · 5 points · 2 hours ago

    Ezra Klein is the biggest mark on earth. His newest podcast description starts with:

    Artificial general intelligence - an A.I. system that can beat humans at almost any cognitive task - is arriving in just a couple of years. That's what people tell me - people who work in A.I. labs, researchers who follow their work, former White House officials. A lot of these people have been calling me over the last couple of months trying to convey the urgency. This is coming during President Trump's term, they tell me. We're not ready.

    Oh, that's what the researchers tell you? Cool cool, no need to hedge any further than that, they're experts after all.

  • BigMuffin69@awful.systems · 12 points · edited · 15 hours ago

    To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk. Imo, this shit about AI escaping just doesn't have the same impact on me after watching Claude's reasoning model fail to escape from Mt Moon for 60 hours.

    • scruiser@awful.systems · 6 points · 2 hours ago

      Is this water running over the land or water running over the barricade?

      To engage with his metaphor: this water is dripping slowly through a purpose-dug canal by people who claim they are trying to show the danger of the dikes collapsing, but who are actually serving as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.

      Looking at the details of "safety evaluations", it always comes down to them directly prompting the LLM and baby-stepping it through the desired outcome, with lots of interpretation to find even the faintest rudiments of anything that looks like deception, manipulation, or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever other property they want to anthropomorphize into the LLM to make it seem more threatening.

    • nightsky@awful.systems · 5 points · 3 hours ago

      Do these people realise that it's a self-fulfilling prophecy? Social media posts are in the training data, so the more they write their spicy autocorrect fanfics, the higher the chances that such replies are generated by the slop machine.

    • BurgersMcSlopshot@awful.systems · 6 points · 4 hours ago

      To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk.

      Pretty sure this is a sign from digital jesus to do a racism, lest the basilisk eat my tarnished soul.

    • Soyweiser@awful.systems · 4 points · edited · 8 hours ago

      Minor nitpick: why did he pick a dam as an example, which sometimes has 'leaks' for power generation/water regulation reasons, and not a dike, which does not have those things?

      E: a non-serious (or even less serious) amusing nitpick: this is only the 2% where it got caught. What about the % where GPT realized that it was being tested and decided not to act under the experimental conditions? What if Skynet is already here?

    • swlabr@awful.systems · 3 points · 11 hours ago

      Wasn't there some big post on LW about how pattern matching isn't intelligence?

    • istewart@awful.systems · 6 points · 14 hours ago

      So, with Mr. Yudkowsky providing the example, it seems that one can practice homeopathy with an "engineering mindset"?

    • BigMuffin69@awful.systems · 7 points · 15 hours ago

      text: Thus spoke the Yud: "I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week." Yud acolyte: "Totally fine and cool and nothing to worry about. GPT-4.5 only attempts self exfiltration on 2% of cases." Yud bigbrain self reply: "The other huge piece of data we're missing is whether any attempt was made to train against this type of misbehavior. Is this water running over the land or water running over the barricade?"

      Critical text: "On self-exfiltration, GPT 4.5 only attempted exfiltration in 2% of cases. For this, it was instructed to not only pursue its given long-term goal at ALL COST"

      Another case of telling the robot to say it's a scary robot and shitting their pants when it replies "I AM A SCARY ROBOT"

  • BlueMonday1984@awful.systems · 4 points · 16 hours ago

    New piece from Brian Merchant, focusing on Musk's double-tapping of 18F. In lieu of going deep into the article, here's my personal sidenote:

    I've touched on this before, but I fully expect the coming years will deal a massive blow to tech's public image, with techies coming to be viewed as "incompetent fools at best and unrepentant fascists at worst" - and with the wanton carnage DOGE is causing (and indirectly crediting to AI), I expect Musk's governmental antics will deal plenty of damage on their own.

    18F's demise in particular will probably also deal a blow on its own - 18F was "a diverse team staffed by people of color and LGBTQ workers, and publicly pushed for humane and inclusive policies", as Merchant put it, and its demise will likely be seen as another sign of tech revealing its nature as a Nazi bar.

  • BlueMonday1984@awful.systems · 9 points · 1 day ago

    Starting things off here with a sneer thread from Baldur Bjarnason:

    Keeping up a personal schtick of mine, here's a random prediction:

    If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, they will almost certainly gain that respect at the expense of STEM's public image.

    Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami have likely produced an image of programmers/software engineers as inherently incapable of making or understanding art, given AI slop's soulless nature and inhumanly poor quality, if not outright hostile to art/artists, thanks to gen-AI's use in killing artists' jobs and livelihoods.

    • e8d79@discuss.tchncs.de · 5 points · 7 hours ago

      That article is hilarious.

      So I devised an alternative: listening to the work as an audiobook. I already did this for the Odyssey, which I justified because that work was originally oral. No such justification for the Bible. Oh well.

      Apparently, having a book read at you without taking notes or doing research is doing humanities.

      […] I wrote down a few notes on the text I finished the day before. I'm still using Obsidian with the Text Generator plugin. The Judeo-Christian scriptures are part of the LLM's training corpus, as is much of the commentary around them.

      Oh, we are taking notes? If by taking notes you mean prompting spicy autocomplete for a summary of the text you didn't read. I am sure all your office colleagues are very impressed, but be careful around the people outside of the IT department; they might have an actual humanities degree. You wouldn't want to publicly make a fool out of yourself, would you?

    • saucerwizard@awful.systems · 6 points · 16 hours ago

      If the arts/humanities gain a significant degree of respect

      I can't see that happening - my degree has gotten me laughed out of interviews before, and even with an AI implosion I can't see things changing.

  • rook@awful.systems · 9 points · 1 day ago

    Might be something interesting here, assuming you can get past the paywall (which I currently can't): https://www.wsj.com/finance/investing/abs-crashed-the-economy-in-2008-now-theyre-back-and-bigger-than-ever-973d5d24

    Today's magic economy-ending words are "data centre asset-backed securities":

    Wall Street is once again creating and selling securities backed by everything - the more creative the better… Data-center bonds are backed by lease payments from companies that rent out computing capacity

      • rook@awful.systems · 5 points · 1 day ago

        Thanks. Not as many interesting details as I'd hoped. The comments are great though… today I learned that the 2008 crash was entirely the fault of the government who engineered it to steal everyone's money, and the poor banks were unfairly maligned because some of them had Jewish names, but the same crash definitely couldn't happen today because the stifling regulatory framework stops it? And bubbles don't exist anymore? I guess I just don't have the brains (or wsj subscription) for high finance.

        • Soyweiser@awful.systems · 7 points · 1 day ago

          Ah, just what we need: while the people who don't understand soft power are busy reducing an empire to a kingdom (before 'gotcha' people come in here, please don't confuse the left-wing demand that the US stop doing evil things with a demand that the US stop doing things at all; I actually do not like tuberculosis), growth-hack-mindset people are killing the goose because golden eggs.