Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Quick update on the CoreWeave affair: turns out they're facing technical defaults on their Blackstone loans, which is gonna hurt their IPO a fair bit.
some video-shaped AI slop mysteriously appears in the place where marketing for Ark: Survival Evolved's upcoming Aquatica DLC would otherwise be at GDC, to wide community backlash. Nathan Grayson reports on aftermath.site about how everyone who could be responsible for this decision is pointing fingers away from themselves
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.
Never change LW, never change.
Reminds me of the stories of how Soviet peasants during the rapid industrialization drive under Stalin, who'd never before seen any machinery in their lives, would get emotional with and try to coax faulty machines as if they were their farm animals. But those were Soviet peasants! What are the structural forces stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?
AGI
Instructions unclear, LLMs now posting Texas A&M propaganda.
Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).
Locker Weenies
(from the comments).
It felt odd to read that and think "this isn't directed toward me, I could skip it if I wanted to". Like, I don't know how to articulate the feeling, but it's an odd "woah, text-not-for-humans is going to become more common, isn't it". Just feels strange to be left behind.
Yeah, euh, congrats on realizing something that a lot of people have already known for a long time now. Not only is there text specifically generated to try and poison LLM results (see the whole "turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison LLMs" story), but there are also reply bots for SEO Google spamming. Welcome to the 2010s, LW. The paperclip maximizers are already here.
The only reason this felt weird to them is that they look at the whole "coming AGI god" idea with some quasi-religious awe.
they're never going to let it go, are they? it doesn't matter how long they spend receiving zero utility or signs of intelligence from their billion dollar ouija boards
Don't think they can. Looking at the history of AI, if it fails there will be another AI winter, and considering the bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all that time is wasted. Don't think that is going to be psychologically healthy as a realization; it will be like the people who suddenly realize QAnon is a lie and that they alienated everybody in their lives because they got tricked.
Looking at the history of AI, if it fails there will be another AI winter, and considering the bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all that time is wasted.
Adding insult to injury, they'd likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse™
As for the upcoming AI winter, I'm predicting we're gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public's gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.
Taking a shot in the dark, I suspect we'll see active attempts to drop the banhammer on AI as well, though admittedly my only reason is a random BlueSky post openly calling for LLMs to be banned.
Stumbled across some AI criti-hype in the wild on BlueSky:
The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its "deceptions" when it's actually learning to avoid tokens that paint it as deceptive.
On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - another sign of AI's impending death as a concept (a sign I've touched on before without realising), if you want my take:
The article already starts great with that picture, labeled:
An artist's illustration of a deceptive AI.
what
While you all laugh at ChatGPT slop leaving "as a language model…" cruft everywhere, from Twitter political bots to published Springer textbooks, over there in lala land "AIs" are rewriting their reward functions and hacking the matrix and spontaneously emerging mind models of Diplomacy players, and are generally a week or so away from becoming the irresistible superintelligent hypno goddess:
https://www.reddit.com/r/196/comments/1jixljo/comment/mjlexau/
This deserves its own thread, pettily picking apart niche posts is exactly the kind of dopamine source we crave
Stackslobber posts evidence that transhumanism is a literal cult, HN crowd is not having it
I used to think transhumanism was very cool because escaping the misery of physical existence would be great. for one thing, I'm trans, and my experience with my body as such has always been that it is my torturer and I am its victim. transhumanism to my understanding promised the liberation of hundreds of millions from actual oppression.
then I found out there was literally no reason to expect mind uploading or any variation thereof to be possible. and when you think about what else transhumanism is, there's nothing to get excited about. these people don't have any ideas or cogent analysis, just a powerful desire to evade limitations. it's inevitable that to the extent they cohere they're a cult: they're a variety of sovereign citizen
I haven't spent a lot of time sneering at transhumanism, but it always sounded like thinly veiled ableism to me.
Only as a subset of the broader problem. What if, instead of creating societies in which everyone can live and prosper, we created people who can live and prosper in the late capitalist hell we've already created! And what if we embraced the obvious feedback loop that results and called the trillions of disposable wireheaded drones that we've created a utopia because of how high they'll be able to push various meaningless numbers!
my experience with my body as such has always been that it is my torturer and I am its victim.
(side note, gender affirming care resolved this. in my case HRT didn't really help by itself, but facial feminization surgery immediately cured my dysphoria. also for some reason it cured my lower back pain)
(of course it wasn't covered in any way, which represents exactly the sort of hostility to bodily agency transhumanists would prioritize over ten foot long electric current sensing dongs or whatever, if they were serious thinkers)
Wanting to escape the fact that we are beings of the flesh seems to be behind so much of the rationalist-reactionary impulse - a desire to one-up our mortal shells by eugenics, weird diets, "brain uploading", and something like vampirism with the Bryan Johnson guy. It's wonderful you found a way to embrace and express yourself instead! Yes, in a healthier relationship with our bodies - which is what we are - such changes would be considered part of general healthcare. From here in Europe, at least, the hostility sometimes appears particularly extreme in the US, maybe a heritage of puritanical norms.
also cryonics and "enhanced games" as non-FDA testing ground. i've never seen anyone in more potent denial of their own mortality than Peter Thiel. behind the bastards four-parter on him dissects this
(Geordi LaForge holding up a hand in a "stop" gesture) transhumanism
(Geordi LaForge pointing as if to say "now there's an idea") trans humanism
Here's the link, so you can read Stack's teardown without giving orange site traffic:
https://ewanmorrison.substack.com/p/the-tranhumanist-cult-test
Note I am not endorsing their writing - in fact I believe the vehemence of the reaction on HN is due to the author being seen as one of them.
I read through a couple of his fiction pieces and I think we can safely disregard him. Whatever insights he may have into technology and authoritarianism appear to be pretty badly corrupted by a predictable strain of antiwokism. It's not offensive in anything I read - he's not out here whining about not being allowed to use slurs - but he seems sufficiently invested in how authoritarians might use the concerns of marginalized people as a cudgel that he completely misses how in reality marginalized people are more useful to authoritarian structures as a target than a weapon.
AI slop in Springer books:
Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity. Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7
From page 25: "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice…"
None of this book can be considered trustworthy.
https://mastodon.social/@JMarkOckerbloom/114217609254949527
Originally noted here: https://hci.social/@peterpur/114216631051719911
I should add that I have a book published with Springer. So, yeah, my work is being directly devalued here. Fun fun fun.
On the other hand, your book gains value by being published in 2021, i.e. before ChatGPT. Is there already a nice term for "this was published before the slop flood gates opened"? There should be.
(I was recently looking for a cookbook, and intentionally avoided books published in the last few years because of this. I figured that the genre is too easy a target for AI slop. But that not even Springer is safe anymore is indeed very disappointing.)
Can we make "low-background media" a thing?
Good one!
Is there already a nice term for "this was published before the slop flood gates opened"? There should be.
"Pre-slopnami" works well enough, I feel.
EDIT: On an unrelated note, I suspect hand-writing your original manuscript (or using a typewriter) will also help increase the value, simply through strongly suggesting ChatGPT was not involved with making it.
hand-writing your original manuscript
The revenge of That One Teacher who always rode you for having terrible handwriting.
Can't wait until someone tries to Samizdat their AI slop to get around this kind of test.
AI bros are exceedingly lazy fucks by nature, so this kind of shit should be pretty rare. Combine that with their near-complete lack of taste, and the risk that such an attempt succeeds drops pretty low.
(Sidenote: Didn't know about Samizdat until now, thanks for the new rabbit hole to go down)
There aren't really many other options besides Springer and self-publishing for a book like that, right? I've gotten some field-specific article compilations from CRC Press, but I guess that's just an imprint of Routledge.
What happened was that I had a handful of articles that I couldn't find an "official" home for because they were heavy on the kind of pedagogical writing that journals don't like. Then an acquisitions editor at Springer e-mailed me to ask if I'd do a monograph for them about my research area. (I think they have a big list of who won grants for what and just ask everybody.) I suggested turning my existing articles into textbook chapters, and they agreed. The book is revised versions of the items I had already put on the arXiv, plus some new material I wrote because it was lockdown season and I had nothing else to do. Springer was, I think, the most likely publisher for a niche monograph like that. One of the smaller university presses might also have gone for it.
i have coauthorship on a book released by Wiley - they definitely feed all of their articles to llms, and it's a matter of time until llm output gets there too
The whole CoreWeave affair (and the AI business in general) increasingly reminds me of this potion shop, only with literally everyone playing the role of the idiot gnomes.
Dem pundits go on media tour to hawk their latest rehash of supply-side econ - and decide to break bread with infamous anti-woke "ex" race realist Richard Hanania
A quick sample of people rushing to defend this:
- Some guy with the same last name as a former Google CEO who keeps spamming the same article about IQ
- Our good friend Tracy
I almost forgot how exhausting TW was.
At least he is using LLMs to blast away what is left of his critical thinking skills. If the effect is real he will be a vegetable in a week.
I gave him a long enough chance to prove his views had changed to go read Hanania's actual feed. Pinned tweet is bitching about liberals canceling people. Just a couple days ago he was on a podcast bitching about trans people and talking about how it's great to be a young broke (asian) woman because you can be exploited by rich old (white) men.
So yeah, he's totally not a piece of shit anymore. Don't even worry about it.
tracing going all in on "left wing people aren't real, they can't hurt you"
Dems do literally anything to materially oppose fascism, even in the most minuscule way challenge (impossible)
Are they all truly "Dem pundits"? Or just assumed to be/claim to be?
The pundits in question are Derek Thompson and Ezra Klein. More accurately speaking they are liberals.
Ah okay. I thought one might be Yglesias and who knows where his politics truly are.
Don't worry, they are buddies
When Netflix inevitably makes a true-crime Ziz movie, they should give her a '69 Dodge Charger and call it The Dukes of InfoHazard
Nah, to keep with the times it should be a matte black Tesla Model 3 with the Sith Empire insignia on top and a horn that plays the Imperial March.
Your idea is better. This perfectly captures the mood.
The only throwback thing I insist on is window entry. For historical reasons.
Considering Teslaās well-documented issues with functional door handles, this may be more accurate than you think
Why make a true crime movie when you can do a heavily editorialized ādocumentaryā for a fraction of the price.
practical downside: too easily seen as a mockumentary
Redis guy AntiRez issues a heartfelt plea for the current AI funders to not crash and burn when the LLM hype machine implodes but to keep going to create AGI:
Neither HN nor lobste.rs is very impressed
Ultra-rare footage of orange site having a good take for once:
Top-notch sneer from lobsters' top comment, as well (as of this writing):
You want my opinion, I expect AntiRez's pleas to fall on deaf ears. The AI funders are only getting funded due to LLM hype - when that dies, investors' reason to throw money at them dies as well.
lol @ the implication that chatbots will definitely invent magitech that will solve climate change, just burn another billion dollars in energy and silicon, please guys i don't want to go to prison for fraud and share a cell with sbf and diddy
who is this guy anyway, is he in openai/similar inner circle or is that just some random rationalist fanboy?
who is this guy anyway, is he in openai/similar inner circle or is that just some random rationalist fanboy?
His grounds for notability are that he's a dev who back in the day made a useful thing that went on to become incredibly widely used. Like, if he'd named redis salvatoredis instead he might have been a household name among swengs.
Also burning only a billion more would be a steal given some of the numbers thrown around.
Also he wrote borderline anti-woke stuff back when doing that could still appear edgy and icky.
Yeah, find it odd how people don't seem to get that this llm stuff makes AGI less likely, not more. We put all the money, compute, and data into it, and this branch does not lead to AGI.
but but this branch brings more hype money, it's earn to give, you know nothing about effective altruism /s
I have been "achtually it was a stately home"-d, and stand corrected, my bad people. I will start to drill oil to give.
Just think of how much more profit you could make to address environmental issues by forgoing basic safety and ecological protections. Who needs blowout preventers anyway?
Remember, cleaning up oil spills is good for the GDP.
They don't call it the Clean Domestic Product