Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Slopocalypse Now h/t The Syllabus
For context, Kunzru wrote the novel Red Pill a few years back.
Candace is a pioneer. Following her, we are exiting the age of the public sphere and entering a time of magic, when signs and symbols have the power to reshape reality. Consider the “Medbed,” a staple of QAnon-adjacent right-wing conspiracy culture. Medbeds are one of the many things about which “they” are not telling “you”; they can supposedly regenerate limbs and reverse aging. How evil would you have to be to deny such a boon to We the People? In late September, Trump posted an AI-generated video of himself promoting the scam, promising that every faithful supporter would be given a card that would give them access to this magic technology. Trump posted it because it made him look good, a leader healing the sick, but also because it is a way to hyperstition a version of this fiction into reality. No one will really be cured, of course, because the Medbed doesn’t exist. Except now it is someone’s job to make sure it does: The president is a powerful magician who never tells a lie, so some loyal redhats will have to be given cards that let them lie down in some kind of cargo-cult version of a Medbed. Perhaps it will be a job for TV’s own Dr. Oz, who has crossed to the other side of the screen as the administrator of the Centers for Medicare & Medicaid Services.
God we live in the dumbest possible world.
This is not art as critique. Critique is just sincere-posting, dutifully pointing out yet again that the Medbed isn’t “real.” Art can mess with our masters in ways we don’t yet fully understand.
I hope so. Jesse Welles getting on Colbert and playing Red shows some people are moving in that direction, but it’s also definitely sincere-posting, and ultimately that kind of performance just doesn’t pay the bills like it would if he went Truck Jeans Beer. Eddington seems to have gotten under some people’s skins in an interesting way… And I’m skeptical that /any/ novel would have much impact or reach outside the NYT class, what with having to actually read something.
“you should be able to provide an LLM as a job reference”

source https://x.com/ID_AA_Carmack/status/1998753499002048589
Sure John, let me know when you’ve got that set up. Something that retains my entire search/chat history, caches the responses as well, and pulls all that into the context window when it’s time to generate a job referral. Maybe you’ll be able to do something shotgunning together remaindered hardware this time next year? I’ll be waiting.
I’m legitimately disappointed in John Carmack here. He should be a good enough programmer to understand the limitations, but I guess his business career has driven him in a different direction.
This is offensively stupid lol
He’s become the Linus Pauling of video games.
definitely taking Vitamin L
Disney invests $1B into OpenAI, allowing access to all Disney characters
https://thewaltdisneycompany.com/disney-openai-sora-agreement/
Remember the flood of offensive Pixaresque slop that happened in 2023? We’re gonna see something similar thanks to this deal for sure.
Of course Disney loves its cease-and-desists, such as one to character.ai in October and one today to Google: https://variety.com/2025/digital/news/disney-google-ai-copyright-infringement-cease-and-desist-letter-1236606429/
Is this actually because of brand protection or just shareholder value? Racist, sexist, and all around abhorrent content is now easily generated with your favorite Disney owned characters just as long as you do it on the approved platform.
Found a new and lengthy sneer on Bluesky, mocking the promptfondlers’ gullibility.
J. Mijin Cha writes:
My colleague reviewed a paper for the journal Climate and discovered it has been written by AI (citations that didn’t exist). Not only did the journal keep the paper, they asked her to re-review it. We are so cooked.
Climate is an MDPI journal. Finland’s journal-ranking service downgraded Climate to zero status.
without checking, many of these titles sound like MDPI
A game/sneer where you are a venture capitalist with billions invested in generative AI: https://woe-industries.itch.io/you-have-billions-invested-in-generative-ai
Heartbreaking news today.
In a major setback for right-to-repair, iFixit has jumped on the slop bandwagon, introducing an “AI repair helper” to their website that steals “the knowledge base of over 20 years of repair experts” (to quote their dogshit announcement on YouTube) and uses it to hallucinate “repair guides” and “step-by-step instructions” for its users.
A particularly pristine and high-value commons about to be pissed all over with slop.
OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
I just wanted to point out this tidbit:
Altman said OpenAI would be pushing back work on other initiatives, such as advertising, AI agents for health and shopping, and a personal assistant called Pulse.
Apparently a fortunate side effect of google supposedly closing the gap is that it’s a great opportunity to give up on agents without looking like complete clowns. And also make Pulse even more vapory.
Is Pulse the Jony Ive device thing? I had half a suspicion that will never come to market anyway.
Boom Aerospace, not content with attempting the next Concorde, have gotten a little sidetracked. Funnily enough, they didn’t show any footage of the engine actually working. Surely it’s whisper quiet and won’t be a massive pain to live next to.
I would simply not name my airplane company “Boom”.
I regret I have but one upvote to give to this
The best I ever saw was a reply to a news story to the effect of, “If I were ever invited swimming in the Murderkill River, I would just not go.”
(This might be the original. Then again, it might not.)
okay, that’s the missing piece (well, maybe not the last one): 1GW from GE, 1GW from proenergy, 1.2GW from this fuckass startup that nobody has heard of, and either a missing 1.2GW of gas turbines or a 1.2GW grid connection; that gets crusoe almost 4.5GW of power
also you don’t need supersonic jet engines for that; these will actually be worse for stationary power generation. they do it because you can haul them on a truck
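The arithmetic in that tally does add up, for what it’s worth; a quick sketch (all values in GW, and the breakdown labels are my reading of the comment, not confirmed figures):

```python
# Back-of-envelope check of the capacity figures in the comment above.
ge = 1.0           # gas turbines from GE
proenergy = 1.0    # turbines from ProEnergy
startup = 1.2      # units from the jet-engine startup
unaccounted = 1.2  # either missing gas turbines or a grid connection
total = ge + proenergy + startup + unaccounted
print(f"{total:.1f} GW")  # 4.4 GW, i.e. "almost 4.5GW"
```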
Meanwhile China is adding power capacity at a wartime pace—coal, gas, nuclear, everything—while America struggles to get a single transmission line permitted.
thank Jack Welch for deindustrialization then
we built something no one else has built this century: a brand-new large engine core optimized for continuous, high‑temperature operation.
Lockheed Martin: am i a joke to you? (also, lots of manufacturers for proper CCGT turbines do just that)
read: “our product development is a black hole of cost, and our big investors are breathing down our necks to grab this cash while it’s there”
This is such a pivot, from “you can soon fly between capitals in half the time” to “this screaming jet engine will soon be disturbing your sleep 24/7”
It never had a market. Wait 3h at the airport just to get on a 3-5x more expensive flight so that you fly 3h instead of 5h, make it make sense. For the people who don’t wait at airports anyway, renting demilitarised MiG-29s would make more sense
I assume they’re thinking about transcontinental flights most of all. Dubai->LA or whatever.
Honestly you could probably pitch that to the Musk/Thiel set pretty easily by playing up how masculine fighter pilots are and disconnecting but not removing the rear flight controls. Let them push buttons and feel cool.
If anyone can figure out how to get ex-military F-15s for joyrides, it’s them. I guess Thiel would be fine just riding as a passenger, but Musk wouldn’t, and learning to fly one takes a fuckton of effort, something he’s allergic to. Either one of them having a supersonic chauffeur who has to go everywhere their jet does is the exact kind of nonsense i’d expect in this timeline, and i don’t think either of them is actually healthy enough to become a pilot in the first place
Larry Ellison has or had a working MiG.
Old Man Stallman comes out swinging against ChatGPT specifically, adding it to the long, long list of stuff he doesn’t like. For some reason HN is mad at this, as if RMS saying slop is good would actually convince anyone normal to start using it
The comments are filled with people thinking they are smart by questioning what human intelligence is and how we can trust ourselves. The kool-aid is quite strong. I am no Stallman lover and have bumped into him more than once locally, but I do think the fella who started many of our common computing tools and was part of the MIT AI lab for a bit may know a thing or two. Or maybe I have been eating my toe too much.
The orange-site whippersnappers don’t realize how old artificial neurons are. In terms of theory, the Hebbian principle was documented in 1949, and the artificial neuron was proposed in 1943 in an article with the delightfully-dated name, “A logical calculus of the ideas immanent in nervous activity”. In 1957, the Mark I Perceptron was introduced; in modern parlance, it was a configurable image classifier with a single layer of hundreds-to-thousands of neurons and a square grid of dozens-to-hundreds of pixels. For comparison, MIT’s AI lab was founded in 1970. RMS would have read about artificial neurons as part of his classwork and research, although they weren’t part of MIT’s AI programme.
Are there even any young people we could plausibly call whippersnappers on the orange site anymore? It feels like they’re all well into their 30s/40s at this point.
I miss n-gate but that was what, 8 years ago.
But in fairness to actual whippersnappers, and to your point, from the '56 Dartmouth Workshop forward the field privileged Symbolic AI over anything data-driven up through the first AI winter (until roughly the 90s, when the balance shifted), and this really warped the discipline’s understanding of its own influences and history. If 70s RMS was taught anything about neural nets, their relevance and importance would probably have been minimized in comparison to expert systems in lisp or whatever Minsky was up to.
In college I took an AI class and it was just a lisp class. I was disappointed. Also the instructor often had white foam in the corners of his mouth, so I dropped it.
I miss n-gate but that was what, 8 years ago.
Only four (August 2021).
Questioning the nature of human intelligence is step 1 in promptfondler whataboutism.
the fifth episode of odium symposium, “4chan: the french connection” is now up. the first roughly half of the episode is a dive into sartre’s theory of antisemitism. then we apply his theory to the style guide of a nazi news site and the life of its founder, andrew anglin
favorite one so far! It’s like graduate-level 1-900 Hotdog
regarding my take in the previous stubsack, it does seem like crusoe intends to use these gas turbines as backup, and as of 31.07.2025 they had five turbines installed (who knows if connected), with an obvious place for five more and some pieces of them (smokestacks mostly) in place. it does make sense that as of the october announcement, they had the first tranche of 10 installed or at least delivered. there’s no obvious prepared place where they intend to put the next 19 of them, and that’s just the stuff from GE, with 21 more coming from proenergy (maybe that’s for a different site?). that said, it’s texas with the famously reliable ercot, which means that on top of using these for shortages, they might be paying market rates for electricity, which means that even with power available, they might turn the turbines on when electricity gets ridiculously expensive. i’m sure that dispatchers will love some random fuckass telling them “hey, we’re disconnecting 250MW load in 15 minutes” when the grid is already unstable from being overloaded
After finding out about her here, I’ve been watching a lot of Angela Collier videos lately, here’s the most recent one, which talks about our life extending friends.
She also said she basically wants to focus less on the sort of ‘callout’ content which does well on yt and focus more on actual physics stuff. Which is great, and it’s also good she realized how slippery a slope that sort of content is for your channel.
(I mentioned before how sad it is to see ‘angry gamer culture war’ channels be stuck in that sort of content, as when they do non-rage shit, nobody watches them. (I mean sad for them in an ‘if i was them’ way btw, dont get me wrong, fuckem for choosing that path (and fuck the system for that they are now financially stuck in it, and that it made this an available path anyway (while making it hard for lgbt people to make a channel about their experiences)), so many people hurt/radicalized for a few clicks and ad money))
she’s great
The Great Leader himself, on how he avoids going insane during the ongoing End of the World because, among other things, that’s not what an intelligent character would do in a story, but you might not be capable of that.
Saying that at age 46 you are proud of not reenacting tropes from fantasy novels you read when you were 9 is something special. “He’s the greatest single mind since L. Ron Hubbard.”
His OkCupid profile also showed a weak grasp on the difference between fantasy and reality.
Do we know when he transitioned from Margaret Weis and Lawrence Watt-Evans to filthy Japanese cartoons?
I forgot to mention it last week, but this is Scott Adams shit. The stuff which made him declare that Trump would win in a landslide in 2016 due to movie rules. Iirc he also claimed he was right on that, despite Trump not winning in a landslide, the sort of goalpost-moving bs which he judges others harshly for (despite it not even applying in those other situations)
So up next, Yud will claim some pro-AI people want him dead, and after that he’ll try to convince people he can bring them to orgasm by words alone. I mean, those are the ‘adamslike’ genre tropes now.
tl;dr i don’t actually believe the world is going to end but more importantly i’m Ender Wiggin
❌: Ender Wiggin
✅: End Wiggin’
It’s very meta for Yud to write a story all about how the story isn’t all about himself.
One part in particular pissed me off for being blatantly the opposite of reality
and remembering that it’s not about me.
And so similarly I did not make a great show of regret about having spent my teenage years trying to accelerate the development of self-improving AI.
Eliezer literally has multiple Sequences posts about his foolish youth where he nearly destroyed the world trying to jump straight to inventing AI instead of figuring out “AI Friendliness” first!
I did not neglect to conduct a review of what I did wrong and update my policies; you know some of those updates as the Sequences.
Nah, you learned nothing from what you did wrong, and your Sequences posts were the very sort of self-aggrandizing bullshit you’re mocking here.
Should I promote it to the center of my narrative in order to make the whole thing be about my dramatic regretful feelings? Nah. I had AGI concerns to work on instead.
Eliezer’s “AGI concerns to work on” was making a plan for him, personally, to lead a small team, which would solve meta-ethics and figure out how to implement these meta-ethics in a perfectly reliable way in an AI that didn’t exist yet (that a theoretical approach didn’t exist for yet, that an inkling of how to make traction on a theoretical approach for didn’t exist yet). The very plan Eliezer came up with was self-aggrandizing bullshit that made everything about Eliezer.
The first and oldest reason I stay sane is that I am an author, and above tropes.
Nobody is above tropes. Tropes are just patterns you see in narratives. Everything you can describe is a trope. To say you are above tropes means you don’t live and exist.
Going mad in the face of the oncoming end of the world is a trope.
Not going mad as the world ends is also a trope, you fuck!
This sense – which I might call, genre-savviness about the genre of real life – is historically where I began; it is where I began, somewhere around age nine, to choose not to become the boringly obvious dramatic version of Eliezer Yudkowsky that a cliche author would instantly pattern-complete about a literary character facing my experiences.
We now have a canon mental age for Yud of (drumroll) nine.
Just decide to be sane
That isn’t how it works, idiot. You can’t “decide to be sane”, that’s like having a private language.
Anyway, just to make the subtext of my other comments into text. Acting like you are a character in a story is a dissociative delusion and counter to reality. It is definitively not sane. Insane, if you will.
To say you are above tropes means you don’t live and exist.
To say you are above tropes is actually a trope
Followup:
Look, the world is fucked. All kinds of paradigms we’ve been taught have been broken left and right. The world has ended many times over in this regard. In place of anything interesting or helpful to address this, Yud’s encoded a giant turd into a blog post. How to stay sane? Just stay sane, bro. Easy to say if the only thing threatening your worldview is a made-up robodemon that will never exist.
Here’s Yud’s actually-quite-easy-to-understand suggestions:
- detach from reality by pretending you are a character in a story as a coping mechanism.
- assume no personal responsibility or agency.
- don’t go insane, i.e. make sure you try and fulfil society’s expectations of what sanity is.
All of these are terrible. In general, you want to stay grounded in reality, be aware of the agency you have in the world, and don’t feel pressured to performatively participate in society, especially if that means doing arbitrary rituals to prove that you are “sane”.
Here are my thoughts on “how to stay sane” and “how to cope”:
It’s entirely reasonable to crash out. I don’t want anyone to go insane, but fucking look at all this shit. Datacenters are boiling the oceans. Liberalism is starting its endgame into fascism. All the fucking genocides! Dissociating is acceptable and expected as an emotional response. All of this has been happening in (modern) human history to a degree where crashing out has been reasonable. Yet, many people have been able to “stay sane” in the face of this. If you see someone who appears to be sane, either they’re fucked in the head, or they have some perspective or have built up some level of resilience. Whether or not those things can be helpful to someone else is not deterministic. If you are someone who has “stayed sane”, please remember to show some empathy and some awareness that it’s fine if someone is miserable, because again, everything is fucked.
Putting the above together, I accept basically any reaction to the state of the world. It’s reasonable to go either way, and you shouldn’t feel bad either way. “Sanity” has different meanings depending on where you look. I think there’s a common, unspoken definition that basically boils down to “a sane person is someone who can productively participate in society.” This is not a standard you always need to hold yourself to. I think it’s helpful to introspect and, uh, “extrospect”, here. Like, figure out what you think it means to be sane, what you want it to mean, and what you want. And bounce these ideas off of someone else, because that usually helps.
I think there is another common definition of sanity that might just be “mentally healthy”. To that end, things that have helped me, aside from therapy, that aren’t particularly insightful or unique:
- Talking to friends
- Finding places to talk about the world going to shit.
- Participating in community, online or irl.
- Basically just finding spaces where stupid shit gets dunked on.
- Leftist meme pages
I mean, is that so fucking hard to say?
they really thonk that people work just like chatbots, don’t they
It’s the most obvious explanation for their behaviour I can think of.
Joke’s on him, Know-Nothing Know-It-All is also a trope.
Screaming at the void towards Chuunibyou (wiki) Eliezer: YOU ARE NOT A NOVEL CHARACTER, THINKING OF WHAT BENEFITS THE NOVELIST vs THE CHARACTER HAS NO BEARING ON REAL LIFE.
Sorry for yelling.
Minor notes:
But <Employee> thinks I should say it, so I will say it. […] <Employee> asked me to speak them anyways, so I will.
It’s quite petty of Yud to be so passive-aggressive towards the employee who insisted he at least try to discuss coping, name-dropping him not once but twice (although that is also likely just poor editing).
“How are you coping with the end of the world?” […Blah…Blah…Spiel about going mad tropes…]
Yud, when journalists ask you “How are you coping?”, they don’t expect you to be “going mad facing the apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, to give a message of hope or even of desperation. They are trying to engage with you as a real human being, not as a novel character.
Alternatively it’s also a question to gauge how full of shit you may be. (By gauging how emotionally invested you are)
The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.
Emotional turmoil, and how characters cope or fail to cope, makes excellent literature! That all you can think of is “going mad” reflects only your poor imagination as both a writer and a reader.
I predict, because to them I am the subject of the story and it has not occurred to them that there’s a whole planet out there too to be the story-subject.
This is only true if they actually accept the premise of what you are trying to sell them.
[…] I was rolling my eyes about how they’d now found a new way of being the story’s subject.
That is deeply ironic, coming from someone who makes choices based on being the main character of a novel.
Besides being a thing I can just decide, my decision to stay sane is also something that I implement by not writing an expectation of future insanity into my internal script / pseudo-predictive sort-of-world-model that instead connects to motor output.
If you are truly doing this, I would say that means you are expecting insanity wayyyyy too much. (also, psychobabble)
[…Too painful to actually quote psychobabble about getting out of bed in the morning…]
In which Yud goes into in-depth, self-aggrandizing, nonsensical detail about a very mundane trick for getting out of bed in the morning.
The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.
When I read HPMOR, which was years ago, before I knew who tf Yud was and thought Harry was intentionally written as a deeply flawed character and not a fucking self-insert, my favourite part was Hermione’s death. Harry then goes into a grief that he is unable to cope with, dissociating to such an insane degree that he stops viewing most other people as thinking and acting individuals. He quite literally goes insane as his world - his friend and his illusion of being the smartest and always in control of the situation - ends.
Of course, in hindsight I know this was just me inventing a much better character and story, and Yud is full of shit, but I find it funny that he inadvertently wrote a character behaving insanely and probably thought he’d written a turborational guy completely in control of his own feelings.
I feel like this is a really common experience with both HPMoR and HP itself, and explains a large part of the positive reputation they enjoy(ed).
They expect you to answer how you are managing your emotions and your stress, or barring that, to give a message of hope or even of desperation. They are trying to engage with you as a real human being, not as a novel character.
does EY fail to get that the interview isn’t for him, but for the audience? if he wants to sway anyone, then he’d need to adjust what he talks about and how, otherwise it just turns into a circlejerk
Yud, when journalists ask you “How are you coping?”, they don’t expect you to be “going mad facing the apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, to give a message of hope or even of desperation. They are trying to engage with you as a real human being, not as a novel character.
I think the way he reads the question is telling on himself. He knows he is sort of doing a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and continuing to follow the lib-centrist deep down inside himself and rejecting violence or even direct action against the AI companies that are hurling us towards an apocalypse). He knows a character from one of his stories would have a much cooler response, but it might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations that include failing to empathize with the journalist and understand the question.
Yud seems to have the same conception of insanity that Lovecraft did, where you learn too much and end up gibbering in a heap on the floor and needing to be fed through a tube in an asylum or whatever. Even beyond the absurdity of pretending that your authorial intent has some kind of ability to manifest reality as long as you don’t let yourself be the subject (this is what no postmodernism does to a person), the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us. I guess prophets of doom aren’t really known for being stable or immune to narcissistic flights of fancy.
Having a SAN stat act like an INT (IQ) stat is very on brand for rationalists (except ofc the INT stat is immutable duh)
the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us
I think he actually is failing to handle the stress he has inflicted on himself, and that’s why his latest few lesswrong posts had really stilted, poor parables about chess and about alien robots visiting earth that were much worse than the classic Sequences parables. And why he has basically given up trying to think of anything new and instead keeps playing the greatest lesswrong hits on repeat, as if that would convince anyone that isn’t already convinced.
Hitting the konbini in the wee hours for strong zeros, lolicon mags and pawahara ice cream
stay with meeeeeeeee~~~🎶
~~In the dead of night, knocking on the door~~
“How do you keep yourself from going insane?”
“I tell myself I’m a character from a book who comes to life and is also a robot!” (Hubert Farnsworth giggle)
“I like to dissociate completely! Wait, what was the question?”
Also discussed in last week’s Stubsack.