Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
https://scottaaronson.blog/?p=9183
Quantum scoot is quantum spooked 😱 after GPT-5 manages to solve a subproblem for him (after multiple attempts), and thanks the powers that be for his tenure!
… even though GPT-5 probably generates the answer via websearch

After seeing this, I reminded myself that I’ve seen this type of thing happen before. Over the past half year, so many programmers enthusiastically embraced vibe coding after seeing one or two impressive results when trying it out for themselves. We all know how that is going right now. Baldur Bjarnason had some great essays (1, 2) about the dangers of relying on self-experimentation when judging something, especially if you’re already predisposed to believing it. It’s like a mark believing in a psychic after the psychic throws out a couple dozen vague statements and the last one happens to match something meaningful, once the mark interprets it for him.
Edit: Accidentally hit reply too early.
You’d think he would maybe, idk, search around to see if this was a known formula before making such a bombastic statement…
Oh yeah, he wrote an update saying that the LLM is still great, even if the result is already known, because it saves him time. We have come full circle back to the exact same value proposition as the vibe coders.
Maybe next he can get an LLM to automate his apologetics for genocide.
He could call it Vibonism.
New Ed Zitron to start the week off: The Case Against Generative AI
Been tangentially involved in a discussion about how much LLMs have improved in the past year (I don’t care), but now that same space has a discussion of how annoying the stupid pop-up chat boxes are on websites. Don’t know what the problem is, they’ve gotten so much better in the past year?
I mean that’s the fundamental problem, right? No matter how much better it gets, the things it’s able to do aren’t really anything people need or want. Like, if I’m going to a website looking for information it’s largely because I don’t want to deal with asking somebody for the answer. Even a flawless chatbot that can always provide the information I need - something that is far beyond the state of the art and possibly beyond some fundamental limitation of the LLM structure - wouldn’t actually be preferable to just navigating a smooth and well-structured site.
See also how youtube tutorials have mostly killed (*) text-based tutorials/wikis, despite being just inferior to good wikis/text-based ones. Partly because listening to a person talk is a linear experience while text allows for easy scrolling, but also because most people are just bad at yt tutorials. (shoutout to the one which had annoyingly long random pauses in/between sentences even at 2x speed).
This is not helped by youtube now being a source of revenue while updating a wiki/tutorial often is not, so the incentives are all wrong. A good example of this is the gaming wiki fextralife: see this page on Dragon’s Dogma 2 NPCs. https://dragonsdogma2.wiki.fextralife.com/NPCs (the game has been out for over a year, if the weirdness doesn’t jump out at you). But the big thing for fextralife is their youtube tutorials, and the wiki used to have an autoplaying link to their streams. This isn’t a wiki, it is an advertisement for their youtube and livestreams.

And while this is a big example, the problem persists with smaller youtubers, who suffer from an extreme publish-within-your-niche-or-perish dynamic. They can’t put in the time to update things, because they need to publish a new video (on their niche, since branching out is punished) soon or not pay rent. (For people out there who play videogames and/or watch youtube, this is also why somebody like the spiffing brit long ago went from ‘I exploit games’ to ‘I grind, and if you grind enough in this single-player game you become op’. The content must flow, but eventually you run out of good new ideas. It is also why he tried to push his followers into risky cryptocurrency-related ‘cheats’: follow Elon, and if he posts a word that can be made into a coin, pump and dump it for a half hour.)
*: They still exist but tend to be very bad quality, even worse now people are using genAI to seed/update them.
Check out this epic cope from an Anthropic employee desperately trying to convince himself and others that actually LLMs are getting exponentially better
https://www.julian.ac/blog/2025/09/27/failing-to-understand-the-exponential-again/
Includes screenshots of data where he really really hopes you don’t look at the source, and links to AI 2027.
Links to the METR tasks w/ massive error bars at 50% level lmaou.
Someone in the comments rightly points out that the comparison with covid isn’t apt. With covid, an underlying mechanism caused the exponential growth in its spread.
With LLMs, the exponential trend is being driven by exponentially growing spending and a healthy dose of benchmark targeting, which is why people are calling the top. The money literally doesn’t exist for this shit to go on long enough to create your 50% accurate mechanical turk.
Edit: idk the more I think about this the more it irks me. Like if I was allowed to pick and choose benchmarks that agree with my biases I would post something like this…

… and claim model performance is actually getting worse over time.
Given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped.
I have a Petri dish to sell you
I took a quick peek at his blog.
Oh dear, there is a dedicated rationality subsection…
Oh god, he unironically recommends reading the sequences wtf 🤢🤮
Oh lol, I thought his name sounded familiar and yup, he was a concern troll in a Hackerspace I was in, some 12 years ago.
Surprise level: zero
links to AI 2027.
In Dutch we have a saying (from a commercial, well done on the advertisers there) ‘Wij van Wc-eend adviseren Wc-eend’ (we from the company Wc-eend, suggest you get Wc-eend), which seems appropriate here. It is used in a sarcastic context when somebody gives advice with a clear conflict of interest.
Anyway, just going from the title: ‘X is exponential’ has been the pro-AI cry since The Singularity Is Near. (Which argued, well, individual techs follow an S-curve, but all the techs combined are exponential, and variants on that.) All seems very hopeium, immortality is near!
I know it’s terrible being a drama gossip, but there are some Fun Times on bluesky at the moment. I’m sure most of you know the origins of the project, and the political leanings of the founders, but they’re currently getting publicly riled up about trans folk and palestinians and tying themselves up in knots defending their decision to change the rules to keep jesse singal on site, and penniless victims of the idf off it.
They really cannot cope with the fact that their user base aren’t politically aligned with them, and are desperate to appease the fash (witness the crackdowns on people’s reaction to charlie kirk’s overdue departure from this vale of tears) and have currently reached the Posting Through It stage. I’m assuming at some point their self-image as Reasonable Centrists will crack and one or more of them will start throwing around transphobic slurs and/or sieg-heiling and bewailing how the awful leftists made them do it. Anyone want to hazard a guess at a timeline?
And all this because she simply could not shut up. Which breaks one of the oldest rules of modding a large community.
Nobody ever seems to learn the “never get high on your own supply” lesson. Gotta get that hit of thousands of people instantly supporting and agreeing with whatever dumbfuck thought just fell out.
You absolutely don’t have to hand it to zuckerberg, but he at least is well aware that he runs an unethical ad company that’s bad for the world, has always expressed his total contempt for his users, and has not posted through it.
Well, they’re already using “waffles” as a transphobic slur (irony poisoning speedrun any%), so it’s really more of a question of which transphobic slurs they’ll escalate to next.
here’s a good summary of the situation including an analysis of the brand new dogwhistle a bunch of bluesky posters are throwing around in support of Jay and Singal and layers of other nasty shit
here’s Jay Graber, CEO of Bluesky, getting called out by lowtax’s ex-wife:

here’s Jay posting about mangosteen (mangosteen juice was a weird MLM lowtax tried to market on the Something Awful forums as he started to spiral)
Anyone want to hazard a guess at a timeline?
since Jay posted AI generated art about dec/acc and put the term in her profile, her little “ironic” nod to e/acc and to AI, my guess is this is coming very soon
Huh, didn’t know about the guy behind tangled.org (Anirudh Oppiliappan) being a waffle enthusiast too 🫤 Just visited his bsky profile, and he’s enthusing about a “decentralised accelerationism” post by jay.
I hadn’t really seen the point of the tangled project (I’m not sure what atproto brings to version control) but I was interested in an ecosystem around the jujutsu vcs stuff. I guess I won’t find that here.
ok plz explain what “waffles” means in this context
This doesn’t include her blurb about “are you paying us? where???”
But weren’t there a multitude of people clamoring for a Bluesky subscription service from the get-go? Out of recognition that this situation was one of the potential failure modes?
The question on getting paid might give credence to the rumors that they’re running out of money and won’t make it (user-growth wise) as an ad platform. Which, lol and also lmao.
Yeah, I can’t see that they’re doing anything besides burning runway. It’s probably shortly going to become cliche to say, “glad I never made an account there,” but, welp, glad I never made an account there
But weren’t there a multitude of people clamoring for a Bluesky subscription service from the get-go? Out of recognition that this situation was one of the potential failure modes?
Even beyond that, the Twitter refugees came to Bluesky because they wanted a Nazi-free successor to Twitter. Most of them would’ve happily pitched in to keep the site alive whilst goodwill remained.
Bluesky wanted to be Nazi Twitter, then Elon purchased Twitter and stole their market.
AI Shovelware: One Month Later by Mike Judge
The fact that we’re not seeing this gold rush behavior tells you everything. Either the productivity gains aren’t real, or every tech executive in Silicon Valley has suddenly forgotten how capitalism works.
… por que no los dos …
Or, this is how capitalism has always worked. See Enron for example. And we all just got so enthralled by the number (praised be its rise) that we took the guardrails off. The rising tidal wave which will flood all the land raises all boats, after all.
The goal of capitalism is not to produce goods, it is to create value for the owners of the capital. See also why techbros are turning on EA and EA (which EA is which, is left as an exercise to the reader).
"You know, I never defrauded anyone,” says Sam Bankman-Fried
“You know, I never sent the boys across the Isonzo without believing we could win,” said Luigi Cadorna
Opening the sack with this shit that spawned in front of me:
Guess it won’t be true AGI!
wait, how much compute would they need for this, ignoring the patent absurdity of it all for a minute? would they wrap it up under 1 quadrillion dollars?
If we knew that we wouldn’t need a GPT-8 to solve quantum gravity, would we now?
Is this true for everything else, too? I will be a true AGI if I solve quantum gravity. A half eaten salami will be true AGI if it solves quantum gravity. My grandmother will be a true bicycle if she has wheels.
If saltman knew what quantum gravity was and why LLMs won’t solve it first, maybe he’d have general intelligence
Elon Musk announces “Grokipedia”, which is exactly what it sounds like.
for article in wikipedia: grok.is_this_true(article)

The Gizmodo story mentions that he retweeted Larry Sanger, but it doesn’t dive into the rabbit hole of just how much of a kook Sanger is and how badly his would-be Wikipedia competitors have failed.
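For completeness, the one-liner as a runnable sketch. Everything here is hypothetical: there is no real `grok` client, so `grok_is_this_true` is a stub oracle that just approves anything mentioning its owner.

```python
# Hypothetical "Grokipedia" pipeline, per the joke above.
# The oracle is a stand-in for a model call that does not exist;
# this stub simply blesses any article that flatters its owner.

def grok_is_this_true(article_title: str) -> bool:
    return "Elon" in article_title

wikipedia = ["Moon landing", "Elon Musk", "Dunning-Kruger effect"]

# Keep only the articles the oracle declares true.
grokipedia = [title for title in wikipedia if grok_is_this_true(title)]
print(grokipedia)  # → ['Elon Musk']
```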
Gork is this true?
Gort is Klaatu barada nikhto?
Trok is this grue?
Shaka, when the walls fell?
This isn’t a particularly novel stupid take, but it was made by a bluesky engineer, and it is currently dunking-on-bluesky season so here we are.
“If you imagine that an ai is a person, then saying bad things about it is bigotry”
Welp, they’ve got me there. Guess I’ll never say anything bad about anything again, because it is racism, if you think about it.
https://bsky.app/profile/hailey.at/post/3m2f66lgh2c2v

alt text
A bluesky post by dystopiabreaker.xyz
i’m completely serious when i say that much of the dismissive ai discourse on here fires the bigotry neuron
And two replies by hailey.at
an unfortunate irony about this post - and even if you are the staunchest anti-ai critic out there, i think you’d agree - is that some of the most bigoted things are being said to respond to this. copy/pasting phrasing and terminology used by bigots but replacing “dna” with “bits” doesn’t make it ok
if you’re writing a sentence that sounds like eugenics but you go “oh that’s fine to say because it’s not a real person” (whatever that means) you may want to consider what made you okay with saying that
(an exception can be made for people repurposing real-world slurs and putting a techy spin on them. fuck directly off with “wireback” and similar shit)
i read that there was one based on a mexican-slur, but i couldn’t think of it
spent weeks trying to rhyme “wet” with something, couldn’t come up with it, damn i guess i’m not a poet at least i know it
“wire” — fuck me that is so lazy
If I imagine that my butt is a rocket, then I can fart my way to the moon
can someone catch me up to speed on what the latest nix fiasco is?
@[email protected] summarized it nicely but if you want some fresh abyss to stare into, here’s some links to fedi posts with details:
Oh for fucks sake, how am I supposed to do my computering now. I already switched to lix after the last drama. Hopefully more people will pick aux up now.
Microsoft launches ‘vibe working’ in Excel and Word
A new Office Agent in Copilot chat, powered by Anthropic models, is also launching today that can create PowerPoint presentations and Word documents from a “vibe working” chatbot.
Microsoft says its Agent Mode in Excel has an accuracy rate of 57.2 percent in SpreadsheetBench, a benchmark for evaluating an AI model’s ability to edit real world spreadsheets.
They are openly admitting this? Do they really not realize how completely damning the number is…?
Dont worry about the number it will improve after the singularity.