Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
All participants in the Stubsack, including awful.systems regulars and those joining from elsewhere, are reminded that this is not debate club. Anyone tempted by the possibility of debate-club behavior is encouraged to touch your nearest grass immediately. We are here to sneer, not to bicker: This is a place to mock the outside world, not to settle grand matters of ideology, unless the latter is done in an extraordinarily amusing way.
I need to lurk more, feel like I missed some good drama 🍿
If it isn’t on this quick sneer page, you can just look at the posts with a lot of replies; that usually means either it broke containment or somebody went full debate mode.
sometimes both
I am the scream
Kinda interesting that Google’s TPUs are back in the news. Seemed like they had fallen by the wayside for a while. Of course there are no technical details, just blah blah revenue blah blah, but that’s CNBC for you.
Simon Willison writes a fawning blog post about the new “Claude skills” (which are basically files of extra task-specific instructions for the bot to use)
How does he decide to demonstrate these awesome new capabilities?
By making a completely trash, seizure inducing GIF…
https://simonwillison.net/2025/Oct/16/claude-skills/
He even admits it’s garbage. How do you even get to the point that you think that’s something you want to advertise? Even the big slop monger companies manage to cherry pick their demos.
Just felt like I got an aneurysm there.
(in unrelated things, first)
How do you even get to the point that you think that’s something you want to advertise?
Man’s spent several years and shitloads of cash destroying his public image (and probably his brain) via slop bots; I suspect he’s getting desperate to prove his LLM booster turn wasn’t a career-ruining blunder
(He’s also probably lost the ability to tell good work from bad work as well - that’s a universal quality among slop advocates, as Gerard has pointed out on multiple occasions)
My dad was a bit freaked out by a video version (“We’re not ready for super-intelligence”) of the “AI 2027” paper, particularly finding two end scenarios a bit spooky: Colossus-style cooperating AIs taking over the world, and the oligarch concentration-of-power one, which I think definitely echoed sci-fi he watched/read as a teen.
In case anyone else finds it useful, these are the “Comments as I watch it” that I compiled for him.
Before-watching video notes:
- AI-only channel with only 3 videos.
- Produced by “80000hours”, which is an EA branch (trying to peddle to you the best way to organize 40 years * 50 weeks * 40 hours [I love that they assume only 2 weeks of holidays; see the arithmetic below]); definitely cult adjacent: https://80000hours.org/about/#what-do-we-do. Mostly appears to be attempting to steer young people into what they believe are “high-impact” jobs.
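Since they build their whole brand on it, here’s that arithmetic spelled out (all three factors are theirs):

\[
40~\text{years} \times 50~\frac{\text{weeks}}{\text{year}} \times 40~\frac{\text{hours}}{\text{week}} = 80{,}000~\text{hours}
\]

i.e. an entire working life with two weeks of holiday a year, every year.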
Video notes:
- The backing paper, “AI 2027”, is a bit of a joke; for reference, one of the main authors is very much a “cult member”: Scott Alexander Siskind, author of “Slate Star Codex” and “Astral Codex Ten”.
- Other authors include [AI Futures Project]:
  - Daniel Kokotajlo (podcast co-host of Siskind, ex-OpenAI employee, LessWrong/EA regular)
  - Thomas Larsen (ex-MIRI [Machine Intelligence Research Institute = really, really culty], LessWrong/EA regular)
  - Eli Lifland (LessWrong/EA regular)
  - Romeo Dean (Astra Fellowship recipient = money for AI Safety research, definitely EA sphere)
- A lot of fluff trying to hype up the credentials of the authors.
- AGI does not have a bounded definition.
- They are playing up the China angle to try to drum up jingoistic support.
- Exaggerating ChatGPT-3’s success by merely citing “users”, without mentioning actual revenue or actual quality.
- Quote: “How do these things interact? Well, we don’t know, but thinking through in detail how it might go is the way to start grappling with that.”
  -> I think this epitomises the biggest flaw of their movement: the belief that from “first principles” it’s possible to think hard enough (without ever confronting it with reality) and divine the future.
  -> You can look up “Prediction Markets”, which is another of their ontological sins.
- I will note that the prediction of “Agents” was not a hard one, since this is what this whole circle wants to achieve, and, as the video itself points out, agents are fantastically incompetent/unreliable.
- Note: This video was made before the release of GPT-5. We don’t know precisely how much more compute GPT-5 truly required altogether, but it’s a very incremental change compared to GPT-4. I think this philosophy of “more training” is why OpenAI is currently trying (half succeeding, half failing) to raise trillions of dollars to build out data centers; my prediction is that the AI bubble bursts before these data centers come to fruition.
- Note: The video assumes the models are kept secret, but in reality OpenAI would have a very vested interest in displaying capability, even without making a model available to the public. Also, even on consumer models, OpenAI currently loses a bunch of money on every query.
- Note: The video assumes “Singularitarianism”: ever-accelerating improvement in code quality, and that’s why they keep the models secret. I think this hits a compute/energy wall in real life, even if you assume that LLMs are actually useful for producing “quality” code. These ideas are not new, and these people would be raising alarms about them with or without current LLM tech.
- Specific threats of “bio-weapons”, which a priori cannot really be achieved without experimentation; and while “automated” labs half exist, they still require a lot of human involvement/resources. Technically grad students could also make deadly bioweapons, but no one is being alarmist about them.
- Note: “Agent 2” continuous online learning is gobbledygook; that isn’t how ML works, even today. At some point there are very diminishing returns, and it’s a complete waste of time/energy to continue training a specific model; a qualitative difference would require a different model. I suspect this sneakily displays “Singularitarianism” dogma.
- Quote: “Hack into other servers. Install a copy of itself. Evade detection.”
  -> This is just science fiction; in the real world these models require specialized hardware to run at any effective speed, so this would be extremely unlikely to evade detection. It also treats the model as a single entity with single goals, when in reality each time it’s “run” is effectively a new instance.
- Note: This subculture loves the concept of “science in secrecy”, which features a lot in the writings of Eliezer Yudkowsky. It is cultish both in keeping their own deeds behind “a veil of secrecy”, and helpful here when making a prophecy/conspiracy theory, by making the claim specifically hard to disprove (it’s happening in secret!).
- Note: Even today, chain-of-thought is not that reliable at explaining why a bot gives a particular answer. It’s more analogous to guiding “search” than to true thought as in humans anyway. Them using an “alien language” would not be that different.
- Agent 3: magically fast and cheap, assuming there are no minimum energy requirements. Then you can magically run 200,000 copies of it, magically equivalent to 50,000 humans sped up by 30x. (The magic is “explained” in the paper by big assumptions, essentially equating how fast you can talk with the quality of the talking, which, given the length of their typical blog posts, is actually quite funny.)
- Note: “Alignment” was the core mission of MIRI/Eliezer Yudkowsky.
- Note: They equate power and intelligence a lot (not in this video, but in general they are suspiciously racist/eugenicist about it), ignoring the material constraints on actual power [echo: again the sin that epitomises their movement, “if you just think hard enough”].
- Note: Also assumes that trillions of dollars of growth can actually happen simultaneously with millions losing their jobs.
- I am betting that the “There is another” part of the video is deliberately echoing Colossus.
- The video casually assumes that the only limit to practical fusion and nanotech is intelligence (rather than these being potential dead ends; the nanotech part is a particular fancy of theirs, look up “diamondoid bacteria” on LessWrong if you want a laugh).
- The two outcomes at the end of the video are literally robo-heaven and robo-hell, and if you just follow our teachings (in this case, slow-downs on AI) you can get to robo-heaven. You will notice they don’t imagine/advocate for a future with no massive AI integration into society; they want their robo-heaven.
- Quote: “None of the experts are disagreeing about a wild future.”
  -> I would say that some of them specifically suggest, quite strongly, that AGI-soon is implausible. I think many would agree that right now the future looks dire with or without super-AI, or even regular AI.
Takeaway section:
Yeah, this really is a cult recruitment video, essentially.
We’re almost at the end of 2025 and agents don’t fucking exist the way they predicted. Literally 0% accuracy so far. AI 2027 agmi.

^image of Daniel K who already updated his rapture prophecy to 2029 because he’s a mark
I stumbled onto that vid a while back, watched the first minute or so, lol’ed at the glazing of Kokotajlo, and stopped the vid. I did think about posting it here to be torn apart but forgot about it. I watched a little bit further and got to “they chose to write this as a narrative”. Of course they fucking did. It’s their one thing. Write a shitty 10k-word story that amounts to some combination of “really makes you think” and “big if true”.
Here’s a story: Once upon a time there was a world. In it people were sad. Then one day swlabr was elected supreme benevolent ruler and then nobody was sad again :) the end. Wow make u think. Many experts agree
Haven’t seen this skeet posted here. Skeet:
It’s 2050 and a teen girl is torrenting a .tar.gz file of all the consciousnesses of all the tech bros who uploaded themselves into the cloud in a bid for immortality and modding them into The Sims 4
who’s the basilisk now?
Watched a debate between Emily M. Bender and Sébastien Bubeck — an OpenAI researcher — from March. As usual, Dr. Bender fucking rules. Bubeck struck me as an idiot and kind of an ass.
New paper on LLMs just dropped, titled LLMs Can Get “Brain Rot”!
Currently a novelty, but it could prove useful for making the likes of Iocaine and Nepenthes more effective - especially since the paper notes:
the damage is multifaceted in changing the reasoning patterns and is persistent against large-scale post-hoc tuning.
It does also suggest doing some actual quality control to prevent damage to the LLMs, but that sure ain’t happening
The paper is itself written by an LLM.
Fuck.
I don’t think we’ve ever talked about it but AI is shitting all over the tattoo industry. Listening to a podcast rn (Beneath the Skin) and the hosts are really not keen on the LLMs lol.
(I’m a week out from my next one woo)
Another attempt to platform fascists has cropped up in FOSS, and Drew DeVault’s talked all about it. Featuring our good friend Curtis Yarvin.
god damn it. i guess the name of the founder might have been a hint, only one letter away from our favorite roman saluter.
i use immich, one of the projects they seem to have actually funded in a big way. it’s a very good selfhosted replacement for google photos. at least the license is actually open source, as opposed to grayjay, so here’s hoping it has a future in case the fascists try to fuck with it.
i guess the problem though isn’t with the funding and/or control of individual projects, it’s with the long-term influence in the foss community they seem to be after.
i had a feeling about FUTO because of rossmann’s involvement. became leery of him after this youtube bullshit from 2018:
Let’s discuss why journalists are afraid of Elon Musk right now(and why they deserve to be)
Elon Musk wants to come up with a way to rate the credibility and accuracy of media organizations & individual journalists. This blatant misrepresentation of his words, given in the middle of this conversation, is a PERFECT example of WHY this is so badly needed in modern society.
I’m not a fan of Tesla for being, in many ways, the “Apple of cars.” That being said, whether or not I like Tesla when it comes to a repair standpoint has nothing to do with the hate being thrown at Elon for something he never meant in the words he said, and is entirely separate from my agreement with him on the idea of a media credibility rating platform.

of course the organization I know primarily for platforming fascists and astroturfing on YouTube was secretly an even worse grift and somehow tied in with Yarvin, why wouldn’t it be
given that Rossmann’s at the head of this thing too, I’m starting to regret not taking GrapheneOS (who, notably, were also a target for this grift) seriously when they said Rossmann’s involved in a bunch of terrible shit. the right to repair deserves a better figurehead.
fuckin pisses me off, given his clippy campaign is helping move pivot shirts
sigh
I WILL NOT CHANGE, CLIPPY SUCKED FIRST
Damn right. He needs to quit, he’s the one who sucks.
The fash don’t have magic doodoo fingers that obligate decent people to surrender every time they touch something we like, and we should never concede as if they do.
hadn’t been aware that rossmann’s into dodgy stuff (knew fairly little about him outside of some repair stuff on his channel), but ugh
also, clicking through FUTO’s projects, it all somewhat gravitates around a single point: “built on polycentric”. so I wonder what that means?
Polycentric is an open-source, distributed social network that lets you publish content to multiple servers.
already at “I’m interested” because it’s interesting to see what other work happens in this space.
and then very next sentence we get to
If you’re censored on one server, your content remains accessible from other servers
ah. I see. the “opt-out moderation” is also telling - how does it work? who knows! it’s got a paragraph under introduction but seems to not be mentioned anywhere else in the docs.
extra frustrating to see because the projects these fucks are taking on (like the open cast thing) are items that sorely need stronger options in the open space. but not like this. never like this.
Ah, it’s another Urbit isn’t it?
certainly has more than a bit of that urbit coiner Sovereign Individual shit going on yeah
I tried looking around a bit to see if I could find any info about contributors there, and for the most part none of them really seem to have much of an internet fingerprint at all. did find one person with a moderately extensive set of personal repo/project commits going back a few years, far enough to find that they were doing a BSc/Hons/something circa 2018. which isn’t concrete but does strongly hint at a current age of mid 20s to mid 30s. “get 'em while they’re young and you can poison their brains early!” - the bayfucker mantra
The idea that AI will be a boon for searching the mathematical literature is undermined somewhat by how it shits the bed there too.
Wouldn’t f(x) = x^2 + 1 be a counterexample to “any entire (differentiable everywhere) function that is never zero must be constant”? Or are some terms defined differently in complex analysis than in the math I learned?
I’ve never heard of a function being called entire out of complex analysis. But still, it is zero at i.
A fact that AI gets wrong.
flaviat explained why your counterexample is not correct. But also, the correct statement (Liouville’s theorem) is that a bounded entire function must be constant.
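For reference, the correct statement spelled out (standard textbook material):

\[
\textbf{Liouville:}\quad f \text{ entire and } |f(z)| \le M \text{ for all } z \in \mathbb{C} \;\Longrightarrow\; f \text{ constant.}
\]

And f(z) = z^2 + 1 is a counterexample to neither version: it fails Liouville’s boundedness hypothesis (|f(it)| = |1 - t^2| → ∞ as t → ∞), and it fails the bot’s “never zero” hypothesis, since

\[
f(i) = i^2 + 1 = 0.
\]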
Who is flaviat? I don’t see that handle on this lemmy or Greg Egan’s mastodon account, and Egan just re-tooted someone who gives x^2 + 1 as a counterexample.
Does this link work for you to see the comment? https://awful.systems/comment/9163259
now it works! I do not understand the two sentences “I’ve never heard of a function being called entire out of complex analysis. But still, it (what? - ed.) is zero at i.”
I believe those sentences can be paraphrased as, “The term entire function is only used in complex analysis. The function f(z) = z^2 + 1 is zero at z = i.”
Thanks, i don’t speak english natively
the poster is referring to the function
f(z) = z^2 + 1
Or Picard’s little theorem, which says that if an entire function misses two points (e.g. is never 0 or 1), then that function must be constant.
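Same drill for that one, with a standard example the thread doesn’t mention (e^z, my addition):

\[
\textbf{Picard (little):}\quad f \text{ entire and } \mathbb{C}\setminus f(\mathbb{C}) \text{ contains at least two points} \;\Longrightarrow\; f \text{ constant.}
\]

This is sharp: f(z) = e^z is entire, non-constant, and omits exactly one value (0), which incidentally makes it the genuine counterexample to the bot’s “never zero implies constant”.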
It’s worth noting that, unlike a real function, a complex function that is differentiable in a neighborhood is infinitely differentiable in that neighborhood. An informal intuition behind this: in the reals, for a limit to exist, the left and right limit must agree. In C, the limit from every direction must agree. Thus, a limit existing in C is “stronger” than it existing in R.
Edit: wikipedia pages on holomorphism and analyticity (did I spell this right) are good
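A worked version of that “every direction must agree” point, using the textbook non-example (mine, not from the thread): for f(z) = \bar{z}, complex conjugation,

\[
\frac{f(z+h)-f(z)}{h} = \frac{\bar{h}}{h} =
\begin{cases}
\;\;1, & h = t \to 0 \text{ along } \mathbb{R},\\
-1, & h = it \to 0 \text{ along } i\mathbb{R},
\end{cases}
\]

so the limit exists at no point, and conjugation is nowhere complex-differentiable despite being perfectly smooth as a map of the plane to itself.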
entire always means holomorphic on the whole complex plane
Every time I hear a moderate AI argument (e.g. AI will be an aid for searching literature or writing code), it’s like, “Look, it’s impressive that the AI managed to do this. Sure, it took about three dozen prompts over five hours, made me waste another five hours because it generated some completely incorrect nonsense that I had to verify, produced an answer that was much lower quality than if I had just searched it up myself, and boiled two lakes in the process. You should acknowledge that there is something there, even if it did take a trillion dollars of hardware and power to grind the entire internet and all books and scientific papers into a viscous paste. Your objections are invalid because I’m sure things are gonna improve because Progress.”
I am doubly annoyed when I turn my back and they switch back to spouting nonsense about exponential curves and how AI is gonna be smarter than humans at literally everything.
Closely related is a thought I had after responding to yet another paper that says hallucinations can be fixed:
I’m starting to suspect that mathematics is not an emergent skill of language models. Formally, given a fixed set of hard mathematical questions, it doesn’t appear that increasing training data necessarily improves the model’s ability to generate valid proofs answering those questions. There could be a sharp divide between memetically-trained models which only know cultural concepts and models like Gödel machines or genetic evolution which easily generate proofs but have no cultural awareness whatsoever.
US engaging in quantum socialism:
Crypto Investor Proposes 450-Foot Statue of Greek God on Alcatraz Island is a story making the rounds in the press lately and aaaaaah I hate it. I’d say something more coherent than that but it’s already given me quite a headache.
He has a personal website as well as a website for his stupid statue idea. Both of which are buggy / ugly – apparently after saving $450 million for a dumb statue he has none left for good website coding.
If they’re going to make a 450 foot tall statue of Greek people I can think of more appropriate designs for San Francisco Bay.
“We call this the Reacharound Collossi”
(thinks) The Colossus of Chodes
this reminded me of a superb article i’ve read recently on the topic. decided it deserved a thread
Guy should just get “I love for-profit prisons” tattooed on his face instead of dressing up an island in bad bioshock cosplay.
I propose a 450-foot-tall statue of the most famous parts of Kirk Johnson’s anatomy, facing southwest back towards the city
He has a personal website as well as a website for his stupid statue idea. Both of which are buggy / ugly – apparently after saving $450 million for a dumb statue he has none left for good website coding.
Tenner says he vibe-coded both of them himself. Man’s a capitalist at heart, he thinks paying labour their fair due is an abomination unto God.
Charities Using AI-Generated Photos of Starving Children to Raise Money
Morally, attacking the poor and downtrodden through polluting the world with AI is abhorrent, and anyone doing this should be permanently barred from working at any charity.
Practically, the sight of blatant AI-generated poverty porn is going to drive people away from giving to these charities, damaging their ability to do good.
openai released their spyware browser and it is… not good
https://www.anildash.com//2025/10/22/atlas-anti-web-browser/
But of course they named it “atlas”. Openai is clearly the work of randian supermen.
Also, anil sounds like he might be a little out of touch with regards to how people search these days. Careful keyword searching isn’t even as useful as it used to be, given the damage google et al have done to their own products.
(also also, interactive fiction has marched on a little since zork and infocom were the latest and greatest things, but I accept that most people won’t have noticed)
Adding insult to injury, OpenAI’s also encouraging people to abuse ARIA tags so their slopbots can steal more effectively, threatening to damage web accessibility in the process:
https://adrianroselli.com/2025/10/openai-aria-and-seo-making-the-web-worse.html
New Ed Zitron, giving exact numbers for how much money Cursor and Anthropic have lit on fire and continuing to shed light on the AI industry’s ability to incinerate revenue.
tldr is that anthropic spent on aws only 2x their revenue in 2024, spent on aws in 2025 (up to september) approximately the same as their revenue, and they also pay an unknown amount (but known to be a lot) for google cloud, on top of everything else like salaries and who the fuck knows what else
something something Ed Zitron really needs an editldr
i heard from a reliable source (ed zitron) that he has one
I lolled at how this post literally included an “[editor’s note: ….]” at one point but the entire damn thing was still exactly his usual textual diarrhoea. 30 paragraphs that could’ve been two simple charts. A++ would absolutely only skim through again.