Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
https://iceberg.mit.edu/report.pdf “We simulated 131 million human beings using LLMs and found 11% of jobs could be done by AI instead of humans” I can’t tell what’s real with LLMs anymore. I wonder if that’s the point.
That popular piece on why it’s dumb to build (GenAI-scale) datacenters in space hit lobste.rs, and while most commenters agreed it is indeed dumb, the fascist Flask founder felt the need to “well ackstually” for some stupid reason
duh ofc there are computers in space, there are computers everywhere, but the whole fucking point of the piece is you can’t take thousands of racks of GPUs and launch them into space and expect them to work
Promptfans still can’t get over the Erdős problems. Thankfully, even r/singularity has somehow become resistant to the most overhyped claims. I don’t think I need to comment on this one.
Link: https://www.reddit.com/r/singularity/comments/1pag5mp/aristotle_from_harmonicmath_just_proved_erdos/


alt text (original claim)
We are on the cusp of a profound change in the field of mathematics. Vibe proving is here.
Aristotle from @HarmonicMath just proved Erdos Problem #124 in @leanprover, all by itself. This problem has been open for nearly 30 years since conjectured in the paper “Complete sequences of sets of integer powers” in the journal Acta Arithmetica.
Boris Alexeev ran this problem using a beta version of Aristotle, recently updated to have stronger reasoning ability and a natural language interface.
Mathematical superintelligence is getting closer by the minute, and I’m confident it will change and dramatically accelerate progress in mathematics and all dependent fields.
alt text (comments)
Gcd conditions removed, still great, but really hate the way people shill their stuff without any rigor to explaining the process. A lot of things become very easy when you remove a simple condition. Heck, the Riemann hypothesis is technically solved for function fields over finite fields. But nowadays in the age of hype, a tweet would probably say “Riemann hypothesis oneshotted by AI” even though that’s not true.
Gcd conditions removed
So they didn’t solve the actual problem?
Stuff like this is particularly frustrating because this is one of the places where I have to grudgingly admit that llm coding assistants could actually deliver… it turns out that having to state a problem unambiguously, and having a way in which answers can be automatically checked for correctness, means that you don’t have to worry about bullshit engines bullshitting you so much.
No llm is going to give good answers to “solve the riemann hypothesis in the style of euler, cantor, tao, 4k 8k big boobies do not hallucinate” and for everything else the problem then becomes “can you formally specify the parameters of your problem such that correct solutions are unambiguous” and now you need your professional mathematicians and computer scientists and cryptographers still…
And whilst we’re in that liminal space where no-one reads the old stubsack but the new one hasn’t yet surfaced, here’s an article about the ghastly state of IT project management around the world, with a brief reference to AI which grabbed my attention and made me read the rest, even though it isn’t about AI at all.
Few IT projects are displays of rational decision-making from which AI can or should learn.
it doesn’t get any cheerier, and wraps up with
It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined
Oof.
That’s pretty long and you should definitely repost it in the next sack. I lightly skimmed it and will read it in full later.
Re: the LLM of it all, I wonder how many times this has already happened:
“AI, please invent a new project management ideology for me, improving on agile and waterfall!”
“Certainly. Here’s AgileFall, a linear combination of the two. What the fuck were you expecting here?”
Noted for the amusing headline: https://www.nature.com/articles/d41586-025-03506-6
Major AI conference flooded with peer reviews written fully by AI

Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.
Do note that it appears to be an advert for AI peer-review detection services, but I was still tickled by the whole “why are there leopards at our face-eating conference” surprise being expressed.
Robin Hanson has a sneerworthy level of hubris that has led to him falling for all sorts of BS over the years (he’s long argued that being an economist makes him more rational and better at working out the truth than domain experts in all fields of science, apparently because only economists have heard of incentives), but I was still surprised to learn he’s now a UFO conspiracy nut.
Presumably he caught some History channel rerun of Ancient Aliens and was struck by how much more plausible it was than his “Age of Em” theory.
What’s more plausible, that I made a bad assumption in my fermi estimation or that all the world’s governments have been undertaking the most wildly successful coverup for nearly a century with no leaks or failures? Clearly the latter.
Also credulously reiterating Trump’s stupid “Department of War” rebrand… makes me think his writing is narrowly targeted at a certain group
What’s your P(Yakub)?
0 isn’t a real probability, so I gotta give it the gentleman’s 50:50
Hanson into conspiracies, Goertzel into parapsychology, going well on the rational/transhuman side.
Tomorrow Grimes will DJ a livestream of immortality influencer Bryan Johnson tripping on shrooms to determine its effect on longevity. Mr. Beast and the CEO of Salesforce will be there too.
Now, folks out there are calling this a Biblically accurate blunt rotation, but to be fair, it’s missing Aella.
Btw, I’m a bit afraid to ask, as I should actually have googled this first (but I’m also becoming wary of Google; I tried to find some code-related things yesterday and it was giving me wild results, it used to be better at that), but is there a sort of guide on Blacksky, like what does joining one of those sites imply? (I think I should join the ‘blacksky for non-poc’ variant, at least I got that was a thing.) But how well does it interact with the main bsky, how easy is it to switch, what are the consequences for interactions/follows/followers, etc. etc.? Anybody have a guide on that? This is in an attempt to migrate away more from bsky, which is making a few questionable moderation decisions. (I also know that due to how the whole system is set up, migrating doesn’t actually get you away from those decisions, so I’m also asking out of curiosity; take that into account regarding how much effort to put into my question, which also means ignoring/not reacting is fine. Just shooting a shot, and trying to be clear about intentions, and my feelings re reactions.)
I used https://tektite.cc/ to migrate off Bluesky, and picked the myatproto.social option from the drop-down list. This may be a good start: https://leaflet.pub/000b57de-78dc-4939-8c66-79227d010cce
Chasing links landed me here:
Grimes used to come to my club nights in Vancouver, and one time a guy who didn’t know who she was saw her dancing like an attention starved, crystal-gripping idiot, and he said to me “That’s the kind of chick who would take a shit on your chest if you asked.”
https://blacksky.community/profile/did:plc:mpc62tgblkwndximirue5dxg/post/3lh7kznna3k2y
I’ll tell you what, it is fantastic to be clean and sober today
I was tempted to gawk but then I realised… the guy’s just going to see some shapes in the light and it’s going to be boring as shit
(not in any way meaning to stan the weirdo) at that dosage it might be more than that, but also it’s ??? to me to consider taking that much while also being exposed to a bunch of people like that around me
and that it’s even the dosage it is feels like it tells you a hell of a lot about the extent of his baseline intake too
these creeps are doing a better job at antidrug messaging than any propaganda I ever saw
I don’t know much about dosage numbers etc; care to mention a bit more about how much is normal and how far off the baseline he is?
(I just recall some old friends who once tried shrooms and didn’t seem to notice much, so they went to a disco, and for one of them it hit in the middle of the dancefloor, so he stood there like a statue (the guy was also tall and a metalhead, so quite a sight). And before it became illegal to give others spores or something (some tourist got himself killed by using shrooms and drinking, and they blamed the shrooms), I managed to get some from someone as a promo/anarchist thing. Never did anything with them.)
(Fine if you don’t want to, or if talking about it exposes you/others to risk etc; don’t. Just curious.)
I don’t know much about dosage numbers etc; care to mention a bit more about how much is normal and how far off the baseline he is?
“normal” is a bit of a fucked term when dealing with this stuff, in part because we really don’t have good human-wide data (hi, can you see what ghoulfuck is playing off of?). why don’t we have that? oh y’know, that entire little multidecade war-on-drugs thing perpetrated by the selfsame government that is… […breathe]
that said, psychonautwiki is a more informative starting point than I could concisely be. that also said, part of the fucky bit is that you get different strains of shrooms, with different effective concentrations of psychoactive compounds (e.g. Psilocybe cubensis vs Psilocybe cyanescens)
beyond that, I don’t really think that awful is a place for psychedelic discussion (and I wouldn’t try to make it one). there are often conversations to be had, but here ain’t the where
Thanks! And of course there is a wiki for it.
E: unrelated to the talk about shrooms. Didn’t Grimes talk to Aella about how they felt cheated by people around them who were a lot more evil than they pretended to be? Wonder why this doesn’t seem to have caused a change of heart/scenery.
re your edit: it hasn’t changed because those words are just a front, a way to try to save face as they get caught out with their bullshit. both of them have a choice, and have had a choice this whole time. they keep choosing to be where they are and what they say
That’s about twice a ‘normal dose’ and in the realm of what Terence McKenna would call a ‘heroic dose’.
Mushrooms vary a lot in real potency but at that level of dosage ego death is almost guaranteed, and it’s likely that the user will spend at least 30 minutes to an hour unable to articulate language.
I love the phrase ego death because everyone I’ve heard describe the experience sounds like the most egotistical mf in the universe with how impressed they are by their own self-enlightenment.
it’s likely that the user will spend at least 30 minutes to an hour unable to articulate language.
This presumes Johnson was able to articulate language in the first place, which given that his brain has melted at an incredible pace since 2020 may be a bit of a stretch.
this post is more accurate than most
This doubly disappoints me because in a professional capacity I strive for being incredibly intentional and accurate while recreationally I aspire to shitposting, and “more accurate than most” satisfies neither. I really need to get my blood boy to write better material.
Someone needs to offer Erin Patterson a Suicide Squad deal
Found a fitting lament for our current era:

alt text: “Kinda hate that we live in a world where any new F/OSS tool or operating system that gets buzz needs to be vetted for Nazi entanglements.”
Nazi entanglements
That’s why none of the quantum computers work
I heard the same complaint from leftist metal fans.
See also goth/industrial music. The latter also has (like metal) a bit of a sexism issue.
He somehow did an ad read in the middle of a substack post. Sign of the times.
That he’s being sponsored by DeleteMe is oddly fitting in its own right. Were it not for surveillance capitalism relentlessly stealing personal data and invading people’s privacy, its services would be completely unnecessary.
So I got jumpscared recently by that couple. I was listening to one of my many favourite podcasts, Threedom, when on the most recent episode, “I Definitely Tuned Out and I Agree With You”, this exchange happened, starting around 44 minutes, give or take 10 for ads.
Spoiler-tagged exchange, in case you are a pisspig* and don't want spoilers.
Context: the hosts are talking about how they value fostering their children’s expressive abilities, even if that means their children do things like scream in inappropriate situations.
Scott: I guess what I’m trying to say is that some parents would look at us, and say, like, “oh, you’re not teaching them how to act in social situations or whatever,”
Paul: Yes, you should slap them across the face, in the store.
<laughter>
Scott: Who was that… that… that, like, person who… there’s some parent out there that thinks that you need to like have a million kids or whatever and uh, and a paper writer followed them around and he just smacked his kid right in front of the paper writ-, er… the journalist? Uh, anyway…
Lauren: Paper writer?
Scott: Yeah, sorry, sorry, Journalist.
Paul: Couldn’t sound more specific, and yet I don’t know.
<end of reference>
Tried too hard transcribing this and still feel like I did a bad job.
Anyway, gosh, congrats to them on their extreme success in being platformed. Couldn’t have been a more deserving couple. /s
*pisspig is the name given to a fan of the podcast Threedom. The fans picked the name, the hosts aren’t really sure why.
I do think this is the optimal way for the couple to be referenced in media.
“Enjoy” this toxic stew of prediction markets, racism, and the objectifying of women:
https://protos.com/polymarket-criticized-for-racist-post-targeting-fake-baddies/
cryptobro on rationalist gambler on attention farmer violence

Twitter adds default country tags. Immediately finds a whole bunch of foreign bots agitating about US politics. Promptly ignores that in order to be racist.
A week-ish ago I said that Mike Masnick is twice as annoying about AI as Dare Obasanjo.
Yeah, Dare has been plausibly-deniably all in on AI for a while now.
Neal Stephenson comes out strong and funny against GenAI here:
https://nealstephenson.substack.com/p/a-remarkable-assertion-from-a16z
“Hypothesis 1: it was written by a clanker”
Best part is the footnote:
About 20 years ago, some spammers came up with a bright idea for circumventing spam filters: they took a bootleg copy of my book Cryptonomicon and chopped it up into paragraph-length fragments, then randomly appended one such fragment to the end of each spam email they sent out. As you can imagine, this was surreal and disorienting for me when pitches for herbal Viagra and the like started landing in my Inbox with chunks of my own literary output stuck onto the ends. Come to think of it, most of those fragments actually did stop in mid-sentence, so I guess if today’s LLMs trained on old email archives it would explain why they “think” I write that way.
Stephenson knows how computers work.
@dgerard he does, looks like
@gerikson @BlueMonday1984 Sub-hypothesis 1B: the AI was fed the whole book but the input was truncated because whatever toolchain they used wasn’t made to handle 160k words at a time.
Hypothesis 3: As some people seem to insist, “literally” has recently morphed into a contronym, and now it figuratively also means “figuratively”.
…sorry, I meant it literally also means “figuratively”.
…no, wait, that’s just the same thing. 🙄 It *actually* also means “figuratively”.
(Really? People couldn’t find a better new word to provide emphasis than “literally”? What word do they want to unambiguously represent that concept now? Do they care? Ugh…)
tom sawyer literally rolling in wealth
but he never helps huck finn out financially?
pretty shit story, mark
(Really? People couldn’t find a better new word to provide emphasis than “literally”? What word do they want to unambiguously represent that concept now? Do they care? Ugh…)
Bit late to tilt at this windmill tbh. Prescriptivist pedantry is prohibited past puberty. This was decreed by Maximilian D. English (the D stands for dictionary) in 1727. I don’t make the rules (MDE does)
It seems really common for words for factuality to become intensifiers. I just used the word “really” as an intensifier, though it really means things occurring in reality. “Very” had the same thing happen to it, as it originally meant “truthfully” (as in “verify” or “verity”). If I say something is “truly massive”, am I specifying that the massiveness is not imaginary in some sense, or am I trying to convey massiveness beyond the lower bound of “massive”? Is a “proper banger” of a tune distinct from an improper banger, or is it just a highly bangerful banger?
truly massive
fuzzy logic says this thing has mass most of the time, ish
english is a fuck
What word do they want to unambiguously represent that concept now?
“Literally, not figuratively”, said in a Sterling Archer voice.
The use of literally in a fashion that is hyperbolic or metaphoric is not new—evidence of this use dates back to 1769. Its inclusion in a dictionary isn’t new either; the entry for literally in our 1909 unabridged dictionary states that the word is “often used hyperbolically; as, he literally flew.”
@gerikson @BlueMonday1984 The only thing that could have made that article better is if he’d literally ended it mid-sentence.
Lmao imagine reading a Stephenson book and being peeved that it ends
(His sex scenes are far far far worse than his endings, those are a mercy)
Years ago, I said, “I’ve never finished a Stephenson novel.” Someone replied, “Neither has he.”
oh yeah, the relationship between the fusion-device-wielding 30-something Aleutian freedom fighter and the 16-year-old skateboard courier in Snow Crash is… of its time
We were reading through various classics for book club and we noticed how many books had a ~14-year-old girl having a romantic/sexual relationship with, or getting abused by, a 30+ year old man. Snow Crash was one of those. I know popular thinking on this has changed a lot over the past 20+ years, but it’s still always a shock, esp. when you realize how much you didn’t notice it.
Also a reason why the first Evil Dead aged very badly. Don’t show that to people without warning them, unless you want them to leave.
Yup, that’s one of them. The Cryptonomicon protagonist no-nut-Novembering all the way to the WW2 treasure is another special fave
@gerikson @BlueMonday1984 please do not insult the noble machine consciousnesses thus. It was probably written by a venture capitalist.
Someone in the comments found the github (??) where they made the site or something, and it def was generated initially, but it used heavy nerd speak so it was translated.
“Warning: his endings are notoriously abrupt, like a segfault in the middle of your favorite function.”
Andreessen Horowitz? More like, And here’s some horse shit
@swlabr yup
@gerikson @BlueMonday1984 real missed opportunity to end that post mid-sentence
There’s base-level sneer since this is posted on LW, but I found this comparison of LLMs in call centers to cheap human labor for the same work interesting:
So I’m not double checking their work because that’s more of a time and energy investment than I’m prepared for here. I also do not have the perspective of someone who has actually had to make the relevant top-level decisions. But caveats aside I think there are some interesting conclusions to be drawn here:
- It’s actually heartening to see that even the LW comments open by bringing up how optimistic this analysis is about the capabilities of LLM-based systems. “Our chatbot fucked up” has some significant fiscal downsides that need to be accounted for.
- The initial comparison of direct API costs is interesting because the work of setting up and running this hypothetical replacement system is not trivial and cannot reasonably be outsourced to whoever has the lowest cost of labor. I would assume that the additional requirements of setting up and running your own foundation model similarly eat through most of the benefits of vertical integration, even before we get into how radically (and therefore disastrously) that would expand the capabilities of most companies. Most organizations that aren’t already tech companies couldn’t do it, and those that could will likely not see the advertised returns.
- I’m not sure how much of the AI bubble we’re in is driven even by an expectation of actual financial returns at this point. To what extent are we looking at an investor and managerial class that is excited to put “AI” somewhere on their reports because that’s the current Cutting Edge of Disruptive Digital Transformation into New Paradigms of Technology and Innovation and whatever else all these business idiots think they’re supposed to do all day?
I’m actually going to ignore the question of what happens to the displaced workers here because the idea that this job is something that earns a decent living wage is still just as dead if it’s replaced by AI or outsourced to whoever has the fewest worker protections. That said, I will pour one out for my frontline IT comrades in South Africa and beyond. Whenever this question is asked the answer is bad for us.
I’ve worked in an adjacent field (workforce planning) and I deliver B2B software support for a living, so I too have Thoughts.
At least here in Schwedenland, contact centers have been filed down by relentless cost and tech pressure to be about as automated as can be. You have websites with FAQs, simple chatbots that basically repeat the FAQ for those for whom reading more than a sentence of text is too hard, phone trees to gatekeep you from the Inner Sanctum, etc. etc. The end result is that the actual people taking the calls are gonna be the ones who can make human decisions - troubleshoot a complex issue, handle insurance claims, upsell your mortgage.
Trying to add LLM voice tech to that is just going to add another filter between the customer and the center, with the additional reputational risk of the robot fucking up and losing the customer.
No idea if it was intentional given how long a series’ production cycle can be before it ends up on tv/streaming, but it’s hard not to see Vince Gilligan’s Pluribus as a weird extended impact-of-chatbots metaphor.
It’s also somewhat tedious and seems to be working under the assumption that cool cinematography is a sufficient substitute for character development.
BB and BCS were both kinda slow burns IMO. That’s not to say the new show is worth holding onto (haven’t seen it), just commenting on the trend.