Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Found a high quality sneer of OpenAI from Los Angeles Review of Books: Literature Is Not a Vibe: On ChatGPT and the Humanities
More flaming dog poop appeared on my doorstep, in the form of this article published in VentureBeat. VB appears to be an online magazine for publishing Silicon Valley propaganda, focused on boosting startups, so it’s no surprise that they’d publish this drivel sent in by some guy trying to parlay prompting into writing.
Point:
Apple argues that LRMs must not be able to think; instead, they just perform pattern-matching. The evidence they provided is that LRMs with chain-of-thought (CoT) reasoning are unable to carry on the calculation using a predefined algorithm as the problem grows.
Counterpoint, by the author:
This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.
As someone who already knows the algorithm for solving the ToH problem, I wouldn’t “fail” at solving the one with twenty discs so much as I’d know that the algorithm is exponential in the number of discs and you’d need 2^20 - 1 (1048575) steps to do it, and refuse to indulge your shit reasoning.
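For the record, the recursion itself is three lines; a quick Python sketch to check the arithmetic (my code, not anything from the article):

```python
def hanoi(n, src="A", dst="C", via="B"):
    """Yield every move needed to shift n discs from src to dst."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, via, dst)  # clear the top n-1 discs
    yield (src, dst)                        # move the biggest disc
    yield from hanoi(n - 1, via, dst, src)  # restack the n-1 discs

# The optimal solution takes exactly 2**n - 1 moves, so for 20 discs:
assert sum(1 for _ in hanoi(20)) == 2**20 - 1  # 1,048,575 moves
```

Knowing the algorithm and being willing to grind out a million moves by hand are very different claims.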
However, this argument only points to the idea that there is no evidence that LRMs cannot think.
Argument proven stupid, so we’re back to square one on this, buddy.
This alone certainly does not mean that LRMs can think — just that we cannot be sure they don’t.
Ah yes, some of my favorite GOP turns of phrase: “no unknown unknowns” + “big if true”.
This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.
“I don’t understand recursion” energy
Impending availability of fully-functional Casio ring watches:
https://newatlas.com/wearables/casio-g-shock-nano-watch-ring/
This is a joke, right?
Nah it’s real. As mentioned, they launched a limited metal ring model that was immensely popular. This is just them riding the wave.
Anything’s a cock ring if you’re brave enough
@istewart @BlueMonday1984 Still waiting on a mini F-91W.
@istewart @BlueMonday1984 OK, who at Casio spent too much time reading Heinlein?
Casio and their target market are my kind of nerds. Taking a silly idea and an iconic design and pushing the engineering to its limits entirely for the bit is absolutely unhinged and I’m so happy they did it.
I have an analog Casio G-Shock that’s the perfect beater watch - radio controlled, solar charging, I can discern the hands in the dark without glasses, and almost indestructible. It wasn’t terribly expensive either.
I think Casio is threading the needle quite well with new technology. I’m sure they’re exploring pure smart watches, but the core ideal is “no maintenance”: you don’t have to change the battery or set the time[1]. This naturally leads to tough, energy-conscious engineering, and as they make millions of watches, they have economies of scale.
The newer models have BT low energy so you can use the admittedly fiddly controls with an app. But you don’t need to. It’s just a complement.
[1] obviously this only applies to the more expensive models, and if your local time source supports DST
All 3 of the major Japanese manufacturers (Casio, Seiko, Citizen) have solar-powered radio sync models, but so far Casio is the best in my experience, and has the widest range of models. The Casios tend to have an auto-DST setting that relies on an internal calendar as well as the time signal. I have a chonky Seiko solar-atomic pilot’s watch (with rotary slide-rule bezel!), but it doesn’t have auto-DST so I have to bounce it back and forth between time zones. And it also doesn’t seem to be as adept at receiving the WWVB signal as my Casios; it needs to be next to a window, while the Casios don’t seem to care as long as there’s not too much building mass to the east. I haven’t had a chance to try a Citizen yet, but they now have solar-atomic moon-phase watches, which is tempting.
@istewart @BlueMonday1984, looks to me like it needs the whole watch face to be the display, though it’s possible that I’d find it too small to read unassisted anyway.
I imagine that there won’t be PV-rechargeable models due to size… would be nice to be wrong about that, though.
As a gshock fan, the “democratization” of that initial gshock ring is amazing lol
Thoughts / notes on Nostr? A local on a tech site is pushing it semi-hard, and I just remember it being mentioned in the same breath as Bluesky back in the day. It ticks a lot of techfash boxes - decentralized, “uncensorable”, has Bitcoin’s stupid Lightning protocol built in.
Jack Dorsey seems to like throwing money at it:
Jack Dorsey, the co-founder of Twitter, has endorsed and financially supported the development of Nostr by donating approximately $250,000 worth of Bitcoin to the developers of the project in 2023, as well as a $10 million cash donation to a Nostr development collective in 2025.
nostr neatly covers all of dorsey’s obsessions. it’s literally fash-tech (original dev, fiatjaf, is a right-wing nutjob; and current development is driven by alex gleason of truth dot social fame), deliberately designed to be impossible to moderate (“censorship-resilient”); the place is full of fascists, promptfondlers and crypto dudes.
gleason is also responsible for soapbox, which is a pleroma frontend (or maybe a fork?) used, as far as i know, exclusively by nazis (which also makes defederation easier)
did you also see the FUTO shit? it’s getting Red String On Corkboard bad to track all these fuckers
I tried making one a few years back, maybe time to update it.
https://nfultz.github.io/murderboard/wpc-murderboard.htm
(arrow keys to scroll)
exploding-heads, an openly trumpist lemmy instance; fucked off there when the admin got bored of baiting normal people, make of that what you will
flashback: even back then a handful of regulars objected that nostr was packed with cryptobros and spam, so it’s been like that for 2y minimum
fuck nostr. flere and fullsquare covered the details well tho
Boss at new job just told me we’re going all-in on AI and I need to take a core role in the project
They want to give LLMs access to our wildly insecure mass of SQL servers filled with numeric data
Security a non factor
😂🔫
Sounds like the thing to do is to say yes boss, get Baldur Bjarnason’s book on business risks and talk to legal, then discover some concerns that just need the boss’ sign-off in writing.
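And if they barrel ahead anyway, the absolute floor is enforcing read-only access at the database instead of trusting the model to behave. A minimal sketch with sqlite standing in for the real servers (hypothetical file name, obviously not their actual setup):

```python
import sqlite3

# Open the database read-only at the engine level: the model can
# propose whatever SQL it likes, the database refuses all writes.
conn = sqlite3.connect("file:metrics.db?mode=ro", uri=True)

def run_llm_query(sql: str):
    """Run an LLM-proposed query; any INSERT/UPDATE/DROP raises
    sqlite3.OperationalError because the connection is read-only."""
    return conn.execute(sql).fetchall()
```

A read-only database role does the same job on a proper SQL server, and none of it helps if, as described, the servers themselves are wide open.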
Heartbreaking: I work in the cesspool called the Indian tech industry
They will stonewall me and move forward regardless. I’m going to do what I can, raise a stink and polish my CV
Checked back on the smoldering dumpster fire that is Framework today.
Linux Community Ambassadors Tommi and Fraxinas have jumped ship, sneering the company’s fash turn on the way out.
I just saw this. I sent an email to Framework a few days ago asking if they would delete my account and letting them know this was the reason.
apologies for just linking to my own bsky post but I’m lazy: https://bsky.app/profile/scgriffith.bsky.social/post/3m4qjnkeyls23
tl;dr I’ve gotten a bit suspicious that “AI users will be genocided” posts on reddit are a nazi op
not outside of the fascist playbook to claim that they are the real victims. The example that comes to mind is the myth of white genocide, but also literally any fascist rhetoric is like that.
It’s well trodden ground to say that genAI usage and support for genAI resonates with populist/reactionary/fascist themes in that it inherently devalues and dehumanises, and it promotes anti-intellectualism. If you can be replaced by AI, what worth do you have? And why think if the AI can do it for you?
So, of course this stuff is being echoed in spaces where the majority are ignorant of the nazi tilt. They can’t and don’t understand fascism on a structural level; they can only identify it when it’s trains and gas chambers.
It’s been a while since I used Reddit. Is the thesis that subscribers to ChatGPT will be rounded up and killed? By whom? For what stated reason? It sounds like a weird inversion of victimhood, considering the number of GenAI users (even if they’re just casual users) and the massive money and hype around GenAI from companies and way too many governments.
frankly that’s the most detailed I’ve seen it get. usually it’s more like this

jfc
curious, that art style in that reply strongly matches the image I saw on this toot drifting by in my timeline earlier
I haven’t touched image generators and idk how different their products are, if at all. but I think of this as the default AI “illustrated” style. very low on detail outside of the objects of focus, heavy line work, flat, rounded, muted colors
muted colors
A lot of it looks like it was pissed on.
this is weird. My first thought is that it’s just another vector of normalization for the idea that people who are afraid of and post about genocide or other forms of discriminatory violence are not to be taken seriously. By putting a variety of insane victimhood-appropriating subcultures into the internet milieu, it allows people to ignore what’s happening (and what may be about to happen) in the real world, where groups of people actually are subject to fascistic violence.
Probably one part normalisation, one part AI supporters throwing tantrums when people don’t treat them like the specialiest little geniuses they believe they are. These people have incredibly fragile egos, after all.
That is my thought as well. It’s like the “you call everyone you disagree with a Nazi” argument from the 90s and 00s - discrediting attempts to call out fascist and genocidal ideas creates a lot of cover for those ideas to spread without being appropriately checked. It helps create a situation where serious and respectable people can keep arguing that things aren’t that bad all the way until they get pushed onto a cattle car.
@gerikson @techtakes It’s classic DARVO gaslighting: https://en.wikipedia.org/wiki/DARVO
Mozilla destroys the 20-year-old volunteer community that handled Japanese localization and replaces it with a chatbot. Adding insult to insult, Mozilla then rolls a critical failure on “reading the room.”
Would you be interested to hop on a call with us to talk about this further?
https://support.mozilla.org/en-US/forums/contributors/717446
Oh what the fuck why can Mozilla not just STOP. Just… STOP. Honestly sick of this shit.
Jesus fucking Christ.
I did paid work in Japanese translation once; I stopped because the ungodly amount of work wasn’t worth what they pay you. The tech people really have no idea what they’re breaking by moving fast here.
Sounds exactly like what happened at iNaturalist.
Like a complete fucking idiot, I paid for two years of protonmail right before discovering they are fascists. I would like to move to another provider. I have until August. I have been considering Forward Email. Anyone have thoughts on this provider or recommendations?
I’m very, very happy on Fastmail. They are sensible people who offer mainly email (and calendar stuff) with no overpromises. Their servers are hosted in the USA tho, so that may affect your choice.
Fuuuuuuck. Thanks for the info but i hate it ):
the exasperation I so often express is not for nothing :|
I’m in the same boat. From what I’ve read I am planning on migrating to tuta when it runs out.
I am using posteo.de. They are good, but I dislike that they have no option for using your own domain, which makes switching providers really annoying. If I had to choose a provider again I would probably go with mailbox.org.
I like mailbox.org so far with their servers in Germany
haven’t seen them before, but a short tour around their infra/systems providers isn’t particularly exciting - depending on both your threat model and what-you-want in a vendor
some parts/pages do provide some detail in encouraging depth, but I’d have to do a much more full review to give you a good answer
there’s been a couple of “where email” threads over the last year, tuta’s still one of the top options on that but you can check the threads if you want to see some of the other promising options
Watching another rationalist type on twitter become addicted to meth. You guys weren’t joking.
(no idea who - just going by the subtweets).
NotAwfulTech and AwfulTech converged with some ffmpeg drama on twitter over the past few days starting here and still ongoing. This is about an AI generated security report by Google’s “Big Sleep” (with no corresponding Google authored fix, AI or otherwise). Hackernews discussed it here. Looking at ffmpeg’s security page there have been around 24 bigsleep reports fixed.
ffmpeg pointed out a lot of stuff along the lines of:
- They are volunteers
- They don’t have enough money
- Certain companies that do use ffmpeg and file security reports also have a lot of money
- Certain ffmpeg developers are willing to enter consulting roles for companies in exchange for money
- Their product has no warranty
- Reviewing LLM generated security bugs royally sucks
- They’re really just in this for the video codecs moreso than treating every single Use-After-Free bug as a drop-everything emergency
- Making the first 20 frames of certain Rebel Assault videos slightly more accurate is awesome
- Think it could be more secure? Patches welcome.
- They did fix the security report
- They do take security reports seriously
- You should not run ffmpeg “in production” if you don’t know what you’re doing.
All very reasonable points but with the reactions to their tweets you’d think they had proposed killing puppies or something.
A lot of people seem to forget this part of open source software licenses:
BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW
Or that venerable old C code will have memory safety issues for that matter.
It’s weird that people are freaking out about some UAFs in a C library. This should really be dealt with in enterprise environments via sandboxing / filesystem containers / aslr / control flow integrity / non-executable memory enforcement / only compiling the codecs you need… and oh gee a lot of those improvements could be upstreamed!
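To be concrete: even without a proper sandbox, you can at least stop a decoder bug from wedging the host. A minimal sketch in Python, assuming a Unix box (seccomp filters and containers are the real answer; this is just the cheapest belt-and-suspenders):

```python
import resource
import subprocess

def transcode_untrusted(infile: str, outfile: str) -> None:
    """Run ffmpeg on untrusted input with crude resource caps."""
    def limit():
        # Cap address space at 1 GiB and CPU time at 60 s in the child,
        # so a runaway decode gets killed instead of eating the host.
        resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))
        resource.setrlimit(resource.RLIMIT_CPU, (60, 60))

    subprocess.run(
        ["ffmpeg", "-nostdin", "-i", infile, outfile],
        preexec_fn=limit,  # runs in the child before exec
        timeout=120,       # wall-clock backstop
        check=True,        # raise if ffmpeg exits non-zero
    )
```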
For a moment there I was worried that ffmpeg had turned fash.
Anyway, amazing job ffmpeg, great responses. No notes
The ffmpeg social media maintainer is an Elon fan, so when Musk purchased Twitter and made foolish remarks about rewriting it all in C, and about how only hardcore programmers who write C/assembly are cool, the account quickly jumped on it.
https://xcancel.com/FFmpeg/status/1598655873097912320
Ya maybe it’s a way to attract more contributors or donation money. Felt a bit weird after Elon was shitting on all the people who built Twitter and firing them.
🙃🙃🙃
More wiki drama: Jimbo tries to both-sides the Gaza genocide
Big Yud posts another “banger”[1], and for once the target audience isn’t impressed:
https://www.lesswrong.com/posts/3q8uu2k6AfaLAupvL/the-tale-of-the-top-tier-intellect#comments
I skimmed it. It’s terrible. It’s a long-winded parable about some middling chess player who’s convinced he’s actually good, and a Socratic strawman in the form of a young woman who needles him.
Contains such Austenian gems as this:
If you had measured the speed at which the resulting gossip had propagated across Skewers, Washington – measured it very carefully, and with sufficiently fine instrumentation – it might have been found to travel faster than the speed of light in vacuum.
In the end, both strawmen are killed by AI-controlled mosquito drones, leaving everyone else feeling relieved.
Commenters seem miffed that Yud isn’t cleaning up his act and writing more coherently so as to warn the world of Big Bad AI, but apparently he just can’t help himself.
[1] if by banger you mean a long, tedious turd. 42 minute read!
The dumb strawman protagonist is called “Mr. Humman” and the ASI villain is called “Mr. Assi”. I don’t think any parody writer trying to make fun of rationalist writing could come up with something this bad.
The funniest comment is the one pointing out how Eliezer screws up so many basic facts about chess that even an amateur player can see all the problems. Now, if only the commenter looked around a little further and realized that Eliezer is bullshitting about everything else as well.
Let’s not forget that the socratic strawwoman is named “Socratessa”
I couldn’t even make it through this one, he just kept repeating himself with the most absurd parody strawman he could manage.
This isn’t the only obnoxiously heavy handed “parable” he’s written recently: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius
Even the lesswrongers are kind of questioning the point:
I enjoyed this, but don’t think there are many people left who can be convinced by Ayn-Rand length explanatory dialogues in a science-fiction guise who aren’t already on board with the argument.
A dialogue that references Stanislaw Lem’s Cyberiad, no less. But honestly Lem was a lot more terse and concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).
Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the “who reads this stuff” bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse and it continues in that manner until the end.
Who does he think he’s convincing? Numerous skeptical lesswrong posts have described why general intelligence is not like chess-playing, and world-conquering/optimizing is not like a chess game. Even among his core audience this parable isn’t convincing. But instead he’s stuck on repeating poor analogies (and getting details wrong about the very thing he uses for them; he messed up basic facts about chess playing!).
42 minute read
Maybe if you’re a scrub. 19 minutes baby!!! And that included the minute or so that I thought about copypasting it into a text editor so I could highlight portions to sneer at. Best part of this story is that it is chess themed and takes place in “Skewers”, Washington, vs. “Forks”, Washington, as made famous by Twilight.
Anyway, what a pile of shit. I choose not to read Yud’s stuff most of the time, but I felt that I might do this one. What do you get if you mix smashboards, goofus and gallant strips, that copypasta about needing a high IQ to like rick and morty, and the worst aspects of woody allen? This!
My summary:
Part 1. A chess player, “Mr. Humman”, plays a match against “Mr. Assi” and loses. He has a conversation with a romantic interest, “Socratessa”, or Tessa for short, about whether or not you can say if someone is better than another in chess. Often cited examples of other players are “Mr. Chimzee” and “Mr. Neumann”.
Both “Humman” and “Socratessa” are strawmen. “Socratessa” is described thus:
One of the less polite young ladies of the town, whom some might have called a troll,
Humman, of course, talks down to her, like so:
“Oh, my dear young lady,” Mr. Humman said, quite kindly as was his habit when talking to pretty women potentially inside his self-assessed strike zone
I hate to give credit to Yud here for anything, so here’s what I’ll say: This characterisation of Humman is so douchey that it’s completely transparent that Yud doesn’t want you to like this guy. Yud’s methodology was to have Humman make strawman-level arguments and portray him as kind of a creep. However, I think what actually happened is that Yud has accidentally replicated arguments you might hear from a smash scrub about why they are not a scrub, but are actually a good player, just with a veneer of chess. So I don’t like this character, but not because of Yud’s intent.
Socratessa (Tessa for short) is, as gerikson points out, a Socratic strawman. That’s it. It’s unclear why Yud describes her as either a troll or pretty. She argues that Elo ratings exist and are good enough at predicting whether one player will beat another (the single formula involved is sketched after this summary).
The story should end here, as it has fulfilled its mission as an obvious analog to Yud’s whole thing about whether or not you can measure intelligence or say someone is smarter than another.
Part 2. Humman and Socratessa argue about whether or not you can measure intelligence or say someone is smarter than another.
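(For anyone who hasn’t seen it, Tessa’s Elo argument really is just one formula; a quick Python sketch of the standard expected-score calculation, not anything from the story:)

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score for player A (win = 1, draw = 0.5, loss = 0)."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# A 400-point rating gap predicts roughly a 10:1 score ratio:
print(round(elo_expected(2000, 1600), 2))  # -> 0.91
```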
Yeah, after establishing a deeply tortured chess metaphor and beating it to death and beyond, Yud proceeds to just straight-up bitch about how nobody is taking his book seriously. It just fucking keeps going, even as it dips into the most pathetic and hateful eugenics part of their whole ideology, because of course it does.
“Outsiders aren’t agreeing with me. I must return to the cult and torture my flock with more sermons.” type shit
eugenics
Yes, the bit about John von Neumann sounds like he is stuck in the 1990s (“there must be a gene for everything!”) rather than today (“wow, genomes are vast interconnected systems, individual genes get turned on and off by environmental factors, and interventions often have the reverse of the effect we expect”). Scott Alexander wrote an essay admiring the Hungarian physics geniuses and their tutoring.
and don’t even get me started on splice variants
yud’s scientific model is Aristotelian, i.e. he thinks of things he thinks should be true, then rejects counter-evidence with a bayesian cudgel or claims of academic conspiracy. So yeah, genes are feature flags, why wouldn’t they be (and eugenics is just SRE ig)
Meanwhile he objects to people theorycrafting objections (Tessa’s dialogue about the midwit trap and an article for the Cato Institute called “Is that your true rejection?”). That is an issue in casual conversations, but professionals work through these possibilities in detail and make a case that they can be overcome. Those cases often include past experience completing similar projects as well as theory. A very important part of becoming a professional is learning to spot “that requires a perpetual motion machine,” “that implies P = NP,” or “that requires assuming that the sources we have are a random sample of what once existed” without getting lost in the details; another is becoming part of a community of practitioners who criticize each other.
I hope Yud doesn’t mind if I borrow Mr. Assi for my upcoming epic crossover fic, “Naruto and Batman Stop the Poo-Pocalypse”
Wait a minute, what do you mean, it’s not supposed to be that kind of ass?
If you had measured the speed at which the resulting gossip had propagated across Skewers, Washington – measured it very carefully, and with sufficiently fine instrumentation – it might have been found to travel faster than the speed of light in vacuum.
How do you write like this? How do you pick a normal joking observation and then add more words to make it worse?
How do you write like this?
The first step is not to have an editor. The second step is to marinate for nearly two decades in a cult growth medium that venerates you for not having an editor.
First comment: “the world is bottlenecked by people who just don’t get the simple and obvious fact that we should sort everyone by IQ and decide their future with it”
No, the world is bottlenecked by idiots who treat everything as an optimization problem.
@sinedpick @awful.systems @gerikson @awful.systems
The world is hamstrung by people who only believe there is one kind of intelligence, it can be measured linearly, and it is the sole determinant of human value.
The Venn diagram of these people and closet eugenicists looks like a circle if you squint at it.
One commenter cites von Neumann’s minimax theorem to argue that there is an “inexploitable” strategy for chess, and then another commenter starts talking about the well-ordering theorem.
Of course, all of that is irrelevant because chess is a finite, sequential, perfect-information game: Zermelo proved back in 1913 that every such game is determined, no minimax theorem or well-ordering required.
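(Backward induction over a finite game tree is genuinely that simple; a toy Python sketch of Zermelo-style evaluation, nothing to do with how real chess engines work:)

```python
def value(node, maximizing=True):
    """Backward induction over a finite, perfect-information game tree.
    Leaves hold the outcome for player 1: +1 win, 0 draw, -1 loss."""
    if isinstance(node, int):
        return node
    children = [value(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# Every such game has a determined value; no well-ordering required.
print(value([[1, -1], [0, 1]]))  # -> 0: best play here is a draw
```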
Some juicy extracts:
Soon enough then the appointed day came to pass, that Mr. Assi began playing some of the town’s players, defeating them all without exception. Mr. Assi did sometimes let some of the youngest children take a piece or two, of his, and get very excited about that, but he did not go so far as to let them win. It wasn’t even so much that Mr. Assi had his pride, although he did, but that he also had his honesty; Mr. Assi would have felt bad about deceiving anyone in that way, even a child, almost as if children were people.
Yud: “Woe is me, a child who was lied to!”
Tessa sighed performatively. “It really is a classic midwit trap, Mr. Humman, to be smart enough to spout out words about possible complications, until you’ve counterargued any truth you don’t want to hear. But not smart enough to know how to think through those complications, and see how the unpleasant truth is true anyways, after all the realistic details are taken into account.” […] “Why, of course it’s the same,” said Mr. Humman. “You’d know that for yourself, if you were a top-tier chess-player. The thing you’re not realizing, young lady, is that no matter how many fancy words you use, they won’t be as complicated as real reality, which is infinitely complicated. And therefore, all these things you are saying, which are less than infinitely complicated, must be wrong.”
Your flaw, dear Yud, isn’t that your thoughts cannot out-compete the complexity of reality; it’s that they build a new complexity untethered from the original. To you, retorts to your wild sci-fi speculations are just minor complications brought up by midwits; you very often get the science critically wrong, but expect to still be taken seriously! (One might say you share a lot with Humman, misquoting and misapplying “econ 101”.)
“Look, Mr. Humman. You may not be the best chess-player in the world, but you are above average. [… Blah blah IQ blah blah …] You ought to be smart enough to understand this idea.”
Funnily enough, the very best chess players, like Nakamura or Carlsen, will readily call themselves dumbasses outside of chess.
“Well, by coincidence, that is sort of the topic of the book I’m reading now,” said Tessa. “It’s about Artificial Intelligence – artificial super-intelligence, rather. The authors say that if anyone on Earth builds anything like that, everyone everywhere will die. All at the same time, they obviously mean. And that book is a few years old, now! I’m a little worried about all the things the news is saying, about AI and AI companies, and I think everyone else should be a little worried too.”
Of course this is a meandering plug for his book!
“The authors don’t mean it as a joke, and I don’t think everyone dying is actually funny,” said the woman, allowing just enough emotion into her voice to make it clear that the early death of her and her family and everyone she knew was not a socially acceptable thing to find funny. “Why is it obviously wrong?”
They aren’t laughing at everyone dying, they’re laughing at you. I would be more charitable with you if the religion you cultivate were not so dangerous; most of your anguish is self-inflicted.
“So there’s no sense in which you’re smarter than a squirrel?” she said. “Because by default, any vaguely plausible sequence of words that sounds like it can prove that machine superintelligence can’t possibly be smarter than a human, will prove too much, and will also argue that a human can’t be smarter than a squirrel.”
Importantly, you often portray ASI as being able to manipulate humans into doing any number of random shit, and you have an unhealthy association of intelligence with manipulation. I’m quite certain I couldn’t get a squirrel to do anything I wanted.
"You’re not worried about how an ASI […] beyond what humans have in the way of vision and hearing and spatial visualization of 3D rotating shapes.
Is that… an incel shape-rotator reference?
Yud: “Woe is me, a child who was lied to!”
He really can’t let that one go; it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated gifted child vibes keep leaking into other weird places. (Like Project Lawful, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character’s chemistry and math lectures.)
Of course this is a meandering plug for his book!
Yup, now that he has a book out he’s going to keep referencing back to it, and it’s being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the sequences were free and all online.)
Is that… an incel shape-rotator reference?
I think shape-rotator has generally permeated the rationalist lingo for a certain kind of math aptitude, I wasn’t aware the term had ties to the incel community. (But it wouldn’t surprise me that much.)
Anyone know who the “Tor” of the “Tor’s Cabinet of Curiosities” YouTube channel is, and what’s up with his ideological commitments? Somebody recommended this video on some Wikipedia grifter to me, and I was enjoying it until suddenly (ca. 23:20) he name-drops Scott Alexander as “a writer whom I’m a big fan of”. I thought, should somebody tell him. Then I looked him up, and the guy has an entire video on subtypes of rationalists, so he knows, and chose to present as a fan anyway. Huh. As far as a cursory glance goes, though, the channel doesn’t seem to go to bat for, you know, “human biodiversity”. (I haven’t watched the rat video because I don’t want to ruin my week.)
The rat video starts with him proclaiming that in rationalism he “found his people”, that was the point where I bailed.
Another deep-dive into DHH’s decline has popped up online: DHH and Omarchy: Midlife crisis
What do you mean by decline? Years ago I was involved in a local Ruby community in Poland, and even back then his takes were considered unhinged.