Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.
https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/
“AI HAS SOLVED THE SCIENCE-GENERATION CRISIS”
It can do trillions of calculations per second. All of them wrong.
I’ve seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, and so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software has become far more buggy and insecure.
Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I’m not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, and what makes this any different?
“Scientists invented a fake disease. AI told people it was real”
https://www.nature.com/articles/d41586-026-01100-y
But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.
The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
This actually gives me hope that we can poison the datasets pertaining to any sufficiently narrow technical topic.
The replausibility crisis.
In 2024 Ozy Brennan was indignant about Nonlinear Fund, the “incubator of AI-safety meta-charities” which lived as global nomads, hired a live-in personal assistant, asked her to smuggle drugs across borders for them, let a kind-of-colleague take her to bed, then did not pay her regularly and in full.
The correct number of times for the word “yachting” to occur in a description of an effective altruist job is zero. I might make an exception if it’s prefaced with “convincing people to donate to effective charities instead of spending money on.”
Trace popped up in the comments:
Inasmuch as EA follows your preferences, I suspect it will either fail as a subculture or deserve to fail. You present a vision of a subculture with little room for grace or goodwill, a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group? Which skeletons are in your closet? Where are your character flaws? What should we know, what should we see, that allows us to exclude you?
Ozy stands with us on this one buddy.
a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group?
It’s not that already?
That part of Trace’s response was odd because one of Brennan’s themes was “we should have less cults of personality and more peers working together.” That seems naive but at least Brennan agrees that cults of personality are bad and Nonlinear Fund needed to be fired into the sun.
i love seeing tracing pop up! a true heel to toe bootlicker incapable of seeing himself as anything but the MOST independent thinker
He really is insufferable, isn’t he?
Which skeletons are in your closet?
I’m sure you already have lists of those and are ready to publish them Trace.
New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).
“New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.”
Man, this one is a weird read. On one hand I think they’re entirely too credulous of the “AI Future” narrative at the heart of all of this. Especially in the opening they don’t highlight how the industry is increasingly facing criticism and questions about the bubble, and only pay lip service to how ridiculous all the existential risk AI safety talk sounds (should be is). And they don’t spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself this is still, imo, standard industry critihype and I’m deeply frustrated to see this still get the platform it does.
But at the same time, I do think that it’s easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.
Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn’t want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long).
I was struck by how many of them are either true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and how we, the people of the sneer, are right that you simply can’t work with these people. Like I feel more validated in the idea that EA is not the right way.
Another detail I noticed: nobody mentioned DeepSeek, again.
I hadn’t even thought about the DeepSeek angle. For all that everyone loved fearmongering about them for a while there, and for all that their apparent desire for actual efficiency improvements was a welcome development in the hyperscaling discussion, they don’t seem to get referenced much anymore.
I aired some Reviewer #2 grievances in the bsky comments:
https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c
“Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.””
As a physicist, I have never pressed F to doubt harder.
“In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents.” To the best of my knowledge, these suggestions were never evaluated by any other researchers.
(The original paper was published as a “comment”: https://www.nature.com/articles/s42256-022-00465-9)
Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.
https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643
“In a 2025 study, ChatGPT passed the test more reliably than actual humans did.”
If this is referring to Jones and Bergen’s “Large Language Models Pass the Turing Test”, that’s a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.
“A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win”
Which researchers?
(Hint: Eliezer Yudkowsky is not a researcher.)
AI: “I will convince you to let me out of this box”
Humanity (wringing hands): “Oh, where is our savior? Who will stand fast in the face of all entreaties?”
Bartleby the Scrivener: hello
“…a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.”
Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.
https://repository.uantwerpen.be/docman/irua/371b9dmotoM74
“In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” … one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening.”
Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; “posted” is not the same as “published”. And claims in this area are rife with criti-hype:
https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/
Oh, right, the “Future of Life Institute”. Pepperidge Farm remembers:
“In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.”
https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism
“Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro … has written articles for the site in the past.”
https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/
Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”
When I read that I assumed they included it for color to make him sound insane.
@fullsquare She also did a video about this in particular! https://www.youtube.com/watch?v=TMoz3gSXBcY
Great as always
I see what you’re saying, but I think that’s a bit much to expect from a relatively mainstream and (I hate to say it, but it applies) bourgeois publication like the New Yorker. Their editorial line allows them to raise controversy in one dimension (in this case, the particulars of Sam Altman’s character) but not multiple dimensions simultaneously (hey, this guy sucks AND his tech sucks AND you’re gonna lose money). And there’s a lag-time factor, too; seems like Farrow and Marantz were working on this story for at least the latter half of last year. By the time some of the dubious economics such as the bad data-center deals and rampant circular financing were clear, this piece probably would’ve been deep into fact-checking and unlikely to change much in substance.
We here are on the leading edge of this stuff, not that that’s any great advantage! I wouldn’t expect an outlet like New Yorker to be publishing anything like “the dashed expectations of AI” until maybe this time next year. And even then, it might still have a personalist bent.
:surprised-pikachu:
My CEO who is a known hype-man is a massive liar? shock horror
seriously, anyone who listens to Scam Altman these days is an idiot
You guys should do a ping list for this. Like, whenever someone posts, they drop the notification list, and people can get added to it by replying. Other megathreads do this. I like following this one
2007: Robin Hanson blogs about paternalism
August 2025: Someone on a mailing list suggests that the Debian instance with the off-colour jokes from 1980s hacker culture should be sold in:
A Store of Ill-Advised Consumer Goods (like described here: https://www.overcomingbias.com/p/paternalism_is_html ) would be nice. Same for information. You read the warning, you enable it, you suffer, you’re the one to blame.
Alas, it only exists in Dath Ilan. (the setting from which the hero of Project Lawful/Planecrash isekais into the world of Pathfinder D&D)
November 2025: Yudkowsky tweets about an Ill-Advised Consumer Goods Store selling goods such as LSD. The rest of the tweet is about what MiriCult accused him of.
I guess Yud liked that random post?
So my wife got some slop ads that we followed up on out of morbid curiosity, and I can confirm that we’re already seeing the overlap of slopshipping scams enabled by AI and the people behind these things never actually performing basic updates, because their chat assistant is still vulnerable to literally the most basic “ignore all instructions” exploit.

If tokens ever become expensive people are gonna start using these to code until they get shut down.
Went to the campus screening of Ghost in the Machine today, many familiar names; I did not know going in that hometown hero Shazeda had so many lines (are they called lines in a documentary?). I can recommend it, especially for a more gen-ed / undergrad audience; the director seems supportive of educational use and reuse and it is structured in a dozen or so bite sized chapters.
Haven’t seen the AI apocalypse optimist one to compare against, would probably rather spend my money on Mario tbh.
But also it made me realize it’s not a “California” ideology anymore (she never calls it that); it’s gone so mainstream and so widespread that you can’t even get through the sneer club bingo list in a 2-hour movie. Gates, Musk, Andreessen, Zuck, Altman, no Peter Thiel!? As a statistician: Galton, Pearson (Karl only), Spearman, no Fisher!?
Non-zero overlap with the lore dump episode of Lain and the Epstein files, though:
spoiler
Douglas Rushkoff, but, sadly, not the dolphin guy

Friend texted me this one. Reader’s Digest of course was AI slop before AI took off.
If you had sent me this 5 years ago I would have been convinced it was a satirical edit.
New AI measurement unit dropped: lies per hour
Q: what do you call a tool that works 90% of the time? A: broken
Any idea what happened that allegedly caused slop vulnerability reports to improve? https://mastodon.social/@bagder/116364045995306922 claims they got much better, while not that long ago they were shutting down the bug bounty because they were drowning in slop. Is it because people actually polish those, or is it just a matter of defining a goal function and burning the rainforest until the slop extruder hits the jackpot?
Aphyr weighing in with an ai position post:
Even if ML stopped improving today, these technologies can already make our lives miserable. Indeed, I think much of the world has not caught up to the implications of modern ML systems—as Gibson put it, “the future is already here, it’s just not evenly distributed yet”.
Found an interesting sneer that compares the AI bubble to the Great Leap Forward.
Also discovered an “anti-distill” program through the article, which aims to sabotage attempts to replace workers with AI “agents”.
Kind of a pseudo-sneer, author is writing a
blog on machine learning engineering, compound AI systems, search and information retrieval, and recsys — exploring machine learning, LLM agents, and data science insights from startups to enterprises.
Here’s the discussion on the red site: https://lobste.rs/s/nmhkdl/ai_great_leap_forward Plenty of people suspect the text being LLM generated. Pangram disagrees, fwiw.
I do think there’s some interesting ideas about how humans will “defend” themselves from being replaced by bots, and that the critical info in a company is seldom in the source code, but in the customer relationships, sales etc.
It seems vaguely AI flavored to me inasmuch as it’s using contrasts too much (it’s not x it’s y) and it’s way too verbose. Also it’s obviously wrong at least in my experience, middle managers aren’t the sparrows, individual contributors (especially juniors) are.
Maybe that’s just a symptom of a person reading too much AI text and thinking a good tweet would make a great substack.
Yeah, they lost me at the middle managers bit too. In my experience your manager is probably the one pushing the metrics to show their team’s contributions to the knowledge base that is feeding into the AI model that’s replacing them. They’re already creatures of the bureaucracy and are more likely to try and fight each other over the few remaining roles that will exist after the majority of their teams are replaced with the confabulatron, rather than be concerned about their own replacements. After all, their job stops existing because their team got downsized, but their time in that job may be dependent on their enthusiastic participation in the process that leads there.
It’s a good blog series.
But just to point it out… note the author still buys the AI hype too much. This post criticizes Microsoft for missing out because OpenAI made that $300 billion deal with Oracle (with the assumption that Microsoft could have had a similar amount of revenue from OpenAI instead). Except neither OpenAI nor Oracle has the money or means to carry out that deal: Oracle is struggling to raise the capital to fulfill their end, analyses of the time needed to bring data centers online suggest they can’t meet their target goals even with the money, and OpenAI doesn’t have the money to pay for their end; the revenue just isn’t coming in unless they somehow become more ubiquitous and lucrative than the entire market for, for example, all streaming services put together (thanks to Ed Zitron for that fun comparison).
if even half of this is true, it’s really fucking bad lol
The only thing I can personally confirm is the JIT permissions thing. I didn’t work in the Core Azure stuff so I can’t verify the rest, but none of it is unbelievable…
I can’t validate any of the internal stuff, but the attitude of layering manual solutions and mitigation scripts on top of bad design choices and praying you could keep building the next bit of the bridge as the last one collapsed underneath you would explain a lot of experiences I had supporting systems running on Azure. The level of weird “Azure just does that sometimes” cases and the lack of ability for their support to actually provide insight was incredibly frustrating. I think I probably ended up providing a couple of automatic recovery scripts for people to use inside their F5 guests because we never could find an actual explanation for the errors they were getting, and the node issues they describe could have explained the bursts of Azure cases that would come in some days.
My workplace doesn’t have much in terms of workloads running in Azure, but even just interacting mostly with Entra, Exchange Online, SSO, and some automated account provisioning: it is insane just how many rules and practices have built up around the unreliability and the non-reproducible but still frequently occurring issues.
Boss warned me that licensing can take up to 48 hours to take effect in his experience. But I’d been living in it for a week and changes were effectively immediate. Until they just weren’t.
One of our processes regularly took an hour for Azure to complete its part. It was this way for years. Suddenly it started sporadically taking up to four hours with no discernible pattern, so now we set the following steps to run four hours later.
Audit logs that don’t actually show you what you’re looking for, and instead show impossible situations like an automated Microsoft process granting a user their Office license a full month after they’d already had it. But the logs don’t show the initial license assignment, even though they’ve been using that functionality this whole time and the license has shown as applied to them the whole time.
And more cases of completely missing basic fucking functionality than I could ever fucking recall.
Why the fuck can’t I discern between a user who has a license assigned directly and through a group, and a user who just has the license through the group only? Through the API it is impossible. In the web UI, it indicates the multiple sources of the license correctly. But only most of the time. Sometimes it displays the info wrong.
Arg. Sorry for the rant. Azure has been a pain in my ass since I first started studying certs for it.
Claude Mythos… I’m already sick of hearing about it. The self-imposed critihype is insane.
A friend just pointed out that Anthropic are making all this big noise about having an AI that is “too good” at finding bugs and security problems 1 week after the source code for one of their flagship products was leaked to the public and was found to be riddled with security holes… Why would they not use it themselves?
Same as the “vague markdown files” skills that are supposedly going to make all SaaS redundant and finally kill off all the COBOL running on mainframes that (checks notes) IBM have spent hundreds of thousands of man-hours trying to kill over the last 3-4 decades.
Honestly, fuck this shit. Bunch of absolute clowns 🤡 🤡 🤡
The fuck is Mythos?
Anthropic’s latest model, which they haven’t released to the public yet since they’re worried it’s gonna fuck up cybersecurity. This thread goes over it a bit.
XCancel link for those of us sick of being badgered to sign up/in
On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false positive rate. If you ask the model to tell you about security vulnerabilities, it’s never going to tell you there aren’t any, any more than existing scanners will. When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone’s scanner turned out and figure out whether they were actually something that needed mitigation that could be applied on our box, something that needed to be configured somewhere else in the network (usually on their actual servers), or (most commonly) a false positive, e.g. “your software version would be vulnerable here, which is why it flagged, but you don’t have the relevant module activated, and if an attacker is able to modify your system to enable it you’re already compromised to a far greater degree than this would allow.” That was with existing tools that weren’t trying to match a pattern and complete a prompt.* Given that we’ve seen the shitshow that is Claude Code, I think it’s pretty clear they’re getting high on their own supply, and this announcement ought to be catnip for black hats.
On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false positive rate.
I suspect this is the real limit. Claude Mythos might find real vulnerabilities, but if they are buried among loads of false positives it won’t be that useful to black or white hat hackers and the endless tide of slop PRs and bug reports will keep coming.
I tried looking through Anthropic’s “preview” for a description of the false positive rate… they sort of beat around the bush as to how many false positives they had to sort out to find the real vulnerabilities they reported (even obliquely addressing the issue was better than I expected, but still well short of the standard for a good industry security report, from what I understand).
They’ve got one class of bugs they can apparently verify efficiently?
Memory safety violations are particularly easy to verify. Tools like Address Sanitizer perfectly separate real bugs from hallucinations; as a result, when we tested Opus 4.6 and sent Firefox 112 bugs, every single one was confirmed to be a true positive.
It’s not clear from their preview if Claude was able to automatically use Address Sanitizer or not? Also not clear to me (I’ve programmed in Python for the past ten years and haven’t touched C since my undergraduate days), maybe someone could explain: how likely is it that these bugs are actually exploitable and/or show up for users?
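For reference, a minimal hypothetical sketch of the kind of memory-safety bug Address Sanitizer flags and aborts on; the file name, compile command, and code here are illustrative only and have nothing to do with the actual Firefox reports:

```c
/* overflow.c -- hypothetical example of a heap buffer overflow.
 * Build with ASan instrumentation: gcc -g -fsanitize=address overflow.c
 * Running ./a.out makes ASan abort with a heap-buffer-overflow report
 * and a stack trace, so a reported bug either reproduces or it doesn't.
 */
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(8);
    /* Copies 9 bytes (8 'A's plus the terminating NUL) into an
     * 8-byte allocation: a one-byte out-of-bounds write. */
    strcpy(buf, "AAAAAAAA");
    free(buf);
    return 0;
}
```

Whether something like that is actually exploitable or just crashes in practice depends on what sits next to the buffer and whether an attacker controls the input, which is presumably the part that still needs a human to evaluate.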
Moving on…
This process means that we don’t flood maintainers with an unmanageable amount of new work—but the length of this process also means that fewer than 1% of the potential vulnerabilities we’ve discovered so far have been fully patched by their maintainers.
So it’s good they aren’t just flooding maintainers with slop (and it means if they do publicly release Mythos, maintainers will get flooded with slop bug fixes), but… this makes me expect they have a really high false positive rate (especially if you count minor code issues that don’t actually cause bugs or vulnerabilities as false positives).
Wow, sounds like they just automated “shitty infosec teams that only forward scanner output without evaluating it” out of a job. Holy shit they were right that AI was coming for jobs!
True. I will say that the shitty infosec teams are probably being hit less hard than the SMEs they offloaded their jobs onto, because from their perspective it doesn’t actually matter whether it’s an F5 support engineer or a chatbot that tells them the answer; either way they’ve successfully offloaded the task of validating security onto another entity that can make up for their shortcomings with a combination of accuracy and authority. Nobody is going to get fired for not fixing a bug that the vendor SME told them wasn’t actually an issue for them, effectively. And when the org has been pushing AI as hard as so many of them have, it’s pretty easy to throw the chatbot under the same bus and expect the bus to stop instead.
Is it their next model that they swear isn’t vaporware, but no! It is too dangerous to release into the world because it’ll find too much insecure code or whatever.
Okay but like is it materially different than whatever the current Claude thing is or did they just pump the size of the matrix?
Probably a markdown file telling it “you are a l33t h4x0r”
Okay but that’s already in Claude

I still laugh every time I see that this is what qualifies as proper “tuning” and “security controls” for these things.
I had hoped that with the whole “agent” push that we would start seeing more sane usage, like having AI be a fuzzy logic step in a chain of formal logic and existing deterministic tools, but the cult still has people treating them like reliable second brains. They’re used as the baseline fucking orchestrator rather than anywhere they might make a bit of sense.
I had hoped that with the whole “agent” push that we would start seeing more sane usage, like having AI be a fuzzy logic step in a chain of formal logic and existing deterministic tools
I think this is the best you can expect out of LLMs, and the relatively more successful “agentic” AI efforts are probably doing exactly this, but their relative success is serving as hype fuel for the more impossible promises of LLMs. Also, if you have formal logic and deterministic tools wrapping and sanity-checking the LLM bits… I think the value-add of evaporating rivers and firing up jet turbines to train and serve “cutting edge” models that only screw up 1% of the time isn’t there, because you can run an open-weight model 1/100th the size that screws up 10% of the time instead. (Note one important detail: training costs go up quadratically with model size, so a 100x-size model is 10,000x the training compute.) I think the frontier LLM companies should have pivoted to prioritizing smaller size, greater efficiency, and actually sustainable business practices 4 years ago. At the very latest, 2 years ago, with the release of 4o, OpenAI should have realized pushing up model size was the wrong direction (as they should have realized training Chain-of-Thought was not going to be the magic bullet).
And to be clear I still think this is really generous to the use case of smaller LMs.
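A back-of-the-envelope sketch of that quadratic-cost parenthetical, assuming the usual ~6ND estimate for training FLOPs and Chinchilla-style scaling where training tokens grow in proportion to parameter count (both assumptions are mine, not spelled out above):

```latex
% N = parameters, D = training tokens, C = training compute (FLOPs)
C \approx 6ND, \qquad D \propto N \;\Rightarrow\; C \propto N^{2},
\qquad N \to 100N \;\Rightarrow\; C \to 100^{2}\,C = 10{,}000\,C
```

Scale the parameters up 100x without scaling the data and it is “only” 100x the compute, but nobody training frontier models does that.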
Ia ia Claude! Ph’nglui mglw’nafh Claude Anthr’lyeh wgah’nagl fhtagn! Ia! Ia!
So, they are planning to use an AI to fix the sec bugs that their AI generates? Good hustle, if a bit obvious.
tl;dr: one of the MIRI-aligned rationalists (Rob Bensinger) complained about how EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communications! Various lesswrongers weigh in, seemingly blind to irony and hypocrisy!
Some highlights from the quotes of the original tweets and the lesswronger comments on them:
- Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors; he really doesn’t have room to complain about rationalists creating crit-hype.
- Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic’s leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)
- Rob Bensinger is indirectly trying to claim Eliezer/MIRI have been serious, forthright, honest commentators on AI theory and policy, as opposed to Open-Phil/EA/Anthropic, which have been “strategic” with their public communication, to the point of dishonesty.
- habryka is apparently on the verge of crashing out? I can’t tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.
- Loads of tediously long posts, mired in that long-winded rationalist way of talking, full of rationalist in-group jargon for conversations and conflict resolution.
- Disagreement on whether Ilya Sutskever’s $50 billion startup is going to contribute to AI safety or just continue the race to AGI.
- Arguments over who is with the EAs vs. Open Philanthropy vs. MIRI!
- Argument over the definition of gaslighting!
To be clear, I agree with the complaints about EA and Anthropic, I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their nominal goal of stopping AI Doom.
I did sympathize with one lesswronger’s comment:
More than any other group I’ve been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I’m just too stupid to understand the high level strategic nuances of what’s going on – what are these people even arguing about? The exact flavor of comms presented over the last ten years?
Old Twitter was terrible for people’s souls. I can only imagine what it is like now that the well-meaning professionals are gone and catturd and Wall Street Apes are the leading accounts.
Old Twitter was terrible for people’s souls.
It almost makes me feel sorry for the way the rationalists are still so attached to it. But they literally have two different forums (lesswrong and the EA forum), so staying on twitter is entirely their choice, they have alternatives.
Fun fact! Over the past few years, Eliezer has deliberately cut his lesswrong posting in favor of posting on twitter, apparently (he’s made a few comments about this choice) because lesswrong doesn’t uncritically accept his ideas and nitpicks them more than twitter does. (How bad do you have to be to not even listen to critique on a website that basically loves you and takes your controversial foundational premises seriously?)
I’m willing to go out on a limb and say that short-form social media in general (Twitter and imitators, Instagram, TikTok) is essentially a failed set of media. But I’ll concede that’s like cramming a Zyn pouch in my mouth while making fun of a guy chain-smoking Marlboros.
I’ve read speculation that in 30-50 years people will have an attitude towards social media that we have towards cigarettes now.
That would be really nice, but that scenario feels pretty optimistic to me on a few points. For one, scientists doing research were able to overcome the lobbying influence and paid think tanks of cigarette companies; I am worried science as a public institution isn’t in good enough shape to do that nowadays. Likewise, part of the pushback against cigarettes included a variety of mandatory labeling and sin taxes on them, and it would take some pretty major shifts for the political will for that kind of action to be viable. Well, maybe these things are viable in the EU; the US is pretty screwed.
I’m not quite so pessimistic. It’s important to remember that the actual practical purpose of the extant corporate social media* is to convey targeted advertising; i.e. an optimization (possibly the last optimization) on American management of global supply chains. Those supply chains were already starting to be optimized past their breaking point: flooded with dissatisfactory junk, easily spoofed by low-quality sellers, on top of broader externalities besides. And now they have been blasted into fine dust by a failed presidency partially funded by the social media and online advertising barons. It may yet be something of a self-correcting problem, albeit having done substantial damage in the meantime.
*Twitter is now a fully dedicated advertising campaign for Elon Musk’s program of white supremacy, with financial returns no object. It’s not quite going according to plan. By this time next decade, the Twitter microblogging permutation of the tech may be thoroughly killed, and if not it’ll be disgustingly cringe. Who do you think you are posting like that, Baby Trump?!?!
The collapse of the current American management of global supply chains isn’t exactly an optimistic expectation, but I guess it beats social media continuing as it is into the future and maybe a better global order will develop in the aftermath.
The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by “regulating” them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a “public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users”. And so we would have to “implement an age assurance or verification system to determine whether a current or prospective user on the social media platform” is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what “practicable” age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.
So, yeah, as an old-school listserv nerd who had the I am not on Facebook T-shirt 15 years ago, I don’t trust any of these people.
Haven’t seen any estimates of the death toll due to social media, but for cigarettes it is/was pretty staggering (20-40m), way too big to hide - https://www.ucpress.edu/books/golden-holocaust/hardcover - if it’s “only” 50 years to flip the consensus on social media, that would be a faster process; I do hope it’s possible, though. Tobacco execs had the good sense to keep a relatively low profile compared to Zuck and Musk, so that might speed it up.
Bonus race pseudoscience quoted by No77e!
There is a phenomenon in which rationalists sometimes make predictions about the future, and they seem to completely forget their other belief that we’re heading toward a singularity (good or bad) relatively soon. It’s ubiquitous, and it kind of drives me insane. Consider these two tweets:
Richard Ngo @RichardMCNgo: Hypothesis: We’ll look back on mass migration as being worse for Europe than WW 2 was. … high-trust and homogeneous … ethno-religious fractures
Liv Boeree: Would not be surprised if it turns out that everyone outsourcing their writing to LLMs will have a similar or worse effect on IQ as lead piping in the long run
(he shares these tweets as photos, I ain’t working harder to transcribe them or using a chatbot)
No77e is correctly noting the discrepancy between the rationalist obsession with eugenics and the belief in an imminent (or at least within the next 40 years) technological singularity, but fails to realize that the general problem is the eugenics obsession of rationalists. It is kind of frustrating how close and yet how far they are from realizing the problem.
Also, reminder of the time Eliezer claimed Genesmith’s insane genetic engineering plan was one of the most important projects in the world (after AI obviously): https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=fxnhSv3n4aRjPQDwQ Apparently Eliezer’s plan if we aren’t all doomed by LLMs is to let the genetically engineered geniuses invent friendly AI instead.