Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
(cross posting here and sneer club)
I regret to inform you, another Anthropic cofounder has written an essay about Claude fondling.
Anthropic cofounder admits he is now “deeply afraid” … “We are dealing with a real and mysterious creature, not a simple and predictable machine … We need the courage to see things as they are.”
There’s so many juicy chunks here.
"I came to this position uneasily. Both by virtue of my background as a journalist and my personality, I’m wired for skepticism…

…You see, I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple…

…And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed. Of course, it does not do this today. But can I rule out the possibility it will want to do this in the future? No."

Despite my jests, I gotta say, this post reeks of desperation. Benchmaxxxing just isn’t hitting like it used to, bubble fears are at an all-time high, and OAI and Google are the ones grabbing headlines with content generation and academic competition wins. The good folks at Anthropic really gotta be huffing their own farts to believe they’re in the race to wi-
“Years passed. The scaling laws delivered on their promise and here we are. And through these years there have been so many times when I’ve called Dario up early in the morning or late at night and said, ‘I am worried that you continue to be right’. Yes, he will say. There’s very little time now.”

LateNightZoomCallsAtAnthropic dot pee en gee
Bonus sneer: speaking of self-aware wolves, Jagoff Clark somehow managed to updoot Doom’s post?? Thinking the frog was unironically endorsing his view that the server farm was going to go rogue??? Will Jack achieve self-awareness in the future? Of course, he does not do this today. But can I rule out the possibility he will do this in the future? Yes.
Apologies in advance for the infohazard
Tucker - Every Tech Billionaire Is Having the Same Haunting Vision. Demonologist Explains Why
Nick Land Responds to Tucker Carlson
WTF. how is he going mainstream.
LAND: I mean, I’m obviously skeptical of the fact that large chunks of Silicon Valley are engaged in occult rituals involving involving a numogram. But I mean, it’s not something I guess I have any authority to to talk about. Well, I mean, I can only say that they they certainly aren’t in contact with me if if that is happening. They’re they’re doing it very, you know, if not privately at least. It’s my involvement is is actually zero in that.
Insert gif of lemurs here.
Yet another billboard.
https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/
This time the website is a remarkably polished satire and I almost liked it… but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I’m being too picky?):
spoiler
I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.
As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.
I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.
Thank you for your time and attention to this critical issue.
but maybe I’m being too picky?
This is something I’ve been thinking about. There’s a lot of dialogue about “purity” and “purity tests” and “reading the room” in the more general political milieu. I think it’s fine to be picky in this context, because how else will your opinion be heard, let alone advocated for?
Like, there’s a time and place for consensus. Consensus often comes from people expressing their opinions and reaching a compromise, and rarely from people coming in already agreeing.
So wrt this particular example, it’s totally fine to be critical and picky. If you were discussing this in the forum where this letter was written, it probably wouldn’t be ok.
The latest in the long line of human-hostile billboards:
https://www.reddit.com/r/bayarea/comments/1o8s3lz/humanity_had_a_good_run_billboard/
This is positioning itself as an AI doomer website; but it could also be an attempt at viral marketing. We’ll see I guess.
“‘Chat and I’ have become ‘really close lately,’” says the senior US Army officer in South Korea.
i don’t know how to sneer this better than Mr. General Taylor has done himself. Why doesn’t he just commission ChatGPT as a colonel like the military did earlier for Joe Lonsdale and those other chucklefucks? Give ChatGPT’s hallucinations the force of the UCMJ, i beg you.
A hackernews sells an AI powered toy to let kids “talk to Santa.” $100 for 60 min, then $1 every additional minute.
Available at walmart dot com
“remember 1-900 numbers? They’re back! In AI form!”
Also I browsed other items on the site the phone came from and holy shit I have never seen a more cursed collection of products draped in Christmas shit.
I would like to congratulate the SV-brained OP for prompting sneers from his fellow orange site members, whether that be a fellow hackernews calling it “everything wrong with the current flavor of Ai in a single post/product”, or someone openly tearing this shit apart:
You turned talking to Santa into a subscription service.
You are part of the problem. You are part of the thing everyone hates about technology in 2025.
This is a bad product.
I also love that Walmart’s desperate rush to catch up in e-commerce has resulted in their website becoming just as much of a slop farm as Amazon. My friend who works there said this just piles even more on an already overstressed staff, as people come into the store not understanding that the website lists a whole lot of stuff that isn’t and never will be sold in the store. I doubt many people will come in asking for this piece of junk, tho.
“Mommy, who cut off Santa’s head?”
Interesting developments reported by ars technica: Inside the web infrastructure revolt over Google’s AI Overviews
I don’t think any of this is actually good news for the people suffering the effects of AI scraping and bullshit generation, but I do think it’s a good idea that someone with sufficient clout is standing up to Google et al. and suggesting that they can’t just scrape all the things, all the time, and then screw the source of all their training data.
I’m somewhat unhappy that it is cloudflare doing this, a company who have deeply shitty politics and an unpleasantly strong grasp on the internet already. I very much do not want the internet to be divided into cloudflare customers, and the slop bucket.
Hank Green has been one of my barometers for the moderate opinion and he’s sounding worryingly like Zitron in his last video: https://www.youtube.com/watch?v=Q0TpWitfxPk
The attention black hole around Nvidia and AI is so insane. I guess it’s because everyone knows there’s no next thing to jump onto.
2 items
Here’s a lobster being sad that a poor uwu smol bean AI shill is getting attacked
Would you take a kinder tone to the author’s lack of skill/knowledge if it weren’t about AI? It would be ironic if hatred of AI caused us to lose our humanity.
here’s political mommy blog Wonkette having fun explaining the hallucinatory insanity that is Google AI summaries
I saw Algernon’s fedi posts (as linked in his lobsters comment) first, and I have to say the majority in the lobsters thread are being entirely too kind.
Calling OP shit-for-brains is an insult to both shit and brains.
Pretty sure post author knew all about this scam, and just pretended to fall for it to reveal how GenAI had “saved him”.
It’s just so tiring. Now a rather prominent KDE dev is also valiantly defending the fashtech flagship projects from being accurately described as fascist.
It’s only real fascism if the dev runs around the streets of Rome carrying a bundle of sticks, come on.
I scrolled down two toots and found this
dunno this person at all but that’s a pretty telling start
It’s a shame, really, because his video on Lunduke wasn’t too bad, but as it turns out he is the world’s most laughable centrist.

Fosstodon does not have a good history in this regard. It took some effort to get them to drop a far-right mod earlier this year, and even with a shuffle of leadership they’re clearly all about the centrist acceptance of the right wing and the repeated assertion that tech isn’t political and that all this is just so much drama.
yep. there’s a couple communities out there that mistake “growth” for “good”, and I get the sense that’s one of the problems fosstodon has. not the only one, mind - other stuff also appears to be impactful. I don’t have enough spoons to make a good thorough analysis of it tho :|
hyprland is not fascist. Ladybird being fascist is IMO very debatable. But on the whole, the idea of having your OS itself tell you not to use software for political reason is not going to work in our favor and, if done in an hyperbolic way like this, it will make the left look really dumb
this charlie kirk saga is teaching me that the left-wing community is roughly just as bad as the right-wing one when it comes to fact-checking and not providing convenient but uncertain possibilities as correct
it’s really weird how this asshole needs us to believe fascists aren’t fascist and keeps going out of his way to talk about how dumb the left is, isn’t it.
for anyone wondering what the rest of the toxicity in the Wayland ecosystem that isn’t hyprland looks like, it’s dickheads like this controlling every conversation and technical decision. if you dig into the accounts this dickhead interacts with, you’ll see plenty of KDE and GNOME profiles clapping along to this shit. Wayland isn’t the way it is by accident.
I see that wedging Copilot into Excel is going just swimmingly:
Honey, I invested $100 billion in AI, and all I got was 1 + 2 + 3 = 15
The $10K Existential Hope Meme Prize: A celebration of creativity, optimism, and big-picture thinking.
And none other than friend-of-the-pod Steven Pinker is a judge!
(Via)
Nothing screams “celebration of creativity” like a nice heaping tablespoon of AI slop images.