Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
NVidia’s announced an AI filter for PC gaming, calling it “AI-Powered Breakthrough In Visual Fidelity For Games” and hyping the ever-loving shit out of it.
The results are, unsurprisingly, complete garbage, and it’s already getting ripped apart by the gaming press.
Maybe it’s just me, but even the enhanced lighting aspect doesn’t look especially good, at least where faces are concerned; shining a hard light sideways so every facial nook and cranny gets highlighted in excruciating detail looks less natural and more like the old Android HDR photo filter, even before you realize it’s giving some characters Instagram makeovers.
I’m partway through the Gamers Nexus video clowning on the whole thing, and I kind of feel like I need to find the recording of the actual GDC presentation to pick it apart.
There’s a clip they use around the 17 minute mark where Jensen talks about how they combined structured data and generative AI. It’s just so wrong on so many levels that I feel like it deserves its own dedicated fucking sneer post. It’s some of the slimiest marketing word play leading into just blatantly false claims.
The fact that the slide with Palantir’s logo flanked by hearts didn’t result in even audible boos makes me very sad.
I mean, some of their before/after images are much more impressive than the RE one, but the general look is less like a revolution in capacity and more like someone took some time to find the right Instagram filter.
Also after taking a look at Starfield’s steam page for comparison I’m pretty sure that all the “before” images were taken on lower settings for existing texture quality and lighting. Like, even in areas where the DLSS gives an improvement the original game doesn’t look as bad as presented here.
Also the discourse has been ongoing since at least Skyrim’s original release whether or not the increasing fidelity of game graphics was actually making games better, or just more expensive to make and play. And that was before transformer models entered the picture and started cooking the world. I’m glad nVidia got some new jerk-off material, but even if it works exactly as advertised that’s all it is at this point.
increasing fidelity of game graphics was actually making games better, or just more expensive
I really liked what Control did with cranking up the verisimilitude and the photorealism, namely to accentuate the uncanniness and really up the new weird vibe.
I’m struck by how much contrast gets blasted into the shadows of every scene, reminiscent of the average RTX “remaster.” Lighting is treated not as a tool for composing scenes and guiding attention, but as a dial to be turned toward “more gooder” wherever possible. Just make everything look like everything else; that’s how you know the technology is getting Better.
WTF is this garbage in the Graurdain? “Let’s assume!” is a terrible premise for even an opinion column to begin with, but “let’s assume Musk is right and AI could allow us all to not work” is… bananas for the Guardian to publish. Even before considering that the author’s bio says he owns a technology and financial management services company.
The article’s entire premise is Musk saying some random shit. Remember how Musk said that he would land a man on Mars in 10 years, 13 years ago? Honestly, I am incensed that people like Musk and Trump can just say shit and many people will just accept it. I can no longer tolerate it.
Putting aside the very real human ability to screw up such a concept and turn any fair system into an unfair one, …
He says this after mentioning UBI. He really doesn’t want to confront the unfortunate fact that UBI is entirely a political issue. Whatever magical beliefs one may have about how AI can create wealth, the question of how to distribute it is a social arrangement. What exactly stops the wealthy from consolidating all that wealth for themselves? The goodness of their hearts? Or is it political pushback (and violence in the bad old days), as demonstrated in every single example we have in history?
I’d say the problem is even worse now. In previous eras, some wealthy people funded libraries and parks. Nowadays we see them donate to weirdo rationalist nonsense that is completely disconnected from reality.
No getting up early and commuting on public transit. …
This is followed by four whole paragraphs about how the office sucks and wouldn’t it be wonderful if AI got rid of all that. Guess what, we have remote work already! Remember how, during COVID, many software engineering jobs went fully remote, and it turned out that the work was perfectly doable and the workers’ lives improved? But then there were so many puff pieces by managers (like the author) about the wonderful environment of the office, and back to the office they went. Don’t worry, when the magical AI is here, they’ll change their minds.
Yes, there are “mindless, stupid, inane things” like chores that are unavoidable. There are also other mindless, stupid, inane things that are entirely avoidable but exist anyway because some people base their entire lives around number go up.
The article’s entire premise is Musk saying some random shit. Remember how Musk said that he would land a man on Mars in 10 years, 13 years ago?
This isn’t the only thing; the man has made so many promises that were lies or just didn’t work out that it’s almost amazing people give him the benefit of the doubt. But people have to, or the economy might crash (which seems more and more inevitable now, since a fantasist-driven crash can’t be avoided forever; and it’s worse than that: if you’ve seen Andreessen’s latest weird interview, it’s clear Trump and Musk aren’t the only mental voids with a lot of money, so they might all be like that).
The Blindsight vampires are here already.
@Soyweiser The irony is that if Musk was serious about landing a man on Mars by 2022, he had Falcon Heavy flying in 2017 and Crew Dragon flying with crew in 2020. The amount he’s spent on Starship would have covered several fully-expended FH launches to Mars transfer orbit and development of a long duration crew module. We know how to soft-land ~1-2 tons on Mars.
… What, you wanted him to bring the astronauts *back* afterwards? Are you some kind of Commie?
(But my point stands.)
This was my thought the whole time: if the political will existed, we could probably already do everything that AI is supposed to “enable” here. Some of the work people would choose not to do would end up being actually important, and the market in its infinite power would need to find a way to get that work done, whether that’s paying more to invent new types of automation or compensating people enough that they choose to do it without the threat of starvation and homelessness. (Or finding new ways to exploit people into doing it, but I believe there’s a floor on that at which the other two options become more economically viable.) That’s the whole pitch for having a labor market in the first place. At the same time, absent that political will, there’s no reason to expect any change in productivity to change the current arrangement. At best, the people working any jobs that get eliminated are discarded as obsolete, lose their ability to participate in the market, and are eventually handled by the criminal justice system or otherwise removed from consideration.
Palmer Luckey Wants YOU 🫵 To Die For His Waifus
Bonus Sneer from Adam Johnson
I also appreciated this followup
isn’t Luckey the dude mostly known for awkwardly and overweightly jumping wearing a VR headset on the cover of Time or something
Oh don’t worry, bespoke GLP-1s plus his “nootropic stack” have surely helped him out immensely by now
PL talks about the way Trump fights wars as if it comes out of some military doctrine. It doesn’t. Trump is stupid, doesn’t plan ahead, and is easily distracted. That’s it. It’s not N-dimensional chess. He’s just an idiot.
A LWer is super-impressed by the time travel fantasy Illumine Lingao (an example of Chuanyue)
https://www.lesswrong.com/posts/YiRsCfkJ2ERGpRpen/leogao-s-shortform?commentId=J4YGrY26Ezt5oMsot
Listen to this pitch:
the vast majority of the book is devoted to discussing every single technical aspect in excruciating well-researched detail. you don’t simply have a paragraph about them deciding to buy guns, you get an entire chapter of different gun experts arguing back and forth about exactly which gun to buy based on maintainability, range, differences between civilian and military models, semi automatic vs fully automatic.
Apparently they’re quite unaware of the extensive number of works in Russian with similar themes:
https://en.wikipedia.org/wiki/Accidental_travel#In_Russian_fiction
followup, here’s a real substack interview with one of the originators of the collab novel
https://afraw.substack.com/p/first-dig-the-latrines
to be honest sounds like semi-fascist shit to me.
Look, I’ve read some long-ass web novels. I enjoyed Worm, A Practical Guide to Evil, and Katalepsis, all from start to finish. I have also spent more hours than I could count (even if I did care to) perusing excessively detailed fan wikis and reading interminable debates between nerds about minutiae. I have done all of this and enjoyed myself greatly.
But the way they’re describing this sounds absolutely exhausting and incredibly dull. If this isn’t the result of some kind of collaborative project where the debates are between different actual people, then it sounds like you’re just dumping your worldbuilding notes onto the page and throwing in a “he said” every so often.
supremely rational gamblers want to rewrite reality by threatening a journalist, because reporting got in the way of them getting money from polymarket. all while completely unaware that they’re giving him better story than the actual missile impact thing https://www.timesofisrael.com/gamblers-trying-to-win-a-bet-on-polymarket-are-vowing-to-kill-me-if-i-dont-rewrite-an-iran-missile-story/ also https://awful.systems/post/7617781
Polymarket when faced with the oracle problem: “What if we threaten to shoot the oracle?”
the rationalist counterpart to rubber hose cryptanalysis
“So I put an accumulator on Gaelic Warrior to win the Gold Cup, Arsenal to beat Leverkusen in the Champions League, and all-out nuclear conflict by the end of March”
An Aella-curious blogger in SoCal has noticed something:
But what I find more interesting than broadly “weird sex” is the specific interest in BDSM, kink and particularly full-contact CNC; a relatively common fantasy in individuals, but one I’ve never seen such widespread community interest in outside the Bay Area.
Kink and power-play are practices of manufactured risk, with CNC clocking in at a more intense point on the same spectrum. The idea that many of these people are devoting their 9-5s and beyond to eliminating the ultimate consequence (death), only to go home and collectively play-pretend violence (scaffolded with extensive rules and consent forms) is fascinating, and, to me, makes complete sense.
The rationalist interest in manufacturing risk is the direct byproduct of their commitment to flushing it out.
The blogger attended Aella’s SlutCon. I don’t know if she knows that many of our friends have problems with consent as most of us understand it (their understanding is more “if they are old enough to sign the contract, and they sign, that is on them”).
[Effective Altruism] was originally applied to initiatives like raising money for mosquito netting, but now includes figures like Johnson, who has reframed his blood experiments as a product of his own generosity, set to cure humanity of its greatest ill: death itself.
People keep saying this, so it’s good to have a reminder that the weirdos (derogatory) were there all along.
Gleiberman’s paper on the longtermist foundations of the Effective Altruism movement is great!
I read a post by someone leaving LessWrong-the-site who said that from now on he would only donate to Aubrey de Grey because obviously we are so close to curing aging. Found it http://lesswrong.com/lw/m81/leaving_lesswrong_for_a_more_rational_life/
I think that after all is said and done, and after all the money Bryan Johnson spends to live forever, the end result will be: exercise, good diet, no alcohol, no tobacco, no drugs. He will still be pushing his product, but the basic advice will be what we already know.
The end result is that he will die, just like every other human being ever.
But along the way, he found a way to take estrogen wrong
new odium symposium episode. we examine the foundational TERF text, janice raymond’s “the transsexual empire,” which turns out to be about how trans people are a big pharma conspiracy
https://www.patreon.com/posts/12-invasion-of-w-152915964
www.odiumsymposium.com for links to other platforms
i hope you get hazard pay for all the psychic damage you inflict on each other
our pay is our satisfaction in having inflicted it on others as well
in all honesty we would love it if doing this were our job but there is no pathway to that that we can see. we just do it b/c it’s really fun
My understanding is that most professional podcasters start off more or less like this, start getting a Patreon or some light sponsors going in order to fund actually decent equipment, and then look at the numbers one morning and realize that actually they could just do this for a living.
It always strikes me how stupid bigotry makes you. There are so many points where she seems to make a point that she cannot accept because it would go against her conclusion. Also lol @ the “not what we’re called”, that stuck with me for some reason
Did I mention that one of your more recent eps covered some shit so odious I stress ate a pile of oreos? Keep up the good work

alt-txt
Yesterday i explained something so bleak to my therapist she asked me if we could pause for a minute so she could think about it. I’m getting close to winning therapy i can feel it in my bones.
BTW, in markdown you can put alt text in the image link and renderers will put it into the image tag.
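For anyone who hasn’t used it, the syntax is just the bracketed part of the image link (the filename here is a placeholder):

```
![a one-line description of the image](screenshot.png)
```

which most renderers turn into `<img src="screenshot.png" alt="a one-line description of the image">`, so screen readers get the description.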
Nifty, thanks!
Old by modern standards but this dramatic reading of some shit the suno guy said got me to laugh: https://www.instagram.com/reel/DVLmCoNj_PB/
Topic-adjacent: jazz musician/music-theory YouTuber Adam Neely has a 90-minute video eviscerating Suno/AI music. Very high-quality sneering with some wonderful Ben Levin animations.
New AI legal filing sanctions just dropped: https://storage.courtlistener.com/recap/gov.uscourts.ca6.152857/gov.uscourts.ca6.152857.50.2.pdf
I don’t have time to read over it completely yet, but here’s a taste:
That briefing repeatedly misrepresented the record, cited non-existent cases, and cited cases for propositions of law that they did not even discuss, much less support. As explained below, Irion’s and Egli’s misconduct warrants the sanctions laid out in Section II.C.
If we included typos and other errors that are arguably, but not clearly, a misrepresentation or fake citation, we would be looking at far more misstatements of fact and law
Irion and Egli did not respond to these directives. Instead, they said the show cause order was “void on its face for failing to include a signature of an Article III judge,” was “motivated by harassment of the Respondent attorneys,” and “reflect[ed] illegal ex-parte [sic] communications within this Court.”
Although citing fake cases violates Federal Rule of Appellate Procedure 38, Rule 38 alone is not “up to the task” of sanctioning this conduct, Chambers, 501 U.S. at 50, because Rule 38 allows only for the imposition of costs and attorneys’ fees, Sanctions § 33. But we think other sanctions are also appropriate, so we employ our inherent authority
Not a lawyer, just a bit of a law nerd, but this is a big deal, especially the fact that courts have been repeatedly using their inherent authority to sanction people who fuck this up. Courts do not routinely invoke their inherent authority like this. Also this footnote is interesting:
Ghostwriting is when one person writes the document while another person takes credit for it without acknowledging the true author’s identity. See The American Heritage Dictionary of the English Language 741 (4th ed. 2000). Legal authorities generally discuss ghostwriting for a pro se litigant, see, e.g., Duran v. Carris, 238 F.3d 1268, 1272 (10th Cir. 2001), but we see no reason why rules regulating ghostwriting should apply in only the pro se context. The primary concern with ghostwriting is that the true author would escape liability for his conduct, see In re Mungo, 305 B.R. 762, 768 (Bankr. D.S.C. 2003); Ellis v. Maine, 448 F.2d 1325, 1328 (1st Cir. 1971), and that concern is just as acute when a lawyer ultimately signs the ghostwritten pleading.
It sounds like they’re looking for an angle to hold the LLM operators (OpenAI/Anthropic, or at least whatever company wraps the models in the necessary bits and bobs to make a product they can sell to stupid asshole lawyers) as ultimately accountable for these filings, just as if they were a SovCit guru providing materials for one of their griftees to submit to the court without ever actually putting their name to the record where they might face consequences. I’d need to do some research to speculate on what that might mean, but it should give everyone operating in this space pause.
I’m still reading the appendix that goes into the specific hallucinations but it sounds like they’re pretty absurd based on the tone of this order.
• On pages 17 and 19, Whiting cites “T.C.A. § 29-12-119,” but we cannot find a section 29-12-119 in the Tennessee Code Annotated
lol. lmao.
On page 4, Whiting states “it is well settled that the First Amendment does not protect speech that knowingly asserts false statements of fact. United States v. Alverez, 567 U.S. 709, 721 (2012).” Alvarez states the opposite: “This opinion . . . rejects the notion that false speech should be in a general category that is presumptively unprotected.” Id. at 721–22 (plurality opinion).
Oh. Oh no.
• On page 1, Whiting states, “This Court has made clear that , [sic] ‘[T]he mere fact that a plaintiff did not prevail does not mean that the claim was frivolous.’ Adcock-Ladd v. Secretary of the Treasury, 227 F.3d 343, 350 (6th Cir. 2000).” Adcock-Ladd does not contain the quoted language, and it is not about frivolous cases.
This specific confabulation appears at least 5 times. I’m not sure if Whiting was copy/pasting from something ChatGPT spat out or if ChatGPT was at least consistently inventing the same bullshit.
Looking for a bit of context I found this local news piece and it certainly reads like the guy is a crank who kick-started this whole thing by trying to protest the crime of public safety during a global pandemic.
I’m pretty sure the 2 people cosplaying as lawyers are just as bugshit as he is.
edit yeah they’re SovCits
Finally, our orders are not invalid simply because the clerk signed them. We have already told Irion and Egli that our orders are not void when the clerk signs them in this very case. Whiting v. City of Athens, No. 24-5886, 2025 U.S. App. LEXIS 13507, at *1 (6th Cir. June 2, 2025). And the Supreme Court has twice denied petitions for mandamus from Irion and Egli demanding that the clerk stop signing our orders.
(italics in original, bold my emphasis)
God, I love when people think “because I said so” is an adequate backup for their BS.
“Judges love this one weird trick!”
My heart goes out to my fleeting online acquaintance, who is reliably two years too late to hype a trend. In 2024 it was blockchain/cryptocurrencies he tried pushing; now he’s saying AI technology companies are here to stay.
Somebody’s gotta buy the reverse reverse Cramer index I guess.
Regarding a project to translate several thousand ancient letters:
So, um… this is bad. Really bad. I looked at the letters that were translated by the AI, and the very first one I found was almost entirely hallucination.
Sam Altman wants his eye scanning crypto bullshit to be used to verify AI agents so he can save the internet from himself.
Rather than blocking automated traffic outright as a safety or data-protection measure, World [previously Worldcoin] suggests sites could instead require AI agents to present an associated World ID token to prove they represent an actual human who’s behind any request. In this way, the site could allow agents to access limited resources like restaurant reservations, ticket purchase opportunities, free trials, or even bandwidth without worrying about a single user flooding the process with thousands of anonymous bots. The same idea could apply to sensitive reputational systems like online forums and polls, where it’s important to prevent automated astroturfing or dogpiling.
prove they represent an actual human who’s behind any request
This seems like he’s misunderstanding the problem: we know people are behind the spam. That doesn’t make the spam OK. Another one of those non-solutions from blockchain tech.
This would just create a secondary market for people’s IDs, like people being paid pennies to do captchas all day.
How convenient being able to offer a solution to the problem you yourself created.
this also puts altman in position to forge identities at will
but but but blockchain
@fullsquare He’ll absolutely need that capability when the bubble bursts and he needs to make a hurried exit in the direction of the extinct volcano lair he’s bought through a shell company in Polynesia!
cue a thriller where a disgraced techbro billionaire is hunted by the surveillance system he gleefully created
scratch that, that will be a popular reality TV show enjoyed by millions
@gerikson You could run a lottery where the prizes were control over one of the FPV killer drones hunting him. Require a direct hit with an injector loaded with about 30% of a lethal dose of something excruciating, so everyone can get their stabby on and no one person is technically guilty of murder. (Subject to common cause doctrine in your jurisdiction, but anyway … )
We could set up a small new micronation and route all the drone traffic through it to make sure nobody does any murder crimes. Easily crowdsourced, esp. after the billionaires are no longer bidding.
Isn’t that new Chris Pratt movie I keep seeing the first 5 seconds of a trailer for basically this, but mixed with a Minority Report knockoff?
immersion destroyed immediately (they never face consequences)
@fullsquare @techtakes If you want a TV show about billionaires getting their just desserts, just intone six words at the start of the intro narrative: “After the year of the revolutions …”
Meanwhile, “AI agents” continue to be an opaque bundle of shell scripts shoved into a trenchcoat, with an inconsistent English-language translation layer stapled on top
That’s an unfair characterization. I think a lot of those scripts are written in python.
Ars sure drank the koolaid didn’t they?
They’ve nosedived in quality with gusto.
And here I thought “people being easy to replace with a small shell script” was a joke…

A new conspiracy theory that I made up just now:
Transformer-based LLMs are a North Korean op to destroy the West’s ability to generate 10x rockstar coders. Within two years America will be rendered helpless.
We’re getting codemogged by the Juche Machines. They have played us for absolute fools!
fig. 1: @self hard at work keeping awful systems up and running

from Unix World 1985. enterprise computing was so much more fun in those days
Thank you @self, love your haircut
if you use Rust enough it just grows like that
Man, if only I had enough optimism left to aspire to that level of silliness, as opposed to sliding further and further into the maw of computer stupidity.
Hey, we hailed our @self!