Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Re-begun, the edit wars over EA have:
And sure enough, just within the last day the user "Hand of Lixue" has rewritten large portions of the article to read more favorably to the rationalists.
User was created earlier today as well. Two earlier updates from a non-account-holder may be from the same individual. Did a brief dig through the edit logs, but I'm not very practiced in Wikipedia auditing like this so I likely missed things. Their first couple changes were supposedly justified by trying to maintain a neutral POV. By far the larger one was a "culling of excessive references" which includes removing basically all quotes from Cade Metz's work on Scott S and trimming various others to exclude the bit that says "the AI thing is a bit weird" or "now they mostly tell billionaires it's okay to be rich".
I suppose you could explain that on the talk page, if only you expressed it in acronyms for the benefit of the most pedantic nerds on the planet.
Also, not sure if there's anything here but the Britannica page for Lixue suggests that there's no way in hell its hand doesn't have some serious CoIs.
Edit:
Also shout-out to the talk page where the poster of our top-level sneer fodder defended himself by essentially arguing "I wasn't canvassing, I just asked if anyone wanted to rid me of this turbulent priest!"
There might be enough point-and-laugh material to merit a post (also this came in at the tail end of the week's Stubsack).
The opening line of the "Beliefs" section of the Wikipedia article:
Rationalists are concerned with improving human reasoning, rationality, and decision-making.
No, they arenāt.
Anyone who still believes this in the year Two Thousand Twenty Five is a cultist.
I am too tired to invent a snappier and funnier way of saying this.
That hatchet job from Trace is continuing to have some legs, I see. Also a reread of it points out some unintentional comedy:
This is the sort of coordination that requires no conspiracy, no backroom dealing - though, as in any group, I'm sure some discussions go on…
Getting referenced in a thread on a different site talking about editing an article about themselves explicitly to make it sound more respectable and decent to be a member of their technofascist singularity cult diaspora. I'm sorry that your blogs aren't considered reliable sources in their own right and that the "heterodox" thinkers and researchers you extend so much grace to are, in fact, cranks.
Unilever are looking for an Ice Cream Head of Artificial Intelligence.
I think I have found a new favorite way to refer to true believers.
This role is responsible for the creation of a virtual AI Centre of Excellence that will drive the creation of an Enterprise-wide Autonomous AI platform. The platform will connect to all Ice Cream technology solutions providing an AI capability that can provide [blah blah blah…]
it's satire right? brilliantly placed satire by a disgruntled hiring manager having one last laugh out the door right? no one would seriously write this right?
I mean it does return a 404 now.
maybe they filled that position already
In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.
New article from Axios: Publishers facing existential threat from AI, Cloudflare CEO says
Baldur Bjarnason has given his commentary:
Honestly, if search engine traffic is over, it might be time for blogs and blog software to begin to deny all robots by default
Anyways, personal sidenote/prediction: I suspect the Internet Archive's gonna have a much harder time archiving blogs/websites going forward.
Up until this point, the Archive enjoyed easy access to large swathes of the 'Net - site owners had no real incentive to block new crawlers by default, but the prospect of getting onto search results gave them a strong incentive to actively welcome search engine robots, safe in the knowledge that they'd respect robots.txt and keep their server load to a minimum.
Thanks to the AI bubble and the AI crawlers it's unleashed upon the 'Net, that has changed significantly.
Now, allowing crawlers by default risks AI scraper bots descending upon your website and stealing everything that isn't nailed down, overloading your servers and attacking FOSS work in the process. And you can forget about reining them in with robots.txt - they'll just ignore it and steal anyways, they'll lie about who they are, they'll spam new scrapers when you block the old ones, they'll threaten to exclude you from search results, they'll try every dirty trick they can because these fucks feel entitled to steal your work and fundamentally do not respect you as a person.
Add in the fact that the main upside of allowing crawlers (turning up in search results) has been completely undermined by those very same AI corps, as "AI summaries" (like Google's) steal your traffic through stealing your work, and blocking all robots by default becomes the rational decision to make.
This all kinda goes without saying, but this change in Internet culture all but guarantees the Archive gets caught in the crossfire, crippling its efforts to preserve the web as site owners and bloggers alike treat any and all scrapers as guilty (of AI fuckery) until proven innocent, and the web becomes less open as a whole as people protect themselves from the AI robber barons.
On a wider front, I expect this will cripple any future attempts at making new search engines, too. In addition to AI making it piss-easy to spam search systems with SEO slop, any new start-ups in web search will struggle with quality websites blocking their crawlers by default, whilst slop and garbage will actively welcome their crawlers, leading to your search results inevitably being dogshit and nobody wanting to use your search engine.
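For reference, the "deny all robots by default" move Baldur suggests is literally a two-line robots.txt - a minimal sketch only; whether any given AI crawler actually honours it is, per the above, another matter, and anything you do want crawled would need its own explicit Allow rules:

```
# Block every crawler that respects the Robots Exclusion Protocol
User-agent: *
Disallow: /
```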
I don't like that it's not open source, and there are opt-in AI features, but I can highly, highly recommend Kagi from a pure search result standpoint - it's one of the only alternatives with its own search index.
(Give it a try, they've apparently just opened up their search for users without an account to try it out.)
Almost all the slop websites aren't even shown (or they're put in a "Listicles" section where they can be accessed but aren't intrusive and don't look like proper results), and you can prioritize/deprioritize sites (for example, I have github/reddit/stackoverflow set to always show on top, and quora and pinterest to never show at all).
Oh, and they have a fediverse "lens" which actually manages to reliably search Lemmy.
This doesn't really address the future of crawling, just the "Google has gone to shit" part.
FWIW, due to recent developments, I've found myself increasingly turning to non-search engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.
Searching Reddit has really become standard practice for me, a testament to how inhuman the web as a whole has gotten. What a shame.
Weird conspiracy theory musing: So we know Roko's Basilisk only works on a very specific type of person, one who needs to believe in all the LW stuff about what the AGI future will be like, but who also feels morally responsible and has high empathy. (Else the thing falls apart; you need to care about, feel responsible for, and believe the copies/simulated things are conscious.) We know caring about others/empathy is one of those traits which seem to be rarer on the right than the left, and there is a feeling that a lot of the right is waging a war on empathy (see the things Musk has said, the whole chan culture shit, but also themotte, which somebody once called an "empathy removal training center", which stuck so I also call it that. If you are inside one of these pipelines you can also notice it, or if you get out, you can see it looking back; I certainly did when I read more LW/SSC stuff). We also know Roko is a bit of a chud, who wants some sort of "transhumanist" "utopia" where nobody is non-white or has blue hair (I assume this is known, but if you care to know more about Roko (why?) search sneerclub (Ok, one source as a treat)).
So here is my conspiracy theory: Roko knew what he was doing, it was intentional on Roko's part, he wanted to drive the empathic part of LW mad and discredit them. (That he was apparently banned from several events for sexual harassment is also interesting. It does remind me of another "lower empathy" thing, the whole manosphere/pua scene which was a part of early LW and which often trains people to think less of women.)
Note that I don't actually believe this, as there is no proof for it; I don't think Roko planned for this (nor considered it in any way) and I think his post was just an honest thought experiment (as was Yud's reaction). It was just an annoying thought which I had to type up or else I'd keep thinking about it. Sorry to make it everybody's problem.
iirc he has a lawyer on retainer in case of another sexual harassment claim.
Not wanting the Basilisk eternal torture dungeon to happen isn't an empathy thing, they just think that a sufficiently high fidelity simulation of you would be literally you, because otherwise brain uploads aren't life extension. It's basically transhumanist cope.
Yud expands on it in some place or other, along the lines that the gap in consciousness between the biological and digital instance isn't that different from the gap created by anesthesia or a night's sleep, it's just on the space axis instead of the time axis, or something like that.
And since he also likes the many-worlds interpretation, it turns out you also share a soul with yourselves in parallel dimensions; this is why the Zizians are so eager to throw down, since getting killed in one dimension just lets supradimensional entities know you mean business.
Early 21st century anthropology is going to be such a ridiculous field of study.
Clearly you do not have low self-esteem. But yes, that is the weak point of this whole thing, and why it is a dumb conspiracy theory. (I'm mismatching the longtermist "future simulated people are important" utilitarian extremism with the "simulated yous are yous" extreme weirdness.)
The problem with Yud's argument is that all these simulations will quickly diverge and no longer be the real "you" - see twins for a strawman example. The copies would then have to be run in exactly the same situations, and then wtf is the point. When I slam my toe into a piece of furniture I don't mourn all the many-worlds mes who also just broke a toe again. It's just weird, but due to the immortality cope it makes sense to insiders.
I'd say if there's a weak part in your admittedly tongue-in-cheek theory it's requiring Roko to have had a broader scope plan instead of a really catchy brainfart, not the part about making the basilisk thing out to be smarter/nobler than it is.
Reframing the infohazard aspect as an empathy filter definitely has legs in terms of building a narrative.
I thought part of the schtick is that according to the rationalist theory of mind, a simulated version of you suffering is exactly the same as the real you suffering. This relies on their various other philosophical claims about the nature of consciousness, but if you believe this then empathy doesn't have to be a concern.
The key thing is that the basilisk makes a million billion digibidilion copies of you to torture, and because you know statistics you know that there's almost no chance you're the real you and not a torture copy.
Yeah for some reason they never covered that in the stats lectures
I'm the torture copy and so is my wife
checks the news
Well shit
you know that there's almost no chance you're the real you and not a torture copy
If the basilisk's wager was framed like that - that you can't know if you are already living in the torture sim with the basilisk silently judging you - it would be way more compelling than the actual "you are ontologically identical with any software that simulates you at a high enough level even way after the fact because [preposterous transhumanist motivated reasoning]".
Yeah you are correct, I'm mismatching longtermism with transhumanist digital immortality, which is why I called it a conspiracy theory, it being wrong and all that. (Even if I do think empathy for perfect copies of yourself is a thing not everyone might have.)
…Honestly, I can't help but feel you're on to something. I'd have loved to believe this was an honest thought experiment, but after seeing the right openly wage a war on empathy as a concept, I wouldn't be shocked if Roko's Basilisk (and its subsequent effects) weren't planned from the start.
altman brings the orb to reddit for some reason https://www.semafor.com/article/06/20/2025/reddit-considers-iris-scanning-orb-developed-by-a-sam-altman-startup
This guy is a real self-licking ice cream cone (flavor: pralines and dick)
So us sneerclubbers correctly dismissed AI 2027 as bad scifi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people that want more detail, titotal did a really detailed breakdown of why their model is bad, even given their assumptions and trying to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models
tldr; the AI 2027 model, regardless of inputs and current state, has task time horizons basically going to infinity at some near-future date because they set it up weird. Also, the authors make a lot of other questionable choices and have a lot of other red flags in their modeling. And the task-time-horizon fit pictured in their fancy graphical interactive webpage is unrelated to the model they actually used, and it's missing some earlier data points that make it look worse.
Good for him to try and convince the LW people that the math is wrong. I do think there is a bigger problem with all of this: technological advancement doesn't follow exponential curves, it follows S-curves. (And the whole "the singularity is near" response of "achtually that is true, but the rate of those S-curves is in fact exponential" is just untestable unscientific hopeium, but it is odd the singularity people are now back to exponential curves for a specific tech.)
Also lol at the 2027 guys believing anything about how grok was created. Nice epistemology y'all got there, how's the Mars base?
Also lol at the 2027 guys believing anything about how grok was created.
Judging by various comments the AI 2027 authors have made, sucking up to the techbro side of the alt-right was in fact a major goal of AI 2027, and, worryingly, they seem to have succeeded somewhat (allegedly JD Vance has read AI 2027), but lol at the notion they could ever talk any of the techbro billionaires into accepting any meaningful regulation. They still don't understand their doomerism is free marketing hype for the techbros, not anything any of them are actually treating as meaningfully real.
Yeah, I think that is prob also why Thiel supports Moldbug: not because he believes in what Moldbug says, but because Moldbug says things that are convenient for him if others believe them (even if Thiel prob believes a lot of the same things, looking at his anti-democracy stuff, and the "rape crisis is anti men" stuff, for which he apologized - wonder if he apologized for the apology now that the winds have seemingly changed).
If the growth is superexponential, we make it so that each successive doubling takes 10% less time.
(From AI 2027, as quoted by titotal.)
This is an incredibly silly sentence and is certainly enough to determine the output of the entire model on its own. It necessarily implies that the predicted value becomes infinite in a finite amount of time, disregarding almost all other features of how it is calculated.
To elaborate, suppose we take as our "base model" any function f which has the property that lim_{t → ∞} f(t) = ∞. Now I define the concept of a "super-f" function by saying that each subsequent block of "virtual time" as seen by f takes 10% less "real time" than the last. This will give us a function like g(t) = f(-log(1 - t)), obtained by inverting the exponential rate of convergence of a geometric series. Then g has a vertical asymptote to infinity regardless of what the function f is, simply because we have compressed an infinite amount of "virtual time" into a finite amount of "real time".
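A quick numerical sketch of that blow-up (plain Python; the only thing taken from AI 2027's stated rule is the "each doubling takes 10% less time" part, the starting values below are made-up placeholders):

```python
import math

# Sketch of the "superexponential" schedule: each successive doubling takes
# 10% less real time than the previous one. Starting values are arbitrary
# placeholders, not AI 2027's actual numbers.
first_doubling_years = 1.0   # hypothetical duration of the first doubling
shrink = 0.9                 # each doubling takes 10% less time than the last

# Total real time for infinitely many doublings is a geometric series:
#   first * (1 + shrink + shrink^2 + ...) = first / (1 - shrink)
asymptote = first_doubling_years / (1.0 - shrink)
print(f"all infinitely many doublings fit within {asymptote:.1f} years")

# Number of doublings completed by real time t (inverting the series) blows
# up as t approaches the asymptote - the vertical asymptote described above.
for t in (5.0, 9.0, 9.9, 9.99):
    doublings = math.log(1 - (1 - shrink) * t / first_doubling_years) / math.log(shrink)
    print(f"after {t:5.2f} years: {doublings:6.1f} doublings")
```

The exact constants don't matter: any "each step takes a fixed percentage less time" schedule sums to a finite total, so the predicted quantity hits infinity before a fixed date no matter what base trend you feed in.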
Yeah, AI 2027's model fails back-of-the-envelope sketches as soon as you try working out any features of it, which really draws into question the competency of its authors and everyone that has signal-boosted it. Like they could have easily generated the same crit-hype bullshit with "just" an exponential model, but for whatever reason they went with this model. (They had a target date they wanted to hit? They correctly realized adding in extraneous details would wow more of their audience? They are incapable of translating their intuitions into math? All three?)
titotal??? I heard they were dead! (jk. why did they stop hanging here, I forget…)
We did make fun of titotal for the effort they put into meeting rationalists on their own terms and charitably addressing their arguments and, you know, being an EA themselves (albeit one of the saner ones)…
Ah, right. That. Reminds me of that old adage about monsters and abysses. "Fighting monsters and abyss staring is good and cool, actually. France is bacon." Something like that, don't fact check me.
AllTrails doing their part in the war on genAI by disappearing the people who would trust genAI: https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members
Amazing. Can't wait for the doomers to claim that somehow this has enough intent to classify as murder. I wonder if they'll end up on one of the weirdly large number of "bad things that happen to people in the national parks" podcasts.
Don't make me tap the sign:
Don't feed the bears!
My AllTrails told me bears keep eating his promptfondlers so I asked how many promptfondlers he has and he said he just goes to AllTrails and gets a new promptfondler afterwards so I said it sounds like he's just feeding promptfondlers to bears and then his parks service started crying.
Darwin Award-as-a-service
new rant from me about how boosters asking critics to admit that AI is "useful" is not the win they think it is. vid: https://www.youtube.com/watch?v=bRcBCji6XvE audio: https://pnc.st/s/faster-and-worse/94cb1cda/useful-is-nothing
ZITRON DROPPED (sadly, it's premium)
OT: boss makes a dollar, I make a dime, that's why I listen to audiobooks on company time.
(Holy shit I should have got AirPods a long time ago. But seriously, the job's going great.)
New lucidity post: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/
The author is entertaining, and if you've not read them before, their past stuff is worth a look.
looking forward to mastodon awful dot systems
New Tante piece: "ChatBot" is bad design
Finally circling back around to this.
Feels like I am not just doing my job but also the work the operator of the service or product I am having to use through chat should have paid professionals to do. And I'm not getting paid for it.
Speaking as someone who has worked extensively in IT support, I think that's the sales pitch for these chatbots. They don't want to give users tools and knowledge to solve their own problems - or rather they do, but the chatbots aren't part of that. The chatbots are supposed to replace the people who would interact with the relevant systems on your behalf.

And honestly, working with a support person is already a deeply unsatisfying interaction in the vast majority of cases. In even the best case scenario it involves acknowledging that some part of your job has exceeded your ability and you need specialized help, and handling that well is a very rare personality trait. But the massive variety of interconnected systems that we rely on are too complex for this to not be a common occurrence. Even if you did radically open everything from internal bug trackers to licensing systems to communications, there wouldn't be enough time in the day for everyone to learn those systems well enough to perfectly self-solve all their problems, and that lack of systems knowledge would be a massive drain on your operations.

But trying to fit in an LLM chatbot is the worst of both worlds: your users are both locked away from the tools and knowledge that would let them solve their own issues and still need to learn how to wrangle your intermediary system, and that system doesn't have the human ability to connect, build a working relationship, and get through those issues in a positive way.
There should be a weekly periodical called Business Idiot.