

Found a small repository of mini-sneers aimed at mocking vibe-coding cock-ups: https://vibegraveyard.ai/


Meth LLMs: Not Even Once
Yegge’s an extremely experienced professional engineer. So he put care into Gas Town, right?
I’ve never seen the code, and I never care to, which might give you pause.
This is a lack of care I’ve only really seen with vibe coding, and I still struggle to wrap my head around how someone can have an utter dearth of shits to give about something they’re making (if you can even call vibe-coding “making”). It’s particularly stark for me when I compare it to the many, many artists I know online, who care deeply about their craft, and whose artwork deeply reflects that.
Just…what the fuck?


Ran across a thread about tech culture’s vulnerability to slop machines recently. Dovetails nicely with Iris Meredith’s recent article about the same issue, I feel.


I’ve seen memes about eating people’s art, but never a literal case, lmaoooooooooooo


>zero-click android exploit
>arbitrary code execution and privilege escalation
Remember when the human was the weakest part of any cybersecurity system? Pepperidge Farm remembers.


Newgrounds user-turned-Audio-Moderator Quest has put together a recap of 2025 (text version), providing stats for how much slop she’s dealt with:
2025 Stats:
- 2818 AI-Generated Tracks Flagged or Removed
- 3656 Total Flagged or Removed Tracks
- 12.7 GB Data Used by AI-Generated Tracks
- 2843 Accounts Which Uploaded Prohibited Audio
Cumulative Stats (since 2024):
- 4475 AI-Generated Tracks Flagged or Removed
- 5731 Total Flagged or Removed Tracks
- 18.93 GB Data Used by AI-Generated Tracks
- 4113 Accounts Which Uploaded Prohibited Audio
AI Model Breakdown:
- Suno AI: 82%
- Udio AI: 5%
- Riffusion AI: 1%
- Other: 12%
  - RVC-Based: 0.6%
  - Soundful: 0.4%
  - Mixed: 0.2%
  - Various Other Models: 2.9%
  - Unknown: 7.9%
Reportedly, she’s also got an essay-length sneer in the works:
Finally, I am also working on an even larger, long-form essay post about artificial intelligence, drawing a link to something that I do not see drawn enough. It’s a big project with a lot of research and knowledgeable people guiding me. This will be released in the coming months. I have a lot to say.


Starting off with a double bill of art-related sneers:
“Down with the Gatekeepers! Who…are Artists, Apparently” by Jared White, mocking promptfondlers’ attempts to cry gatekeeper and misunderstanding of the artistic process
“using chatgpt and other ai writing tools makes you unhireable. here’s why” by Doc Burford, going into punishing detail about LLMs’ artistic inadequacy, and promptfondlers’ artlessness


New post from Iris Meredith, doing a deep-dive into why tech culture was so vulnerable to being taken over by slop machines


It was floated last year, and it’s happened today - Curl is euthanising its bug bounty program, and AI is nigh-certainly why.


Simon Willison defends stealing a Python library using lying machines, answering “questions” he previously “asked” in an attempt to downplay his actions.


Found a solid sneer on the 'net today: https://chronicles.mad-scientist.club/tales/on-floss-and-training-llms/


A small list of literary promptfondlers came to my attention - should complement the awful.systems slopware list nicely.


In a frankly hilarious turn of events, an award-winning isekai novel had its planned book publication and manga adaptation shitcanned after it was discovered to be AI slop.
The offending isekai is still up on AlphaPolis (where it originally won said award) as of this writing. Given it’s isekai and AI slop, expect some primo garbage.


Anyway, I can recommend skipping this episode and only bothering with the technical or more business-oriented ones, which are often pretty good.
AI puffery is easy for anyone to see through. If they’re regularly mistaking it for something of actual substance, their technical/business sense is likely worthless, too.


Found someone showing some well-founded concern over the state of programming, and decided to share it before heading off to bed:

alt text:
Is anyone else experiencing this thing where your fellow senior engineers seem to be lobotomised by AI?
I’ve had 4 different senior engineers in the last week come up with absolutely insane changes or code, that they were instructed to do by AI. Things that if you used your brain for a few minutes you should realise just don’t work.
They also rarely can explain why they make these changes or what the code actually does.
I feel like I’m absolutely going insane, and it also makes me unable to trust anyone’s answers or analyses, because I /know/ there is a high chance they just asked AI and passed it off as their own.
I think the effect AI has had on our industry’s knowledge is really significant, and it’s honestly very scary.


The OpenAI Psychosis Suicide Machine now has a medical spinoff, which automates HIPAA violations so OpenAI can commit medical malpractice more efficiently


I’d personally just overthrow the US government and make it a British colony once more /j


so now there’s even less accountability than before
How can you get less than zero accountability?


I’m so sorry to hear that.
Yeah, it’s not like open-source can suffer from catastrophic bugs or anything, that’s purely in Proprietary Land
(As an aside, Tante did a write-up on Heartbleed back when it hit the news, and pointed to dysfunctional project management and lack of funds as the cause. Considering FOSS projects like Firefox and Bitwarden have been hit with the LLM bug, both problems have definitely gotten worse in the ten years since.)