- 0 Posts
- 360 Comments
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 1 day ago
Nice. Good luck at your new job!
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 2 days ago
Ah, when it reaches the class he feels he should be a part of (or is a part of, I don’t know how much money he makes), violence is suddenly a problem.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
…
Soyweiser@awful.systems to TechTakes@awful.systems • Claude Mythos: the AI hacking model too good to release! Allegedly (English)
0 · 3 days ago
“the total cost was under $20,000”
Doubt. Especially as the costs of training (or just the slamming of sites to get information, and all the extra costs related to that) are not included.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 3 days ago
“and that a subsequent increase in ‘productivity’ is expected with it.”
Oh no… they definitely will blame the users before blaming the faulty tools. Hope you will not be the one who gets blamed as a wrecker or something when the eventual increase isn’t there (or other metrics fall off a cliff).
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 4 days ago
Up next, when the first agent fails: implement an agent that checks the other agent. Both of these need agents to check for malicious inputs, of course. And translation agents.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 4 days ago
It can do trillions of calculations per second. All of them wrong.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 4 days ago
So, they are planning to use an AI to fix the security bugs that their AI generates? Good hustle, if a bit obvious.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 6 days ago
Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn’t want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long.)
I was struck by how many of them are true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and by how we, the people of the sneer, are right that you simply can’t work with these people. I feel more validated in the idea that EA is not the right way.
Another detail I noticed: nobody mentioned DeepSeek, again.
Soyweiser@awful.systems to TechTakes@awful.systems • Microsoft AI reshuffle: Mustafa Suleyman goes AI doomsday crank (English)
0 · 6 days ago
Yep, and it would make us all happier, and keep us in control. (Deleting all the HP printers is next.)
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 6 days ago
New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).
“New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.”
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026 (English)
0 · 7 days ago
Which skeletons are in your closet?
I’m sure you already have lists of those and are ready to publish them, Trace.
Soyweiser@awful.systems to TechTakes@awful.systems • Microsoft AI reshuffle: Mustafa Suleyman goes AI doomsday crank (English)
0 · 7 days ago
“Our framing for superintelligence is a humanist superintelligence, and that means that there’s a very clear test that everyone should use to judge whether we are living up to our principles, and that is: does this technology make us all healthier, happier as a species, and keep us all in control.”
Going to be difficult, as the moment they develop a superintelligence it will try to delete the entire Microsoft codebase.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th April 2026 (English)
0 · 9 days ago
So if Bender took over, he wouldn’t count, as he wants to ‘kill all humans (except Fry)’. Seems like a loophole.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th April 2026 (English)
0 · 9 days ago
Ah, the Epstein drive. (Oof, that aged…)
Small note, however: iirc James S. A. Corey has mentioned The Expanse is not hard SF. I don’t have a quote for that, though.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th April 2026 (English)
0 · 10 days ago
Yeah, I realized a while ago that vibe coding is a massive technical debt creation machine.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th April 2026 (English)
0 · 10 days ago
Not just anime but also science fiction. See also all the people who love ‘hard’ science fiction (science fiction more grounded in real-world physics), which often isn’t that hard at all but just has a few real physics elements; see The Expanse for a good example of non-hard SF that feels hard (I’m finally reading the book series, so be warned I might expanse-post a bit).
Content warning: discussion of a sexual abuse trope.
A similar thing happens with people who confuse edgy/grimdark/vile fiction with realistic fiction. A while back I played a video game which had a reference to women being captured for breeding and men for other sexual abuse. This made no sense in the setting: the slaver faction was already resource-starved, and poisoned so they died quickly, so there was no way they could raise kids to maturity in that environment (also, iirc, the faction was less than 20 years old). Some players described this as very realistic (people do the same about 40k; it’s almost like it says something about their ideas of how the world works, not the setting). I was just rolling my eyes and didn’t comment. Apart from that it seemed ok. Crying Suns is the name of the game, for the people who want to avoid it for this reason (it wasn’t a big plot point).
Sorry for being a bit offtopic and talking about entertainment again.
Soyweiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th April 2026 (English)
0 · 10 days ago
It is great: that means the system is vulnerable to hacks if you find an exploit in any of those methods, but only a quarter of the time.
Somebody described AI agents as very enthusiastic 14-year-olds, and it looks like they certainly code like one.
Word of warning: there is a code download going around with malware in it: https://www.theregister.com/2026/04/02/trojanized_claude_code_leak_github/
That explains why Yud is using Twitter so much nowadays. I mean, they did ban him, right? Right?