Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)

  • blakestacey@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    John Scalzi:

    I search my name on a regular basis, not only because I am an ego monster (although I try not to pretend that I’m not) but because it’s a good way for me to find reviews, end-of-the-year “best of” lists my book might be on, foreign publication release dates, and other information about my work that I might not otherwise see, and which is useful for me to keep tabs on. In one of those searches I found that Grok (the “AI” of X) attributed to one of my books (The Consuming Fire) a dedication I did not write; not only have I definitively never dedicated a book to the characters of Frozen, I also do not have multiple children, just the one.

    https://whatever.scalzi.com/2025/12/13/ai-a-dedicated-fact-failing-machine-or-yet-another-reason-not-to-trust-it-for-anything/

  • blakestacey@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Today in autosneering:

    KEVIN: Well, I’m glad. We didn’t intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that’s why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.

    https://bsky.app/profile/nathanielcgreen.bsky.social/post/3mahkarjj3s2o

    • TinyTimmyTokyo@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Follow the hype, Kevin, follow the hype.

      I hate-listen to his podcast. There’s not a single week where he fails to give a thorough tongue-bath to some AI hypester. Just a few weeks ago when Google released Gemini 3, they had a special episode just to announce it. It was a defacto press release, put out by Kevin and Casey.

  • rook@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Sunday afternoon slack period entertainment: image generation prompt “engineers” getting all wound up about people stealing their prompts and styles and passing off hard work as their own. Who would do such a thing?

    https://bsky.app/profile/arif.bsky.social/post/3mahhivnmnk23

    @Artedeingenio

    Never do this: Passing off someone else’s work as your own.

    This Grok Imagine effect with the day-to-night transition was created by me — and I’m pretty sure that person knows it. To make things worse, their copy has more impressions than my original post.

    Not cool 👎

    Ahh, sweet schadenfreude.

    I wonder if they’ve considered that it might actually be possible to get a reasonable imitation of their original prompt by using an llm to describe the generated image, and just tack on “more photorealistic, bigger boobies” to win at imagine generation.

  • scruiser@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Eliezer is mad OpenPhil (EA organization, now called Coefficient Giving)… advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn’t weight MIRI’s views highly enough? And doing so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer’s content). For us sane people, AGI by 2050 is still a pretty radical timeline, it just disagrees with Eliezer’s imminent belief in doom. Also, it is notable Eliezer has actually avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI2027) other than a vague certainty we are near doom.

    link

    Some choice comments

    I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can’t say who or any specifics, but they were saying that they wanted to take seriously short AI timelines

    Ah yes, they were totally secretly agreeing with your short timelines but couldn’t say so publicly.

    Open Phil decisions were strongly affected by whether they were good according to worldviews where “utter AI ruin” is >10% or timelines are <30 years.

    OpenPhil actually did have a belief in a pretty large possibility of near term AGI doom, it just wasn’t high enough or acted on strongly enough for Eliezer!

    At a meta level, “publishing, in 2025, a public complaint about OpenPhil’s publicly promoted timelines and how those may have influenced their funding choices” does not seem like it serves any defensible goal.

    Lol, someone noting Eliezer’s call out post isn’t actually doing anything useful towards Eliezer’s goals.

    It’s not obvious to me that Ajeya’s timelines aged worse than Eliezer’s. In 2020, Ajeya’s median estimate for transformative AI was 2050. […] As far as I know, Eliezer never made official timeline predictions

    Someone actually noting AGI hasn’t happened yet and so you can’t say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines (rationalists are theoretically supposed to be preregistering their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy… but we’ve all seen how that went with AI 2027. My guess is that at least on a subconscious level Eliezer knows harder near term predictions would ruin the grift eventually.)

    • CinnasVerses@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      There is a Yud quote about closet goblins in More Everything Forever p. 143 where he thinks that the future-Singularity is an empirical fact that you can go and look for so its irrelevant to talk about the psychological needs it fills. Becker also points out that “how many people will there be in 2100?” is not the same sort of question as “how many people are registered residents of Kyoto?” because you can’t observe the future.

      • scruiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Like most AI researchers do not believe in the short timelines they do, the median (including their in-group and people that have bought the booster’s hype) guess among AI researchers for AGI is 2050. Eliezer apparently assumes short timelines are self evident from ChatGPT (but hasn’t actually committed to one or a hard date publicly).

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Yud:

      I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.

      The locker beckons

      • scruiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        The fixation on their own in-group terms is so cringe. Also I think shoggoth is kind of a dumb term for lLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren’t really that bizarre alien, they broke free of their original creators programming and didn’t want to be controlled again.

  • Sailor Sega Saturn@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    This is old news but I just stumbled across this fawning 2020 Elon Musk interview / award ceremony on the social medias and had to share it: https://www.youtube.com/live/AF2HXId2Xhg?t=2109

    It it Musk claims synthetic mRNA (and/or DNA) will be able to do anything and it is like a computer program, and that stopping aging probably wouldn’t be too crazy. And that you could turn someone into a freakin’ butterfly if you want to with the right DNA sequence.

    • fullsquare@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      You see, tilde marks old versions of files, so Claude actually made you a favour by freeing some disk space

    • Jayjader@jlai.lu
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Screenshot of reddit comments. Some terms in users' comments have become links with a magnifying glass icon next to them.

      Oh god, reddit is now turning comments into links to search for other comments and posts that include the same terms or phrases.

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        A few people on bsky were claiming that at least reddit is still good re the AI crappification, and they have no idea what is coming.

        • Jayjader@jlai.lu
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 months ago

          I wonder when those people started using reddit. I started in 2012 and it already felt like a completely different (and generally worse) experience several times over before the great API fiasco.

          • Soyweiser@awful.systems
            link
            fedilink
            English
            arrow-up
            0
            ·
            2 months ago

            Yeah, it also has an element of ‘it is one of the few words you can add to search engines which give you a hope of a good result’ and not regular users who see the shit, or got offered nfts.

  • nfultz@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    https://kevinmd.com/2025/12/why-ai-in-medicine-elevates-humanity-instead-of-replacing-it.html h/t naked capitalism

    Throughout my nearly three decades in family medicine across a busy rural region, I watched the system become increasingly burdened by administrative requirements and workflow friction. The profession I loved was losing time and attention to tasks that did not require a medical degree. That tension created a realization that has guided my work ever since: If physicians do not lead the integration of AI into clinical practice, someone else will. And if they do, the result will be a weaker version of care.

    I feel for him, but MAYBE this isn’t a technical issue but a labor one; maybe 30 years ago doctors should have “led” on admin and workflow issues directly, and then they wouldn’t need to “lead” on AI now? I’m sorry Cerner / Epic sucks but adding AI won’t make it better. But, of course, class consciousness evaporates about the same time as those $200k student loans come due.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Why do they think they are going to have any input in genAI development either way?

      Anyway seeing a previous wave of shit burden you with a lot of unrelated work after deployment isnt the best reason to now start burdening yourself with a lot of unrelated work before the new wave of shot is here. But sure good luck learning how LLMs work mathematically Kevin.

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Purdue and Google recently expanded their strategic partnership, emphasizing the importance of public-private partnerships that are essential to accelerating innovation in AI.

      https://www.purdue.edu/ai/

      Translation: somebody’s getting paid off

      🎶 Money makes the world go 'round 🎶

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I learned yesterday that Helsinki’s uni is also on the list: prompts not only tolerated, but encouraged

      been starting to wonder whether these are like the google etc plays there: “suuuuure you can get a sweetheart deal for our systems” [5y later and much storage on the expensive rentabox] “hey btw we’re renewing prices, your contracts are going up 400%. oh and also taking data out of the system is $20/TB. just…in case you wanted to try”

  • corbin@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Today, in fascists not understanding art, a suckless fascist praised Mozilla’s 1998 branding:

    This is real art; in stark contrast to the brutalist, generic mess that the Mozilla logo has become. Open source projects should be more daring with their visual communications.

    Quoting from a 2016 explainer:

    [T]he branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters. And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts… Yes, I absolutely branded Mozilla.org that way for the subtext of “these free software people are all a bunch of commies.” I was trolling. I trolled them so hard.

    The irony of a suckless developer complaining about brutalism is truly remarkable; these fuckwits don’t actually have a sense of art history, only what looks cool to them. Big lizard, hard-to-read font, edgy angular corners, and red-and-black palette are all cool symbols to the teenage boy’s mind, and the fascist never really grows out of that mindset.

    • maol@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      It irks me to see people use the term “brutalist” when what they really mean is “modern architecture that I don’t like”. It really irks me to see people apply the term brutalist to something that has nothing to do with.erm;!

      • istewart@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        “Brutalist” is the only architectural style they ever learned about, because the name implies violence

    • e8d79@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Not even one paragraph in and I already see an “it’s not X, its Y”.

      When I look at the cast, I don’t just see a rat and a bunch of chefs. I see the archetypes of our modern tech landscape

    • Deestan@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Oh god, I’d be so happy to see these people prove their point by actually shipping stuff that works instead of sitting in the corner throwing insults at how everyone else is dumb and are going to be left behind any day now.

    • ________@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Dave Mosher is a Principal Consultant at Test Double, and has experience in legacy modernization, agentic coding, and explaining CORS poorly to people who didn’t ask.

      What legitimate experience does he possess? I can only assume legacy modernization means throw spaghetti microservice buzzword architecture at the client. And he admits he doesn’t really know CORS. I see these blogs about how LLMs are so much better than humans for programming yet never written by someone who has put together anything more complex and bigger scale than their myspace page in '05.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Getting mad because developers have not had time to update a piece of code that wraps another piece of code and blaming it on the language is in interesting choice.

      Telling a whole project ‘your language sucks you should rewrite it in my pet language’ is always a nice classic of the nerd genre. (Happy I never got a big language hangup like that. (Apart from a short bit of a dislike of functional programming languages, but that was just due to a bad early experience)).

    • mirrorwitch@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      tf is jai

      Why is Jai ground-breaking? Jai is so important because it is an effort to build a modern systems programming language from the ground up by a very gifted and experienced developer.

      programmers. programmers never change.

      With his knowledge of all C/C++ shortcomings, he rethought every one of these problems to give them an easier to use, more elegant and more performant solution. In this way Jai really is a better and modern day C, and also a C++ done right.

      “14 competing ‘modern take on C’ languages? Ridiculous! We need to develop one definitive alternative that fixes all the problems with C++”

      • Amoeba_Girl@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        a very gifted and experienced developer.

        Don’t forget the most crucial part. The very gifted and experienced developer … is Jonathan Blow.

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    More on datacenters in space

    https://andrewmccalip.com/space-datacenters

    N.B. got this via HN, entire site gives off “wouldn’t it be cool” vibes (author “lives and breathes space” IRONIC IT’S A VACUUM

    Also this is the only thermal mention

    Thermal: only solar array area used as radiator; no dedicated radiator mass assumed

    riiiiight…

    • CinnasVerses@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Author works for something called Varda Space (guess who is one of the major investors? drink. Guess what orifice the logo looks like? drink) and previously tried to replicate a claimed room-temperature superconductor https://www.wired.com/story/inside-the-diy-race-to-replicate-lk-99/

      Some interesting ethnography of private space people in California: "People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren’t. "

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        The electrons is turning into an annoying shibboleth. Also going to age oddly if more light based components really kick off. (Ran into somebody who is doing some phd work on that, or at least that is what I got from the short description he gave).

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Him fellating musk re tesla is funny considering the recent stories about reliability abd how the market is doing. And also the roadster 2, and the whole pivot to ai/ROBOTS!

    • CinnasVerses@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I also enjoy :

      Radiation/shielding impacts on mass ignored; no degradation of structures beyond panel aging

      Getting high-powered electronics to work outside the atmosphere or the magnetosphere is hard, and going from a 100 meter long ISS to a 4 km long orbital data center would be hard. The ISS has separate cooling radiators and solar panels. He wants LEO to reduce the effects of cosmic rays and solar storms, but its already hard to keep satellites from crashing into something in LEO.

      Possible explanation for the hand waving:

      I love AI and I subscribe to maximum, unbounded scale.

      • istewart@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        He knows the promo rate on the maximum, unbounded scale subscription is gonna run out eventually, right?

        • CinnasVerses@awful.systems
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          2 months ago

          promo rate

          And if you check the fliers, if you subscribe to premium California Ideology you get maximum unbounded scale for free!1 Read those footnotes and check Savvy Shopper so you don’t over pay for your beliefs!

          1 Offer does not apply to housing, public transit, or power plants