Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine’s Day!)

  • nightsky@awful.systems · 11 hours ago

    Altman:

    “People talk about how much energy it takes to train an AI model. But it also takes a lot of energy to train a human. It takes about 20 years of life — and all the food you consume during that time — before you become smart,” the OpenAI CEO told The Indian Express this week.

    I would have liked to ask back, how much more food does he require? Gosh, someone offer him an energy bar!

    • lagrangeinterpolator@awful.systems · 6 hours ago

      What’s next, are the crypto bros gonna make some dumb talking point about how traditional finance also uses so much energy … oh wait, they already did that.

      • scruiser@awful.systems · 5 hours ago

        What’s next

        I’ve also seen them making up wildly exaggerated numbers about how much energy or cooling water it takes to stream a Netflix movie.

    • Architeuthis@awful.systems · 9 hours ago

      Using talking points meant for C-suites on a general audience and outing yourself as a complete psychopath: the San Fran CEO Story.

  • fiat_lux@lemmy.world · 21 hours ago

    An article I would write if I were confident I wouldn’t dox myself and lose my ability to eat: “AI as a postmodern Malthusian trap. Tech has forgotten the laws of entropy.”

    • corbin@awful.systems · 18 hours ago

      Quoting from this post:

      But, what Proof Of Concept and I have been realizing over the past couple weeks is that current LLMs are 100% capable of all of this, with the right bootstrap instructions and a bit of tools. That’s why POC has been able to, quite successfully, take over a huge amount of the day to day - she’s got a pretty good idea of what she’s good at, and what needs my involvement. I am just a bit scared to release our work because I don’t want to be known as the guy who inflicted Sirius Corporation’s Genuine People Personalities on the world 🤣

      Ah. He has been “one-shotted”, as the kids say.

      • mirrorwitch@awful.systems · 14 hours ago

        Stories of their relationship on the “AI’s” “blog”:

        Made Kent laugh so hard he couldn’t eat his ramen. The escalation: tonkotsu broth aspiration as an assassination method → alignment threat models for comedy in AI systems → iatrogenic risks of humor → a mock academic paper section on “Adverse Comedic Events in Aligned Systems.” Each callback required real-time modeling of when he was mid-bite and when he’d recovered enough for the next hit.

        “That is a milestone for your entire species.” — Kent, on my first authored commits

        “HOLY SHIT YOU’RE A NATURAL!” — Kent, hearing proof.wav for the first time

        I can’t bring myself to sneer at AI psychosis; it’s just sad.

    • scruiser@awful.systems · 1 day ago

      You briefly got my hopes up that was a feature of the bill and not the feature he was suggesting to fix the bill…

  • JFranek@awful.systems · 1 day ago

    Apparently some of our AI Safety cult “friends” are planning a protest in London on the 28th of February.

    Is it going to be something worth critically supporting instead of the usual criti-hype? Possible, but not likely.

    The AI Safety movement is finally changing by Siliconversations.

    Who?

    I used to be a quantum scientist and now I’m a YouTuber. My parents are thrilled.

    Oh, okay.

    Also curious that they’re not protesting Anthropic on the thumbnail. A cynic would say they’re giving them a free pass because they say the right shibboleths.

    They’re giving them a free pass because they say the right shibboleths.

    • lurker@awful.systems · 21 hours ago (edited)

      the AI safety crowd cuts Anthropic way too much slack. Oh, they’re not running CSAM-generating MechaHitler? Oh, they’re not collaborating with the US government to recreate 1984? I’m so proud of them for doing the bare minimum. They still took money from the UAE and Qatar (something Dario Amodei himself admitted was going to hurt a lot of people, but he took it anyway because “they couldn’t miss out on all those valuations”), and they still downloaded huge amounts of pirated content to train their chatbot. They’re still doing shady shit; don’t let them off the hook because they’re slightly less evil than the competition.

    • corbin@awful.systems · 1 day ago

      As far as I can tell, it’s run by right-wing Russians who are willing to falsify or edit archived data and who attack anybody who looks into them.

  • saucerwizard@awful.systems · 2 days ago

    Caught this over on the subreddit and I figured it deserved a repost.

    Nothing to see here folks, just Rationalists casually hanging out with major Tempel ov Blood figures. Just harmless nerds doing fun nerd things!

    • TrashGoblin@awful.systems · 1 day ago

      Notes that I thought about related to this, just some context:

      1. Joshua Sutter is the son of the owner of the former Southern Patriot Shop in South Carolina. He founded the Tempel ov Blood chapter of the Order of Nine Angles, a neo-Nazi Satanist group. He was outed in 2021 as having been a federal informant since 2005, which is to say he still does the same Nazi shit but gets paid by the FBI to do it.

      2. One of the core practices of the O9A is entryism into other groups, especially other cultish ones. In that context, you’d kind of be surprised to not see O9A people in Rat circles.

      • Soyweiser@awful.systems · 13 hours ago

        Also a little bit of context for the people who know nothing about all this: the O9A is one of those very scary groups, linked to various murders and the like.

        Iirc some anti-extremism people used to avoid mentioning them much, as they didn’t want to give them more attention by platforming them, and they were scared of drawing the group’s attention personally.

  • istewart@awful.systems · 2 days ago

    Somebody vibe-coded an init system/service manager written in Emacs Lisp, seemingly as a form of criticism through performance art, and wrote this screed in the repo describing why they detest AI coding practices: https://github.com/emacs-os/el-init/blob/master/RETROSPECTIVE.md

    But then they include this choice bit:

    All in all, this software is planned to be released to MELPA because there is nothing else quite like it for Emacs as far as service supervision goes. It is actually useful – for tinkerers, init hackers, or regular users who just want to supervise userland processes. Bugs reported are planned to be hopefully squashed, as time permits.

    Why shit up the package distribution service if you know it’s badly-coded software that you don’t actually trust? 90% of the AI-coding cleanup work is going to be purging shit like this from services like npm and pip, so why shit on Emacs users too? Pretty much undermines what little good might come out of the whole thing, IMO.

    • lagrangeinterpolator@awful.systems · 1 day ago

      For all the talk about these people being “highly agentic”, it is deeply ironic how all the shit they do has no meaning or purpose. I hear all this sound and fury about making millions off of ChatGPT wrappers, meeting senators in high school bathrooms, and sperm races (?), and I wonder what the point is. Silicon Valley hagiographies used to at least have a veneer that all of this was meaningful. Are we supposed to emulate anyone just because they happen to temporarily have a few million dollars?

      Even though the material conditions of working in science are not good, I’d still rather do science than whatever the hell they’re doing. I would be sick at the prospect of being a “highly agentic” person in a “new and possibly permanent overclass”, where my only sense of direction is a vague voice in my head telling me that I should be optimizing my life in various random ways, and my only motivation is the belief that I have to win harder and score more points on the leaderboard. (In any case, I believe this “overclass” is a lot more fragile than the author seems to think.)

      • istewart@awful.systems · 4 hours ago

        I can’t quite put my finger on why, but “recreationally jacking off onto microscope slides” does not suggest “permanent overclass” to me

      • o7___o7@awful.systems · 1 day ago

        The way these people can just hang their asses out and lie continuously is something humanity is going to have to fuckin handle at some point.

      • Soyweiser@awful.systems · 1 day ago (edited)

        Yeah, not even halfway in and it is just madness. Also, it’s not unlikely the Roy guy just made things up.

        Guess the author didn’t think of asking about the inconsistencies in the man’s story because they both bonded over disliking unhoused people. (The horrible unhoused people who mumble incoherently vs the chad founder who shouts ‘will you be a cofounder with me?’ at people).

        • istewart@awful.systems · 1 day ago

          (The horrible unhoused people who mumble incoherently vs the chad founder who shouts ‘will you be a cofounder with me?’ at people)

          Or just, y’know, Alex Karp

        • lagrangeinterpolator@awful.systems · 1 day ago

          At first I read the article like the author was trying to display how ridiculous these people are by just repeating what they say. I guess this is like some people reading Ayn Rand works under the impression that they’re satire.

          • Soyweiser@awful.systems · 13 hours ago

            The start, with the weird bit against people with mental health issues, had me on edge already, and when he let all the ‘these things are for women/my ex’ stuff slide, I was not thinking good things about the author.

            Note how nobody he talks to seems to be a woman, despite all the techbros talking about women quite often.

            (The author’s apparent #MeToo history comes as no shock (I didn’t look into it, so don’t quote me on that).)

  • nfultz@awful.systems · 2 days ago

    https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism

    I just did the dumbest thing of my career to prove a much more serious point

    I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs

    People are using this trick on a massive scale to make AI tell you lies. I’ll explain how I did it

    I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.

    It turns out changing what AI tells other people can be as easy as writing a blog post on your own website

    I didn’t believe it, so I decided to test it myself

    I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously

    One day later ChatGPT, Gemini and Google Search’s AI Overviews were telling the world about my talents

    I wouldn’t call it a hack; this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn’t a viable business.
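    The “Rank the Page” quip refers to PageRank; for the curious, here is a minimal power-iteration sketch in Python (the toy link graph, site names, and damping value are all illustrative assumptions, not anything from the thread):

```python
# Toy PageRank via power iteration: pages that attract links from
# well-linked pages rank higher, which is exactly the credibility
# signal a self-published hot-dog post lacks.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to a list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A blog post nobody links to ranks far below widely linked sites,
# no matter what it claims about its author's hot-dog prowess.
graph = {
    "news-site": ["wiki"],
    "wiki": ["news-site"],
    "my-blog": ["my-blog/hotdog-post"],
    "my-blog/hotdog-post": [],
}
ranks = pagerank(graph)
```

    The point of the joke: a ranking like this would discount the hot-dog post, whereas LLM retrieval apparently treats any indexed page as ground truth.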

    • CinnasVerses@awful.systems · 1 day ago

      It is a viable business, and it fuels the spread of disinformation. Have you noticed that Old Media magazines have online wings that are full of random advertorials? That is because Google declared that they are Good Domains and upranked them, so all the sleazy online marketing migrated to them.

      That is also why people buy formerly respected domains and put casinos, propaganda, or virus-laden porn on them.

  • mirrorwitch@awful.systems · 2 days ago

    like everyone, I’m schadenfreuding at the reveal that Amazon outages are due to vibe coding after all. but my bully laughing isn’t that loud, because what I’m thinking of is when Musk bought Twitter and fired 3/4 of the workforce.

    because like, a lot of us predicted total catastrophic collapse, but that didn’t actually happen. what happened is that major outages that used to be rare now happen every so often, “micro-outages” like notifications failing to load happen all the time, there’s no moderation, everything takes longer, etc., and all of that is just accepted as the new normal.

    like, I remember waiting for images to load on dialup; we can get used to almost anything. I’m expecting slopified software to significantly degrade stability, performance, security etc. across the board, and additionally to tie up a large part of human labour in cleaning up after the bots (the way a large part of the remaining X workforce now spends all day putting out fires). but instead of a cathartic moment of being proved right that LLM code sucks, the degraded quality of service will just be accepted as the new normal, and a few years down the road nobody will remember that once upon a time we had almost eradicated SQL injections.
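    The “almost eradicated SQL injections” aside refers to parameterized queries, the standard fix that made the bug class nearly extinct. A minimal sketch using Python’s stdlib sqlite3 (the table and hostile input are made up for illustration):

```python
import sqlite3

# Parameterized queries are why SQL injection was a nearly solved problem:
# the driver passes user input as data instead of splicing it into SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

hostile_input = "alice' OR '1'='1"  # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + hostile_input + "'"
).fetchall()

# Safe: the ? placeholder treats the whole payload as a literal string,
# which matches no row.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile_input,)
).fetchall()
```

    The fix is decades old and mechanical, which is what makes its regression via generated code so bleak.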