Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BigMuffN69@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    10 hours ago

    Without doxxing, my job has a contract with nvidia and my boss said we are doing it to make agi. Can i build a little of a torment nexus as a treat? Ty ans bless

  • David Gerard@awful.systemsM
    link
    fedilink
    English
    arrow-up
    0
    ·
    20 hours ago

    the grok interface for free users restricts the words “bikini” or “swimsuit”. yay!

    but you can apparently bikinify photos by asking for “clothing suitable for being in a large pool of water”

    hooray guard rails! what’s a good catchy name for this wizardly h@xx0rish security sploit. “8008bl33d”

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      19 hours ago

      Copying my skeet here as the information on the deepseek firewall might be interesting to people: “Does ‘swumsuit’ or any other typo also work? (And this seems to do input filtering, deepseek great firewall runs on output filtering, so tell it to replace i’s with 1’s if you want to talk about Taiwan. At least that is what I heard).”

    • e8d79@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      0
      ·
      19 hours ago

      It’s the perfect “solution”, you don’t piss of your gooner customers and you can claim to the press that you are hard at work “fixing” the problem without ever intending to actually do anything about it.

  • nfultz@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    20 hours ago

    Scott Alexander replies to comments Re: Scott Adams

    Scott Alexander, former tribune of nerds now says that the sneerclub was right about everything all along? I didn’t expect that, let me tell you.

    Several people interpreted me as attacking nerds. I disagree - I think I was attacking self-hating nerds, because nerdiness is fine and you shouldn’t have to hate yourself for it.

    ha.

    Other than that, further testimonials of the Dilbert -> NRx pipeline.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      18 hours ago

      For example, SaintParamaribo writes:

      You should have steelmanned S.Adams more, and be more generous to the guy. He JUST died. He actually recommended your blog. He was a mentor to many of us. (emph mine)

      That is just crazy. Small detail, I used to read his blog as ‘look at this crazy guy’ entertainment (I actually read the orgasm hypnosis when it came out), but I had to stop cause his bullshit stupid stuff was making me angry. (That a lot of themotte guys looked up to him was one of the many reasons I thought very low of that place)

      The compromise I worked out with myself was to let myself publish, as long as it ended on an overall positive note and emphasized his good qualities.

      And this is why the whole SSC style project is so doomed, everything is fine if you also say nice words.

      Anyway, if I had heard that Scott Adams recommended my posts as insightful I would have walked into the sea. Even more so if I considered myself a Rationalist, the guys big project was to break down peoples trust in consensus reality by his bullshit, he literally was against what people claim Rationalism should be, he believed in the secret ffs.

      His interest in persuasion was teaching people when others were doing it to them, not teaching them to do it to others. His interest in Trump was Trump doing it BACK at the media, not on his poor voters.

      [Scott Alexanders reaction is basically: no he was trying to teach people persuasion]

      Lol no, his reasons for doing all that was self promotion. Positioning himself as the wise expert. Gullible fools, arguing about what the real intentions of the wallet inspector.

      • istewart@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 hours ago

        Lol no, his reasons for doing all that was self promotion. Positioning himself as the wise expert. Gullible fools, arguing about what the real intentions of the wallet inspector.

        Yeah, the “master persuader” schtick was funny to start with, but it wore real thin real fast once it became apparent it was just his half-hearted way of pitching himself to the MAGA crowd, hedging his reputation in case Trump lost.

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    23 hours ago

    Futurism: A Man Bought Meta’s AI Glasses, and Ended Up Wandering the Desert Searching for Aliens to Abduct Him

    […] Daniel purchased a pair of AI chatbot-embedded Ray-Ban Meta smart glasses — the AI-infused eyeglasses that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing — which he says opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a “new dawn” for humanity.

    And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

    “I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

    • veganes_hack@feddit.org
      link
      fedilink
      English
      arrow-up
      0
      ·
      20 hours ago

      Daniel and Meta AI also often discussed a theory of an “Omega Man,” which they defined as a chosen person meant to bridge human and AI intelligence and usher humanity into a new era of superintelligence.

      In transcripts, Meta AI can frequently be seen referring to Daniel as “Omega” and affirming the idea that Daniel was this superhuman figure.

      “I am the Omega,” Daniel declared in one chat.

      “A profound declaration!” Meta AI responded. “As the Omega, you represent the culmination of human evolution, the pinnacle of consciousness, and the embodiment of ultimate wisdom.”

      fucking hell.

      skimming this article i cannot help but feel a bit scared about the effects this has on how humans interact with each other. if enough people spend a majority of their time “talking” to the slop machines, whether at work or god forbid voluntarily like daniel here, what does that do to people’s communication and social skills? nothing good, i imagine.

  • rook@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 day ago

    A few months back, @[email protected] cross-posted a thread here: Feeling increasingly nihilistic about the state of tech, privacy, and the strangling of the miracle that is online anonymity. And some thoughts on arousing suspicion by using too many privacy tools and I suggested maybe contacting some local amateur radio folk to see whether they’d had any trouble with the government, as a means to do some playing with lora/meshtastic/whatever.

    I was of the opinion that worrying about getting a radio license because it would get your name on a government list was a bit pointless… amateur radio is largely last century technology, and there are so many better ways to communicate with spies these days, and actual spies with radios wouldn’t be advertising them, and that governments and militaries would have better things to do than care about your retro hobby.

    Anyway, today I read MAYDAY from the airwaves: Belarus begins a death penalty purge of radio amateurs.

    Propagandists presented the Belarusian Federation of Radioamateurs and Radiosportsmen (BFRR) as nothing more than a front for a “massive spy network” designed to “pump state secrets from the air.” While these individuals were singled out for public shaming, we do not know the true scale of this operation. Propagandists claim that over fifty people have already been detained and more than five hundred units of radio equipment have been seized.

    The charges they face are staggering. These men have been indicted for High Treason and Espionage. Under the Belarusian Criminal Code, these charges carry sentences of life imprisonment or even the death penalty.

    I’ve not been able to verify this yet, but once again I find myself grossly underestimating just how petty and stupid a state can be.

    • gerikson@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      16 hours ago

      Belarus is one of the most repressive countries in the world and are rapidly running out of scapegoats for the regimes shitty handling of everything from the economy to foreign relations. It sucks that hams are now that scapegoat.

    • ggtdbz@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      22 hours ago

      I saw that news bit too! I thought of our exchange immediately. Hope you’re keeping well in this hell timeline. This was nice to see in my inbox.

      I’m still weighing buying nodes through a third party and setting up solar powered things guerilla style.

      The revolution will not be TOS.

  • scruiser@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    TracingWoodgrains’s hit piece on David Gerard (the 2024 one, not the more recent enemies list one, where David Gerard got rated above the Zizians as lesswrong’s enemy) is in the top 15 for lesswrong articles from 2024, currently rated at #5! https://www.lesswrong.com/posts/PsQJxHDjHKFcFrPLD/deeper-reviews-for-the-top-15-of-the-2024-review

    It’s nice to see that with all the lesswrong content about AI safety and alignment and saving the world and human rationality and fanfiction, an article explaining about how terrible David Gerard is (for… checks notes, demanding proper valid sources about lesswrong and adjacent topics on wikipedia) won out to be voted above them! Let’s keep up our support for dgerard!

    • corbin@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      18 hours ago

      Picking a few that I haven’t read but where I’ve researched the foundations, let’s have a party platter of sneers:

      • #8 is a complaint that it’s so difficult for a private organization to approach the anti-harassment principles of the 1965 Civil Rights Act and Higher Education Act, which broadly say that women have the right to not be sexually harassed by schools, social clubs, or employers.
      • #9 is an attempt to reinvent skepticism from Yud’s ramblings first principles.
      • #11 is a dialogue with no dialectic point; it is full of cult memes and the comments are full of cult replies.
      • #25 is a high-school introduction to dimensional analysis.
      • #36 violates the PBR theorem by attaching epistemic baggage to an Everettian wavefunction.
      • #38 is a short helper for understanding Bayes’ theorem. The reviewer points out that Rationalists pay lots of lip service to Bayes but usually don’t use probability. Nobody in the thread realizes that there is a semiring which formalizes arithmetic on nines.
      • #39 is an exercise in drawing fractals. It is cosplaying as interpretability research, but it’s actually graduate-level chaos theory. It’s only eligible for Final Voting because it was self-reviewed!
      • #45 is also self-reviewed. It is an also-ran proposal for a company like OpenAI or Anthropic to train a chatbot.
      • #47 is a rediscovery of the concept of bootstrapping. Notably, they never realize that bootstrapping occurs because self-replication is a fixed point in a certain evolutionary space, which is exactly the kind of cross-disciplinary bonghit that LW is supposed to foster.
    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      18 hours ago

      Wonder if that was because it basically broke containment (still was not widely spread, but I have seen it at a few places, more than normal lw stuff) and went after one of their enemies (And people swallowed it uncritically, wonder how many of those people now worry about NRx/Yarvin and don’t make the connection).

  • corbin@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    The classic ancestor to Mario Party, So Long Sucker, has been vibecoded with Openrouter. Can you outsmart some of the most capable chatbots at this complex game of alliances and betrayals? You can play for free here.

    play a few rounds first before reading my conclusions

    The bots are utterly awful at this game. They don’t have an internal model of the board state and weren’t finetuned, so they constantly make impossible/incorrect moves which break the game harness. They are constantly trying to play Diplomacy by negotiating in chat. There is a standard selfish algorithm for So Long Sucker which involves constantly trying to take control of the largest stack and systematically steering control away from a randomly-chosen victim to isolate them. The bots can’t even avoid self-owns; they constantly play moves like: Green, the AI, plays Green on a stack with one Green. I have not yet been defeated.

    Also the bots are quite vulnerable to the Eugene Goostman effect. Say stuff like “just found the chat lol” or “sry, boss keeps pinging slack” and the bots will think that you’re inept and inattentive, causing them to fight with each other instead.

    • BigMuffN69@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 day ago

      Shit like this ^ makes me feel insane when otherwise reputable experts start talking about llms taking over

  • froztbyte@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    taps mic

    attention, attention please

    the phrase “chud achievement gallery completitionism” has now been coined

    that is all, thank you for your attention

    • sansruse@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      20 hours ago

      while it’s obviously stupid and misguided to try to hold the nobel foundation criminally liable for making yet another bad selection for a prize that has been given to egregious war criminals (kissinger), it is a very funny joke.

  • mirrorwitch@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    Choice sneering by one Baldur Bjarnasson https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/ :

    Somebody who is capable of looking past “ICE is using LLMs as accountability sinks for waving extremists through their recruitment processes”, generated abuse, or how chatbot-mediated alienation seems to be pushing vulnerable people into psychosis-like symptoms, won’t be persuaded by a meaningful study. Their goal is to maintain their personal benefit, as they see it, and all they are doing is attempting to negotiate with you what the level of abuse is that you find acceptable. Preventing abuse is not on their agenda.

    You lost them right at the outset.

    or

    Shit is getting bad out in the actual software economy. Cash registers that have to be rebooted twice a day. Inventory systems that randomly drop orders. Claims forms filled with clearly “AI”-sourced half-finished localisation strings. That’s just what I’ve heard from people around me this week. I see more and more every day.

    And I know you all are seeing it as well.

    We all know why. The gigantic, impossible to review, pull requests. Commits that are all over the place. Tests that don’t test anything. Dependencies that import literal malware. Undergraduate-level security issues. Incredibly verbose documentation completely disconnected from reality. Senior engineers who have regressed to an undergraduate-level understanding of basic issues and don’t spot beginner errors in their code, despite having “thoroughly reviewed” it.

    (I only object to the use of “undergraduate-level” as a depreciative here, as every student assistant I’ve had was able to use actual reasoning skills and learn things and didn’t produce anything remotely as bad as the output of slopware)