Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Sailor Sega Saturn@awful.systems · 4 hours ago (edited)

    NotAwfulTech and AwfulTech converged with some ffmpeg drama on twitter over the past few days, starting here and still ongoing. This is about an AI-generated security report by Google’s “Big Sleep” (with no corresponding Google-authored fix, AI or otherwise). Hackernews discussed it here. Looking at ffmpeg’s security page, there have been around 24 Big Sleep reports fixed.

    ffmpeg pointed out a lot of stuff along the lines of:

    • They are volunteers
    • They don’t have enough money
    • Certain companies that do use ffmpeg and file security reports also have a lot of money
    • Certain ffmpeg developers are willing to enter consulting roles for companies in exchange for money
    • Their product has no warranty
    • Reviewing LLM generated security bugs royally sucks
    • They’re really just in this for the video codecs, more so than treating every single use-after-free bug as a drop-everything emergency
    • Making the first 20 frames of certain Rebel Assault videos slightly more accurate is awesome
    • Think it could be more secure? Patches welcome.
    • They did fix the security report
    • They do take security reports seriously
    • You should not run ffmpeg “in production” if you don’t know what you’re doing.

    All very reasonable points but with the reactions to their tweets you’d think they had proposed killing puppies or something.

    A lot of people seem to forget this part of open source software licenses:

    BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW

    Or that venerable old C code will have memory safety issues for that matter.

    It’s weird that people are freaking out about some UAFs in a C library. This should really be dealt with in enterprise environments via sandboxing / filesystem containers / ASLR / control-flow integrity / non-executable memory enforcement / only compiling the codecs you need… and oh gee, a lot of those improvements could be upstreamed!
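    One cheap version of that sandboxing, as a sketch: run ffmpeg in a child process with hard resource limits, so a decoder bug takes down the child rather than the host app. This assumes a POSIX host; `run_confined` is a hypothetical helper and the specific limits are illustrative, not a vetted policy:

```python
import resource
import subprocess

def run_confined(cmd, timeout=60):
    """Run cmd in a child process with hard resource caps (POSIX only).
    A memory-corruption bug in the child crashes the child, not us."""
    def set_limits():
        # Cap address space at 1 GiB and CPU time at `timeout` seconds.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))
    return subprocess.run(
        cmd,
        preexec_fn=set_limits,  # applied in the child before exec
        capture_output=True,
        timeout=timeout,
    )

# e.g. decode untrusted input with only the limits above as a blast shield:
# run_confined(["ffmpeg", "-i", "untrusted.webm", "-frames:v", "1", "out.png"])
```

    Real deployments would layer seccomp, namespaces, or a container on top; rlimits alone are a floor, not a sandbox.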

      • swlabr@awful.systems · 3 hours ago

      For a moment there I was worried that ffmpeg had turned fash.

      Anyway, amazing job ffmpeg, great responses. No notes

  • saucerwizard@awful.systems · 5 hours ago

    Watching another rationalist type on twitter become addicted to meth. You guys weren’t joking.

    (no idea who - just going by the subtweets).

  • Seminar2250@awful.systems · 12 hours ago

    Like a complete fucking idiot, I paid for two years of protonmail right before discovering they are fascists. I would like to move to another provider. I have until August. I have been considering Forward Email. Anyone have thoughts on this provider or recommendations?

    • antifuchs@awful.systems · 2 hours ago

      I’m very happy on Fastmail. They are sensible people who offer mainly email (and calendar stuff) with no overpromises. Their servers are hosted in the USA tho, so that may affect your choice.

    • froztbyte@awful.systems · 10 hours ago

      haven’t seen them before, but a short tour around their infra/systems providers isn’t particularly exciting - depending on both your threat model and what-you-want in a vendor

      some parts/pages do provide some detail in encouraging depth, but I’d have to do a much more full review to give you a good answer

      there’s been a couple of “where email” threads over the last year, tuta’s still one of the top options on that but you can check the threads if you want to see some of the other promising options

    • e8d79@discuss.tchncs.de · 10 hours ago

      I am using posteo.de. They are good but I dislike that they have no option for using your own domain which makes switching provider really annoying. If I had to choose a provider again I would probably go with mailbox.org.

    • swlabr@awful.systems · 10 hours ago

      not outside of the fascist playbook to claim that they are the real victims. The example that comes to mind is the myth of white genocide, but also literally any fascist rhetoric is like that.

      It’s well trodden ground to say that genAI usage and support for genAI resonates with populist/reactionary/fascist themes in that it inherently devalues and dehumanises, and it promotes anti-intellectualism. If you can be replaced by AI, what worth do you have? And why think if the AI can do it for you?

      So, of course this stuff is being echoed in spaces where the majority are ignorant of the nazi tilt. They can’t and don’t understand fascism on a structural level; they can only identify it when it’s trains and gas chambers.

    • gerikson@awful.systems · 11 hours ago

      It’s been a while since I used Reddit. Is the thesis that subscribers to ChatGPT will be rounded up and killed? By whom? For what stated reason? It sounds like a weird inversion of victimhood, considering the number of GenAI users (even if they’re just casual users) and the massive money and hype around GenAI from companies and way too many govs.

          • sc_griffith@awful.systems · 10 hours ago

            I haven’t touched image generators and idk how different their products are, if at all. but I think of this as the default AI “illustrated” style. very low on detail outside of the objects of focus, heavy line work, flat, rounded, muted colors

        • sansruse@awful.systems · 10 hours ago

          this is weird. My first thought is that it’s just another vector of normalization for the idea that people who are afraid of and Post about genocide or other forms of discriminatory violence are not to be taken seriously. By putting a variety of insane victimhood appropriating subcultures into the internet milieu, it allows people to ignore what’s happening (and what may be about to happen) in the real world, where groups of people actually are subject to fascistic violence.

          • BlueMonday1984@awful.systems (OP) · 10 hours ago

            Probably one part normalisation, one part AI supporters throwing tantrums when people don’t treat them like the specialiest little geniuses they believe they are. These people have incredibly fragile egos, after all.

          • YourNetworkIsHaunted@awful.systems · 10 hours ago

            That is my thought as well. It’s like the “you call everyone you disagree with a Nazi” argument from the 90s and 00s - discrediting attempts to call out fascist and genocidal ideas creates a lot of cover for those ideas to spread without being appropriately checked. It helps create a situation where serious and respectable people can keep arguing that things aren’t that bad all the way until they get pushed onto a cattle car.

    • Seminar2250@awful.systems · 12 hours ago

      I just saw this. I sent an email to Framework a few days ago asking if they would delete my account and letting them know this was the reason.

  • a_certain_individual@lemmy.world · 17 hours ago

    Boss at new job just told me we’re going all-in on AI and I need to take a core role in the project

    They want to give LLMs access to our wildly insecure mass of SQL servers filled with numeric data

    Security a non factor

    😂🔫

    • CinnasVerses@awful.systems · 13 hours ago

      Sounds like the thing to do is to say yes boss, get Baldur Bjarnason’s book on business risks and talk to legal, then discover some concerns that just need the boss’ sign-off in writing.

  • gerikson@awful.systems · 21 hours ago

    Thoughts / notes on Nostr? A local on a tech site is pushing it semi-hard, and I just remember it being mentioned in the same breath as Bluesky back in the day. It ticks a lot of techfash boxes - decentralized, “uncensorable”, has Bitcoin’s stupid Lightning protocol built in.

    • flere-imsaho@awful.systems · 16 hours ago

      nostr neatly covers all obsessions of dorsey. it’s literally fash-tech (original dev, fiatjaf, is a right-wing nutjob; and current development is driven by alex gleason of the truth dot social fame), deliberately designed to be impossible to moderate (“censorship-resilient”); the place is full of fascists, promptfondlers and crypto dudes.

    • fullsquare@awful.systems · 17 hours ago

      exploding-heads, openly trumpist lemmy instance, fucked off there when admin got bored of baiting normal people, make of that what you will

      • fullsquare@awful.systems · 11 hours ago

        flashback: even back then a handful of regulars objected that nostr is packed with cryptobros and spam, so it’s been like that for 2y minimum

    • swlabr@awful.systems · 19 hours ago

      Jack Dorsey seems to like throwing money at it:

      Jack Dorsey, the co-founder of Twitter, has endorsed and financially supported the development of Nostr by donating approximately $250,000 worth of Bitcoin to the developers of the project in 2023, as well as a $10 million cash donation to a Nostr development collective in 2025.

      (source: wiki)

  • swlabr@awful.systems · 1 day ago

    More flaming dog poop appeared on my doorstep, in the form of this article published in VentureBeat. VB appears to be an online magazine for publishing silicon valley propaganda, focused on boosting startups, so it’s no surprise that they’d publish this drivel sent in by some guy trying to parlay prompting into writing.

    Point:

    Apple argues that LRMs must not be able to think; instead, they just perform pattern-matching. The evidence they provided is that LRMs with chain-of-thought (CoT) reasoning are unable to carry on the calculation using a predefined algorithm as the problem grows.

    Counterpoint, by the author:

    This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.

    As someone who already knows the algorithm for solving the ToH problem, I wouldn’t “fail” at solving the one with twenty discs so much as I’d know that the algorithm is exponential in the number of discs and you’d need 2^20 - 1 (1048575) steps to do it, and refuse to indulge your shit reasoning.
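    For the record, that move count is easy to verify: the optimal Tower-of-Hanoi solution for n discs takes 2^n − 1 moves, which a five-line recursion confirms (a quick sketch; `hanoi_moves` is just an illustrative name):

```python
def hanoi_moves(n, src="A", dst="C", aux="B", out=None):
    """Collect the optimal move sequence for n discs; its length is 2**n - 1."""
    if out is None:
        out = []
    if n > 0:
        hanoi_moves(n - 1, src, aux, dst, out)  # park n-1 discs on the spare peg
        out.append((src, dst))                  # move the biggest disc
        hanoi_moves(n - 1, aux, dst, src, out)  # restack the n-1 discs on top
    return out

assert len(hanoi_moves(3)) == 7
assert len(hanoi_moves(20)) == 2**20 - 1  # 1,048,575 moves, as above
```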

    However, this argument only points to the idea that there is no evidence that LRMs cannot think.

    Argument proven stupid, so we’re back to square one on this, buddy.

    This alone certainly does not mean that LRMs can think — just that we cannot be sure they don’t.

    Ah yes, some of my favorite GOP turns of phrase: “no unknown unknowns” + “big if true”.

    • Seminar2250@awful.systems · 18 hours ago

      This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.

      “I don’t understand recursion” energy