Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 29 days ago

    A second post on software project management in a week, this one from deadsimpletech: failed software projects are strategic failures.

    A window into another IT disaster I wasn’t aware of, but clearly there is no shortage of those. An Australian one, this time.

    And of course, without having at least some of that expertise in-house, they found themselves completely unable to identify that Accenture was either incompetent, actively gouging them or both.

    (spoiler alert, it was both)

    Interesting mention of Clausewitz in the context of management, which gives me pause a bit, because techbros famously love “The Art of War”, probably because Sun Tzu was patiently explaining obvious things to idiots and that works well on them. “On War” might be a better text, I guess.

    https://deadsimpletech.com/blog/failed_software_projects

    • nfultz@awful.systems · 29 days ago

      I associate Clausewitz (and especially John Boyd) references more with a Palantir / Stratfor / Booz / LE-MIC-consulting class compared to your typical bay area YC techbro in the US, and a very different crowd over in AU / NZ where grognards probably outnumber the actual military. LWers never bring up Clausewitz either but love Sun Tzu. But as far as software strategy posts go, I’d much rather read a Clausewitz tie-in than, say, Mythical Man Month or Agile anything.

      • rook@awful.systems · 29 days ago

        Much of the content of The Mythical Man-Month is still depressingly relevant, especially in conjunction with Brooks’ later stuff like “No Silver Bullet”. A lot of senior tech management either never read it, or read it so long ago that they forgot the relevant points beyond the title.

        It’s interesting that Clausewitz doesn’t appear in LW discussions. That seems like a big point in favour of his writing.

        • nfultz@awful.systems · 29 days ago

          If you liked Brooks, you might give Gerald Weinberg a try. A bit more folksy / less corporate.

    • swlabr@awful.systems · 26 days ago

      I clicked as I was curious as to what markers of AI use would appear. I immediately realised the problem: if it is written with AI then I wouldn’t want to read it, and thus wouldn’t be able to tell. Luckily the author’s profile cops to being “AI assisted”, which could mean a lot of things that just boil down to “slop forward”.

      • lagrangeinterpolator@awful.systems · 26 days ago

        The most obvious indication of AI I can see is the countless paragraphs that start with a boldfaced “header” with a colon. I consider this to be terrible writing practice, even for technical/explanatory writing. When a writer does this, it feels as if they don’t even respect their own writing. Maybe their paragraphs are so incomprehensible that they need to spoonfeed the reader. Or, perhaps they have so little to say that the bullet points already get it across, and their writing is little more than extraneous fluff. Yeah, much larger things like sections or chapters should have titles, but putting a header on every single paragraph is, frankly, insulting the reader’s intelligence.

        I see AI output use this format very frequently though. Honestly, this goes to show how AI appeals to people who only care about shortcuts and bullshitting instead of thinking things through. Putting a bold header on every single paragraph really does appeal to that type.

        • CinnasVerses@awful.systems · 26 days ago

          Also the endless “it’s not X—it’s Y”, an overheated but empty style, and a conclusion which promises “It documents specific historical connections between specific intellectual figures using publicly available sources.” when there are no footnotes or links. Was ESR on the Extropians mailing list, or did a plausible-string generator emit that plausible string?

          Chatbots are good at generating writing in the style of LessWrong because wordy vagueness based on no concrete experience is their whole thing.

  • fullsquare@awful.systems · 27 days ago

    anyone else spend their saturday looking for gas turbine datasheets? no?

    anyway, the bad, no good, haphazard power engineering of crusoe

    neoclouds on top of silicon need a lot of power that they can’t get, because they can’t get a big enough substation, or maybe the provider denied it, so they decided that homemade is just as fine. in order to turn some kind of fuel (could be methane, or maybe not, who knows) into electricity they need gas turbines, and a couple of weeks back there was a story that crusoe got their first aeroderivative gas turbines from GE https://www.tomshardware.com/tech-industry/data-centers-turn-to-ex-airliner-engines-as-ai-power-crunch-bites (old, refurbished, modified jet engines put in a chassis with a generator, with the turbofan removed). in total they booked 29 turbines from GE, LM2500 series, plus the PE6000 from another company called proenergy,* and probably others (?), for an alleged 4.5GW total. for neoclouds, generators of this type have major advantages:

    1. they exist and the backlog isn’t horrific: the first ones delivered were contracted in december 2024, so about 10 months, and onsite construction is limited (sometimes less than a month)
    2. these things are compact and reasonably powerful, and can be loaded on a trailer in parts and just delivered wherever
    3. at the same time they are small enough that piecewise installation is reasonable (34.4MW each, so just from GE 1GW total spread across 29)

    and that’s about it for advantages. these choices are fucking weird, really. the state of the art in turning gas into electricity is to first take as big a gas turbine as practical, which might be 100MW, 350MW, there are even bigger ones. this is because the efficiency of gas turbines increases with size: a big part of the losses comes from gas slipping through the gap between the blades and the stator/rotor, and the bigger the turbine, the bigger the cross-sectional area occupied by blades (~r^2), so the gap (~r) matters less. this effect alone accounts for efficiency differences of a couple of percent just for the gas turbine: for GE’s aeroderivative 35MW-ish turbine (LM2500) we’re looking at 39.8% efficiency, while another GE aeroderivative turbine (LMS100) at 115MW has 43.9%. our neocloud disruptors stop there, with their just-under-40%-efficient turbines (and probably lower*), while the exhaust is well over 500C and can be used to boil water, which is what any serious powerplant does in combined cycle. this additional steam turbine provides about a third of the total generated energy, bringing overall efficiency to some 60-63%.

    so right off the bat, crusoe throws away about a third of the usable energy, or alternatively, for the same amount of power they burn 50-70% more gas, if they even use gas and not, say, diesel. they specifically didn’t order turbines with this extra heat-recovery mechanism: based on the datasheet https://www.gevernova.com/content/dam/gepower-new/global/en_US/downloads/gas-new-site/products/gas-turbines/gev-aero-fact-sheets/GEA35746-GEV-LM2500XPRESS-Product-Factsheet.pdf they would get over 1.37GW with it, while the GE press announcement talked about “just under 1GW”, which matches only the oldest type of turbine there (guess: cheapest), or maybe some mix with even older ones than what is shown. this is not what a serious power-generating business would do, because for them every fraction of a percent matters. while it might be possible to add heat-recovery steam boilers and steam turbine units later, that means extra installation time (capex per MW turns out to be similar) and more backlog, and it requires more planning and real estate and foresight, and if they had that they wouldn’t be there in the first place, would they. even then, efficiencies only get to maybe 55%, because it turns out the heat exchangers required for the professional stuff are huge and can’t be loaded on a trailer, so they have to go with less.
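
    to make the fuel numbers concrete, a quick back-of-envelope check (a sketch in python; the simple-cycle efficiencies are the datasheet figures above, the 60%/63% combined-cycle figures are my assumptions):

    ```python
    # fuel burned scales as (power out) / efficiency, so for the same output
    # the simple-cycle vs combined-cycle fuel ratio is eff_combined / eff_simple.

    def extra_fuel_fraction(eff_simple: float, eff_combined: float) -> float:
        """fraction of extra fuel burned at simple-cycle efficiency
        compared to a combined-cycle plant, for the same electrical output."""
        return eff_combined / eff_simple - 1.0

    print(f"LM2500 (39.8%) vs 60% combined cycle: {extra_fuel_fraction(0.398, 0.60):.0%}")
    print(f"PE6000-ish (37.5%) vs 63% combined cycle: {extra_fuel_fraction(0.375, 0.63):.0%}")
    # -> roughly 51% and 68% more gas, i.e. the 50-70% range above
    ```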

    so it sorta gets them power short term, and financially it doesn’t look good long term, but maybe they know that and don’t care, because they know they won’t be around to pay the gas bills; and if these glorified gensets are only used during outages, or otherwise not at full capacity, then it doesn’t matter that much. also, gas turbines need to run hot in order to run efficiently, but the hottest possible temperature with normal fuels would melt any material we can make blades of, so the solution is to take double or triple the amount of air needed and dilute the hot gases that way. these also happen to be perfect conditions for nitric oxide synthesis, which means smog downwind. now, there are SCRs which are supposed to deal with that, but that didn’t stop musk from poisoning the people of memphis when he did a very similar thing

    * proenergy takes the same jet engine that GE does and turns it into the PE6000, which is probably mostly the same stuff as the LM6000, except that the GE version is 51MW and the proenergy one 48MW. i don’t know whether it’s derated or just less efficient, but for the same gas consumption that would be 37.5%

    • YourNetworkIsHaunted@awful.systems · 26 days ago

      You’d think that they’d eventually run out of ways to say “fuck you, got mine” but here we are I guess. I’m going to guess that they’re not subject to the same kinds of environmental regulations or whatever that an actual power plant would be because it’s not connected to the grid?

      • fullsquare@awful.systems · 26 days ago

        why wouldn’t they be subject to emission controls if they’re islanded? anyway, they aren’t islanded: they’re using gas turbines to supplement what they can draw from the substation, or the other way around. either way it’s probably all synchronized and connected, they just put these turbines behind the meter

        i guess they’re not subject to emission controls because they’re in texas and anything green is woke, so they might just not do any of that and vent all the carbon monoxide and nitrogen oxides these things belch. also, no surprises if they fold before the emaciated epa gets to them, if republicans don’t prevent it outright that is

        welcome to the abyss, it sucks here

        i mostly meant to point out that it looks like they prioritized delivery speed and minimum construction while paying top dollar for 50-70% extra fuel, so it makes sense short term, and who cares what comes in two years when they’re under. this also means they bought out all the gas turbines money can buy. if marine diesels weren’t so heavy, those would be next

  • gerikson@awful.systems · 26 days ago

    Yud explains, over 3k words, that not only is he smarter than everyone else, he is also saner, and no, there’s no way you can be as sane as him

    Eliezer’s Unteachable Methods of Sanity

    (side note - it’s weird that LW, otherwise so anxious about designing their website, can’t handle fucking apostrophes correctly)

    • Soyweiser@awful.systems · 26 days ago

      Ah, prophet-maxxing. ‘they have no hope of understanding and I have no hope of explaining in 30 seconds’

      The first and oldest reason I stay sane is that I am an author, and above tropes. Going mad in the face of the oncoming end of the world is a trope.

      This guy wrote this. (Note: I don’t think there is anything wrong with looking like a nerd (I have a mirror somewhere, so I don’t want to be a hypocrite on this), but looking like one while saying you are above tropes is something. There is also HPMOR.)

    • blakestacey@awful.systems · 26 days ago

      Handshake meme of Yud and Rorschach praising Harry S Truman

      From the comments:

      I got Claude to read this text and explain the proposed solution to me

      Once you start down the Claude path, forever will it dominate your destiny…

  • gerikson@awful.systems · 27 days ago

    2 links from my feeds with crossover here

    Lawyers, Guns and Money: The Data Center Backlash

    Techdirt: Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric

    Unfortunately Techdirt’s Mike Masnick is a signatory to some bullshit GenAI-collaborationist manifesto called The Resonant Computing Manifesto, along with other usual suspects like Anil Dash. Like so many other technolibertarian manifestos, it naturally declines to say how their wonderful vision would be economically feasible in a world without meaningful brakes on the very tech giants they profess to oppose.

    • David Gerard@awful.systemsM · 27 days ago

      i am pretty sure i am shredding the Resonant Computing Manifesto for Monday

      and of course Anil Dash signed it

      • blakestacey@awful.systems · 27 days ago

        The people who build these products aren’t bad or evil.

        No, I’m pretty sure that a lot of them just are bad and evil.

        With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise.

        [citation needed]

        [to a source that’s not laundered slop, ya dingbats]

        • Soyweiser@awful.systems · 26 days ago

          to a source that’s not laundered slop, ya dingbats

          Ha, that’s easy. Read Singularity Sky by Charles Stross and see all the wonders the Festival brings.

    • rook@awful.systems · 1 month ago

      It is important to note that the reviews were detected as being AI-generated by an AI tool.

      This is a marketing puff piece.

      I mean, I expect that loads of the submissions are by slop extruders… under the circumstances, how could they not be? But until someone does the legwork of checking this, it’s just another magic-eight-ball-says-maybe, dressed up as science.

      • lagrangeinterpolator@awful.systems · 1 month ago

        Unfortunately, I don’t think anyone is ever going to go through all 19,797 submissions and 75,800 reviews (to one conference, in one year) and manually review them all. Then again, using the ultra-advanced cutting-edge innovative statistical technique of randomly sampling a few papers/reviews, one can still get useful conclusions.
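
        For instance, a random sample gives a defensible prevalence estimate with error bars. A minimal sketch (the 42-of-200 count is hypothetical, purely for illustration):

        ```python
        import math

        def proportion_ci(k: int, n: int, z: float = 1.96):
            """95% normal-approximation confidence interval for a proportion
            estimated from k positives in a random sample of size n."""
            p = k / n
            se = math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - z * se), min(1.0, p + z * se)

        # hypothetical: 42 of 200 randomly sampled reviews judged LLM-generated
        p, lo, hi = proportion_ci(42, 200)
        print(f"estimated share: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
        ```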

        • JFranek@awful.systems · 1 month ago

          all 19,797 submissions and 75,800 reviews (to one conference, in one year)

          tired: Dead Internet Theory
          wired: Dead Conferences Theory

        • blakestacey@awful.systems · 1 month ago

          At least this example grew out of actual humans being suspicious.

          Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work.

          Graham Neubig, an AI researcher at Carnegie Mellon University in Pittsburgh, Pennsylvania, was one of those who received peer reviews that seemed to have been produced using large language models (LLMs). The reports, he says, were “very verbose with lots of bullet points” and requested analyses that were not “the standard statistical analyses that reviewers ask for in typical AI or machine-learning papers.”

          We seem to be in a situation where everybody knows that the review process has broken down, but the “studies” that show it are criti-hype.

          Welcome to the abyss. It sucks here (academic edition).

    • froztbyte@awful.systems · 1 month ago

      that being a hung banner (rather than a wall mount or the like) borders on being a tacit acknowledgement that they know their shit is unpopular and would get vandalised in a fucking second if it were easy (or easier!) to get to

      even then, I suspect that banner will not stay unscathed for long

  • froztbyte@awful.systems · 1 month ago

    (e, cw: genocide and culturally-targeted hate by the felon bot)

    world’s most divorced man continues outperforming black holes at sucking

    404 also recently did a piece on his ego-maintenance society-destroying vainglory projects

    imagine what it’s like in his head. era-defining levels of vacuous.

    • bitofhope@awful.systems · 29 days ago

      From the replies

      I wonder what prompted it to switch to Elon being worth less than the average human while simultaneously saying it’d vaporize millions if it could prolonged his life in a different sub-thread

      It’s odd to me that people still expect any consistency from chatbots. These bots can and will give different answers to the same verbatim question. Am I just too online if I have involuntarily encountered enough AI output to know this?

    • Soyweiser@awful.systems · 1 month ago

      I know it is a bit of elitism/privilege on my part. But if you don’t know about the existence of Google Translate(*), perhaps you shouldn’t be doing vibe coding like this.

      *: this, of course, could have been an LLM-based vibe translation error.

      E: And I guess my theme this week is translations.

    • lagrangeinterpolator@awful.systems · 1 month ago

      After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: “Never let an LLM have any decision-making power.” At most, LLMs will serve as a heuristic function for an algorithm that actually works.
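
      That “heuristic function” role would look roughly like this (a sketch of the pattern, not any real API; llm_score stands in for a hypothetical model call):

      ```python
      from typing import Callable, Iterable, Optional, TypeVar

      T = TypeVar("T")

      def pick_first_valid(
          candidates: Iterable[T],
          llm_score: Callable[[T], float],  # hypothetical LLM-derived heuristic
          is_valid: Callable[[T], bool],    # deterministic check that actually works
      ) -> Optional[T]:
          """Try candidates in the order the heuristic prefers, but accept one
          only if the real validator passes it; the LLM never makes the decision."""
          for c in sorted(candidates, key=llm_score, reverse=True):
              if is_valid(c):
                  return c
          return None  # heuristic was no help; nothing valid found
      ```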

      Unlike the railroads of the First Gilded Age, I don’t think GenAI will have many long-term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it’s not worth spending lots of money on a task where you don’t need reliability.

      The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?

      The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true “use cases” to be mainly spam, and perhaps students cheating on homework.

      • zogwarg@awful.systems · 1 month ago

        Pessimistically, I think this scourge will be with us for as long as there are people willing to put code “that-mostly-works” into production. It won’t be making decisions, but we’ll get a new faucet of poor-code sludge to enjoy and repair.

    • froztbyte@awful.systems · 1 month ago

      yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good

      but it is very fucking funny to watch them FAFO

    • Sailor Sega Saturn@awful.systems · 1 month ago

      The documentation for “Turbo mode” for Google Antigravity:

      Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)

      No warning. No paragraph telling the user why it might be a good idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It’s not even named similarly to dangerous modes in other software (like “force” or “yolo” or “danger”).

      Just a cool marketing name that makes users want to turn it on. Heck if I’m using some software and I see any button called “turbo” I’m pressing that.

      It’s hard not to give the user a hard time when they write:

      Bro, I didn’t know I needed a seatbelt for AI.

      But really, they’re up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells the user “well, in our small print somewhere we used the phrase ‘Gemini can make mistakes’, so why did you enable turbo mode??”

  • Soyweiser@awful.systems · 1 month ago

    Edited this into a reply to Hanson now believing in aliens, but it seems like the SSC side of rationalism has a larger group of people who also believe in miracles: https://www.astralcodexten.com/p/the-fatima-sun-miracle-much-more (I have not read the article in depth; I’m going by what others reported about this incident. There also seem to be related LW posts.)

    Read it a bit now, and noticed that Scott doesn’t know anyone who speaks Portuguese and is relying on machine translation. (Also unclear what type of MT.)

    • BioMan@awful.systems · 1 month ago

      The long-expected collapse of the rationalists out of their flagging cult into ordinary religion and conspiracy theory continues apace.

  • rook@awful.systems · 1 month ago

    Reposted from Sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to AI which grabbed my attention and made me read the rest, even though it isn’t about AI at all.

    Few IT projects are displays of rational decision-making from which AI can or should learn.

    Which, haha, is a great quote, but it highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your LLM could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.

    The article continues to talk about how we can’t do IT, and wraps up with

    It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

    It is probably healthy to be reminded that the software industry was in a sorry state before the LLMs joined in.

    https://spectrum.ieee.org/it-management-software-failures

    • BlueMonday1984@awful.systemsOP · 1 month ago

      Considering the sorry state of the software industry, plus said industry’s adamant refusal to learn from its mistakes, I think society should actively avoid starting or implementing new software, if not actively cut back on software usage when possible, until the industry improves or collapses.

      That’s probably an extreme position to take, but IT as it stands is a serious liability, one that AI’s set to make so much worse.

      • rook@awful.systems · 1 month ago

        For a lot of this stuff at the larger end of the scale, the problem mostly seems to be a complete lack of accountability and consequences, combined with there being, like, four contractors capable of doing the work, with three giant accountancy firms able to audit the books.

        Giant government projects always seem to be a disaster, be they construction, healthcare, or IT, and no heads ever roll. Fujitsu was still getting contracts from the UK government even after it was clear they’d been covering up the absolute clusterfuck that was their post office system, which resulted in people being driven to poverty and suicide.

        At the smaller scale, well. “No warranty or fitness for any particular purpose” is the whole of the software industry outside of safety-critical firmware and the like. We have to expend an enormous amount of effort to get our products at work CE certified so we’re allowed to sell them, but the software that runs them? We can shovel that shit out of the door and no-one cares.

        I’m not sure we will ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

        • BlueMonday1984@awful.systemsOP · 1 month ago

          I’m not sure we will ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

          Considering how “vibe coding” has corroded IT infrastructure at all levels, how the AI bubble is set to trigger a 2008-style financial crisis when it bursts, and how AI itself has been deskilling students and workers at an alarming rate, I can easily see why.

          • o7___o7@awful.systems · 1 month ago

            In the land of the blind, the one-eyed man will make a killing as an independent contractor cleaning up after this blows up.

  • nfultz@awful.systems · 1 month ago

    Bubble or Nothing | Center for Public Enterprise (h/t The Syllabus); dry but good.

    Data centers are, first and foremost, a real estate asset

    They specifically note that after the 2-5 year mini-perm, the developers plan to dump the debt into commercial mortgage-backed securities. Echoes of 2008.

    However, project finance lawyers have mentioned that many data center project finance loans are backed not just by the value of the real estate but by tenants’ cash flows on “booked-but-not-billing” terms — meaning that the promised cash flow need not have materialized.

    Echoes of Enron.