• 0 Posts
  • 68 Comments
Joined 3 years ago
Cake day: July 19th, 2023

  • Wikipedia at 25: A Wake-Up Call h/t metafilter

    It’s a good read overall; it makes some good points about the global south.

    The hostility to AI tools within parts of our community is understandable. But it’s also strategic malpractice. We’ve seen this movie before, with Wikipedia itself. Institutions that tried to ban or resist Wikipedia lost years they could have spent learning to work with it. By the time they adapted, the world had moved on.

    AI isn’t going away. The question isn’t whether to engage. It’s whether we’ll shape how our content is used or be shaped by others’ decisions.

    Short of Wikipedia shipping its own chatbot that proactively pulls in edits and funnels traffic back, I think the ship has sailed. But it’s not unique; the same thing is happening to basically everything with a CC license, including SO and FOSS writ large. Maybe the right thing to do is put new articles under AGPL or something, a new license that taints an entire LLM at train time.



  • I’ll be brutally honest about that question: I think that if “they might train on my code / build a derived version with an LLM” is enough to drive you away from open source, your open source values are distinct enough from mine that I’m not ready to invest significantly in keeping you. I’ll put that effort into welcoming the newcomers instead.

    No he won’t.

    I’ve found myself affected by this for open source dependencies too. The other day I wanted to parse a cron expression in some Go code. Usually I’d go looking for an existing library for cron expression parsing—but this time I hardly thought about that for a second before prompting one (complete with extensive tests) into existence instead.

    He /knows/ about pcre but would rather prompt instead. And I’m pretty sure this was already answered on Stack Overflow before 2014.

    That one was a deliberately provocative question, because for a new HTML5 parsing library that passes 9,200 tests you would need a very good reason to hire an expert team for two months (at a cost of hundreds of thousands of dollars) to write such a thing. And honestly, thanks to the existing conformance suites this kind of library is simple enough that you may find their results weren’t notably better than the one written by the coding agent.

    He didn’t write a new library from scratch, he ported one from Python. I could easily hire two undergrads to change some tabs to curlies, pay them in beer, and yes, I think it /would/ be better, because at least they would have learned something.



  • From a new white paper Financing the AI boom: from cash flows to debt, h/t The Syllabus Hidden Gem of the Week

    The long-term viability of the AI investment surge depends on meeting the high expectations embedded in those investments, with a disconnect between debt pricing and equity valuations. Failure to meet expectations could result in sharp corrections in both equity and debt markets. As shown in Graph 3.C, the loan spreads charged on private credit loans to AI firms are close to those charged to non-AI firms. If loan spreads reflect the risk of the underlying investment, this pattern suggests that lenders judge AI-related loans to be as risky as the average loan to any private credit borrower. This stands in stark contrast to the high equity valuations of AI companies, which imply outsized future returns. This schism suggests that either lenders may be underestimating the risks of AI investments (just as their exposures are growing significantly) or equity markets may be overestimating the future cash flows AI could generate.

    ¿Por qué no los dos? But maybe the lenders are expecting a bailout… or are just gullible…

    That said, to put the macroeconomic consequences into perspective, the rise in AI-related investment is not particularly large by historical standards (Graph 4.A). For example, at around 1% of US GDP, it is similar in size to the US shale boom of the mid-2010s and half as large as the rise in IT investment during the dot-com boom of the 1990s. The commercial property and mining investment booms experienced in Japan and Australia during the 1980s and 2010s, respectively, were over five times as large relative to GDP.

    Interesting point, if AI is basically a rounding error for GDP… But I also remember the layoffs in 2000-01 and 2014-15; they weren’t evenly distributed and a lot of people got left behind, even if they weren’t as bad as '08.



  • From the new Yann LeCun interview https://www.ft.com/content/e3c4c2f6-4ea7-4adf-b945-e58495f836c2

    Meta made headlines for trying to poach elite researchers from competitors with offers of $100mn sign-on bonuses. “The future will say whether that was a good idea or not,” LeCun says, deadpan.

    LeCun calls Wang, who was hired to lead the organisation, “young” and “inexperienced”.

    “He learns fast, he knows what he doesn’t know . . . There’s no experience with research or how you practise research, how you do it. Or what would be attractive or repulsive to a researcher.”

    Wang also became LeCun’s manager. I ask LeCun how he felt about this shift in hierarchy. He initially brushes it off, saying he’s used to working with young people. “The average age of a Facebook engineer at the time was 27. I was twice the age of the average engineer.”

    But those 27-year-olds weren’t telling him what to do, I point out.

    “Alex [Wang] isn’t telling me what to do either,” he says. “You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do.”

    OR, maybe nobody /has/ to tell a researcher what to do, especially one like him, if they’ve already internalized the ideology of their masters.



  • Internet Comment Etiquette with Erik just got off YT probation / timeout from when YouTube’s moderation AI flagged a decade-old video for having Russian parkour.

    He celebrated by posting the below under a pipe bomb video.

    Hey, this is my son. Stop making fun of his school project. At least he worked hard on it. unlike all you little fucks using AI to write essays about books you don’t know how to read. So you can go use AI to get ahead in the workforce until your AI manager fires you for sexually harassing the AI secretary. And then your AI health insurance gets cut off so you die sick and alone in the arms of your AI fuck butler who then immediately cremates you and compresses your ashes into bricks to build more AI data centers. The only way anyone will ever know you existed will be the dozens of AI Studio Ghibli photos you’ve made of yourself in a vain attempt to be included. But all you’ve accomplished is making the price of my RAM go up for a year. You know, just because something is inevitable doesn’t mean it can’t be molded by insults and mockery. And if you depend on AI and its current state for things like moderation, well then fuck you. Also, hey, nice pipe bomb, bro.





  • Waterfox lore - it got acquired by System1 of “Guy Who Runs Three Companies Called Fidelity But Not The Fidelity You Know Probably Doesn’t Care That There’s Already A Company Called System1 That Does That Same Thing As The System1 His SPAC Is Buying. Just saying.” 1 and then went private again 2, presumably they bought it back after the stock predictably tanked. Subprime adtech is a strange place.

    IMHO they should just bring back Iceweasel, but what do I know.