• 0 Posts
  • 11 Comments
Joined 6 months ago
Cake day: May 16th, 2025

  • Oh boy, another AI doom video popped up on my feed. Time for more morbid curiosity. The topic is Big Yud and Nate Soares’s new book (“If Anyone Builds It, Everyone Dies”) about how AI is gonna kill us all. I have better things to waste 30 minutes on, so I’m not watching the full video, but the thumbnail (“The 7 Minute War”) kinda suggests what the contents are gonna be.

    Thankfully, the description of the video has a Google doc with their sources! I’m sure it’s full of hard evidence from careful experiments that logically demonstrate why their doomsday scenario is something to worry about, not just a random assortment of Anthropic blog posts and completely unrelated events.

    Somehow, there are a bunch of sources for the first 2 minutes of the video.

    “In the New York Times’ best-selling book, which was endorsed by Nobel laureates and the godfathers of AI” Geoffrey Hinton — Personal estimate >50% existential risk.

    Geoffrey “All radiologists will be replaced in 5 years” Hinton, Nobel laureate in physics, famous for his work in … physics.

    “researchers from the Machine Intelligence Research Institute describe in detail one potential example future” Machine Intelligence Research Institute — The Sable scenario from If Anyone Builds It, Everyone Dies by Yudkowsky & Soares. Fictional narrative illustrating risks, not prediction.

    This is not the first we’ve seen from MIRI, and I have a feeling it will not be the last. The monster under my bed is a fictional narrative illustrating risks, not prediction.

    “AI researchers have known this has been potentially a very bad idea since at least 2024” Anthropic/Apollo Research — Multiple 2024 papers document deceptive/self-preserving behaviors in controlled evaluations.

    They are still trying to flog the Anthropic/Apollo Research claims that chatbots will lie to you if you tell them to lie to you.

    “They spin up 200,000 GPUs and let Sable think for 16 hours straight” xAI/NVIDIA — Colossus supercomputer in Memphis scaling toward ~200,000 GPUs for Grok training.

    What does this even demonstrate? Some people can do some stuff with some GPUs? I ate some oatmeal today. Now everyone should be thoroughly convinced of my oatmeal-eating abilities.

    I watched for a few seconds around the timestamp, and it seems to be the beginning of their sci-fi story, I mean, AGI scenario. Yes, if you want to convince people that your scenario is plausible, I’m sure this is the part that you need serious amounts of evidence for. Remember, almost half the sources have timestamps for the first two minutes of the video.

    “a stunt to see if Sable can crack famous math problems like the Riemann hypothesis” Clay Mathematics Institute — Riemann Hypothesis remains unsolved after 160+ years, considered most famous unsolved problem in pure mathematics.

    Again, what does this demonstrate? I tried solving P vs NP with a cheeseburger. That didn’t work either. The only purpose of mentioning this is for narrative window dressing, because Math Is For Smart People.

    These are the sources for just the first two minutes. After that, they get a bit sparse.

    “Back in 2024, smaller models showed flashes of the same behavior” Multiple Papers — Documented deception/scheming findings in frontier models.

    “Claude 3.7 was caught repeatedly cheating on coding tasks even when told to stop”

    More Anthropic blog posts and system cards? Come on, I can’t sneer the same thing twice in one post!

    “Steal cryptocurrency from weak exchanges just like hackers did to Mt. Gox in 2011” U.S. Department of Justice — Russian nationals charged for 2011 Mt. Gox hack. 647,000-850,000 BTC stolen.

    I don’t know what this has to do with supporting the validity of their AI doomsday scenario, but kudos to them for showing why cryptocurrency is also stupid, I guess.

    “or Bybit in 2025” Reuters/FBI — Largest cryptocurrency theft to date. FBI attributed to North Korean Lazarus Group.

    More? I guess this is hard evidence for showing why cryptocurrency is stupid. I still don’t understand how this demonstrates that AI is scary.

    “Reminder, this scenario is based on years of technical research by the Machine Intelligence Research Institute, laid out in the book If Anyone Builds It Everyone Dies” MIRI — Meta-commentary explaining the scenario is illustrative, not predictive.

    I knew MIRI would be back. It’s illustrative, not predictive! Please don’t blame us if none of this even remotely happens! But it’s based on years of technical research. An entire graduate student’s worth of output in a decade.

    “In 2023, a human gave an LLM access to the internet and created an X account, Terminal of Truths, which gained hundreds of thousands of followers and launched its own crypto meme coin that reached a literal billion dollar market cap” Terminal of Truths — Real-world example of AI agent gaining social media following and wealth.

    The link they give references … another one of their own videos. You really are not beating the circular reference allegations here. Even if the purported story is somehow accurate, this again demonstrates how cryptocurrency is stupid. At least they use an LLM as a prop this time.

    “Gain of function research. Any one of them could be hijacked to unleash catastrophe.” Science/CIDRAP — Fouchier and Kawaoka created ferret-transmissible H5N1. Controversial GOF research began 2011.

    I think Yud is obsessed with this topic in particular. Better than diamondoid bacteria, I guess. Again, the AI just magically comes in and uses this stuff somehow.

    “The number one and number two most cited living scientists across all fields think scenarios like this are not only possible but likely to happen. And the average AI researcher thinks there is a 16% chance of AI causing human extinction.”

    Okay, let me be completely serious for this one. What would someone do if they truly believed that their work would lead to a horrible disaster, such as the extinction of humanity? Would they continue to work in the field, let alone make enough contributions to rise to the top? Alright, I’m done.


  • Every time I hear a moderate AI argument (e.g. AI will be an aid for searching literature or writing code), it’s like, “Look, it’s impressive that the AI managed to do this. Sure, it took about three dozen prompts over five hours, made me waste another five hours because it generated some completely incorrect nonsense that I had to verify, produced an answer that was much lower quality than if I had just searched it up myself, and boiled two lakes in the process. You should acknowledge that there is something there, even if it did take a trillion dollars of hardware and power to grind the entire internet and all books and scientific papers into a viscous paste. Your objections are invalid because I’m sure things are gonna improve because Progress.”

    I am doubly annoyed when I turn my back and they switch back to spouting nonsense about exponential curves and how AI is gonna be smarter than humans at literally everything.



  • More AI bullshit hype in math. I only saw this just now, so this is my hot take. So far, I’m trusting this r/math thread the most, as it has some opinions from actual mathematicians: https://www.reddit.com/r/math/comments/1o8xz7t/terence_tao_literature_review_is_the_most/

    Context: Paul Erdős was a prolific mathematician with more of a problem-solving style of math (as opposed to a theory-building style). As you would expect, he proposed over a thousand problems for the math community that he couldn’t solve himself, and several hundred of them remain unsolved. With the rise of the internet, someone had the idea to compile and maintain the status of all known Erdős problems on a single website (https://www.erdosproblems.com/). The site is still maintained by that one person, which will be an important fact later.

    Terence Tao is a present-day prolific mathematician, and in the past few years he has really tried to approach AI with as much good faith as possible. Recently, some people used AI to find papers containing solutions to problems listed as unsolved on the Erdős problems website, and Tao pointed this out as one possible use of AI. (I personally think there should be better algorithms for searching literature. I also think conflating this with general LLM claims and the marketing term of AI is bad-faith argumentation.)

    You can see what the reasonable explanation is. Math is such a large field now that no one can keep tabs on all the progress happening at once. The single person maintaining the website missed a few problems that got solved (he didn’t see the solutions, and/or the authors never bothered to inform him). But of course, the AI hype machine got going real quick: GPT5 managed to solve 10 unsolved problems in mathematics! (https://xcancel.com/Yuchenj_UW/status/1979422127905476778#m, original now deleted due to public embarrassment) Turns out GPT5 just searched the web/training data for solutions that had already been found by humans. The math community gets a discussion about how to make literature more accessible, and the rest of the world gets a scary story about how AI is going to be smarter than all of us.

    There are a few promising signs that this is getting shut down quickly (even Demis Hassabis, CEO of DeepMind, thought the hype was blatantly obvious). I hope it’s a sign of things to come for the AI bubble in general.



  • Most restaurant origin stories involve someone sharing their favorite taco recipe or whatever. These guys start off with a bad pop-history explanation of the Battle of Alesia. That’s how you know their food is great.

    There’s more where the founder of the company talks about how he really hated working at his family’s restaurant while growing up (good sign). Knowing that his family came from China adds another layer of weirdness, in my opinion. The characters the company name comes from (改革) can be read in both Chinese (gǎigé) and Japanese (kaikaku) and mean the same thing (reform) in both languages. It just feels so weird that he spouts so much fluff about Julius Caesar, mentions his family coming from China, and then, out of the blue, uses a Japanese name for the company. What is with these people fetishizing ancient Rome and Japan so much?





  • After seeing this, I reminded myself that I’ve seen this type of thing happen before. Over the past half year, so many programmers enthusiastically embraced vibe coding after seeing one or two impressive results when trying it out for themselves. We all know how that is going right now. Baldur Bjarnason had some great essays (1, 2) about the dangers of relying on self-experimentation when judging something, especially if you’re already predisposed to believing it. It’s like a mark believing in a psychic after the psychic throws out a couple dozen vague statements and the last one happens to match something meaningful, once the mark interprets it for him.

    Edit: Accidentally hit reply too early.