Every time I hear a moderate AI argument (e.g. AI will be an aid for searching literature or writing code), it’s like, “Look, it’s impressive that the AI managed to do this. Sure, it took about three dozen prompts over five hours, made me waste another five hours because it generated some completely incorrect nonsense that I had to verify, produced an answer that was much lower quality than if I had just searched it up myself, and boiled two lakes in the process. You should acknowledge that there is something there, even if it did take a trillion dollars of hardware and power to grind the entire internet and all books and scientific papers into a viscous paste. Your objections are invalid because I’m sure things are gonna improve because Progress.”
I am doubly annoyed when I turn my back and they switch back to spouting nonsense about exponential curves and how AI is gonna be smarter than humans at literally everything.
Oh boy, another AI doom video popped up on my feed. Time for more morbid curiosity. The topic is Big Yud and Nate Soares’s new book (“If Anyone Builds It, Everyone Dies”) about how AI is gonna kill us all. I have better things to waste 30 minutes on, so I’m not watching the full video, but the thumbnail (“The 7 Minute War”) kinda suggests what the contents are gonna be.
Thankfully, the description of the video has a Google doc with their sources! I’m sure it’s full of hard evidence from careful experiments that logically demonstrate why their doomsday scenario is something to worry about, not just a random assortment of Anthropic blog posts and completely unrelated events.
Somehow, there are a bunch of sources for the first two minutes of the video.
Geoffrey “All radiologists will be replaced in 5 years” Hinton, Nobel laureate in physics, famous for his work in … physics.
This is not the first we’ve seen from MIRI, and I have a feeling it will not be the last. “The monster under my bed is a fictional narrative illustrating risks, not a prediction.”
They are still trying to flog the Anthropic/Apollo Research claims that chatbots will lie to you if you tell them to lie to you.
What does this even demonstrate? Some people can do some stuff with some GPUs? I ate some oatmeal today. Now everyone should be thoroughly convinced of my oatmeal-eating abilities.
I watched for a few seconds around the timestamp, and it seems to be the beginning of their sci-fi story, I mean, AGI scenario. Yes, if you want to convince people that your scenario is plausible, I’m sure this is the part that needs serious amounts of evidence. Remember, almost half the sources have timestamps within the first two minutes of the video.
Again, what does this demonstrate? I tried solving P vs NP with a cheeseburger. That didn’t work either. The only purpose of mentioning this is for narrative window dressing, because Math Is For Smart People.
These are the sources for just the first two minutes. After that, they get a bit sparse.
More Anthropic blog posts and system cards? Come on, I can’t sneer the same thing twice in one post!
I don’t know what this has to do with supporting the validity of their AI doomsday scenario, but kudos to them for showing why cryptocurrency is also stupid, I guess.
More? I guess this is hard evidence for showing why cryptocurrency is stupid. I still don’t understand how this demonstrates that AI is scary.
I knew MIRI would be back. It’s illustrative, not predictive! Please don’t blame us if none of this even remotely happens! But it’s based on years of technical research. An entire graduate student’s worth of output in a decade.
The link they give references … another one of their own videos. You really are not beating the circular reference allegations here. Even if the purported story is somehow accurate, this again demonstrates how cryptocurrency is stupid. At least they use an LLM as a prop this time.
I think Yud is obsessed with this topic in particular. Better than diamondoid bacteria, I guess. Again, the AI just magically comes in and uses this stuff somehow.
Okay, let me be completely serious for this one. What would someone do if they truly believed that their work would lead to a horrible disaster, such as the extinction of humanity? Would they continue to work in the field, let alone make enough contributions to rise to the top? Alright, I’m done.