

Chariots of the Gods was released in 1968. I think that ship may have sailed decades ago.


Out of all of the things he did, that is one of those things.


Btw, “Don’t Die” is a slogan of the Bryan Johnson-adjacent longevity community, which the writer very likely saw often around Twitter.
Or it could be a reference to what’s often said before the start of a video game match (though they probably left out the “kick ass” part for marketing purposes).
Edit: actually, considering that, maybe there’s a reveal at the end that they’re in the Basilisk torture sim, so… there might be something there?


“An investigator from the San Francisco Public Defender’s Office lawfully served a subpoena on Mr. Altman because he is a potential witness in a pending criminal case,” spokesperson Valerie Ibarra said in a statement to SFGATE.
In a post on X, the group wrote that one of their public defenders had managed to serve Sam Altman with a subpoena, requiring him to testify at their upcoming trial. They explained that the case involves their previous non-violent demonstrations, including blocking the entrance and the road in front of OpenAI’s offices on multiple occasions.
“All of our non-violent actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth.”
So it’s not because he’s being prosecuted.


I could also see the response to the bubble bursting being something like “At least the economy crashing delayed the murderous superintelligence.”


Not only is the MC a genetically modified supersoldier, he’s a genetically modified supersoldier who was created to fight insurrectionists. They didn’t even know that the Covenant existed until after the program finished.


The glasses also support prescription lenses along with transitional lenses that automatically adjust to light.
Actual prescription lenses or generic supermarket lenses?


Now, Talukdar thinks we’re only one year away from experiencing a holodeck ourselves.
Very skeptical indeed.


Give it a year or so and they’ll both pretend that this never happened.


According to the article, it could be higher than $1.5 billion, though they don’t really say by how much. They’re estimating about $3,000 per book, which for a class action actually seems extremely high.


Anthropic to pay $1.5 billion to authors
That’s… quite a bit. I wonder if we’ll start to see more complaints about people reaching their token limit “prematurely” or if Anthropic will release a $1000/month “unlimited” plan or something like that.


Thing is, by December 2023, the time of the archive, there had already been a scandal over someone using ChatGPT to do the work of discovery. While he might have stopped doing PR work for DoNotPay by then, he was still willing to advertise the fact that he had done PR work for such a company. That shows either a lack of due diligence in researching his clients, or maybe it was just a paycheque for him. Perhaps he thought he knew more than he actually did. Or maybe there was something else; I’m not clairvoyant.
It’s clear that he’s pivoted from that viewpoint, but it does make me curious what happened between then and now that caused him to become skeptical.


According to the archived website, he did do PR for DoNotPay, which is advertised as “The first robot lawyer.”
It’s certainly possible though that at the time he thought there was more potential for this sort of AI than there actually was, though that could also mean that his flip is relatively recent.
Or maybe it’s something else.
So, I’m not an expert study-reader or anything, but it looks like they took some questions from the MMLU, modified them in some unspecified way, sorted them into three categories (AI, human, AI-human), and, after accounting for skill, determined that people with higher theory of mind had slightly better outcomes than people with lower theory of mind. They determined this based on what the participants wrote to the AI, but what they wrote isn’t in the study.
What they didn’t do is state that people with higher theory of mind are more likely to use AI or anything like that. The study also doesn’t mention empathy at all, though I guess it could be inferred.
Not that any of that actually matters, because the way they determined how much “theory of mind” each person had was to ask Gemini 2.5 and GPT-4o.