

bad at epistemology
Gwern once denied chaos theory in a way that Freeman Dyson had already called out back in 1985, and as LessWrongers go he is a pretty clear thinker!


It can help to think of the current US administration as less the Third Reich, more a postcolonial dictatorship. That is also a description not a value judgement: POTUS is a reality-TV star who dreams of being a Mafia boss.


How the frigg does anyone in the SF Bay Area in 2026 still believe that most of what big American web service companies do is driven by the profit motive? They are more like big-talking Geniuses getting a king to give them some money and promising they will make something cool (with Google’s and Facebook’s advertising and AWS and Amazon retail standing in for taxing millions of peasants). Arms like Google ads and Amazon Web Services fund billions of dollars of money-losing nonsense.


That depends. Remember that you are losing five years of growth and dividends, and if you are a wise investor, every year or so you sell some of what is doing best and buy some of what has been doing worst to lock in your gains. Also remember that you have to buy back in to gain anything from that crash, and it's really hard to buy stocks when the market has just lost half its value. Most people who ‘successfully predict a recession’ sell too early and buy back too late, so are not very far ahead.
But if you really believe that the US economy will crash by say the end of 2027, you think other assets will do better, and there are ways to buy them.
The USA is about 60% of the global stock market by market capitalization. Right now it's much less than 60% of my stock holdings.
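The annual rebalancing idea above can be sketched in a few lines. This is a minimal illustration, not investment advice; the tickers, weights, and prices are hypothetical placeholders.

```python
# Sketch of annual rebalancing: sell what has grown past its target weight,
# buy what has fallen behind, restoring the portfolio to its target mix.

def rebalance(holdings, prices, targets):
    """Return the trade (in currency units) per asset that restores
    each asset to its target share of total portfolio value.
    Positive = buy, negative = sell."""
    total = sum(holdings[a] * prices[a] for a in holdings)
    trades = {}
    for asset, weight in targets.items():
        current_value = holdings[asset] * prices[asset]
        target_value = total * weight
        trades[asset] = target_value - current_value
    return trades

# Example: US stocks have rallied, so a 60/40 US/ex-US split has drifted.
holdings = {"US": 70.0, "exUS": 40.0}   # units held (hypothetical)
prices = {"US": 1.0, "exUS": 1.0}       # price per unit (hypothetical)
targets = {"US": 0.6, "exUS": 0.4}      # desired weights

print(rebalance(holdings, prices, targets))
# → {'US': -4.0, 'exUS': 4.0}  (sell some US, buy ex-US, locking in gains)
```

The point of the mechanical rule is exactly the one made above: it forces you to sell after a run-up and buy after a fall, which is psychologically hard to do on judgment alone.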


Investing is all about relative performance. If you think ‘AI’ stocks are a bubble, you think that some other stocks will do better than them over the next five years, and you can allocate your assets accordingly.


Usually AI boosters are claiming that soon most humans will be economically useless, not that it would be terrible if there were fewer white people. One reason people avoid having children is that they feel economically insecure and doubt there will be respected places in society for their offspring.
Dwarkesh Patel is the only other Indian American I have seen who is friends with our friends.


It is a viable business and it fuels the spread of disinformation. Have you noticed that Old Media magazines have online wings that are full of random advertorials? That is because Google declared that they are Good Domains and upranked them so all the sleazy online marketing migrated to them.
That is also why people buy formerly respected domains and put casinos, propaganda, or virus-laden porn on them.


gutter racist and eugenicist beliefs
The Sam Kriss article in Harpers above focuses on “these people don’t know how to be happy” and never gets around to saying “Scott Alexander is gentle in person but wants to get rid of or sterilize poor brown people and helped people like Curtis Yarvin rise to power.”
Note that SlateScott’s group home is named after a Lord of the Rings location


It's prudent to be skeptical of anonymous Internet posts, but it's also prudent to read a Leverage staffer on how her boss “had three long-term consensual relationships with women employed by Leverage Research or affiliated organizations”, close the tab, and make a note to never have anything to do with anyone from that organization in the future.


Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled “Quis cancellat ipsos cancellores?” which complains that Aella takes it on herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with “Persephone.” He or she does not quite say that any of the accusations were untrue, just that “an anonymous, unverified report” says that some details were changed by an editor, and that her Medium post was of “dramatically lower fidelity, but higher memetic virulence” than Brent’s buddies investigating him behind closed doors (Dill posted about domming a 16-year-old who he met when she was 15). The poster accuses Aella of using substances and BDSM games to blur the line of consent.
Often, people in messed-up situations point at a very similar situation and say “at least we are not like that.” I hope that all of these people find friends who can give them perspective that none of these communities are healthy or just.


Do we have any idea why some of the Zizians ended up in Vermont? The only thing in their network that comes to mind is the Monastic Academy for the Preservation of Life on Earth (MAPLE, a Buddhist-flavoured CFAR offshoot with the usual Medium post accusing leaders of sexual and psychological abuse).
Vermont and New Hampshire have clusters of generic Libertarians.


shade
If you follow world politics, it has been obvious that Noam Chomsky is a useful idiot since the 1990s and probably the 1970s. I wish he had learned from the Khmer Rouge that not everyone who the NYT says is a bad guy is a good guy!


That BlueSky account found professional provocateur (“opinion columnist”) Freddie deBoer making AI 2027 author Scott Alexander retreat to “my median for AGI is more like early 2030s” https://freddiedeboer.substack.com/p/im-offering-scott-alexander-a-wager deBoer seems to be some kind of hereditarian, so let them fight.


(Some people might have been concerned to read that) almost 3,000 “researchers, experts and entrepreneurs” have signed an open letter calling for a ban on developing artificial intelligence (AI) for “lethal autonomous weapons systems” (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.
The people who signed the letter included celebrities of the science and high-tech worlds like Tesla’s Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Demis Hassabis, chief executive of Google DeepMind and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting this year in Buenos Aires.
They were quite clear about what worried them: “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc.”
The letter was issued by the Future of Life Institute which is now Max Tegmark and Toby Walsh’s organization.
People have worked on the general pop culture that inspired TESCREAL, and on the current hype, but less on earlier attempts to present machine minds as a clear and present danger. This letter has the ‘arms race’ narrative and the ‘research ban’ proposed solution, but focuses on smaller dangers.


patio11’s sometime bosses at Stripe are such ambitious capitalists that they sometimes scare him. Maybe his racist friends told him that the Japanese are honorary Aryans?


I like this reply on Reddit:
I do my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting benchmarks, and worst of all this has become a norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods to just embed inputs, not even train or finetune.
I see maybe a solution, or at least help, in closer research-business collaboration. Companies don’t care about papers really, just to get methods that work and make money. Maxing out drug design benchmark is useless if the algorithm fails to produce anything usable in real-world lab. Anecdotally, I’ve seen much better and more fair results from PhDs and PhD students that work part-time in the industry as ML engineers or applied researchers.
This can go a good way (most of the field becomes a closed circle like parapsychology) or a bad way (people assume the results are true and apply them, like the social priming or Reinhart and Rogoff’s economic paper with the Excel error).


I like the quote by John Swartzwelder in chapter 1.


A 2025 UBC master’s thesis on our friends’ ideas and their literary antecedents https://dx.doi.org/10.14288/1.0449985 The supervisor was born around the time that Elron Hubbard, Jack Parsons, RAH, and their wives and lovers were having a chaotic transition to the postwar world.


Yes, I think the people who should have opinions beyond “the state government found some fraud and is investigating further cases” are people who live in Minnesota and have connections to daycare or immigrant communities. It's notorious that the NYT repackages stories by reporters in smaller orgs (or randos on social media) and puts its own spin on them! They don't have a specific editorial line on social services in the Midwest, just instincts.
Baldur Bjarnason has a whole book on the business risks of LLM use https://illusion.baldurbjarnason.com/