he/him | any

I wrangle code, draw pictures, and play bad music. You might find some of it here.

  • 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: March 13th, 2024

  • I don’t see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora is completely goddamn justified. This shit is going to enable scams, misinformation, and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

    No, it’s absolutely justified, and I agree with basically everything he says in the video (esp. the title — there is really no reason for technology like this to exist in the hands of the public, or anyone really; there are zero upsides to it). It’s just funny to me because the video is so different from his usual calm stuff.

    But honestly, good for him and (hopefully) his community too.


  • After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora 2, and it’s honestly pretty funny.

    If you’re the kind of motherfucker who will create SlopTok, you are not the kind of motherfucker who should be in charge of OpenAI.

    Not that anyone should be in charge of that shitshow of a company, but hey!

    Bonus sneer from the comment section:

    Sam Altman in Feb 2015: “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

    Sam Altman in Dec 2015, after co-founding OpenAI: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    Sam Altman 4 days ago, on his personal blog: “we are going to have to somehow make money for video generation.”


  • Which AI models, though? Your synthetic text extruder LLMs that can’t reliably surpass humans at anything unless you train them specifically for it, and which are kinda shite even then unless you look at them exactly the right way? Or that fabled brain simulation AI that doesn’t even exist?

    Instead, he prefers to describe future AI systems as a “country of geniuses in a data center,” […] [and] that such systems would need to be “smarter than a Nobel Prize winner across most relevant fields.”

    Ah, “future” AI systems. As in the ones we haven’t built yet, don’t know how to build, and don’t even know whether we can build them. But let’s just feed more shit into Habsburg GPT in the meantime, maybe one will magically pop out.