I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.

  • 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • Grokipedia just dropped: https://grokipedia.com/

    It’s a bunch of LLM slop that someone encouraged to be right wing, with varying degrees of success. I won’t copy-paste any of it here, but to give you an idea:

    • Grokipedia’s article on Wikipedia uses the word “ideological” or “ideologically” 23 times (compared with just two uses in Wikipedia’s own article about itself); a quick way to reproduce that count is sketched at the end of this comment.
    • Articles about transgender topics tend to mix in plenty of anti-transgender misinformation and slant, using phrases like “rapid-onset gender dysphoria” or “biological males”. The last paragraph of the article “The Wachowskis” is downright unhinged.
    • The articles tend to be long and meandering. I doubt even Grokipedia proponents will ultimately get much enjoyment out of it.

    Also, certain articles have this at the bottom:

    The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.
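
    For anyone who wants to reproduce that kind of word count, here’s a rough sketch in Python: save the article text to a file and count the matches. The file name is just a placeholder, and Grokipedia’s markup may need cleanup before counting, so treat this as an illustration rather than a rigorous method.

    # Rough sketch: count "ideological"/"ideologically" in a saved copy of an
    # article. The default file name is a placeholder; pass your own path.
    import re
    import sys

    path = sys.argv[1] if len(sys.argv) > 1 else "grokipedia_wikipedia.txt"
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()

    # Match "ideological" or "ideologically" as whole words.
    matches = re.findall(r"\bideologic(?:al|ally)\b", text)
    print(f"{len(matches)} occurrences")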

  • Yet another billboard.

    https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/

    https://replacement.ai/

    This time the website is a remarkably polished satire and I almost liked it… but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I’m being too picky?):

    I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.

    As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.

    I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.

    Thank you for your time and attention to this critical issue.

  • Ah yes, the typical workflow for LLM-generated changes:

    1. LLM produces nonsense at the behest of employee A.
    2. Employee B leaves a bunch of edits and suggestions to hammer it into something that’s sloppy but almost kind of makes sense. A soul-sucking, error-prone process that takes twice as long as just writing the dang code.
    3. Code submitted!
    4. Employee A gets promoted.

    Also, the fact that this isn’t integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don’t compile or that break tests.
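
    As a rough sketch of what “integrated with tests” could look like, the snippet below gates a change on building and passing the existing test suite before it is even eligible for review. The make targets are placeholders for whatever a given project actually runs; nothing here is taken from the tool being discussed.

    # Minimal sketch of a pre-merge gate for machine-generated changes:
    # refuse anything that doesn't build or fails the existing tests.
    # The "make" targets below are placeholders, not a real project's commands.
    import subprocess
    import sys

    CHECKS = [
        ["make", "build"],  # placeholder: does the change even compile?
        ["make", "test"],   # placeholder: does it break existing tests?
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("running:", " ".join(cmd))
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print("gate failed on:", " ".join(cmd))
                return 1
        print("all checks passed; change is eligible for human review")
        return 0

    if __name__ == "__main__":
        sys.exit(main())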