I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.

  • 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: June 29th, 2023



  • NotAwfulTech and AwfulTech converged with some ffmpeg drama on Twitter over the past few days, starting here and still ongoing. This is about an AI-generated security report by Google’s “Big Sleep” (with no corresponding Google-authored fix, AI or otherwise). Hacker News discussed it here. Looking at ffmpeg’s security page, around 24 Big Sleep reports have been fixed.

    ffmpeg pointed out a lot of stuff along the lines of:

    • They are volunteers
    • They don’t have enough money
    • Certain companies that do use ffmpeg and file security reports also have a lot of money
    • Certain ffmpeg developers are willing to enter consulting roles for companies in exchange for money
    • Their product has no warranty
    • Reviewing LLM generated security bugs royally sucks
    • They’re really just in this for the video codecs, more so than treating every single use-after-free bug as a drop-everything emergency
    • Making the first 20 frames of certain Rebel Assault videos slightly more accurate is awesome
    • Think it could be more secure? Patches welcome.
    • They did fix the security report
    • They do take security reports seriously
    • You should not run ffmpeg “in production” if you don’t know what you’re doing.

    All very reasonable points, but from the reactions to their tweets you’d think they had proposed killing puppies or something.

    A lot of people seem to forget this part of open source software licenses:

    BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW

    Or, for that matter, that venerable old C code will have memory safety issues.

    It’s weird that people are freaking out about some use-after-frees (UAFs) in a C library. This should really be dealt with in enterprise environments via sandboxing / filesystem containers / ASLR / control-flow integrity / non-executable memory enforcement / only compiling the codecs you need… and oh gee, a lot of those improvements could be upstreamed!
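
    To make the cheapest rung of that ladder concrete, here’s a minimal sketch (Python, with hypothetical file names) of the process-isolation approach: run ffmpeg as a resource-limited child process instead of linking the decoders into your service, so a decoder crash or runaway allocation stays contained. This is not a real sandbox; seccomp filters, containers, and compiling out unused codecs go much further.

    ```python
    import resource
    import subprocess

    def limit_resources():
        # Cap the child's address space at 512 MiB and CPU time at 30 s,
        # so one malformed input can't eat the whole machine.
        resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2, 512 * 1024**2))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    def transcode(src: str, dst: str) -> None:
        # Decode in a separate process: a use-after-free in a codec then
        # crashes this child, not the service that spawned it.
        subprocess.run(
            ["ffmpeg", "-i", src, "-c:v", "libx264", dst],
            preexec_fn=limit_resources,  # POSIX only
            timeout=120,
            check=True,
        )

    transcode("untrusted-upload.mov", "out.mp4")  # hypothetical file names
    ```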


  • Grokipedia just dropped: https://grokipedia.com/

    It’s a bunch of LLM slop that someone encouraged to be right wing, with varying degrees of success. I won’t copy-paste any slop here, but to give you an idea:

    • Grokipedia’s article on Wikipedia uses the word “ideological” or “ideologically” 23 times, compared with two uses in Wikipedia’s own article about itself. (A quick way to reproduce such counts is sketched after this list.)
    • Articles about transgender topics tend to mix in lots of anti-transgender misinformation and slant, using phrases like “rapid-onset gender dysphoria” or “biological males”. The last paragraph of the article “The Wachowskis” is downright unhinged.
    • The articles tend to be long and meandering. I doubt even Grokipedia proponents will ultimately get much enjoyment out of it.
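
    The word-count claim is easy to check; a rough sketch, assuming you’ve saved each site’s article about Wikipedia to a local text file (the file names are hypothetical):

    ```python
    import re

    # Hypothetical local copies of each site's article about Wikipedia.
    for name in ("grokipedia_wikipedia.txt", "wikipedia_wikipedia.txt"):
        with open(name, encoding="utf-8") as f:
            text = f.read()
        # Match "ideological" and "ideologically", case-insensitively.
        hits = re.findall(r"\bideological(?:ly)?\b", text, flags=re.IGNORECASE)
        print(f"{name}: {len(hits)}")
    ```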

    Also certain articles have this at the bottom:

    The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.




  • Yet another billboard.

    https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/

    https://replacement.ai/

    This time the website is a remarkably polished satire and I almost liked it… but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I’m being too picky?):


    I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.

    As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.

    I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.

    Thank you for your time and attention to this critical issue.




  • Ah yes, the typical workflow for LLM-generated changes:

    1. LLM produces nonsense at the behest of employee A.
    2. Employee B leaves a bunch of edits and suggestions to hammer it into something that’s sloppy but almost kind of makes sense: a soul-sucking, error-prone process that takes twice as long as just writing the dang code.
    3. Code submitted!
    4. Employee A gets promoted.

    Also, the fact that this isn’t integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don’t compile or that break tests.
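
    Even a crude gate would catch that. A minimal sketch of one, refusing any generated change that doesn’t build and pass the suite (the build and test commands are placeholders; substitute whatever your project actually uses):

    ```python
    import subprocess
    import sys

    def gate(build_cmd: list[str], test_cmd: list[str]) -> bool:
        """Refuse a change unless it builds and its tests pass."""
        for cmd in (build_cmd, test_cmd):
            if subprocess.run(cmd).returncode != 0:
                print(f"rejected: {' '.join(cmd)} failed", file=sys.stderr)
                return False
        return True

    # Placeholder commands for illustration.
    if not gate(["make"], ["pytest", "-q"]):
        sys.exit(1)
    ```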