• expr@piefed.social
    11 hours ago

    I mean yeah, I agree that’s unbelievably stupid. But when people talk about guardrails generally, they are talking about controlling the output of the LLM, which is what I was saying is not possible to do.

    • Programmer Belch@lemmy.dbzer0.com
      7 hours ago

      That’s also true, but since that option is unavailable, there are multiple other ways to protect against AI hallucinations.

      This is the future AI ethics people were warning about:

      Picture a robot you tell to make an apple pie.

      To get to the apples a human is blocking the path.

      The robot just kills the human by running at full speed through them.

      Given that the robot is dumb enough to try to go through the human, you can make the robot smaller or lighter so that bumping into someone is not harmful.

      None of these options are considered when people talk about AI. Line go up and other buzzwords, I guess.