• Programmer Belch@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    9
    ·
    1 day ago

    Why isn’t the blame thrown onto the AI company and their lack of guardrails to the program? Shouldn’t they face backlash and lawsuits regardless of what the terms of service specify?

    • expr@piefed.social
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 day ago

      It’s not possible to add guardrails due to how the technology works.

      The fact of the matter is that it should not be used for what it’s being used for at all.

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        2
        ·
        23 hours ago

        Whenever system prompts get leaked, it’s always depressingly hilarious how much of it is “Hello Mr. AI. You will not do any bad things, and will only do good things.”

        The “guardrails” are just the same damn way end-users prompt them, but inserted behind the scenes before every “user prompt”.

      • Programmer Belch@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        23 hours ago

        Guardrails are considering the AI another user with low privilege. The amount of breaches happening are because the company has low security and adds AI (high security risk) without separating it from critical data.

        • expr@piefed.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          49 minutes ago

          I mean yeah, I agree that’s unbelievably stupid. But when people talk about guardrails generally, they are talking about controlling the output of the LLM, which is what I was saying is not possible to do.