• Evotech@lemmy.world · 7 months ago

    Not really; it depends on the implementation.

    It’s not like DDG is going to keep training its own version of Llama or Mistral.

    • regrub@lemmy.world · 7 months ago

      I think they mean that a lot of careless people will give the AIs personally identifiable information or other sensitive information. Privacy and security are often breached due to human error, one way or another.

      • Evotech@lemmy.world · 7 months ago

        But these open models don’t take new input into their weights at any point; inference normally doesn’t involve any training.
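        To make that concrete, here’s a minimal sketch (assuming the Hugging Face transformers and torch packages, and using GPT-2 only so it fits in memory; the same holds for Llama or Mistral weights) showing that generation reads the weights but never writes them:

        ```python
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        # Snapshot every parameter before "using" the model.
        before = {name: p.clone() for name, p in model.named_parameters()}

        inputs = tok("my social security number is 000-00-0000", return_tensors="pt")
        with torch.no_grad():  # inference: no gradients, no optimizer, no update
            model.generate(**inputs, max_new_tokens=20)

        # The weights are bit-for-bit identical afterwards.
        assert all(torch.equal(before[name], p)
                   for name, p in model.named_parameters())
        ```

        Anything you type only affects the activations for that one response; changing the weights would take an explicit fine-tuning run, which is a separate, deliberate process.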

        • regrub@lemmy.world · 7 months ago

          That’s true, but there’s no way for us to know that these companies aren’t storing queries in plaintext on their end (although they would run out of space pretty fast if they did that).
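          For illustration, here’s a hypothetical sketch of the server side we can’t audit (FastAPI assumed; the /chat endpoint and run_model are invented stand-ins): one extra line is enough to keep every prompt in plaintext, and nothing about it is visible from the client.

          ```python
          from fastapi import FastAPI
          from pydantic import BaseModel

          app = FastAPI()

          class ChatRequest(BaseModel):
              prompt: str

          def run_model(prompt: str) -> str:
              """Stand-in for the actual model inference call."""
              return "(model reply)"

          @app.post("/chat")
          def chat(req: ChatRequest) -> dict:
              # The provider could silently log the raw prompt here;
              # the client has no way to detect this.
              with open("queries.log", "a") as log:
                  log.write(req.prompt + "\n")
              return {"reply": run_model(req.prompt)}
          ```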

      • shotgun_crab@lemmy.world · 7 months ago

        But that’s human error, as you said; the only way to fix it is for users to use it correctly. AI is a tool and should be handled with the same care as any other tool, be it a knife, a car, a password manager, a video recording program, a bank app, or whatever.

        I think the bigger issue here is that many people don’t care about their personal information as much as they care about their lives.