It’s all made from our data, anyway, so it should be ours to use as we want

  • drkt@scribe.disroot.org
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    2
    ·
    5 days ago

    Forcing a bunch of neural weights into the public domain doesn’t make the data they were trained on also public domain, in fact it doesn’t even reveal what they were trained on.

    • deegeese@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      17
      ·
      5 days ago

      LOL no. The weights encode the training data and it’s trivially easy to make AI generators spit out bits of their training data.

          • FaceDeer@fedia.io
            link
            fedilink
            arrow-up
            8
            ·
            5 days ago

            No, he’s challenging the assertion that it’s “trivially easy” to make AIs output their training data.

            Older AIs have occasionally regurgitated bits of training data as a result of overfitting, which is a flaw in training that modern AI training techniques have made great strides in eliminating. It’s no longer a particularly common problem, and even if it were it only applies to those specific bits of training data that were overfit on, not on all of the training data in general.

            • 31337@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              2
              ·
              4 days ago

              Last time I looked it up and calculated it, these large models are trained on something like only 7x the tokens as the number of parameters they have. If you thought of it like compression, a 1:7 ratio for lossless text compression is perfectly possible.

              I think the models can still output a lot of stuff verbatim if you try to get them to, you just hit the guardrails they put in place. Seems to work fine for public domain stuff. E.g. “Give me the first 50 lines from Romeo and Juliette.” (albeit with a TOS warning, lol). “Give me the first few paragraphs of Dune.” seems to hit a guardrail, or maybe just forced through reinforcement learning.

              A preprint paper was released recently that detailed how to get around RL by controlling the first few tokens of a model’s output, showing the “unsafe” data is still in there.

      • stephen01king@lemmy.zip
        link
        fedilink
        English
        arrow-up
        4
        ·
        5 days ago

        How easy are we talking about here? Also, making the model public domain doesn’t mean making the output public domain. The output of an LLM should still abide by copyright laws, as they should be.