This might also be an automatic response to prevent discussion, although I'm not sure, since it's Microsoft's AI.

  • LWD@lemm.ee · 10 months ago

    Large language models are shaped by more than one thing at a time, if that's the right way to put it. One is the amalgam of answers from the internet (just imagine feeding Reddit into a Markov bot). The other is handcrafted responses from the corporation that runs the bot, which let it produce (for lack of a better term) "politically correct" responses: everything from keeping things G-rated and remaining civil to refusing to suggest acts of terrorism and protecting the good name of the corporation itself from being questioned.

    Both of these models run on your question at the same time.

    • Hotzilla@sopuli.xyz · 10 months ago

      Copilot runs on GPT-4 Turbo. It isn't trained differently from OpenAI's GPT-4 Turbo, but it has different system prompts than OpenAI's, which tend to make it more willing to just quit the discussion. I have never seen OpenAI's version say it will stop the conversation, but Copilot does it daily.

      • LWD@lemm.ee · 10 months ago

        So by “different system prompts”, you mean Microsoft injects something more akin to their own modifiers into the prompt before passing it over to OpenAI?

        (The same way somebody might modify their own prompt, "explain metaphysics", with their own modifiers like "in the tone of a redneck"?)

        I assumed OpenAI could slot in extra training data as a whole extra component, but that also makes sense to me… and would probably require less effort.
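A rough sketch of what that prompt injection can look like, assuming a Chat Completions-style message format; the system prompt text, function name, and modifier handling here are invented for illustration, not Microsoft's actual setup:

```python
# Hypothetical sketch: a platform prepends its own system prompt to every
# request before forwarding the user's question to the underlying model.
# The prompt text below is made up for illustration.

PLATFORM_SYSTEM_PROMPT = (
    "You are a helpful assistant. Remain civil, keep answers family-"
    "friendly, and end the conversation if the user becomes hostile."
)

def build_messages(user_prompt, user_modifier=None):
    """Assemble a message list: the platform's system prompt comes first,
    then the user's prompt, optionally extended with their own modifier."""
    if user_modifier is not None:
        user_prompt = f"{user_prompt}, {user_modifier}"
    return [
        {"role": "system", "content": PLATFORM_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("explain metaphysics", "in the tone of a redneck")
```

No retraining is involved in this approach: the same model just sees different instructions at the front of every conversation.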

        • Hotzilla@sopuli.xyz · 10 months ago

          Yeah, pretty much like that. Azure and the paid OpenAI API both let you modify the system prompt too. There is also a creativity (temperature) setting that can be adjusted: when it's too high, the model hallucinates more; when it's too low, it gives the same output every time.
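A minimal sketch of what temperature actually does under the hood, using toy numbers rather than a real model: the model's raw scores for candidate tokens are divided by the temperature before being turned into probabilities, so a low temperature makes the top token dominate (near-identical output every run) while a high one flattens the distribution (more varied, more error-prone output).

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into sampling probabilities.

    Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it, making sampling more varied.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # toy scores for 3 candidate tokens
cold = softmax_with_temperature(logits, 0.1)   # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)    # much flatter: all tokens plausible
```

With these toy logits, `cold` puts almost all probability on the first token, while `hot` spreads it far more evenly across the three.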

          Retraining the model costs something like a hundred million dollars and weeks of computing power.