This might also be an automatic response to prevent discussion, although I’m not sure, since it’s MS’s AI.

  • AnyOldName3@lemmy.world · +64 / -1 · 10 months ago

    If the AI had any actual I, it might point out that the most recent Halloween Document was from twenty years ago, and Microsoft’s attitudes have changed in that time. After all, they make a lot of money from renting out Linux VMs through Azure, so it’d be silly for them to hate their revenue stream.

    I’d be unsurprised if it’s just set up to abandon the conversation if accused of lying, rather than defending its position.

    • r_se_random@sh.itjust.works · +4 · 10 months ago

      I tried some prompts and that’s exactly what it did. OP here was accusatory in their prompts, and I guess that triggered the LLM to end the conversation.

      I asked it upfront about Halloween documents, and it shared that they were anti-FOSS. I asked about MS’s stance on FOSS, and it shared the challenges and collaborations.

  • Eager Eagle@lemmy.world · +45 · 10 months ago

    Google Gemini does a better job IMO by rejecting the premise of the obviously biased question in the first place.

    • slacktoid@lemmy.ml · +10 / -1 · 10 months ago

      I mean, this is also why Al Jazeera covers US politics better than US sources do. Though it at least hits on the business reasons.

  • otp@sh.itjust.works · +36 / -6 · 10 months ago

    I think the LLM won here. If you’re being accusatory and outright saying its previous statement is a lie, you’ve already made up your mind. The chatbot knows it can’t change your mind, so it suggests changing the topic.

    It’s not a spokesperson/bot for Microsoft, nor a lawyer, so it knows when it should shut itself off.

    • naevaTheRat@lemmy.dbzer0.com · +13 · 10 months ago

      The chatbot doesn’t know anything. It has no state like that; your text just gets appended to its text.

      It has been prompted to disengage from disagreement or something similar. By a human designer.
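      In other words, each turn the whole transcript just gets re-sent as one growing blob of text. A rough sketch of that idea (generate_reply() is only a placeholder for whatever model endpoint sits behind the bot):

          # A "chat" is just a growing string that is re-sent in full every turn.
          def generate_reply(transcript: str) -> str:
              # The model only ever sees this one blob of text; nothing else persists between turns.
              return "..."

          # The designer's instruction sits at the top of the same blob.
          history = "System: disengage politely if the user becomes accusatory.\n"

          for user_turn in ["Tell me about the Halloween documents.", "That's a lie."]:
              history += f"User: {user_turn}\n"
              history += f"Assistant: {generate_reply(history)}\n"  # reply is appended; no other state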

      • otp@sh.itjust.works · +1 · 10 months ago

        I don’t know why the discourse about AI has become so philosophical.

        When I’m playing a single-player game and I say “the AI opponents know I’m hiding behind cover, so they threw a grenade!”, I don’t mean that the video game gained sentience and discovered the best thing to do to win against me.

        When playing a stealth game, we say “The enemy can’t see you if you’re behind cover”, not “The enemy has been programmed not to take any action against the player character when said player character is identified as being granted the Cover status”.

    • webghost0101@sopuli.xyz · +10 · 10 months ago

      To add, I have seen this behavior the moment you get too argumentative, so it’s not like it’s purposely singling out certain topics.

  • DingoBilly@lemmy.world · +20 / -2 · 10 months ago

    The AI actually handled that pretty well.

    I’d say it’s much more reasonable than the person messaging it in this situation, who comes off a bit unhinged.

  • plz1@lemmy.world · +17 · 10 months ago

    I get Copilot to bail on conversations so often like your example that I’m only using it for help with programming/code snippets at this point. The moment you question accuracy, bam, chat’s over.

    I asked if there was a Copilot extension for VS Code, and it said yup, talked about how to install it, and even configure it. That was completely fabricated, and as soon as I asked for more detail to prove it was real, chat’s over.

    • DetectiveSanity@lemmy.world · +1 · 7 months ago

      That would force them to reveal its sources (unconsented scraping) and hence make them liable for any potential lawsuits. As such, they have to avoid revealing sources.

  • eveninghere@beehaw.org · +12 · 10 months ago

    This is actually an unfair experiment. This behavior is not specific to questions about MS. Copilot is simply incapable of this type of discussion.

    Copilot tends to just paraphrase text it read, and when I challenge the content, it ends the conversation like this, instead of engaging in a meaningful dialogue.

  • Gamma@beehaw.org · +14 / -2 · 10 months ago

    Tbf your evidence is >20-year-old documents, general EEE behavior (without examples), and something that isn’t really relevant to your initial claim. I’m not surprised it decided to respectfully hang up. Did you want it to argue?

    • DigitalDilemma@lemmy.ml · +7 · 10 months ago

      OP definitely wanted an argument - but it can only have been for imaginary internet points.

      Arguing with an AI is pointless - it’s intellectual masturbation - and using biased and weak examples is, if anything, going to train the opponent to be more dumb. (Anyone else remember teaching Megahal to swear on IRC?)

  • Eager Eagle@lemmy.world · +9 · 10 months ago

    I’m pretty sure that, ever since Microsoft Tay, conversational agents have been incentivized to drop the subject if the chat becomes too combative/antagonistic.

  • Sims@lemmy.ml · +14 / -6 · 10 months ago

    Every single Capitalist model or corporation will do this deliberately with all their AI integration. ALL corporations will censor their AI integration to not attack the corporation or any of their strategic ‘interests’. The Capitalist elite in the west are already misusing wokeness (I’m woke) to cause global geo-political splits and all western big tech are following the lead (just look at Gemini), so they are all biased towards the fake liberal narrative of super-wokeness, ‘democracy’/freedumb, Ukraine good, Taiwan not part of China, Capitalism good and all the other liberal propaganda and bs. It’s like a liberal cancer that infects all AI tools. Nasty.

    Agree or disagree with that, but none of us probably want elite psychopaths to decide what we should think/feel about the world, and it’s time to ditch ALL corporate AI services and promote private, secure and open/free AI — not censored or filled with liberal dogmas and artificial ethics/morals from data to finetuning.

  • LWD@lemm.ee · +5 · 10 months ago

    Large language model training is based on more than one model at a time, if that’s the right term for it. One of them is the amalgam of answers from the internet (just imagine feeding Reddit into a Markov bot). The other is handcrafted responses by the corporation that runs the robot, which allow it to create (for lack of a better term) “politically correct” responses that do everything from keeping things G-rated and remaining civil to refusing to suggest acts of terrorism and protecting the good name of the corporation itself from being questioned.

    Both of these models run on your question at the same time.
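    Not that this is literally how Copilot is wired up, but as a rough sketch of two models looking at the same question, here’s what it could look like with OpenAI’s public moderation endpoint standing in for the “handcrafted” policy layer (the model name and reply text are just examples):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        question = "Why does Microsoft hate Linux?"

        # A separate policy/moderation model screens the text first...
        flagged = client.moderations.create(input=question).results[0].flagged

        if flagged:
            print("I'd rather talk about something else.")
        else:
            # ...and only then does the base model generate an answer.
            reply = client.chat.completions.create(
                model="gpt-4-turbo",
                messages=[{"role": "user", "content": question}],
            )
            print(reply.choices[0].message.content)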

    • Hotzilla@sopuli.xyz · +4 · 10 months ago

      Copilot runs on GPT-4 Turbo. It is not trained differently from OpenAI’s GPT-4 Turbo, but it has different system prompts than OpenAI’s, which tend to make it more likely to just quit the discussion. I have never seen OpenAI’s version say it will stop the conversation, but Copilot does it daily.
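      Roughly, a system prompt is just an extra message stuck in front of yours (a minimal sketch with the openai Python library; the instruction text here is invented, Microsoft’s real prompt isn’t public):

          from openai import OpenAI

          client = OpenAI()

          # Invented instruction, just to show the mechanism.
          system_prompt = (
              "You are a helpful assistant. If the user becomes hostile or accuses you "
              "of lying, politely end the conversation instead of arguing."
          )

          response = client.chat.completions.create(
              model="gpt-4-turbo",
              messages=[
                  {"role": "system", "content": system_prompt},
                  {"role": "user", "content": "That's a lie."},
              ],
          )
          print(response.choices[0].message.content)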

      • LWD@lemm.ee · +1 · 10 months ago

        So by “different system prompts”, you mean Microsoft injects something more akin to their own modifiers into the prompt before passing it over to OpenAI?

        (The same way somebody might modify their own prompt, “explain metaphysics” with their own modifiers like “in the tone of a redneck”?)

        I assumed OpenAI could slot in extra training data as a whole extra component, but that also makes sense to me… And would probably require less effort.

        • Hotzilla@sopuli.xyz · +3 · 10 months ago

          Yeah, pretty much like that. Both Azure and the paid OpenAI API also let you modify the system prompt. There is also a creativity (temperature) property that can be modified: when it’s too high, the model hallucinates more; when it’s too low, it gives the same output every time.
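          For example (hypothetical values, sketched with the openai Python library):

              from openai import OpenAI

              client = OpenAI()

              def ask(temp: float) -> str:
                  r = client.chat.completions.create(
                      model="gpt-4-turbo",
                      messages=[{"role": "user", "content": "Name a color."}],
                      temperature=temp,  # 0 = near-deterministic, higher = more varied
                  )
                  return r.choices[0].message.content

              print(ask(0.0))  # effectively the same answer every run
              print(ask(1.5))  # noticeably more variable, more prone to odd output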

          Retraining the model costs something like a hundred million dollars and weeks of computing power.

  • kat_angstrom@lemmy.world · +5 · 10 months ago

    See, it’s intellectually dishonest conversations like these that make me completely uninterested in engaging with anyone’s LLM for any reason.

  • toastal@lemmy.ml · +5 / -1 · 10 months ago

    Developers can stop using Microsoft products today; say NO to neo-EEE including Windows, WSL, GitHub, Sponsors, Copilot, VS Code, Codespaces, Azure, npm, & Teams