• Lvxferre@mander.xyz
    10 months ago

    That’s a huge oversimplification of the way LLMs work.

    I’m sticking to what matters for the sake of the argument. Anyone who wants to inform themselves further has plenty of online resources to do so.

    They’re not statistical in the way a Markov chain is.

    Implied: “you’re suggesting that they work like Markov chains; they don’t.”

    At no point did I mention or even imply Markov chains. My use of the verb “to chain” is clearly vaguer in that context; please do not put words in my mouth.

    They use neural networks, which are a decent analogy for the human brain. The way the synapses between neurons are wired is obviously different, and the way the neurons are triggered and the types of signals they can send to other neurons is obviously different. But overall, similar capabilities can in theory be achieved with either method.

    I don’t disagree with the conclusion (i.e. I believe that neural networks can achieve human-like capabilities), but the argument itself is such fallacious babble (a false equivalence) that I won’t bother further with your comment.

    And it’s also an “ackshyually” given this context, dammit. I’m not talking about the bloody neural network itself, but about how it is used.

    • SparrowRanjitScaur@lemmy.world
      10 months ago

      No need to get offended. Maybe I misunderstood the intent behind your original message. I think you made a lot of good points.

      I brought up Markov chains because a common misconception I’ve seen online and in real life is that LLMs work pretty much the same way as Markov chains under the hood. And I saw no mention of neural networks in your original comment.
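
      For what it’s worth, the contrast is easy to make concrete. A Markov chain generator is just a count table: the next word is sampled from whatever followed the previous word (or previous n words) in the training text, with no learned representation of the wider context. An LLM instead runs the entire context through a neural network to produce its next-token distribution. A rough, illustrative bigram sketch (toy code, not how any particular LLM works):

          import random
          from collections import defaultdict, Counter

          def train_bigram_chain(text):
              """Count, for each word, which words follow it."""
              words = text.split()
              table = defaultdict(Counter)
              for prev, nxt in zip(words, words[1:]):
                  table[prev][nxt] += 1
              return table

          def generate(table, start, length=10):
              """Sample each next word using only the single previous word."""
              word, out = start, [start]
              for _ in range(length):
                  followers = table.get(word)
                  if not followers:
                      break
                  candidates, counts = zip(*followers.items())
                  word = random.choices(candidates, weights=counts)[0]
                  out.append(word)
              return " ".join(out)

          chain = train_bigram_chain("the cat sat on the mat and the cat slept")
          print(generate(chain, "the"))

      That lookup-table behaviour is the whole model for a Markov chain; there’s nothing analogous to the learned, context-wide function an LLM applies.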