• ???@lemmy.world
    11 months ago

    Do you mean the embeddings? https://platform.openai.com/docs/guides/embeddings/what-are-embeddings

    If so:

    The word embeddings and embedding layers are there to represent data in a way the model can use to generate text. That's not the same as the model reasoning like a human. It may sound human in text or even speech, but its reasoning skills are questionable at best. You can try to make it stick to your company policy, but at this level it will never operate under logic unless you hardcode that logic into it, and that's not really possible with these models in any meaningful sense of the word: after all, they just predict the most likely next word. You'd have to wrap them in a shit ton of code and safety nets.
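    To make the embedding-layer point concrete, here's a minimal toy sketch (not the real OpenAI implementation; the vocabulary, vector size, and random values are all made up for illustration). An embedding layer is basically a table lookup that maps each token to a vector, and the model compares meanings via vector similarity, not human-style logic:

    ```python
    import numpy as np

    # Hypothetical tiny vocabulary and embedding table. In a real GPT these
    # vectors are learned during training; here they're random, just to show
    # the mechanics of the lookup.
    rng = np.random.default_rng(0)
    vocab = {"cat": 0, "dog": 1, "car": 2}
    embedding_table = rng.normal(size=(len(vocab), 4))  # (vocab_size, dim)

    def embed(word):
        # An embedding layer is just a row lookup into the table.
        return embedding_table[vocab[word]]

    def cosine(a, b):
        # Similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "Meaning" to the model is geometry: nearby vectors, not understanding.
    print(cosine(embed("cat"), embed("dog")))
    ```

    There's no policy, logic, or reasoning stored in there, just numbers that happen to be useful for predicting the next token, which is why bolting rules onto such a model takes so much external scaffolding.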

    GPT models require massive amounts of data, so they are only that good for languages with massive text corpora or large Wikipedias. If your language doesn't have good content on the internet, or freely available digitized content to train on, a machine still can't replace translators (yet; no idea how long it will take until transfer learning is good enough to translate low-resource languages at the quality of English to French, for example).