• Conradfart@lemmy.ca
    1 year ago

    They are useful when you need to generate quasi-meaningful bullshit in large volumes with little effort.

    LLMs are being used in medicine now, not to help with diagnosis or correlate seemingly unrelated health data, but to write responses to complaint letters or generate reflective portfolio entries for appraisal.

    Don’t get me wrong, outsourcing the bullshit and waffle in medicine is still a win: it frees up time and energy for the highly trained organic general intelligences to do what they do best. I just don’t think it’s the exact outcome the industry expected.

    • sugar_in_your_tea@sh.itjust.works
      1 year ago

      I think it’s the outcome anyone really familiar with the tech expected, but that understanding rarely makes it to marketing departments and C-suite types.

      I did an LLM project in school, and while that was a limited introduction, it was enough for me to doubt most of the claims coming from LLM orgs. An LLM is good at matching its corpus, and that’s about it. So it works well for things like summaries, routine text generation, and similar tasks (and it’s surprisingly good at producing believable text), but it will always disappoint on creative work.

      I’m sure the tech can do quite a bit more than my class went through, but the limitations here are quite fundamental to the tech.

    • huginn@feddit.it
      1 year ago

      That’s kinda the point of my comment above: they’re useful for bullshit, and that’s exactly why they’ll never be trustworthy.