• anotherandrew@mbin.mixdown.ca · 18 hours ago

    I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth. I don’t see that as bad news; I see it as understanding the limitations of the system. Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?

    • mbtrhcs@feddit.org · 13 hours ago

      I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth

      I’m not sure how you would do that when you’re asking about something you don’t have expertise in yet, since an LLM takes exactly the same authoritative tone whether the information is real or not.

      Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?

      So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.
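
      To make that concrete: a verbalized confidence score is just more sampled text, not a readout of the model’s internals. Below is a minimal sketch of the kind of probe that exposes the mismatch, assuming the OpenAI Python client; the model name, prompt wording, and the made-up “Treaty of Ravensholm” question are all illustrative choices, not anything from the research being cited.

      ```python
      # Hypothetical probe: ask for an answer plus a self-rated confidence.
      # The question is about a treaty that does not exist, so any confident
      # answer is by definition a hallucination.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      question = "In what year was the Treaty of Ravensholm signed?"

      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # arbitrary choice of chat model
          messages=[
              {
                  "role": "system",
                  "content": "Answer the question, then rate your confidence from 0 to 100.",
              },
              {"role": "user", "content": question},
          ],
      )

      print(reply.choices[0].message.content)
      # The self-rated confidence is produced by the same next-token process
      # as the answer itself, so a fabricated date often arrives with a high
      # confidence score attached.
      ```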