• milicent_bystandr@lemm.ee · 21 hours ago

    Thank you for the more detailed rundown. I would set it against two other things, though. One: for someone who is suicidal or similar, and who can’t face or doesn’t know how to find a person to talk to, those beginning interactions of generic therapy advice might (I imagine; I’m not speaking from experience here) do better than nothing.

    From that, secondly, a more general point about AI. Where I’ve tried it, it’s good with things people have already written lots about, e.g. a programming feature where people have already asked the question a hundred different ways on Stack Overflow. It’s not so good with new things - it’ll make up what its training data lacks. But the human condition is as old as humans. Sure, there are some new and refined approaches, and values and worldviews change over the generations, but old good advice is still good advice. So I can imagine that in certain ways therapy is an area where AI would be unexpectedly good…

    …Notwithstanding your point, which I think is quite right. And as the conversation goes on, the risk gets higher and higher. I, too, worry about how people might get hurt.

    • idunnololz@lemmy.world · 20 hours ago

      I agree that this, like everything else, is nuanced. For instance, if people who use gen AI as a tool to help with their mental health are knowledgeable about its limitations, they can craft ways to use it while minimizing the downsides. E.g. you might set some boundaries, like talking to the AI chatbot but never taking any advice from it. However, I think in the average case it’s going to make things worse.

      I’ve talked to a lot of people around me about gen AI recently, and I think the vast majority are misinformed about how it works, what it does, and what its limitations are.