Yeah, well, that’s just, like, your opinion, man. And if you remove the very concept of capital gain from your “important point”, I think you’ll find your point to be moot.
I’m also going to assume you haven’t been in the kind of mental health emergency I described? Because I have. At best, I went to the emergency room and calmed down before ever seeing a doctor; at worst, I was committed to inpatient care (or “the ward”, as it’s also known) before I calmed down, taking resources away from the treatment of people who weren’t as unstable as I was, a problem a chatbot could have solved. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn’t the extremely rare case you claim it to be.
Anyway, my point stands.
Profit or not: how is it OK for your personal data to be shared with third and fourth parties? How is it OK that AI allows vulnerable people to be manipulated in new and unheard-of ways?
I’m not saying that’s OK. Did you even read my reply, or are you just being needlessly contrarian? Or was I just being unclear in my message? If so, I’m sorry; that tends to happen to me.
You’re not the only one who lives outside an urban area and has mental health issues. I didn’t want to make it a contest, so I didn’t reply to that.
But.
So.
If I imagine being in such a situation, I just don’t see how a chatbot could help me, even if it were magically available already, possibly as a phone app, so that I wouldn’t have to seek it out first.
Sorry.
Yeah, I realize the most important part of the point I was trying to make kinda got glossed over in my own reply (whoops): today’s LLMs are programmed to sound empathetic, more consistently than any human can ever manage, because we get tired and annoyed and do other human stuff. That, combined with the fact that not every “emergency” is really an actual emergency, leads me to believe the idea of “therapeutic” AI chatbots could work, though I wouldn’t advocate using any of the ones that exist today, at least not if the user has any regard for online privacy. But having a hotline to a being that has all the resources to help you calm yourself down; a being that is always available, never tired, never annoyed, never sick or otherwise out of office; one that seems to know you and remembers things you have told it before: that sounds compelling as an idea. Then again, part of that appeal probably comes from the abysmal state of psychiatric healthcare that I and many others have witnessed, and such a hotline would have to be integrated into that care. So I don’t know, maybe it’s just wishful thinking on my part; sorry for coming across as needlessly hostile.
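Just to make that idea a bit more concrete, here’s a purely hypothetical sketch in Python (not how any real service works, and every name in it is made up) of the two parts I mean: a system prompt that enforces the calm, empathetic tone, and a saved conversation history so it “remembers” what you’ve told it before. The actual model call is left as a placeholder, because that call is exactly where the data-sharing concerns above come in.

    import json
    from pathlib import Path

    MEMORY = Path("hotline_memory.json")  # hypothetical local file of past conversations

    SYSTEM_PROMPT = (
        "You are a calm, always-available listener. Stay empathetic, help the "
        "user de-escalate, and point them to emergency services if needed."
    )

    def call_model(messages):
        # Placeholder: a real service would send `messages` to an LLM API here,
        # which is exactly where the privacy and data-sharing questions arise.
        return "I'm here. Take your time and tell me what's going on."

    # Load whatever the "being" remembers from earlier sessions, if anything.
    history = json.loads(MEMORY.read_text()) if MEMORY.exists() else []

    while (text := input("> ").strip()).lower() != "quit":
        history.append({"role": "user", "content": text})
        reply = call_model([{"role": "system", "content": SYSTEM_PROMPT}] + history)
        history.append({"role": "assistant", "content": reply})
        print(reply)

    # Persist the conversation so the next session can pick up where this one left off.
    MEMORY.write_text(json.dumps(history, indent=2))

The point of the sketch is only that the “always available, remembers you” part is technically trivial; everything hard about it is the privacy, integration with actual psychiatric care, and safety.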