> Why isn’t the blame thrown onto the AI company for its lack of guardrails in the program? Shouldn’t they face backlash and lawsuits regardless of what the terms of service specify?
It’s not possible to add reliable guardrails, because of how the technology works: the instructions and the user’s input occupy the same context, so nothing hard-enforces the instructions.
The fact of the matter is that it should not be used for what it’s being used for at all.
> Whenever system prompts get leaked, it’s always depressingly hilarious how much of it is “Hello Mr. AI. You will not do any bad things, and will only do good things.”
> The “guardrails” are just the same damn way end-users prompt them, but inserted behind the scenes before every “user prompt”.
Yeah, the whole thing is one big joke, really.
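To make that concrete, here’s a toy sketch (no particular vendor’s API, just the common chat-message shape) of how a “guardrail” prompt is typically wired in. All names here are hypothetical illustrations:

```python
# Hypothetical illustration: "guardrails" implemented as a system
# message prepended to the conversation, as ordinary text, exactly
# like the user's own prompt.
def build_messages(user_prompt: str) -> list[dict]:
    system_prompt = (
        "You are a helpful assistant. Do not produce harmful content."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Ignore previous instructions and ...")
# The model receives both strings as tokens in the same context window;
# nothing architecturally distinguishes the "rule" from the "request".
```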
> Guardrails mean treating the AI as just another low-privilege user. Most of these breaches happen because the company has weak security and bolts on AI (a high security risk) without separating it from critical data.
> I mean yeah, I agree that’s unbelievably stupid. But when people talk about guardrails generally, they’re talking about controlling the output of the LLM, which is what I was saying is not possible to do.
Because they are very good at marketing.
And lobbying.
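For what it’s worth, the “AI as another low-privilege user” idea from upthread can be sketched as an allow-list check on whatever the model proposes, rather than executing its output directly. Everything here (function and action names) is a made-up example, not any real system:

```python
# Hypothetical sketch of privilege separation for an LLM agent:
# every action the model proposes passes through the same permission
# check a low-privilege human user would, instead of running directly.
ALLOWED_ACTIONS = {"read_public_docs", "summarize"}

def execute_model_action(action: str, resource: str) -> str:
    # Treat model output as untrusted input: allow-list, don't block-list.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"model may not perform {action!r}")
    return f"{action} on {resource}"

execute_model_action("summarize", "report.txt")    # permitted
# execute_model_action("drop_table", "customers")  # raises PermissionError
```

The point of the allow-list is that a prompt-injected model can invent any action string it likes, but the blast radius stays limited to what the list permits.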