not_IO@lemmy.blahaj.zone to Microblog Memes@lemmy.world · 2 days ago
fuck around and find out
expr@piefed.social · 11 hours ago
I mean yeah, I agree that’s unbelievably stupid. But when people talk about guardrails generally, they are talking about controlling the output of the LLM, which is what I was saying is not possible to do.
Programmer Belch@lemmy.dbzer0.com · 7 hours ago
That’s also true, but given that that option is unavailable, there are still multiple ways to protect against AI hallucinations.

This was the future AI ethics people were warning about: picture a robot you tell to make an apple pie. A human is blocking the path to the apples, so the robot kills the human by running through them at full speed. If the robot is that dumb about going through the human, you can at least make the robot smaller or lighter so that bumping into someone is not harmful.

None of these options is considered when talking about AI. Line go up and other buzzwords, I guess.