So it’s like a global initiative to spread nonsense. Impressive.
Fascist social media influencers are already pushing AI-generated bodycam and surveillance videos to stoke xenophobia. A large enough share of the population doesn’t know what’s real, and that’s the goal.
Tell me about it. I’m 70, and people my age fall for every fake post they see online. It’s exhausting.
This will sound far-fetched, but what would you think about holding internet-literacy talks with your (chronological age) peers?
I wish they had broken it out by AI. The article states:
“Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”
But I don’t see that anywhere in the linked PDF of the “full results”.
This sort of study should also be re-done from time to time to track AI version numbers.
It doesn’t really matter; “AI” is being asked to do a task it was never meant to do. It isn’t good at it, and it will never be good at it.
Using an LLM to return accurate information is like using a shoe to hammer a nail.
Wow, way to completely ignore the content of the comment you’re replying to. Clearly, some are better than others… so, how do the others perform? It’s worth knowing before we make assertions.
The excerpt they quoted said:
“Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”
So that implies “the other assistants” performed more than twice as well, presumably meaning they had serious issues less than 38% of the time (still not great, but better). But when they say “more than double the other assistants”, does that mean double the rate of each of the others, or double their average? If it’s the average, some models probably performed better while others performed worse (toy numbers below).
That was the point: what was reported was insufficient information.
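To make the two readings concrete, here’s a toy calculation with made-up numbers (purely hypothetical, not from the study), assuming three unnamed “other assistants”:

```python
# Hypothetical per-assistant rates of "significant issues" (invented for illustration only).
others = {"Assistant A": 0.30, "Assistant B": 0.45, "Assistant C": 0.20}
gemini = 0.76

# Reading 1: Gemini's rate is more than double *each* of the others individually,
# which would force every other assistant under 38%.
print(all(gemini > 2 * rate for rate in others.values()))  # False: Assistant B is at 45%

# Reading 2: Gemini's rate is more than double the *average* of the others,
# which still allows some assistants (like B) to sit above 38%.
average = sum(others.values()) / len(others)
print(gemini > 2 * average)  # True: the average is ~31.7%, double is ~63.3%
```

Either reading is consistent with the quoted sentence, which is exactly why the per-assistant breakdown matters.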
Yes, you are a techbro. You suck because your ideas don’t take actual real life into consideration. Fuck you.
Wow, that’s just incredibly dismissive and rude. And in response to a completely reasonable comment!
Look, forget the whole AI discussion, I don’t care. Here’s the thing, I really like Lemmy. I really like this community and I want to continue using it as a way to have discussions with people about interesting topics. What I don’t want to see is people yelling insults and swearing at any user they disagree with.
Frankly, that behavior is unwelcome. That’s reddit behavior; you can go there if that’s what you want to do.
There are a few replies talking about humans misrepresenting the news. This is true, but part of the problem here is that most people understand the concept of bias, even if only to the extent of “my people neutral, your people biased”. That’s less true for LLMs. There’s research showing that because LLMs present information authoritatively, people not only tend to trust them, but are actually less likely to check the sources the LLM provides than they would be with other ways of being presented information.
And it’s not just news. I’ve seen people seriously argue that fringe pseudo-science is correct because they fed a very leading prompt into a chatbot and got exactly the answer they were looking for.
Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”
Like the average American with 8th-grade reading comprehension.
Which is what they used for the training data.
So it’s about on par with humans, then.
Parrot is wrong almost half of the time. Who knew?
Do you realize what you just said!!!
Wow! They have reached parrot intelligence!
Next they might teach it to butterfly! You know, like you’re off the ground and going somewhere in open air, but they just keep building shit right where you’re flying… And lamps!
From there, who knows?!
And then I wonder how frequently humans misinterpret the mistranslated news.
Humans do it often, but they don’t have billions of dollars funding their responses.
Worse: one third of adults actually believe the shit the AI produces.
Yet the LLM seems to be what everyone is pushing, because it will supposedly get better. Haven’t we reached the limits of this model, and shouldn’t other types of engines be tried?
Will they change their disclaimer now, from “can be wrong” to “is often wrong”? /s
So a lower percentage than readers and the mass media.
buT AI iS hERe tO StAY