if you want to have a reliable driverless system, you’re going to have to invent trains… again
they didn’t. This is about a properly written headline by the BBC being butchered when summarized by Apple Intelligence, which they have no control over.
the problems with (the current forms of generative) AI will not be solved, because they cannot be solved. They are intrinsic to the whole framework.
5G hadn’t been invented yet. They had nothing to worry about back then.
/s
and it will suck your servers dry.
we’re sure of it this time!
/s
They want AI because Apple tells them they do.
They also don’t want AI because their experiences say the opposite.
LLM and search should not be in the same sentence
Telegram is one of the worst, privacy-wise.
American, right? It’s very popular in other places.
and the bar is getting lower. Fast iteration, releasing broken, poorly understood, barely maintainable pieces of shit as quickly as one can.
Fucking agile
shouldn’t b loop until it’s < a instead of <= a?
I have caught one of my cats opening some doors
no, but I used to have one. He didn’t like looking for them either.
I LOVE finding bugs.
I HATE looking for them
these will never be a thing. Noise
it’s ok. I don’t need protection from myself.
but only until the next update, which will probably break half your extensions, because they are entirely unsupported and uncared for by the GNOME team
This is not an error correction issue though. Error correction means taking known data and adding redundancy to it so that damaged pieces can be repaired. This makes the message longer.
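For illustration, a 3× repetition code is about the simplest example of that idea (a minimal hypothetical sketch, not from the original post):

```python
# 3x repetition code: add redundancy to known data so a damaged copy
# can be repaired by majority vote -- at the cost of a 3x longer message.

def encode(bits):
    return [b for b in bits for _ in range(3)]          # send each bit 3 times

def decode(coded):
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]   # majority vote repairs damage

msg = [1, 0, 1, 1]
sent = encode(msg)          # 12 bits on the wire instead of 4
sent[4] ^= 1                # one bit gets damaged in transit
assert decode(sent) == msg  # the known original is still recoverable
```

The message triples in length, and in exchange one flipped bit per triple can be repaired, because the original data was known when the redundancy was added.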
An LLM’s output does not contain error correction. It’s just the output. And it doesn’t contain any errors, mathematically speaking. The hallucination is the correct output. It is what the statistics it gathered from its training set determined is most likely. A “correct” LLM output is indistinguishable from a “hallucination”, mathematically, and always will be. A hallucination is simply “some output that some human, somewhere, doesn’t like”, and that’s uncomputable. And outputs that people subjectively consider “hallucinations” cannot be eliminated, because an LLM is, fundamentally, a probabilistic algorithm. If you added error correction to an LLM’s output, all you’d be able to recover is the LLM’s original output, “hallucinations” and all.
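To make that concrete, here’s a toy sketch (hypothetical tokens and probabilities, purely illustrative): the model only ever sees a probability distribution over next tokens, and nothing in that math marks one sample as “true” and another as a “hallucination”.

```python
import random

# Toy next-token model with made-up probabilities, for illustration only.
# Suppose the training statistics happen to favour a factually wrong answer:
next_token_probs = {
    "sydney":    0.55,   # the "hallucination" -- statistically most likely
    "canberra":  0.40,   # the factually correct answer
    "melbourne": 0.05,
}

tokens, weights = zip(*next_token_probs.items())
sample = random.choices(tokens, weights=weights)[0]

# Nothing here distinguishes a "correct" sample from a "hallucination":
# both are just draws from the same distribution. Adding error correction
# to this output would only let you re-derive the same sample, wrong or not.
print(sample)
```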
TL;DR: “hallucinations” are a subjective thing. A “hallucination” is not an error that can be corrected after the fact, because it is not an error in the first place.