Grab a copy of the stackoverflow database and use it locally, or train your own local LLM on the datastore.
And if you can, donate to the Internet Archive – those people do really important work in today’s age of killing off old information and constant enshittification.
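For anyone who wants to act on that: below is a minimal Python sketch for streaming questions out of a Stack Exchange data dump’s Posts.xml (the dumps are mirrored on the Internet Archive). The attribute names (PostTypeId, Title, Tags, Body) are from the dump schema as I remember it, so verify them against the copy you download before building anything on top of this.

```python
# Minimal sketch: stream question rows out of a Stack Exchange dump's Posts.xml
# without loading the multi-GB file into memory. Attribute names follow the
# published dump schema (assumed here) -- check them against your local copy.
import xml.etree.ElementTree as ET

def iter_questions(posts_xml_path):
    """Yield (title, tags, body_html) for every question row."""
    for _, row in ET.iterparse(posts_xml_path, events=("end",)):
        if row.tag != "row":
            continue
        if row.get("PostTypeId") == "1":  # "1" = question, "2" = answer
            yield row.get("Title"), row.get("Tags"), row.get("Body")
        row.clear()  # release the element so memory stays flat

if __name__ == "__main__":
    for title, tags, _body in iter_questions("Posts.xml"):
        if tags and "python" in tags:
            print(title)
```

From there you can feed the bodies into whatever local search index or fine-tuning pipeline you like.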
Came here to say something similar about a local archive.
You can also use the app Kiwix to make it a little easier to download/search (and grab several other doc archives like Python PEP and Wikipedia)
I used Kiwix to grab a copy of Wikipedia.
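If you’d rather query those ZIM files from a script than through the Kiwix app, the openzim project ships a python-libzim binding. The sketch below follows its documented reader/search example, but I’m going from memory on the exact method names and the entry path layout, so treat it as a starting point, not gospel.

```python
# Sketch: reading and full-text searching a Kiwix ZIM archive with python-libzim
# (pip install libzim). Method names follow the project's README example as I
# recall it; "wikipedia_en_all.zim" and the article path are placeholders.
from libzim.reader import Archive
from libzim.search import Query, Searcher

zim = Archive("wikipedia_en_all.zim")

# Full-text search over the archive's built-in index
searcher = Searcher(zim)
query = Query().set_query("stack overflow")
search = searcher.search(query)
for path in search.getResults(0, 10):  # first 10 matching entry paths
    entry = zim.get_entry_by_path(path)
    print(entry.title, "->", entry.path)

# Pull one article's HTML out of the archive (path layout varies per ZIM file)
article = zim.get_entry_by_path("A/Stack_Overflow")
print(bytes(article.get_item().content).decode("utf-8")[:500])
```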
Completely forgot about kiwix; I have that on my ipad and laptop, along with Dash which is like a modern day HELPPC.COM if anyone remembers that thing…

I didn’t know about Dash, but it sounds pretty great. Appears to be Mac only, though, and requires a subscription for the latest version.
Also found someone that appears to have converted HelpPC to HTML. Can’t speak to the legitimacy of it, though.
https://www.stanislavs.org/helppc/
Damnnnnn, truth bombs!
Bad news: AI can only answer what it already knows. If you have a legitimate question that isn’t yet part of Stack Overflow, you get a bad AI response. In that case you could ask it on the Stack Overflow website, but since everybody now relies only on AI, Stack Overflow is dead. Well, there you go, you just killed the source of truth.
I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright at detecting shit responses, whether they come from human trolls or an LLM lying through its virtual teeth. I don’t see that as bad news; I see it as understanding the limitations of the system. Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?
I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as it takes the exact same authoritative tone no matter whether the information is real.
So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.