- cross-posted to:
- [email protected]
A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn't really seem like an ideal product. I don't think that's what the developers of these LLMs set out to make when they created them, either. However, I've seen this behavior to a certain extent in every LLM I've interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting to me that Walt Disney invented the Matterhorn (like, the actual mountain) for Disneyland. Now, this is something along the lines of what people have been calling "hallucinations" in LLMs, but the fact that it would not admit it was wrong when confronted, and used confident language to try to convince me it was right, is what pushes that particular case across the boundary into what I would call "con behavior". Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and I'm sure other developers) have been training their LLMs to be more "agreeable" and to acquiesce to the user more often. That doesn't eliminate the con behavior, though. I'd like to show you another example of this con behavior that is much more problematic.
The LLM isn't trained to be reliable, it's trained to be confident.
And it's promoted by business people with the exact same skill set, who have been rewarded for it. I would argue, though, that there's nothing wrong with what LLMs are doing: they're doing what they were trained to do. The con is in how the confidently unreliable techbros sell it to us as a source of knowledge and understanding akin to a search engine, when it's nothing of the sort.
Ironically, I do believe AI would make a great CEO/business person. As hilarious as it would be to see CEOs replaced by their own product, what's horrifying is that no matter how dystopian our situation now is, and no matter how much our current CEOs seem like incompetent sociopaths, a planet run by corporations run by incompetent but brutally efficient sociopathic AI CEOs seems certain to become even more dystopian.
So are all the leaders at my company.
Confidence is promoted over competence every time.
an llm is a cool way to rephrase your own thoughts back at you. it's pretty useful for brainstorming. or masturbation. i sure hope that's all anyone uses it for
Honestly, it's a great source of truly stupid ideas. It's convenient to have a total idiot on hand at all times to make dumb suggestions when asked, inspiring me to think of something better since the standard was set so low.
Confidence mixed with a lack of domain knowledge is a tale as old as time. There's not always a con in play (think Pizzagate), but this certainly isn't restricted to LLMs, and given the training corpus, a lot of that shit is going to slip in.
It's really unclear where we go from here, other than that it won't be good.
That's why AI companies have been giving away generic chatbots for free but charging for training domain-specific ones. People paying to use the generic ones are just the tip of the iceberg.
The future is going to be local or on-prem LLMs, fine-tuned on domain knowledge, most likely multiple ones per business/user. It is estimated that businesses hold orders of magnitude more knowledge than what has been available for AI training. It will also be interesting to see what kind of exfiltration becomes possible when one of those internal LLMs gets leaked.
I'm sure that, as with Equifax, there will be no consequences. Shareholders didn't rebel then; why would they in the face of a massive LLM breach?
It's going to be funnier: imagine throwing tons of data at an LLM. Most of the data will get abstracted and grouped, much of it will be extractable indirectly, some will be extractable verbatim... and any piece of it might be a hallucination, no guarantees!
Courts will have a field day with that.
Oh, yeah. Hilarity at its finest. Just call it a glorified database and call it a day.
A randomly obfuscated database: you don't get exactly the same data back, and most of the data is lost, but you can sometimes get something similar to the original, if you manage to stumble upon the right prompt.
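To make that concrete, here's a toy sketch (not a real LLM, just a tiny trigram model standing in for one, with a made-up "secret" string): the model stores only word statistics, no rows verbatim, yet feeding it the right prefix as a prompt pulls the training text back out exactly.

```python
import random

# Hypothetical sensitive record that ends up in the training data.
SECRET = "employee id 4821 earns 95000 per year and reports to the cfo"

def train(text, n=3):
    """Build an n-gram table: each n-word context maps to observed next words."""
    words = text.split()
    model = {}
    for i in range(len(words) - n):
        key = tuple(words[i:i + n])
        model.setdefault(key, []).append(words[i + n])
    return model

def generate(model, prompt, length=10, seed=0):
    """Sample a continuation of `prompt` (a list of words) from the table."""
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(length):
        key = tuple(out[-3:])
        choices = model.get(key)
        if not choices:
            break  # context never seen in training; nothing to extract here
        out.append(rng.choice(choices))
    return " ".join(out)

model = train(SECRET)
# With the right prefix, the "model" regurgitates the secret verbatim:
print(generate(model, ["employee", "id", "4821"]))
# -> employee id 4821 earns 95000 per year and reports to the cfo
```

A real LLM compresses far more text into shared weights, so most prompts get the blended, lossy version; the point of the toy is only that "stores statistics, not data" doesn't prevent verbatim recall when the context is specific enough.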
AI can be useful without being right about everything. But the user has to know enough to push back or just write it themselves when necessary. And in my experience the same is true when pairing with another developer, too.
It's a tool, not a solution. Though it's valid to say the folks touting its miracle capabilities are full of shit. It is imperfect, but it's not worthless. It's not a con man, it's just confidently wrong. I've worked with/for a lot of people like that.
When it's trying to convince you that it's right using tricks of confidence, I'd say it's behaving like a con man. At the very least, it's indistinguishable from the behavior of a con man.
It's too dumb to try to trick you. It's responding to being called out the way people tend to, because that's what it's emulating. And yeah, that's not great.
All I can say is AI has wasted my time and saved me time. And in my case, more of the latter than the former.
Yeah, I think we agree on that point. I didn't mean to make it sound like it's intentionally trying to trick you.