A decade in the space is impressive. It shows dedication and time invested. That alone deserves recognition.
Still, the points you are repeating are familiar. They are recycled claims from years ago, and if the goal is to critique a lack of novelty, recycling the same arguments undercuts it.
You say LLMs have zero intentional logic. That is true if by intentional logic you mean human consciousness or goals. It is false if you mean emergent behaviors and the ability to combine information in ways no single source explicitly wrote. Eliminating nuance with absolute terms makes it easy to dismiss valid evidence.
Calling someone an AI fanboy signals a preference for labels over analysis. That approach does not strengthen an argument; specific examples do. Concrete failures, reproducible tests, or papers are what move a discussion forward.
It is also not accurate to suggest that anyone pitches LLMs as supreme beings. Most people treat them as complex tools that produce surprising results. Their speed, scale, and capacity to identify patterns exceed human ability, but they remain tools. Critiquing them as if they were gods is a straw man.
If you want this discussion to matter, show a single reproducible example where an LLM fails in a way your logic cannot explain. Otherwise, repeating slogans and metaphors only illustrates a resistance to evidence.
I am not here to argue for ideology. I am here to examine claims. That is a choice. It is also a choice to resist slogans and demand specificity. Fun, fun. Another fun day.