There’s a vocal group of people who seem to think that LLMs can achieve consciousness, despite this being impossible given how LLMs fundamentally work. They have largely been duped by advanced LLMs’ ability to sound convincing (as well as by a certain conman chief executive). These people often also seem to believe that by dedicating ever more resources to running these models, the models will achieve actual general intelligence, and that an AGI can save the world, absolving them of any responsibility to attempt to fix anything themselves.
That’s my point. AGI isn’t going to save us, and LLMs (by themselves), regardless of how much energy is pumped into them, will never achieve actual intelligence.
Okay, and? What are you trying to say?