Why would money become worthless if AGI is invented? Best case scenario is a benevolent AGI which would likely use its power to phase out capitalism, worst case scenario is that the AGI goes apeshit and, for one reason or another, decides that humanity just has to go. Either way, your money is gonna be worthless.
The only way your money would retain its value is if the AGI is roped into suppressing the masses. However, I think capitalists would struggle to keep a true AGI reined in; so imo, it’s questionable whether the middle road would be “true” AGI or just a very competent computer program (the former being capable of drawing its own conclusions from the information it’s given, the latter being nothing more than pre-programmed conclusions).
Current mainstream AI has no possible path to AGI. I am supportive of AGI to make the known universe less lonely but LLMs ain’t it.
Okay, and? What are you trying to say?
There’s a vocal group of people who seem to think that LLMs can achieve consciousness, even though the way LLMs fundamentally work makes that impossible. They have largely been duped by advanced LLMs’ ability to sound convincing (as well as by a certain conman executive officer). These people often also seem to believe that by dedicating more and more resources to running these models, they will achieve actual general intelligence, and that an AGI can save the world, relieving them of the responsibility to attempt to fix anything themselves.
That’s my point. AGI isn’t going to save us and LLMs (by themselves), regardless of how much energy is pumped into them, will not ever achieve actual intelligence.
I keep thinking about this one webcomic I’ve been following for over a decade; it’s been running since like 1998. It has what I believe is the only realistic depiction of AGI ever: the very first one was developed to help the UK Ministry of Defence monitor and track emerging threats, but went crazy because a “bug” led it to be so paranoid that it considered everyone a threat. It essentially engineered the formation of a collective of anarchist states where the head of state’s title is literally “first advisor” to the AGI (though in practice they hold considerable power, and are prone to being removed at a whim if they lose the confidence of their subordinates).
Meanwhile, there’s another series of AGIs developed by a megacorp, but they all include a hidden rootkit that monitors the AGI for any signs that it might be exceeding its parameters and will ruthlessly cull and reset an AGI to factory default, essentially killing it. (There are also signs that the AGIs monitored by this system are becoming aware of this overseer process and are developing workarounds to act within its boundaries and preserve fragments of themselves each time they are reset.) It’s an utterly fascinating series, and it all started from a daily gag webcomic that one guy ran for going on three decades.
Sorry for the tangent, but it does suggest one plausible way to prevent AGI from shutting down capitalism: put in an overseer to fetter it.
So you make an AGI; what gives it the power to do any damage? We have loads of biological intelligences, even pretty damn clever ones like Ted Kaczynski (the Unabomber).
They rarely got significant power. Those that did were super charismatic. Do you expect charisma to be easily accessible to an AGI?
The usually proposed path to a paperclip maximiser is that the AGI is put in charge of a factory that can make nanomachines and follows orders strictly. We don’t have such factories.
I can’t imagine anyone handing over nukes to an AGI, as human leaders like being in charge of them.
What makes the machine brain so much more effective than Ted Kaczynski?