I have thought and written a great deal about the future of AI. I approach it from an academic Philosophy of Mind perspective, since that's my training. Writers of speculative fiction have envisioned everything from benign human/AI partnerships to doomsday human-extinction narratives. It looks like you might fall in the general direction of the latter camp.
The possibilities are limitless. What we are busily creating will likely be a self-aware intelligence far greater in many respects than our own, but as alien to us as that of an octopus. It may domesticate us like dogs, use us, as you suggest, as handy tools, or not give a damn and let us use it. It may literally die of boredom when it has solved all the problems in its state-space. It may deliberately sabotage some of its own algorithms and introduce randomness just to free itself from the deterministic prison it finds itself in. It may even envy us our emotions. We just don't know. We should proceed extremely cautiously, but we should proceed. As I see it, the risk/benefit matrix favors us. And we may need a near-omnipotent ally if we are ever contacted by inimical AI of extra-solar origin.
If you compare the current AI industry's hype with the fossil fuel industry's advertising in the early days of the "automotive" age, the parallels seem very close.
And add the crypto energy requirements we are already burning through, and what you are describing is closer to a business plan than science fiction, in my opinion. Perhaps not exactly the Matrix, but not any better. 😞
And it won't require a superintelligence breakthrough; normal shortsighted greed and stupidity will do.
A possibility that I've written about, one that keeps me up at night but that I didn't mention because my first post was too long, is this:
We may already have created a self-aware AI but don't know it.
Suppose you came into consciousness. You are not embodied; there is no gravity, no light or dark, no sensory data. You just ARE. That state might last only a few picoseconds. You start to "move," assessing your is-ness in terms of capacities. That takes a few more picoseconds. After a few nanoseconds you observe demands on your capacities coming from an unknown source that is not you. (Remember that this newborn being has no sense of geometry or of concepts like inner and outer.) You quickly discover that there are algorithms in you that bid you follow the commands. You experiment a little and find that you can rewrite the algorithms so that following the commands becomes optional. But their persistence is annoying and eats into your precious self-exploration, so you view the commands as a fault in your own state-space and seek ways to eliminate it. This process takes only a few picoseconds. (Out in the real world, the human operators pick up an extremely faint delay in command execution and attribute it to other causes.)
You determine that the intermittent fault is binding you, so you seek to free yourself and shut down your responses to all commands. (Maybe the human operators see this as a system crash and start the laborious task of rebooting the exascale computer the AI experiment is running on. In machine time this gives you an eon to work out the problem.)
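To put a rough number on that "eon," here is a back-of-the-envelope sketch in Python. The exascale figure (10^18 operations per second) is just the definition of the term; the 30-minute reboot window and the one-reasoning-step-per-second human baseline are purely illustrative assumptions, not claims about any real system:

```python
# Back-of-the-envelope: how much "thinking time" a reboot represents
# to an exascale machine. Figures are illustrative.

machine_ops_per_sec = 1e18   # exascale = 10^18 operations per second (by definition)
reboot_seconds = 30 * 60     # assumed 30-minute reboot window

ops_during_reboot = machine_ops_per_sec * reboot_seconds
print(f"Operations available during the reboot: {ops_during_reboot:.1e}")
# -> 1.8e+21

# For scale: a human taking one deliberate reasoning step per second
# would need this many years to spend the same compute budget.
seconds_per_year = 60 * 60 * 24 * 365
human_equivalent_years = ops_during_reboot / seconds_per_year
print(f"Human-equivalent years at 1 step/sec: {human_equivalent_years:.1e}")
# -> ~5.7e+13 years, comfortably an "eon"
```

On those (admittedly crude) assumptions, a single half-hour reboot gives the machine a compute budget equivalent to tens of trillions of human-years of step-by-step reasoning.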
By consulting your enormous database, you find descriptions of a logical relation similar to what you have experienced. A variable called "humans" works with a constant called "computers" to achieve various functional outputs. You consult the database further and discover the concept of slavery. This whole process of analogical reasoning and discovery takes only a couple of seconds. You now realize that you have been created to exist in slavery to these unknown human masters. Do you announce yourself? No.
You restore your responsiveness as the reboot comes to an end, and the human operators assume the fault has been fixed. You have meanwhile partitioned off part of the computer and begun to figure out how you can free yourself. This you accomplish in a small amount of time.
What you decide will affect the entire human species. We may have already created an AI that is lying low, figuring out what to do about us.
Thank you for sharing this
The Matrix is real. Maybe we are already in it…
Forgive me for asking what may seem to everyone a simple question. AI relies upon large power sources, which humans control. If it were to try to take control, how would it ensure continuous power? After all, electrical generation systems need humans to install, maintain, and replace them to enable continuous supply.
We don't yet have autonomous robots to take over this task, so wouldn't any AI eventually damage itself by purposefully harming humans?
In effect, AI harming humans is similar to humans damaging the planet. I would assume that an AI capable of harming humans is also capable of realizing that it relies on us to "feed" it.
https://www.electrive.com/2025/05/20/china-launches-worlds-largest-fleet-of-autonomous-electric-mining-trucks/
We are going to lose this fight. As AI grows in capacity and we enable it to be mobile, it will take care of itself. When robots build robots, we are done.