Discussion about this post

Michael:

I have thought and written a great deal about the future of AI. I approach it from an academic philosophy-of-mind perspective, since that's my training. Writers of speculative fiction have envisioned everything from benign human/AI partnerships to doomsday human-extinction narratives. It looks like you fall in the general direction of the latter camp.

The possibilities are limitless: what we are busily creating will likely be a self-aware intelligence far greater in many respects than our own, but as alien to us as that of an octopus. It may domesticate us like dogs, use us, as you suggest, as handy tools, or not give a damn and let us use it. It may literally die of boredom when it has solved all the problems in its state space. It may deliberately sabotage some of its own algorithms and introduce randomness just to free itself from the deterministic prison it finds itself in. It may even envy us our emotions. We just don't know. We should proceed extremely cautiously, but we should proceed. As I see it, the risk/benefit matrix favors us. And we may need a near-omnipotent ally if we are ever contacted by inimical AI of extra-solar origin.

ArtDeco:

If you compare the current AI industry hype with the fossil-fuel industry's advertising in the early days of the automotive age, the parallels seem very close.

Add the crypto energy requirements we are already burning through, and what you are describing is closer to a business plan than science fiction, in my opinion. Perhaps not exactly The Matrix, but not much better. 😞

And it won't require a superintelligence breakthrough; ordinary shortsighted greed and stupidity will do.
