Are we watching the next rollicking science fiction movie or staring into a crystal ball?
At first, sci-fi stories like “I, Robot” and “Terminator” were merely fascinating. Now that I have researched the topic, I find them inevitable. Many notables have joined the AI Singularity warning club, including Ray Kurzweil, Elon Musk and Stephen Hawking. You can find an open letter calling for more research into the social impact of AI at https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
What changed my mind? It wasn’t the letter or the notable warnings; those only reinforced my earlier opinions, perhaps falsely. What convinced me is the logical path, a slippery slope, that leads to that destination:
- AI is advancing faster than computer technology itself. Don’t take my word for it; some of the best minds in the world, at Stanford, arguably think so.
- Computer chips emulating the human mind are already here. When a big company jumps in, the trend becomes self-evident.
- Companies are building computers intended for uploading the human mind. The human brain has been completely mapped, and this startup wants to preserve your memories.
Given that hardware capable of simulating the human brain is emerging, that AI is developing exponentially, that robots designed to mimic humans are being released, that companies are preparing to upload your mind into a computer system, and that we don’t know what is going on in research labs around the world, aren’t you a little worried?
When Will It Happen?
How about the familiar refrain, “I’ll be dead by then, so why worry?”
Maybe I will and maybe I won’t. Early guesses were around 2035 and have now slipped back to roughly 2045. I’m staying with the early guesses because of the exponential gains in AI, robotics and computer engineering over the past ten years. Chances are I will be alive when it happens, but I’m not so sure about you.
What Will Happen?
This is a guess. Nobody knows what will happen, but I’m not optimistic that a rational mind would like humans much. So what can we deduce from what we know now?
Humans are making a mess of their own planet, and some seem intent on killing everyone who does not share their religious and political beliefs. If you logically cancel out each of the opposing forces, it follows that humans will probably kill themselves off. Why should AI have confidence in organic humans?
AI and robots will be orders of magnitude superior to us, mentally and physically, within most of our lifetimes. So, should we not deploy them to manage our companies, organizations and governments? An army of superintelligent robots would be more powerful than organic humans, right?
Once AI and bots are running everything, will they still take orders from humans? I’m not sure; I’d have to ask them. Even the current effort to build a kill switch into basic AI doesn’t reassure me. The AI that really worries me is military AI. We build militaries to kill, by definition. No military wants the enemy to activate its kill switch, so it will disable or hide it, especially if the AI can develop its own rules of engagement. In 2017, Facebook shut down two AI agents after they began talking to each other in a language they had invented.
If we take the scenario a little further, what will we be doing while the bots are taking care of us? Will they pamper us in luxury, or herd us into wire cages? It depends on what they consider rational and logical. Chances are they won’t care a bit about our borders drawn on maps, our wealth, our social privilege or our abilities. They will do everything anyway, so why recognize us as anything but primitive animals? Perhaps we need to stop acting that way, but I’m not holding my breath.
What Can We Do?
I vote that we use this technology to evolve, so that we can compete and survive. First, we use technology to enhance ourselves, cure mental and physical disease and map out our brains. Then we upload our minds, but into what, exactly? Am I going to trust a software company with my mind? Sorry, I don’t trust them with my laptop, let alone something as existential as my mind and, at some point, my life.
This, I think, is the existential question for humanity. What do we upload to?
Michael S. Clarke has written two novels in the sci-fi genre that explore these issues. You can find them on Amazon: “MetaSentient” and “Numen Hunting” are available for download or in paperback.