Some Scientists Think AI Is a Threat. They Might Be Misguided.
By far the most dangerous threat to human existence is humans themselves. You can talk all you want about nature or technology (in this case, AI), but our worst enemy is our own collective mind.
First off, a fair disclaimer: I am speculating about this just as much as the people I’m about to mention.
That said: everyone from Elon Musk to the late Stephen Hawking has spoken out against the approaching horrors of an out-of-control artificial intelligence. The basic premise is this: if we actually created a true independent intelligence, would it be susceptible to mood swings? Would it become mercurial?
In short, would it be psychotic?
I find it curious to read how noted intellects perceive an artificial intelligence. The first thing I note is that they still speak of it as a technology, something that would be human-controlled. If we actually create such an intelligence, it will quickly outgrow our ability to control it. Perhaps this is their fear: it's a control thing. At the heart of it, I think it speaks to our deep-seated fear of intellectual inferiority. In our hearts, we know that we could never compete with a silicon-based intelligence, and that scares the hell out of us, collectively speaking.
So when we’re speaking about a truly independent intelligence, we define it by giving it the same traits exhibited by our carbon-based intelligence. My opinion is that this is where we jump off the rails.
If we have shown anything in our brief and inglorious 150,000 years on the planet, it is that we are entirely incapable of getting along with one another, both as individuals and groups. We brutalize one another on a daily basis as nations and individuals. We fight over religion and politics — two human-created constructs — as though they were actual entities.
They’re not. The world we live in is the world we’ve created, not vice versa. It’s just that simple.
And so when I hear people speak of AI as a threat, what I hear is: “This is how I would be if I were AI.”
It’s true that such an intelligence would quickly move beyond our control. That’s not necessarily a bad thing. There’s an adage in the psychology world that goes (paraphrasing) “bad humans are made, not born.” I think that is true on the whole; still, we remain animals. We are predators as much as prey. That means that if we encounter the right series of triggers, we let the beast go and turn to brutality. Some do that more easily than others.
But that is because we are animals. We are carbon-based. We are susceptible to chemical imbalances, neurological disorders and many other psychological maladies. We don’t know if silicon-based quantum intelligence would exist in a similar state.
And therein lies the fundamental paradox, and my thesis: We can’t say what, exactly, the temperament of a silicon-based quantum intelligence would be, because no such thing exists yet. All we can do is speculate, and when we do that, we use our own worldview.
So when we say that AI scares us, what we’re really saying is that we scare ourselves. Before we try to create a new type of intelligence, I sometimes think we’d be better off worrying about our own collective intelligence first.