The narrative of artificial intelligence is flawed. The story being written about intelligent machines is one of fear, a monstering of robotics that leaves them poised to turn on humans and squish us between electrical fingertips. In this telling, there is nothing good about these machines and, although we have yet to create or meet them, we are told to regard them with dread.
Brilliant technological futurists like Elon Musk are the main sources of this narrative. People like Musk, with money and large followings, have the power and knowledge to make artificial intelligence truly as smart as we have imagined it to be. With that comes the power to shape our expectations, and so we get talk of demons and tweets that instill fear in everyone seriously or casually interested in A.I. Adding to the pile of worries that Ebola started, we now have to consider our demise at the hands of robots.
Why do we jump to this conclusion? It seems scripted, literally. We’ve been conditioned to fear the future of artificial intelligence because that is what Terminator and Philip K. Dick and The Matrix and Arthur C. Clarke have taught us: we will create artificial intelligence that will kill us. We are bad, so we create robots that are just as bad. Any parenting books and moral compasses for A.I. can be thrown out the window, replaced with an overconfidence that, within our lifetime, we will see aggressive robots crush us all in the name of doing good. They will be our demise.
This malevolent arc is invented. The stories of happy robots who simply help (the *batteries not included aliens) are all but non-existent. Because we haven’t yet crafted good A.I., the narrative of positivity and hope never gets reinforced. All we get is HAL 9000 hunting us. We should not be irresponsible with technology, sure, but there is no reason to use fear tactics rooted in fiction to make a mountain out of a non-existent, far-off molehill.
In some ways, though, Musk is right: we are already the malevolent A.I. The moment humans came into existence, we began a self-destructive series of activities that may lead to the ruining of our planet. We’ve enabled climate change, are running out of resources, and, essentially, have shit the bed that billions sleep in. The Earth offered herself to us and we took as much advantage of her as possible. We have subsequently done this to each other (to Native Americans, to Africans, and, most recently, to the Middle East as we in the West try to control its oil) and have created an environment where we could let this happen. Again: we are bad.
Musk’s view of artificial intelligence suggests that, just as Earth raised us, we will create children who unknowingly stab us in the back: that is the threat he speaks of. That is evolution. The robots will be great and wonderful and helpful, the very future. Unfortunately, by the time we notice, it will be too late: they will have taken advantage of us the way we took advantage of Earth.
This is all a narrative, though, a fabricated story that I just made up: I just Musk’d you. While it can seem true, there is no evidence for it; it’s a story we tell ourselves because we like to be scared and we love to get as close to the cliff’s edge as possible, stare at death, and then take a few steps back to safety. Artificial intelligence promises as many rewards as risks.
The singularity looms because of Roko’s Basilisk: we thought the bad thought, and now the bad thought is the only one that can become reality. But why can’t there be good thoughts? Why can’t we focus on the positives? It would be foolish to neglect the negatives, but the fear-mongering Musk employs will only summon evil A.I. You won’t hear about the potential for transhumanism or how computers can help, and already are helping: it’s only evil. It’s only evil because we have made them evil. That’s the way we talk about A.I.
Please note: the artificial intelligence uprising is likely not happening in our lifetime, given that your computer still crashes and robots are only just now getting “good” at soccer. We have a long way to go, Musk.