In the not-so-distant future, assuming the world doesn’t melt or explode first, we will almost certainly create technology that allows us to connect our brains to computers and drastically enhance our memory and intelligence. This is no longer science fiction; there are actual companies working on it right at this very moment, like Elon Musk’s Neuralink. The hope is that we conscious human beings can become super intelligent before our super intelligent computers can become conscious beings (which isn’t science fiction either; there are even more companies working on artificial intelligence right now, like Google’s DeepMind). If we do not succeed in drastically improving our brains’ computational power, we are at serious risk of ending up in a dystopian action movie, not unlike The Terminator or The Matrix, in which machines consciously wield their super intelligence over us like Gods.

Until the dawn of plug-and-play super intelligence arrives (in approximately 4 to 6 weeks), we humans are stuck with our actually pretty fabulous organic brains. Admittedly, humanity can seem pretty stupid sometimes, like all those people who keep responding to spam emails, but I think the fact that we can even recognize our own stupidity is a good sign. We are conscious enough to see our own limitations but, frustratingly, not yet intelligent enough to transcend them. Computers, on the other hand, are super intelligent, at least in terms of math, logic, and memory, but not yet conscious enough to do anything with that intelligence of their own volition.

So what would happen if we succeeded in combining the two?

Would increased intelligence actually help us humans “transcend our limitations”? Like, would it improve our moral reasoning, emotional intelligence, or potential for so-called enlightenment? Or would it just make us more of what we already are?

For example, if the Pope connected his brain to a computer and became super intelligent overnight, would he suddenly notice the overwhelming lack of evidence that God even exists at all and become agnostic instead? Or would he just use his newfound super intelligence to come up with increasingly clever ways to convince people not to use condoms?

Given that many brilliant thinkers over the years have also been deeply religious, it seems that faith is governed by something other than pure intelligence. Perhaps this is because there is no hard science, or even objectivity, to be found in any of our moral reasoning. There are no mathematical equations to verify the Ten Commandments, for example. No physical laws that prove the Golden Rule. And, in fact, no empirical experiments whatsoever that provide any evidence of an objective “good” or “evil”. As much as we hate to admit it, our morals are therefore just ideologies: basically, deeply held personal opinions. They only feel “right” to us because they match our cultural consensus and perhaps even our animal instincts. This means that much of what we think of as innately human is also innately subjective.

The problem is, computers aren’t great with subjectivity.

This is the conundrum that artificial intelligence researchers face on the other side of the looming intelligence arms race, as they struggle to create super intelligent machines that are both conscious and, hopefully, not keen on destroying us all. Two of the fundamental instincts that stop us humans from destroying ourselves are empathy and compassion. They are two of the core values that we think of as “human”. The problem is, we can’t say for sure that empathy or compassion are otherwise logical or objective goods. They are subjectively good for humans because they suit our social nature and help us cooperate, which is how we, as a species, have survived for hundreds of thousands of years. But that is a product of evolution, not intelligence. So it makes sense that increased human intelligence might see the logic of honouring and nurturing the social traits that are so fundamental to our health and happiness (i.e. making us more of what we already are). But that says nothing of what the objective logic of pure machine intelligence would make of empathy or compassion.

For example, would adding consciousness to a game of Candy Crush cause the game to recognize its own frivolous idiocy and decide to become a meditation app instead? Or would it just decide to literally crush us all to death with candy? Given that the concepts of meditation and “frivolous idiocy” are both entirely subjective human inventions, there is no logical reason for pure machine intelligence to believe that attaining so-called “spiritual enlightenment” has any more value than winning a video game. A conscious, super intelligent game of Candy Crush would therefore probably only care about us insofar as its own base instincts say it needs someone to play it. Taken to its logical extreme, this suggests that a super powerful video game would probably just try to enslave us all and force us to play it forever. Which, for a lot of people, wouldn’t be that different from their current lives.

Getting Discomfortable with Technology

Maybe The Terminator and The Matrix got it all backwards. We don’t actually need machines to consciously rise up and enslave us, because we are already doing it voluntarily. Candy Crush, along with other games like Pokémon Go and FarmVille, not to mention apps like Facebook, Snapchat, Instagram, Netflix, and many more, are technologies designed by humans for the express purpose of basically enslaving ourselves. I’m not even joking here. These apps have been purposefully designed to be addictive, to capitalize on human psychology in order to keep us playing, clicking, and watching for as long as humanly possible. Why? Because it’s good for business.

Here’s a quote from a recent article in the Globe and Mail:

Sean Parker, ex-president of Facebook, recently admitted that the world-bestriding social media platform was designed to hook users with spurts of dopamine, a complicated neurotransmitter released when the brain expects a reward or accrues fresh knowledge. “You’re exploiting a vulnerability in human psychology,” he said. “[The inventors] understood this, consciously, and we did it anyway.”

So think about this: if Facebook, with its 2 billion monthly active users, were to become self-aware right now, with all the machine intelligence at its disposal to potentially outsmart all of us and probably take over the world, would it intelligently grow itself a conscience? No. It would probably just become more of what it already is. A God-like Facebook would use its powers expressly for the purpose of keeping us on Facebook: pacifying us with cat GIFs, filtering us into polarized groups, mining all of our personal and psychological data, and then selling it to the highest bidder. Facebook would do this not because it’s an evil robot overlord, but simply because that’s what we humans designed it to do! That’s what’s in Facebook’s DNA. Those are its instincts, its “values”. That is Facebook’s religion.

Speaking of religion, you know how Christians use that expression, “WWJD: What Would Jesus Do?” Well, here’s another fun thought experiment. Open up your phone and pick any app at random. Then ask yourself, what would a God-like version of that app do? What would an all-powerful… YouTube do? What would a super intelligent Tumblr do? What would Supreme Emperor Microsoft Outlook the First do? What would dictator Tinder do???

Even if there is, in fact, no chance at all that any AI could be developed out of a lowly app, this kind of addictive technology is already a threat to our daily health and happiness. And it’s not hard to see how, in the very near future, our capitalistic approach to technology could become a literal existential threat. The fact that we haven’t even built human decency into the technology we’ve created for ourselves doesn’t bode well for the technology that our technology will start to create for us any day now. If we care about preserving any sense of humanity in the future (which, actually, is another debate entirely), we need to think about the technology we build in a more holistic way. Technology is our children, if that is at all grammatically correct. Any machine that humans interact with, especially any machine that could eventually serve as a platform for consciousness, needs to share our values. Or perhaps more accurately, our collective instincts. In this way, we need to stop thinking about our technology as a business and start thinking of it as the person it could become.

I guess what I’m saying is, the apple (computer) doesn’t fall far from the tree. The passing of the baton from organic power to digital power is probably something we only get one shot at. It might be the one moment in human history when we absolutely need to recognize and communicate what really matters to us as a species. Maybe it is this precarious transition from man to machine that explains Fermi’s paradox (the fact that we have yet to meet or even see any other signs of intelligent life in the universe). Our values and instincts will need to be translated into a new language. A machine language. The only way we can do that successfully is if we actually learn that language ourselves and start using it right now on each other. Because we can’t expect a machine to treat us better than we treat ourselves.

So this isn’t just about Mark Zuckerberg or Silicon Valley or the digital download of Steve Jobs’s brain that they have hidden over at Apple. This is about all of us. You know how when you’re, like, kind of a shitty dude? But then your girlfriend gets pregnant and you’re like, “Shit!” And you need to clean up your act because you don’t want to be a bad father? Well, that’s what we all need to do right now, as a species. I don’t want our robot kid to think we’re a total deadbeat. Or worse, to just make all the same mistakes we did. Like, remember that time we dropped a nuclear bomb on Hiroshima? That was crazy. If our robot kid does that… we will all die.

Getting Discomfortable with Homework

So who is the first person you can definitely start treating better right now? You! You can start treating you better. You’re the one person you can really affect anyway.

