Whether you’re aware of it or not, artificial intelligence is all around us, and what constitutes “artificial intelligence” is becoming as definitionally nebulous as … well, whatever. Should we call your thermostat artificially intelligent because it regulates the temperature in your home without being told what to do? Is voice recognition artificial intelligence? How about the chess game on your smartphone? Common vernacular might say no to some of these things, while AI programmers might disagree and insist that yes, some of these things require a certain degree of artificial intelligence programming – and yet, these are all very different from what we mean when we talk about Ex Machina – or maybe even HAL or Number 5.
AI is already taking us by storm and improving virtually every area of our lives: science news agencies recently announced an artificial pancreas to regulate insulin for diabetics, the first AI attorney is working at a fancy law firm, Japan is using AI to capture criminals, and AI machines are about to invade our fast food dining experience: “thank you for your order at KFC; I’m guessing by your choices, you’d probably enjoy a side of fries – is that right?” At the end of the day, these types of AI seem harmless enough – and some are so helpful, they’ll be saving lives – so why the fuss over the “danger” AI represents? Stephen Hawking is just an alarmist, right? That petition about disallowing AI until we figure out how to keep them from exterminating our race was just a PR stunt, right? And that AI that does interviews – she was teasing when she agreed to help annihilate humans … right?
These are interesting questions that news reporters, science magazines, and most bloggers largely sidestep because the answers are … well, philosophically fraught and scientifically uncertain. Why? I believe it boils down to two things: unknown factual discoveries and emotions.
AI Factual Discoveries
Last week, we heard about an AI machine that can guess your age by analyzing your blood. Yup. It only needed to analyze some 60,000 samples to do that, and it is correct around 80% of the time. Why is that significant? How long would it take a human to figure that out? Scratch that. How long would it take you to analyze 60,000 samples of anything? The manpower involved in tasks like that is enormous, but an AI machine can do it in a small fraction of the time. And since they can save us literally hundreds of years of research, these machines could help launch us into new technologies and scientific discoveries we haven’t even begun to imagine.

That sounds pretty exciting – I’m pumped – but it also leaves us with a great degree of uncertainty. What will AI learn? What if it learns something dreadfully disconcerting? What if it determines that in three hundred years, the human race will annihilate itself by overusing resources or by engaging in violent wars over territory? Sure, we’re happy if it learns how to cure cancer or how to perfect gene editing so we can all be tall, tan, fit, and bright as the nearest sun, but what if it delivers terrible news? How will we handle that? Of course, that’s the foundation of plot lines involving HAL and Ex Machina (thank you, Number 5, for lightening up the potential of AI robots), but we could also run into the conundrum of deciding whether the AI’s information rests on sophisticated programming errors that lead to some variation of false correlations – you know, like the one that finds that the frequency of cheese consumption is directly correlated with the number of people who die by entanglement in their bed sheets, or the one that points out that rising sour cream sales track rising biking accidents. What then?

The fact is, we don’t know. AI is ultimately dependent upon human programming. If we do that well enough, it might be able to improve its own programming and reprogram itself, but that just seems a little too sci-fi, doesn’t it? Not really. This is a realistic possibility by the end of the century. So, in the long run, what AI will be able to do is entirely unknown simply because AI will help us discover things we couldn’t discover on our own. And when it discovers these new things, what will it do with that information? If it is programmed without any emotions, it will simply spit out the information for our use. That sounds great. But if it is programmed with emotions, that changes the playing field entirely.
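To make the false-correlation worry concrete, here is a minimal sketch in Python; the yearly figures are invented for illustration, not real data. Two series that have nothing to do with each other can still correlate almost perfectly, and a machine that only hunts for patterns has no built-in way to tell that apart from a genuine discovery.

```python
# A toy illustration of a spurious correlation: two made-up yearly series that
# have nothing to do with each other can still correlate almost perfectly.
# The numbers below are invented for illustration, not real statistics.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical "cheese eaten per person" and "bedsheet deaths" by year.
cheese = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 33.1, 32.7, 32.8]
bedsheet_deaths = [327, 456, 509, 497, 596, 573, 661, 741, 809, 717]

print(f"correlation = {pearson(cheese, bedsheet_deaths):.2f}")
# Prints a correlation well above 0.9, yet nobody believes cheddar causes
# bedsheet strangulation. An AI trained only to find patterns cannot tell
# this apart from a genuine causal discovery without more careful design.
```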
Emotions
Love, hate, jealousy, envy, happiness, ecstasy – these are emotions we don’t fully understand. That is, we don’t know where they come from, how they can be biologically controlled, or how much they really influence our decisions. That said, with all of the amazing breakthroughs in brain studies, it isn’t difficult to foresee that we’ll largely answer these questions in the next few decades. That is, we’ll at least know how to extract those codes from the brain. And if we can take those codes from the brain, we can put them in AI. And that is where the playing field really changes. It’s one thing for an AI to learn which country will win a war in a couple of decades if preemptive measures aren’t taken to stop it. It’s another thing for that AI to report that information. And it is quite another thing for the AI to determine what it should do with that information – that requires an emotion, a motivation to act beyond its simple programming responsibilities of analyzing data and spitting out results.
To be fair, adding emotions to an AI could be useful in therapy sessions. It could make dolls more fun to play with, and it could make KFC ordering kiosks more enjoyable by finding opportunities to offer situational jokes that make everyone more comfortable and happy – which might make us buy more chicken as well! With results like that, it isn’t difficult to imagine billions of corporate dollars being used to advance AI to its greatest potential. AI could learn how to regulate our body chemistry in such a way as to make us happy and therefore more productive … and maybe if it programmed itself with those happy feelings, it might be more productive as well … at least, it might be more proactive. Some people think it is unlikely that AI will ever be programmed with emotions, but I doubt those folks have really thought this through. How long are we going to resist the urge to find out what a computer might say or report when programmed to love its user? Or how it might react to being programmed with the feelings that come with a first kiss? Get real. Thousands of teenage boys will be dying to find out what the computer will say, and thousands of romance novelists are going to want its opinion about the most romantic sequences it can create. And it’s only a matter of time before that information is placed in a robot or cyborg.
The real question is: why should we care if a computer is programmed to fall in love with its user? Why should we care if virtual reality machines programmed with emotion interact with us in the next age of video games? We shouldn’t. Well, unless of course the AI is programmed with a protocol for self-preservation, for anger, for bigotry, or for hatred. That’s when things get really messy, and that is what incites bright thinkers like Stephen Hawking to fear the future of AI. Maybe you’re not interested in programming AI to figure out what it will say if it is programmed with anger, but it’s all but guaranteed that psychologists and psychiatrists are going to want to study this very carefully. And if they are prohibited by regulation from doing so, every teenage boy addicted to Diablo or Grand Theft Auto will want to give it a try. And unregulated countries? The short answer is this: someone is going to find out. Just look at how pathetically cloning and stem cell regulations fared at stopping research. Sorry, socialists – we can’t control the entire world, just our corner of it. And when we find out what happens when AI is armed with emotions, who is going to purchase the technology? Sooner or later, the answer is: the highest bidding country or the highest bidding criminal. In short, the question isn’t whether we can avoid some apocalyptic takeover by AI via government regulation; the question is how we are going to prevent AI from snowballing into something devastatingly catastrophic. And really, the answer doesn’t have to be that scary. One blogger suggests that swarm intelligence may be the solution. It’s an interesting suggestion. Surely, we’ll be able to program benevolent AI, network them together to share their research and analyses, and let them find a solution … after all, they’ll be hundreds of times more intelligent than us, right?
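For the curious, here is what the very simplest flavor of that idea might look like; this is a toy sketch under my own assumptions (a handful of hypothetical agents on a ring network averaging their estimates), not anyone’s actual proposal for safe AI.

```python
# A minimal sketch of one simple flavor of "swarm intelligence": networked
# agents repeatedly average their estimates with their neighbours until the
# group converges on a shared answer. The agents, their initial estimates,
# and the ring-shaped network are illustrative assumptions, not a real design.

def consensus_round(estimates):
    """Each agent replaces its estimate with the average of itself and its
    two neighbours on a ring network."""
    n = len(estimates)
    return [
        (estimates[(i - 1) % n] + estimates[i] + estimates[(i + 1) % n]) / 3
        for i in range(n)
    ]

# Five hypothetical agents start with very different risk assessments.
estimates = [0.10, 0.80, 0.30, 0.95, 0.40]

for round_number in range(20):
    estimates = consensus_round(estimates)

# After enough rounds every agent holds roughly the same value; no single
# agent's view dominates, which is the appeal of the swarm approach.
print([round(e, 3) for e in estimates])
```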
And here’s another thought. Recently, quantum computers outperformed regular computers. Someone developed a video recorder that works via mind control from a contact lens. Scientists put the sum of human knowledge onto a tiny disk the size of a contact lens. What are the chances that we’ll be equipped with our own AI quantum chips in the next hundred years or so? And what if we synced all of those chips via a vast network so that we could each have complete access to the sum of human experience at the speed of our thoughts? Is this pushing your brain too hard? If so, you haven’t been following breaking science news very carefully lately! Individualized AI quantum chips may not be the medium by which we use swarm intelligence to defeat the attack of nefarious, bigoted AI drones, but they illustrate a simple idea: we don’t have to solve all of the potential problems AI may pose to humanity; we just have to have the foresight to predict the types of problems it may pose and always be prepared to stay a step ahead of the competition. Pardon me for noticing, but that’s starting to sound like capitalism. Did I just conclude that capitalism is the solution to warring AI cyborgs, or was that a logical fallacy? Let me know in the comments below.