Can a Computer Think Like a Pilot?
“Whatever consciousness is, it’s not a computation or something that can be described by a physical computation.” —Sir Roger Penrose
So, will I be replaced in the cockpit by a black box or some Terminator-style robot? Will I live under a sky full of aircraft flown by collections of plastic and wire, without a human brain standing by in the right seat or on the ground?
Spoiler alert: Not now, not ever.
Imagine you’re a computer programmer who’s decided to write a set of rules (a program) for a machine that will drive your car from Austin to College Station on Texas Highway 21. Since you are a top-notch programmer who never makes mistakes, your plan is to ride in the back seat and catch up on your sleep while the computer does the driving. You’ve outfitted your car with the fastest computer ever made and every possible sensor inside and outside the vehicle. It’s a little past midnight now and you’re ready to go.
Now, Highway 21 is one of those five-lane roads with two lanes going in each direction and a center lane for turning left. Because of the rules that you’ve programmed, your computer knows that it’s OK to drive in the two right lanes, not OK to drive in the two left lanes, and it’s sometimes OK, under an additional set of rules, to drive in the center lane.
A few miles this side of the town of Old Dime Box (a real town), some highwaymen with bad intentions have arranged a line of plastic traffic cones across the center lane and the two lanes on your side of the highway. As your car approaches, it senses the obstacles in the lanes and slows to a stop. The sensors report that there are no cones in the two left lanes, but the programmed rules say those lanes are a no-go zone, so there you sit. Things soon go from bad to worse when you notice a group of guys with clubs emerging from the dark behind your car.
Well, it turns out that you’re not the ace programmer you thought you were.
This is the point in the trip where all of the things that your computer doesn’t know, and can’t know, may just get you killed. Chief among those things are the likely motives of a group of guys who have placed highway cones in front of your car and are now approaching armed with clubs.
Thank goodness, you didn’t remove the steering wheel or the gas pedal. That’s because you’re about to do something your computer will never be able to do—you’re going to ad-lib yourself out of trouble as you jump in the front seat, roll over the flimsy cones, and leave the highwaymen behind.
There are problems that computers cannot solve. Even if you work with the cleverest programmers, even if you have infinite time and energy and more computing power than could possibly ever exist in the universe, some things will always be impossible for a computer to work out. The explanation would take two or three more articles, so you’ll just have to trust me. But if you don’t trust me, look up the halting problem, proved unsolvable by Alan Turing.
Without getting too far into the weeds, it’s important to remember that there are structural differences between human and artificial intelligence (AI). What we call AI today is built around a neural network, a computer algorithm that imitates certain functions of the human brain. It contains virtual neurons arranged in layers, and each connection between neurons carries a weight that determines how strongly information passes on to the next layer. The AI “learns,” so to speak, by making mistakes and adjusting those weights to come up with more accurate answers. In short, neural networks are solving an optimization problem. No one knows in detail how humans learn, but that’s definitely not how it works. AI needs a lot of training with a lot of carefully prepared data, which is very unlike the way human intelligence works. Neural nets do not build models of the world; instead, they classify patterns, and pattern recognition can fail when only small changes are made to the pattern.
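For the curious, the “optimization problem” at the heart of a neural network fits in a few lines of code. The sketch below is a toy, not anything flying in an airplane: it trains a tiny two-layer network (the layer sizes, learning rate, and XOR task are all my illustrative choices, not anything from a real avionics system) by making predictions, measuring the error, and nudging the weights to shrink it.

```python
import numpy as np

def sigmoid(z):
    """Squash a value into the range (0, 1), like a neuron's activation."""
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(steps=5000, lr=1.0, seed=0):
    """Train a 2-4-1 network on the XOR pattern; return (initial, final) error."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights
    errors = []
    for _ in range(steps):
        # Forward pass: information flows layer to layer through the weights.
        hidden = sigmoid(X @ W1 + b1)
        out = sigmoid(hidden @ W2 + b2)
        errors.append(np.mean((out - y) ** 2))  # how wrong were we?
        # Backward pass: compute how each weight contributed to the error,
        # then adjust every weight a little in the direction that reduces it.
        d_out = (out - y) * out * (1 - out)
        d_hid = d_out @ W2.T * hidden * (1 - hidden)
        W2 -= lr * hidden.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0)
    return errors[0], errors[-1]
```

Run `train_xor()` and the final error comes back smaller than the initial one. That’s all the “learning” is: minimizing a number. Nothing in the loop builds a model of the world, which is exactly the limitation described above.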
Most important, unlike your brain, AI is not good at generalizing what it learns from one situation and applying that past experience to formulate solutions to the current problem.
Now, think of a time you’ve called on your experience and creativity to get you out of a “situation,” and then try to write a set of instructions for a computer to analyze that situation and do what you did. Without even realizing it, you are ad-libbing every time you drive your car, every time you fly your airplane. Ad-libbing is the process of real-time creativity and problem solving, and, like art, intuition, and imagination, it is inextricably tied to consciousness, the most mysterious aspect of our lives. Where do ideas really come from? Until we know that, we will never be able to define creativity’s computational rules.
Now that I’ve buried AI, let’s resurrect it to do the work that it’s really good at, which is thinking fast. Neural networks running on a standard computer perform around 10 billion operations per second, while real human neurons fire no more than about a thousand times per second. That speed advantage means the computer excels at extrapolating from data that doesn’t have well-understood trends. Perhaps the point of artificial intelligence is not to make it all that similar to human intelligence. After all, the most useful machines we have—like airplanes and cars—are useful exactly because they do not mimic humans. Instead, we should concentrate on building systems and machines that can do tasks we are not good at.
Every year that goes by brings remarkable new technologies, made possible with the help of AI, making our airplanes safer to fly and, by extension, more useful. I recently began flying an airplane with synthetic vision. While on an approach to minimums, a representation of the runway appears on the primary flight display long before we are low enough to see it out the window. And now comes even more help for the pilot, such as emergency descent capabilities and Garmin’s Autoland. The benefits of these technologies and the amazing stuff that will soon follow are obvious, and the impulse to embrace them sooner rather than later is a natural reaction. But the ceding of responsibility to an artificial intelligence has to carry with it advantages that justify the risks, of which there will always be plenty.
From a system developer’s standpoint there are three key risk areas that ought to make for a few sleepless nights. The first, and perhaps the most obvious, is any system’s susceptibility to hacking and malware. The ability to manage this risk will depend on the AI’s ability to recognize atypical threats arriving from unorthodox directions not conforming to any known patterns. As I mentioned earlier, this is not one of AI’s strong suits.
The second is the risk of creating a flawed instruction set that causes a misalignment between the pilot’s goals and the goals of the AI. To stretch a point, a command like, “Fly me to Amarillo as quickly as possible”—without carefully defining the rules of the airspace and limitations of your aircraft—might leave behind a trail of violations and broken engine parts.
And lastly, the most intractable of the risks: artificial intelligence bias. AI bias can be introduced consciously or unconsciously by the programmer or by the inclusion of flawed data. This problem is real and widespread, often creating unintended consequences. AI bias is a real human phenomenon, too, happening any time a person’s flawed judgment causes him or her to override a system’s conclusions. It happens to me every time I ignore my Waze app and take a different route.
As strange as this sounds, we’ve still got a long way to go toward a place we will never reach, but the trip alone is well worth the rewards. It’s a great time to be a pilot. Every day we replace systems and tools with better ones—except for that one tool we can never replace, the uniquely inventive brain of the human pilot.