Could tragedies like the crash of Germanwings Flight 9525 in the French Alps be prevented with technology?

I was recently asked by a friend to comment on this situation, as a technologist and futurist. His question was whether some form of computerized override could be built to prevent a suicidal act such as the one committed on this flight.

My answer is that it ultimately comes down to how much trust we are willing to place in computerized decision-making systems versus how much we place in humans.

Human pilots have a tremendous amount of context available to them as they scan their flight instruments to assess their surroundings, the weather, and other air traffic. Beyond the instruments, they also use their own eyes to scan the sky for hazards that, for a variety of reasons, the instruments may not detect.

To simulate that human capability, a computer would need a vision system capable of discriminating between an essentially unlimited variety of unknown objects, seen from any angle. It would then need to evaluate the relative hazard each object presents and make corresponding adjustments to the plane's trajectory to avoid it.

Finally, a system of this sort would have to place total trust in the readings from its instruments. A large array of sensors measures cabin pressure, outside pressure, altitude, heading, airspeed, ground speed, and other variables that feed the decision-making process. The unfortunate problem with such sensors is that they can fail, or be fooled by conditions such as ice build-up.

That is exactly what happened in the loss of Air France Flight 447, whose pitot tubes iced over, leaving the aircraft unable to measure its airspeed. Even the human pilots did not react correctly afterward, because they lacked proper contextual information.
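The usual engineering answer to unreliable sensors is redundancy with voting: carry several independent probes and trust their consensus. Here is a minimal sketch of that idea; the function name, the three-channel layout, and the 20-knot tolerance are my own illustrative assumptions, not any real avionics design.

```python
# A minimal sketch of mid-value voting across redundant sensors.
# Names and thresholds are invented for illustration.

def vote_airspeed(readings, tolerance_kts=20.0):
    """Return the median of three airspeed readings, or None if
    the channels disagree too much to trust any of them."""
    a, b, c = sorted(readings)
    # If the spread across channels is small, the median is a
    # reasonable consensus value even when one channel drifts.
    if c - a <= tolerance_kts:
        return b
    # Large spread: at least one channel is lying, and with only
    # three we may not know which. Declare the data unreliable.
    return None

print(vote_airspeed([272.0, 274.5, 273.1]))  # -> 273.1 (consensus)
print(vote_airspeed([272.0, 85.0, 90.0]))    # -> None (disagreement)
```

Flight 447 also shows the limit of this approach: when several pitot probes ice up at once, the failures are correlated rather than independent, and the vote itself is defeated.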

In the Flight 9525 crash, the co-pilot, alone at the controls, apparently caused the plane to enter a steep descent from its cruising altitude of 38,000 feet by dialing the autopilot's selected altitude down to 100 feet. This leads one to ask questions such as "is there ever a valid reason to set an autopilot to those parameters?" So perhaps some range-checking on human inputs to the system could be employed to prevent this sort of autopilot-based attack.
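To make the range-checking idea concrete, here is a minimal sketch of validating a selected altitude against the terrain below before the autopilot accepts it. The function, the terrain input, the margin, and the `on_approach` flag are all hypothetical simplifications; a real flight-management system is nothing like this simple.

```python
# A minimal sketch of range-checking an autopilot altitude input.
# All names and margins are assumptions made for illustration.

MIN_TERRAIN_CLEARANCE_FT = 1000.0  # assumed safety margin

def validate_selected_altitude(selected_ft, terrain_elevation_ft,
                               on_approach=False):
    """Reject a selected altitude that would fly the aircraft
    into terrain, unless the crew is on an approach."""
    if on_approach:
        # Low selected altitudes are routine near a runway, so the
        # check must be relaxed here -- which is exactly the kind
        # of loophole a determined pilot could exploit.
        return True
    return selected_ft >= terrain_elevation_ft + MIN_TERRAIN_CLEARANCE_FT

# 100 ft selected over Alpine terrain at ~6,000 ft would be rejected:
print(validate_selected_altitude(100.0, 6000.0))  # -> False
```

Even this toy version reveals the design problem: the check needs an accurate terrain database plus a notion of when low altitudes are legitimate, and every legitimate exception becomes a potential bypass.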

Assuming that the autopilot system rejected this input, would the pilot still be able to descend rapidly using manual control? If so, how would a computer know that the reason for the steep descent was not to avoid some other hazard? Or, how could the computer be sure the pilot wasn’t acting correctly based on his own observations, despite a failed sensor?
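These questions are hard precisely because intent is invisible to sensors. As a toy illustration of the ambiguity (every field and threshold below is invented), consider a rule meant to recognize a legitimate emergency descent after a cabin decompression:

```python
# A toy illustration: a deliberate dive and a legitimate emergency
# descent can present the same sensor signature, so a simple rule
# treats them identically. All fields and thresholds are invented.

from dataclasses import dataclass

@dataclass
class FlightState:
    descent_rate_fpm: float    # feet per minute, positive = down
    cabin_altitude_ft: float   # effective cabin pressure altitude
    selected_altitude_ft: float

def looks_like_emergency_descent(s: FlightState) -> bool:
    # A rapid descent with a depressurized cabin is the textbook
    # emergency profile -- but a failed pressure sensor, or a pilot
    # who depressurizes the cabin first, produces the same reading.
    # The rule sees the signature; it cannot see the intent.
    return s.descent_rate_fpm > 4000 and s.cabin_altitude_ft > 10000

legit = FlightState(6000, 14000, 10000)   # decompression response
attack = FlightState(6000, 14000, 100)    # same profile, hostile intent
print(looks_like_emergency_descent(legit),
      looks_like_emergency_descent(attack))  # -> True True
```

Both states satisfy the rule, so any automated override keyed to it would either block a correct emergency response or permit a hostile one.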

For these reasons, I suspect that a fully automated recovery system to prevent this kind of tragedy is unlikely, given current technology.

It’s worth noting that some well-respected luminaries in the technology community, such as Steve Wozniak, have recently come out against the rapid development of artificial intelligence, with Wozniak calling it “scary and very bad for people.” On the same topic, Bill Gates was quoted as saying, “I don’t understand why some people are not concerned.” Their positions are interesting, because the further development of highly capable artificial intelligence could also serve humanity very well. I see their views as pessimistic, but not totally far-fetched.

It will be interesting to watch the development of these technologies in the coming decades. Regardless of whether you believe they are beneficial or dangerous, the implications for humanity are fascinating. I can’t wait to see where it all goes, personally.
