The future of autonomous aircraft

Catch up with Professor Ella Atkins, the director of U-M’s Autonomous Aerospace Systems (A2SYS) Lab, and hear from her about autonomous flight systems and their implications.

Imagine a world of aerial delivery drones bringing goods right to your door, small air taxis carrying fewer than six passengers above cities, supersonic airliners crossing continents and oceans, and sixth-generation fighter aircraft patrolling battle zones – all without the intervention or even supervision of a human pilot. That may sound like the far-off future, but it’s already arriving thanks to autonomous flight systems that may one day make pilots an optional extra. We recently caught up with Professor Ella Atkins, the director of U-M’s Autonomous Aerospace Systems (A2SYS) Lab, and asked her about this remarkable technology and its implications.

Ella Atkins, professor of aerospace engineering, shows off one of her quadcopters at the new M-Air advanced robotics facility on North Campus of the University of Michigan in Ann Arbor, MI on March 28, 2018.

“If you look back at the history of aviation,” says Atkins, “there used to be three people in the front of major commercial transport aircraft. Along with the pilot and the co-pilot was an engineer, who looked after the engines and other flight systems. At that time, there were planes running out of fuel or otherwise having mismanaged systems because the engineer either wasn’t paying attention or made a calculation error. Now, you hardly ever hear of that because the computers handle that sort of problem really well.

“Then came instruments and radios to help guide people through the sky. Back in the 1920s and 1930s, people would get lost in bad weather or at night without clear land markers, so a number of navigation aids were brought into play, followed by air traffic control.

“Along with all of this came autonomy. That’s progressed to the point where the pilot mainly supervises the software that controls the plane. Now, we’re going to the next step where the software backs up the pilot to make sure decisions are safe, whether it’s commercial, military, hobby, giant aircraft with several hundred people, or tiny unmanned aircraft. To do this, we study a lot of different topics in emergency flight management, in looking at backup data streams, and understanding whether they agree and what to do when they don’t agree.”
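The cross-checking of backup data streams that Atkins describes can be illustrated with a simple median-voting scheme. This is only a sketch of the general idea, not the lab’s actual software; the sensor names, values, and tolerance below are hypothetical:

```python
def fuse_redundant(readings, tolerance):
    """Median-vote over redundant sensor readings and flag disagreement.

    readings: list of (name, value) pairs from independent sources.
    Returns (value, healthy), where healthy is False if any source
    deviates from the median by more than `tolerance`.
    """
    values = sorted(v for _, v in readings)
    median = values[len(values) // 2]
    healthy = all(abs(v - median) <= tolerance for _, v in readings)
    return median, healthy

# Three independent altitude sources (feet, hypothetical values) that agree:
alt, ok = fuse_redundant(
    [("gps", 3010.0), ("baro", 2995.0), ("radar", 2990.0)], tolerance=50.0
)
# A faulty source stands out when it disagrees with the other two:
alt2, ok2 = fuse_redundant(
    [("gps", 3010.0), ("baro", 2995.0), ("radar", 150.0)], tolerance=50.0
)
```

When the streams agree, the voter passes the median value through; when they don’t, it still returns a usable value but flags the disagreement so higher-level logic can decide what to do next.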

What is autonomy?

Essentially, an autonomous machine is one that can, to one degree or another, carry out tasks without human supervision or intervention, either independently or as part of a data network. More than that, a full-blown autonomous system can override the human.

“The notion of automation is that there is some kind of data or service being provided, but the human is still overseeing, is still supervising, or managing, all of the different automation functions,” says Atkins.

With autonomy, the machine can say no to the human. It’s capable of making authoritative decisions and being independent.

Ella Atkins

There’s a sliding scale between automation and autonomy. At one end of the scale, the machine has no say about what is happening. In the middle of the scale, the machine has some control, but is mainly an adviser to the pilot. At full autonomy, the machine has full control and can even override the pilot’s decisions – if a pilot is involved at all.

The case for autonomy

But why have autonomous aircraft? According to Atkins, autonomy holds the promise of making air travel safer and expanding the capabilities of aircraft by augmenting onboard decision systems so they can make rational decisions in the face of unexpected or unusual events. This is done with computer models and algorithms chosen to address the complex problems such a task raises.

Atkins controls a quadcopter as it takes off inside M-Air on U-M’s North Campus. Photo: Levi Hutmacher

“What we do at the A2SYS Lab is a variety of projects that help the next generation of air and space vehicles become more capable and safer through more autonomy,” says Atkins. “There’s a lot of the community that thinks that autonomy is dangerous and scary and that it will invade our privacy. That’s not my goal.

“There are two classic examples of why we would want autonomy. One is in commercial transport. We need a plane to say ‘no.’ If a hijacker tries to fly into a skyscraper, there’s never an excuse for that. The second is once we get little drones that fly beyond the line of sight, but are often piloted by very inexperienced persons, we need them to be able to fly themselves safely.”

How does autonomy work?

In terms of aircraft, autonomy is a matter of equipping vehicles with an extensive suite of sensors, such as computer vision cameras, radar, lidar, and others, combined with constantly updated maps, GPS navigation, computers that can process enormous amounts of data in real time, and the software that gives them enough artificial intelligence to become autonomous rather than just highly automated. In addition, the aircraft needs data links to communicate with outside systems, such as air traffic control, without going through a pilot.

“One of the things that I have worked on is the reactive software and flight management system that prevents a plane from crashing,” says Atkins.

“It’s similar to the anti-crash automation that we have in some of our cars. If you translate that to the aircraft, it should never hit a mountain or a building.”

An example of why this is so important is the famous US Airways Flight 1549, which ditched in the Hudson River on January 15, 2009, after an A320 jetliner struck a flock of geese on takeoff from New York’s LaGuardia Airport. The strike damaged both engines and forced pilots Chesley Sullenberger and Jeffrey Skiles to make a water landing, saving all 155 people aboard.

“The engines knew they were having trouble,” says Atkins. “Engines are really advanced pieces of equipment. They have these things called FADEC – Full Authority Digital Engine Controllers – that have microcontrollers that take in lots of information on temperatures and pressures to try to optimize the performance of the engine. They were sending error messages back to the flight management system computer saying you’re getting low thrust even though we have this high throttle setting.

“The question was, what to do? One of the things you find in Sullenberger’s testimony is that he said, ‘I didn’t know if I could get back to LaGuardia. I just didn’t have the time to do those calculations myself and the computer didn’t do them for me, so I did the best I could by landing in the Hudson River.’


“The reality is any computer can calculate the glide path of an A320. In fact, if there had been two elements, the aircraft might have landed. One was emergency flight planning for the aircraft to immediately, within the first few seconds after engine failure, calculate that trajectory back to LaGuardia. That’s something my research group did as one of the first things I ever did in academia. 

“The second thing is the data link. Air traffic control relied solely on voice to get information about what was going on. If you listen to the cockpit voice recording of Sullenberger talking to air traffic control, New York is known for its high-quality air traffic controllers, but they just didn’t get it right away. When Sullenberger said ‘we had a bird strike, and we’ve lost engines,’ it took nearly 20 seconds. By then, the plane was coming down. It was too slow. A time-critical emergency landing needs a direct data link so that the engine data gets from the FADEC to the flight management system immediately, and then the emergency landing plan gets to the air traffic controller milliseconds later.”

The future


A multicopter preparing to demonstrate autonomous roof shingle nailing in M-Air.

“What we’re going to find is that these vehicles, although they look like helicopters, are going to become fully autonomous because we’re not going to have the well-trained pilots present today in our commercial jet cockpits.”

So, like many other technical advances, the human factor comes into play as well as the engineering factor. According to Atkins, not only will the public need to be persuaded to accept autonomous systems, pilots will need a much better understanding of how such systems work as well. Even aerospace engineers will need to incorporate computer science into aircraft designs that integrate classical aerodynamic, structural, and propulsion elements with the advanced sensing, decision, and communication systems required for autonomous flight.

Whatever the future of autonomy in aerospace is, it will certainly be interesting.


MEDIA CONTACT

Michigan Aerospace Engineering

Communications Team