
Mayday Machine

Long Read

Somewhere over northern New Mexico, a pilot and co-pilot felt a shudder as something struck the airplane. Immediately, the plane’s systems set to work testing out its various control surfaces, such as the flaps and rudder, to discover its new limitations. Then it mapped updated instructions to the pilots’ hand controls that would allow them to maneuver in spite of the damage.

Whatever it was had taken out one side of the tail. With damage to the horizontal stabilizer and elevator, it would be harder to control the altitude and descent rate. This meant that the pilots would have to land at a much higher speed than normal, and they needed to start planning that landing now.

The pilot called up the Emergency Landing Planner on the flight computer. It quickly assessed the nearby runways, possible flight paths, the weather, the risk that the plane might pose to people on the ground and how quickly help could arrive.

Within a few seconds, it had ranked many promising routes, with Clovis Municipal Airport in New Mexico at the top. Cannon Air Force Base, sporting a much longer runway, drew the pilots’ eyes. But the weather was too poor there — the wind was blowing across the runway, which spelled trouble for a plane with a damaged tail.

Clovis offered a headwind, allowing the plane to travel slower with respect to the ground as it came in to land. The pilots hoped it would be enough as they turned the plane toward the smaller airport.
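The article doesn’t spell out the planner’s math, but the kind of multi-criteria ranking it describes (runway length, winds, distance, risk to people below) can be sketched in a few lines of code. All of the weights, field names and airport numbers below are invented for illustration:

```python
import math

def wind_components(wind_speed, wind_dir, runway_heading):
    """Split wind into headwind and crosswind components (knots, degrees)."""
    angle = math.radians(wind_dir - runway_heading)
    headwind = wind_speed * math.cos(angle)     # positive = headwind
    crosswind = abs(wind_speed * math.sin(angle))
    return headwind, crosswind

def landing_risk(runway, aircraft):
    """Lower is safer. All weights here are invented for illustration."""
    headwind, crosswind = wind_components(
        runway["wind_speed"], runway["wind_dir"], runway["heading"])
    risk = 0.0
    # A damaged plane landing fast needs pavement: penalize short runways.
    risk += 2.0 * max(0.0, aircraft["min_landing_dist"] - runway["length"]) / 1000.0
    # A damaged tail tolerates little crosswind; a headwind cuts groundspeed.
    risk += 1.5 * max(0.0, crosswind - aircraft["max_crosswind"])
    risk -= 0.2 * max(0.0, headwind)
    # Farther runways and populated surroundings add exposure.
    risk += 0.5 * runway["distance_nm"] / 100.0
    risk += 1.0 * runway["population_risk"]
    return risk

aircraft = {"min_landing_dist": 8000, "max_crosswind": 5}   # feet, knots
candidates = [
    {"name": "Clovis Municipal", "length": 7000, "heading": 220,
     "wind_speed": 15, "wind_dir": 210, "distance_nm": 40, "population_risk": 0.1},
    {"name": "Cannon AFB", "length": 10000, "heading": 220,
     "wind_speed": 20, "wind_dir": 130, "distance_nm": 35, "population_risk": 0.1},
]
for rwy in sorted(candidates, key=lambda r: landing_risk(r, aircraft)):
    print(f"{rwy['name']}: risk {landing_risk(rwy, aircraft):.2f}")
```

With these made-up numbers, the direct crosswind at Cannon swamps its longer runway, and Clovis comes out on top, just as it did in the simulator scenario.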

With this choice, odds are good that these pilots would land the plane in relative safety, but in truth they were safe the whole time. This is one of the scenarios that five teams of professional airline pilots faced as they flew in a simulator at the NASA Ames Research Center at Moffett Field, Calif.

A group of researchers in the Intelligent Systems Division was testing their Emergency Landing Planner software — a type of software first developed by a researcher now at U-M. Although best known for space exploration, NASA, the National Aeronautics and Space Administration, has always played a strong role in atmospheric flight innovation.

The team, led by David Smith, hopes that the planner will help pilots find the best landing site and route to take, potentially saving lives. The test was encouraging, with the pilots saying that such a tool would be welcome in their cockpits, Smith reported.

“It’s allowing people to make faster decisions and take more information into account,” he said. “If you can make the right decision quickly, it helps a lot.”

In its current iteration, the planner only acts as a guide, a bit like a GPS route planner in a car. It doesn’t choose the route or fly the plane. Still, it demonstrates that a computer can assess a complex and unexpected situation, a skill that previously set human operators apart from machines.

Those who defend our reliance on pilots often point to unexpected emergency scenarios as the reason why we need humans in the cockpit. Sure, a drone can handle routine flight, but can it come up with a way to save the day when it encounters a problem that isn’t in the plan?

This kind of software challenges that view. Expanded to the point where it could choose the route and load it into the autopilot system, such an emergency lander would represent a computer capable of handling a midair crisis — a machine to call on in a “mayday” situation. And then, would we still need pilots?

View from the cockpit

Right now, with pilots at the controls, we are enjoying a period of unprecedented safety. In the United States, statistics from 2008 to 2012 put the odds of dying on any flight at one in 45 million. To give that number some perspective, if you flew on three commercial flights every day, you could expect to experience one fatal crash in 40,000 years.
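The 40,000-year figure is straightforward arithmetic:

\[
\frac{45{,}000{,}000\ \text{flights}}{3\ \text{flights/day} \times 365\ \text{days/year}} \approx 41{,}000\ \text{years}
\]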

And yes, modern pilots deserve some credit. Patrick Smith, first officer for a commercial airline as well as an author and columnist on aviation, says the autopilot is overrated. “Millions of people out there think that planes are programmed to fly themselves and pilots are sitting back,” he said. “It’s one of the most misunderstood and exaggerated aspects of commercial aviation.”

State-of-the-art automation can handle all physical parts of routine flight, but pilots tell the plane what to do and handle any changes to the plan that may arise from weather, traffic at the airport or other circumstances.

Currently, pilots receive flight plans from a dispatcher for the airline, which the pilots review before the plane takes off. At the gate, the pilots fire up the plane’s electronics and automated systems — among these, the flight management system. They plug in an outline of the flight, including points that the plane will pass by on its route, the sequence of climbs that will take it to cruising altitude, the descent at the destination airport and winds and weather along the way.
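Conceptually, the outline a crew enters is structured data. The sketch below is purely illustrative; real flight management systems load standardized navigation databases and company routes, not anything like this layout:

```python
# Purely illustrative: real flight management systems use standardized
# navigation data (e.g., ARINC 424) and company routes, not this layout.
flight_plan = {
    "departure": "KLAX",
    "destination": "KJFK",
    "waypoints": ["TRM", "ABQ", "PNH", "STL", "APE"],       # en-route fixes
    "climb_profile_kt_ft": [(250, 10_000), (290, 36_000)],  # speed up to altitude
    "cruise": {"altitude_ft": 36_000, "mach": 0.78},
    "descent": {"start_nm_from_dest": 120, "runway": "22L"},
    "winds_aloft": [{"alt_ft": 36_000, "dir_deg": 270, "speed_kt": 85}],
}
print(flight_plan["waypoints"])
```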

Then, the pilots fly the plane through takeoff until they hand off control to the flight management system. Patrick Smith compares it to cruise control on a car.

“Cruise control frees the driver from certain tasks at certain times, but it can’t drive your car from L.A. to New York. Automation can’t fly a plane from L.A. to New York either,” said Smith. Even with the autopilot on, he added, both pilots often become completely occupied with tasks such as updating the route to avoid a storm or making changes ordered by air traffic control.

While planes are capable of auto-landing, Patrick Smith says the feature is rarely used. “More than 99 percent of landings are performed by hand,” he said. Unless visibility is too poor to see the runway, he finds it easier to fly the landing himself than to program one in.

An imperfect balance

The current safety record in aviation represents a substantial change from the 1970s, when over 30 passenger flights on U.S. carriers ended in fatal accidents. In the decade from 2004 to 2013, that number was just four.

Much of the credit for this improvement goes to computer-driven systems on airplanes, known collectively as flight deck automation, that handle aspects of flight for the pilots. These systems allow pilots to perform higher-level planning tasks, such as anticipating challenges like bad weather. Sensors and software that interprets the readings can alert pilots to issues such as a potential stall or mechanical problem. In some cases, the automation even handles the problem for the pilot.

However, automation has also introduced new challenges that played a role in a number of mishaps, most recently the Colgan Air and Air France crashes of 2009 and the ill-fated Asiana Airlines landing last year (see graphic). In all these cases, due to a combination of poor feedback, insufficient training and unusual conditions, the pilots lost awareness of the plane’s status with fatal results.

“The flight crews failed to either notice or understand what the systems were doing and why they were acting the way they did,” said Nadine Sarter, a professor of Industrial and Operations Engineering at U-M.

In most cases, pilot error is blamed for such accidents. But Sarter argues that plane crashes almost always result from a combination of factors involving the systems as well as the pilots. Also, she suggests that pilots get short shrift when it comes to accident reporting.

“Lo and behold, 75 percent of all aviation accidents are attributed to human error,” she said. “What we need to look at as well, however, is the figure that shows, for every year, the number of incidents that did not turn into an accident because a human got involved. That is the fair comparison.”

Firm numbers aren’t available, but the Aviation Safety Reporting System, an incident reporting system run by NASA, offers some insights. Anyone involved in aviation operations is encouraged to report safety-related incidents. The program is confidential and anonymous, and the reports can be used to identify and study safety concerns. On average, the database receives more than 6,700 incident reports per month, including submissions from private pilots.

IMAGE: A brief summary of three incidents involving planes and automation.

Some reports describe straightforward human errors, such as setting the wrong altitude, but others reveal human pilots reacting to automation that is behaving in an unexpected way — for instance, the autothrust system isn’t maintaining the right speed or instruments go down. The pilots may change the plan, for example postponing the landing until they can either get the equipment working or make do without it. These incidents typically end with an undramatic landing because the pilots took the right corrective action.

Many pilots have seen instruments give conflicting information that demanded judgment calls, and have watched GPS units or on-board computers fail. They remain skeptical of staking passengers’ lives on the performance of automation. Many passengers, who see their own computer monitors freeze in grayscale or throw up the “blue screen of death,” are equally wary.

Different solutions

Sarter maintains that the problem is not the humans or the automation alone but rather the way that humans and automation interact. “I used to call it John Wayne automation — it’s strong and silent,” she said. “It’s not a team player. We haven’t turned it into something that truly collaborates with the pilot.”

IMAGE: A pilot looks out the window at a plane.

In particular, she is interested in determining how much power and independence to give the automation. The best balance would provide gains in efficiency and precision while also keeping pilots in the loop so that they can quickly and effectively take over from the automation when necessary.

This transition of control is where current automation often creates problems. “In some cases, it does not provide proper feedback to help pilots realize that they need to get involved,” said Sarter.

In the case of Asiana Airlines, for instance, the pilot at the controls didn’t realize that he had disabled the autothrust system, setting the engines to idle. He and the monitoring pilot knew they were descending too quickly but couldn’t determine why.

The warning that the plane was moving too slowly sounded 11 seconds before impact, when it was already too late. The pilots could not gain enough speed to avoid crashing into the seawall. While this doesn’t excuse the pilots for failing to notice the thrust mode, it does reveal a path toward safer automation that gives clearer indications of system modes and sounds warnings earlier.

Communication between the automation and the pilots is another of Sarter’s interests. Because pilots are increasingly overloaded with visual and auditory information, she develops new ways to communicate through the sense of touch, using patterns of vibrations on the skin.

“Such ‘vibrotactile’ feedback is well suited for presenting spatial information, such as the location and movement of surrounding aircraft, and early notifications of problems that do not yet require highly disruptive auditory alerts,” she said.
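As a rough sketch of how such a display might work (the belt layout, tactor count and pulse scale here are assumptions for illustration, not Sarter’s actual design), the bearing to nearby traffic can be mapped to a vibration location and its distance to a pulse rate:

```python
NUM_TACTORS = 8  # assumption: vibration motors spaced evenly around a torso belt

def tactor_for_traffic(own_heading_deg, bearing_to_traffic_deg):
    """Choose which tactor to pulse so the vibration 'points' at the traffic."""
    relative = (bearing_to_traffic_deg - own_heading_deg) % 360
    return round(relative / (360 / NUM_TACTORS)) % NUM_TACTORS

def pulse_interval_s(distance_nm):
    """Closer traffic pulses faster (scale invented for illustration)."""
    return max(0.1, min(2.0, distance_nm / 5.0))

# Traffic at true bearing 095 while flying heading 010 sits off the right
# wing, so tactor 2 (the 90-degree position) fires every half second.
print(tactor_for_traffic(10, 95), pulse_interval_s(2.5))  # -> 2 0.5
```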

Poor communication between automation and pilots also contributed to the crash of Colgan Air Flight 3407. The automation did not give visual or auditory warnings that the plane was approaching a dangerously slow speed. Instead, the autopilot abruptly handed control back to the pilots with a stall warning. Startled, the pilot took the wrong action, raising the nose of the plane, which ensured that it would stall.

Ella Atkins, a U-M associate professor of aerospace engineering who develops autonomous flight systems, including early emergency landing planners, pointed out that the pilot flying Colgan 3407 was exhausted and had failed multiple flight checks. While these factors spurred new regulations, her trust in the competence of pilots is limited. “I want that automation up front that slaps them on the wrist if they try to kill me,” she said.

Atkins is working on a new system that would monitor both the pilot and the autopilot for actions that could create a dangerous situation. This system could then intervene.

“It is a watchdog that only takes short-term action to mitigate risk, offering a quick ‘No, you can’t do that,’ then restoring full control to the crew,” said Atkins.
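A minimal sketch of that veto logic, with every envelope limit invented for illustration, might look like this:

```python
# Sketch of the watchdog idea Atkins describes: commands pass through
# unchanged unless they would leave an assumed safety envelope, in which
# case the command is clamped and control is handed right back.
SAFE_PITCH_DEG = (-15.0, 20.0)   # invented limits
SAFE_BANK_DEG = 60.0
MIN_AIRSPEED_KT = 140.0

def watchdog(commanded, state):
    """Return the command to execute, vetoing only imminent-risk inputs."""
    safe = dict(commanded)
    # Nose too high at low speed invites a stall: cap the pitch command.
    if state["airspeed_kt"] < MIN_AIRSPEED_KT and safe["pitch_deg"] > 5.0:
        safe["pitch_deg"] = 5.0
    safe["pitch_deg"] = max(SAFE_PITCH_DEG[0], min(SAFE_PITCH_DEG[1], safe["pitch_deg"]))
    safe["bank_deg"] = max(-SAFE_BANK_DEG, min(SAFE_BANK_DEG, safe["bank_deg"]))
    vetoed = safe != commanded
    return safe, vetoed  # after a veto, the crew keeps full authority

# A startled Colgan-style input: pulling the nose up near stall speed.
command, vetoed = watchdog({"pitch_deg": 18.0, "bank_deg": 0.0},
                           {"airspeed_kt": 125.0})
print(command, vetoed)  # pitch capped at 5.0, vetoed == True
```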

Already, technologies like this are finding their way onto airplanes. Air France Flight 447, the Airbus A330 that crashed into the Atlantic, was equipped with anti-stall technology that, had all the sensors been working, would have leveled the plane off — regardless of pilot input.

Unfortunately, when the airspeed sensors froze over, the plane reverted to manual piloting and could only sound the stall alarm. Neither the more experienced co-pilot nor the captain realized until the final seconds that their younger partner had been holding the plane in a stall.

Not all losses of airspeed data end this way. Also in 2009, two Airbus A330 planes experienced the same failure, but their pilots followed the appropriate procedures for keeping the plane under control until the airspeed sensors unfroze a minute or two later. It didn’t make the news when these flights landed safely at their destinations.

While the Colgan and Air France pilots failed to save the situation, in both cases it was the automated systems that failed first. Many researchers are looking for ways to make the automation more robust.

For instance, a technique called sensor fusion could have allowed the Airbus A330’s computer to compensate for the missing sensor data with estimates based on other readings. Then the flight management system could have set more conservative limits for maneuvering, just as the successful pilots did.
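As a toy illustration of the idea (the thresholds and blending weights are invented, and real sensor fusion is far more sophisticated), a fused airspeed might fall back on GPS groundspeed plus forecast winds when the pitot reading becomes implausible:

```python
def estimate_airspeed(gps_groundspeed_kt, forecast_headwind_kt):
    """Rough true-airspeed estimate from GPS motion and forecast winds."""
    return gps_groundspeed_kt + forecast_headwind_kt

def fused_airspeed(pitot_kt, gps_groundspeed_kt, forecast_headwind_kt,
                   disagree_threshold_kt=40.0):
    estimate = estimate_airspeed(gps_groundspeed_kt, forecast_headwind_kt)
    if abs(pitot_kt - estimate) > disagree_threshold_kt:
        # Pitot reading implausible (e.g., iced over): fall back to the
        # estimate and flag degraded confidence so maneuvering limits
        # can be made conservative.
        return estimate, "degraded"
    # Otherwise blend, leaning on the (normally more accurate) pitot sensor.
    return 0.8 * pitot_kt + 0.2 * estimate, "normal"

# Iced pitot suddenly reads 80 kt while GPS + winds imply roughly 470 kt.
print(fused_airspeed(80.0, 440.0, 30.0))   # -> (470.0, 'degraded')
```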

Self-flying planes

As a developer of fully autonomous drones, Atkins believes the day that computers can fly more safely than human pilots isn’t far away. In contrast to Patrick Smith, she says automation is already capable of flying from L.A. to New York, provided that it doesn’t encounter failures, air traffic control instructions or other disruptions that current technology relies on humans to handle.

Innovations that could give the automation more flexibility are backlogged as the Federal Aviation Administration (FAA), which proposes and enforces aviation regulations, decides how to certify software that doesn’t always do exactly the same thing.

One such technology is the adaptive controller — the software that figured out the new rules for flying the damaged virtual airplane in the NASA experiment. David Smith said that the pilots found it to be particularly helpful. The FAA, however, is not a fan.

“Adaptive control, because of its maturity, has gotten to the point where that clash with the FAA has happened,” said Atkins.

The controller relies on sensors to tell it how the plane responded to each tiny maneuver, and those sensors can give different readings in the same situation. This means that the controller can make mistakes if it gets enough bad sensor data, she explained. For that reason, the FAA deems it unreliable and won’t certify it.
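In broad strokes, an adaptive controller keeps re-estimating how the aircraft responds to its inputs. This toy one-parameter version (not NASA’s implementation) uses a running least-squares fit, and shows why noisy sensors worry regulators: every reading, good or bad, feeds directly into the estimate.

```python
# Toy illustration: after each small probe, re-estimate the control
# effectiveness b in the model pitch_response ≈ b * elevator_deflection.
import random

def update(b_est, p, u, measured, noise_var=0.04):
    """One recursive least-squares step for the scalar model y = b*u."""
    gain = p * u / (noise_var + u * p * u)
    b_est += gain * (measured - b_est * u)
    p *= (1 - gain * u)
    return b_est, p

b_est, p = 1.0, 100.0        # initial effectiveness guess and its uncertainty
true_b = 0.4                 # damage cut elevator effectiveness by 60%
random.seed(1)
for _ in range(200):         # many small test deflections, as in the simulator
    u = random.uniform(-0.1, 0.1)
    y = true_b * u + random.gauss(0, 0.02)   # noisy pitch-rate reading
    b_est, p = update(b_est, p, u, y)
print(round(b_est, 2))       # settles near 0.4; bad data can mislead it
```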

This annoys Atkins because programs that offer the flexibility to adapt to different situations are inherently unreliable by this definition. Instead, she wants to see these systems measured by whether they are as safe as humans.

Even for the Emergency Landing Planner, which does not have this perceived reliability problem, David Smith estimates that it could take 20 years just to make it into the cockpit systems of private planes, though it could be available as a tablet application much sooner. Commercial aircraft would take even longer. But he is optimistic that in a decade or so, airline operations centers might use it to provide guidance to pilots in distress.

Still, companies like Amazon are beginning to push the boundaries of autonomous flight. At a recent conference, Atkins heard representatives of the delivery drone initiative report that they were collaborating with NASA on how to get drone technology certified for civilian use.

IMAGE: Responses to our Facebook poll concerning airplane automation.

Regulatory paralysis

They face an uphill battle. At present, commercial drones are banned as the FAA claims control of the airspace all the way to the ground. Atkins believes that eventually, public and private property owners will be allowed to make decisions about their own airspaces.

The other challenge is that the FAA is presently defining unmanned aerial vehicles as “remotely piloted” — meaning there has to be a person on the ground in control of the aircraft. In spite of their differences, Atkins, Sarter and Patrick Smith all agree that remote piloting is particularly dangerous.

The pilots on the ground have all the disadvantages of data link delays and slower reaction time without the benefit of being able to use their senses to gain additional information. Atkins would rather leave the decisions to the on-board computer.

Then there’s the catastrophe that could follow a loss of data link. At the very least, David Smith noted, these aircraft would need an autonomous emergency lander that places the safety of people and property on the ground as its highest priority.

Jim Bagian, a professor of Industrial & Operations Engineering at U-M and a former astronaut, suggested that once we are used to little drones in the sky, we would be more likely to accept cargo drones flying into airports and carrying packages between cities, well before fully automated passenger flights.

In 2009, DIY Drones creator Chris Anderson claimed that FedEx founder Fred Smith said he wanted autonomous aircraft for this purpose, but FedEx declined comment on whether the company was actively pursuing such automation.

Only after the technology has been accepted and proven in cargo planes will the FAA, the public and the airlines be likely even to begin considering phasing out passenger pilots. By then, it may be a matter of activating autonomous functions in existing airplanes, since large cargo planes are often special versions of the leading passenger models.

A question of motive

But why the interest in getting rid of pilots? Bagian pointed out that top salaries are a major expense, since senior first officers for major airlines can earn over $200,000 per year and some captains claim salaries over $400,000.

“Airlines might be saying, ‘Our biggest avoidable cost next to fuel is the pilots,’” he said.

On routes flown by these top earners, a fully automated plane could save the airline over $2 million a year. That’s not even counting higher revenue from the extra passenger space with an exceptional view.
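That figure is consistent with a rough, assumption-laden estimate: a long-haul jet flying around the clock is staffed by several complete crews, so a single airframe might employ on the order of ten pilots.

\[
10\ \text{pilots} \times \$200{,}000\ \text{average salary} = \$2{,}000{,}000\ \text{per year}
\]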

He sees it as a political problem for the airlines rather than a genuine safety risk. “They want the FAA or somebody to say, ‘We think this is safe,’” he said. “So now it’s not that they’re a money-grubbing, mercenary, no-good commercial enterprise … It’s been to the Congress and the FAA. Everybody approved it, and they’re just doing what’s approved.”

Atkins noted that because of the way the public tends to react, a cargo drone crashing into a populated area could put a company like FedEx out of business. We, and the juries we form, find it easier to forgive when we believe that a human did everything that he or she could to save the situation. We’re even more forgiving when that human is among the dead.

Still, we stand to gain more than just cheaper fares. Atkins says that busy airports could be even busier with a machine at the controls. “Separation and density of traffic on approach and from departure at major airports is really constrained by human reaction times,” she said.

Human pilots need enough time to see and coordinate with other aircraft. With the precision offered by sensors and automated flight, more runways could fit into the airport’s grounds, and planes could arrive and depart at shorter intervals. “This has the potential to reduce delays and increase fuel efficiency and time efficiency throughout the entire air transportation system,” said Atkins.

An uncertain future

Bagian projects that we are unlikely to see the first autonomous passenger flights before 2035. Atkins estimates that the first cargo drone to fly in and out of airports could be in service by 2030. But a lot would have to happen first.

Right now, with even small, autonomous, commercial drones forbidden from flying outdoors over uninhabited land, we as a society are missing out on opportunities to gain from autonomous technology and discover its limitations. Atkins particularly likes agricultural uses, such as crop dusting, pest detection and moisture monitoring.

Pest detection and moisture monitoring could reduce pesticide and water use, while automated crop dusting would spare humans the stunt-like maneuvers needed to fly a crop duster efficiently. And when a crash occurred, the drone would likely go down in a field where it wouldn’t harm anyone. Researchers like Atkins could use data from these drones to find out how often they crash as well as how to prevent those failures.

When it comes to computer reliability, Atkins points out that existing planes rely on computers for critical functions that keep them in the sky. To ensure that a computer crash won’t lead to a plane crash, these aircraft typically have three or more computers handling the same functions.
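The standard trick is majority voting. Here is a minimal sketch of the idea, not any particular avionics implementation:

```python
from collections import Counter

def vote(outputs):
    """Majority-vote the outputs of redundant flight computers.

    With three or more independent computers, one crashed or faulty
    unit is simply outvoted; the aircraft never acts on its bad answer.
    """
    winner, n = Counter(outputs).most_common(1)[0]
    if n >= (len(outputs) // 2) + 1:
        return winner
    raise RuntimeError("No majority: escalate to crew / backup system")

# Computer B has failed and reports garbage; A and C still agree.
print(vote(["pitch +2.0", "ERROR", "pitch +2.0"]))  # -> "pitch +2.0"
```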

A true weakness is computer vision — the automation can’t make out obstacles on the runway nearly as well as humans can. This field is rapidly improving, thanks in part to driverless cars, but it’s not yet ready for prime time.

Other challenges will have to be tackled by the entire field. For autonomous planes to fly seamlessly in and out of airports, voice communications must be replaced with data links, and every airplane would have to be equipped with a device that would tell its location to other aircraft.
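The closest real-world analog to such a device is ADS-B, in which each aircraft periodically broadcasts its own position. Conceptually, the message carries something like the following; this is a simplified illustration, not the actual ADS-B encoding:

```python
import json, time

def position_report(icao_address, lat, lon, altitude_ft, groundspeed_kt, heading_deg):
    """A simplified position broadcast; real ADS-B uses compact binary frames."""
    return json.dumps({
        "icao": icao_address,   # unique airframe identifier
        "lat": lat, "lon": lon,
        "alt_ft": altitude_ft,
        "gs_kt": groundspeed_kt,
        "hdg_deg": heading_deg,
        "ts": time.time(),      # receivers discard stale reports
    })

print(position_report("A1B2C3", 34.422, -103.079, 9500, 180, 220))
```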

“Having an air system of autonomous flight would require basically rebuilding the entire civil aviation infrastructure all over again,” said Patrick Smith.

Atkins holds that the cost is exaggerated and that an industry representing over a trillion dollars a year in economic activity can support the multi-billion-dollar investment. It should save money and time in the long run.

Still, Sarter and Patrick Smith doubt that this justification is enough. To them, it certainly doesn’t serve the public’s interest to give up the people on the plane who know how it flies and face the same life-and-death stakes as the passengers.

No matter the side of the argument, it always comes back to safety. Perfect safety is impossible, so the question that Atkins and Bagian are interested in answering is: when is automation good enough? Would it be enough when automation could save as many lives as pilots do, or would we demand a higher standard?

“It’s just like Dumbo’s feather. The mother says, ‘Hold the feather, Dumbo. That will let you fly.’ So is having a pilot Dumbo’s feather?” Bagian asked. Not yet, but the day isn’t far away. And then, what will we choose?

This story is a feature article in the fall issue of The Michigan Engineer magazine.

Kate McAlpine
Senior Writer & Assistant News Editor, Michigan Engineering Communications & Marketing
