Automation Nation

On a cold February night in 2009, a turboprop commuter plane out of Newark was only a few miles from Buffalo when the “stick shaker” suddenly activated. The plane had slowed to 135 knots after the crew lowered the landing gear and extended the flaps, and it threatened to enter an aerodynamic stall. (That’s not when the engines stop working, but when the wings can no longer maintain lift.)

The automatic pilot disengaged, as it should have, and the pilot seized his shuddering control yoke, dragging it back to raise the nose and gain altitude—a fatal mistake. The plane needed speed, not height. Dipping the nose might have sacrificed a few hundred feet but provided the velocity to recover. The pilot’s reaction only slowed the plane further: The automatic stall-avoidance system activated and tried to push the yoke forward, but the pilot fought it and guaranteed the stall. It took only a few seconds for the plane to roll right and left, pitch up and then down for good, plummeting into a suburban home and killing 50 people.

When the National Transportation Safety Board (NTSB) investigated the crash, it identified another cause besides the pilot’s immediate response to the trouble. In the minutes leading up to it, the pilot and copilot showed “a significant breakdown of their monitoring responsibilities” and missed “explicit cues” of impending danger. At the critical moment, the pilot reacted incorrectly; but in the preceding 30 seconds, pilot and copilot failed to pay attention to the flight instruments showing airspeed and pitch attitude. The NTSB simulation of the final moments of the flight—which you can download at the Wikipedia page about the crash—indicates that, as the plane rapidly lost speed on the final approach, the crew didn’t seem to notice. Not until the stick shaker activated and the automatic pilot disengaged did they recognize the problem, and the sudden alarm caught the pilot off guard. To prevent future disasters like this one, the NTSB recommended tighter monitoring procedures during flights and more training for pilots in monitoring skills.

Nicholas Carr takes that 2009 crash as a prime exhibit of a perilous trend. In previous books—The Big Switch (2008) and The Shallows (2010)—he examined digital innovations hailed and hyped in the tech world and sounded a contrary note. (The title of his widely read 2008 Atlantic essay, “Is Google Making Us Stupid?” indicates his angle.)

The Glass Cage applies a similar skepticism to a broader development. To Carr, a deeper cause was to blame for the slack supervision in the cockpit that night, and it applies more widely than we realize. It is simple: The crew relied too much on the automatic pilot. They relaxed their awareness because the automatic pilot handled so many things for them so well, as it does on every flight. Computers in aviation have become so advanced, in fact, that pilots typically control a plane only briefly, for a minute or two on takeoff and landing. Computers maintain speed, stability, and direction, scan for nearby aircraft, adjust cabin pressure, and alter flight paths. Pilots don’t turn a plane; they tell a computer to do it. Yet for all the gains in efficiency and cost (cockpits no longer need navigators and radio operators), automation carries a risk: The better it works, the more a human operator slackens his effort and the more his skills decline.

This is the warning of The Glass Cage. Carr emphasizes the airline industry because, while pilots seem like an extreme case, they “have been out in front of a wave that is now engulfing us.” The cockpit dramatically reveals a deterioration that subtly affects us all: Use a calculator too much and your arithmetical skills slip; a GPS dulls your sense of direction; computer-aided design software mars the hand’s ability to draw; digital cameras loosen the “discipline of perception.” If we don’t exercise motor skills, they atrophy. Moreover, our concentration changes: As computers increasingly shoulder our labor, we naturally fall into automation complacency (“when a computer lulls us into a false sense of security”) and automation bias (“when people give undue weight to the information coming through their monitors”).

It’s a common complaint, as mathematics teachers who object to calculators and English teachers who deplore spell-check well know, and Carr’s summations come off as strong and precise, but familiar:

As we grow more reliant on applications and algorithms, we become less capable of acting without their aid—we experience skill tunneling as well as attentional tunneling. That makes the software more indispensable still. Automation breeds automation. 

What makes The Glass Cage nonetheless fresh and powerful are the high-stakes, newly relevant cases Carr invokes. The aviation episodes cut deepest when we ponder how the split-second responses needed to keep us alive may be dulled by the digital advances we otherwise extol. A few months after the Buffalo crash, an Air France flight from Rio to Paris hit a storm; the plane’s airspeed sensors iced over and gave false readings, causing the automatic pilot to disengage. The copilot panicked and yanked the stick backward to gain altitude, even as the stall warning blared. If he had released the stick, the plane probably would have leveled off and accelerated; but according to investigators cited by Carr, the crew had reached a “total loss of cognitive control of the situation.” Another pilot seized the controls as the plane dropped 30,000 feet in 180 seconds.

“This can’t be happening,” he said, as the first pilot responded, “But what is happening?”

If they had been in at least partial control of the plane during the storm, they wouldn’t have been so unnerved when the sensors malfunctioned. In other incidents of overreliance Carr recounts, the opposite problem occurs: not a quick emergency disorienting human operators, but a slow, unobtrusive mistake putting them to sleep. In 1995, when a GPS antenna wire on an ocean liner came loose and the system gave inaccurate readings for 30 hours (!), only one person noticed: a mate who couldn’t spot a location buoy the ship should have passed. But he didn’t report it because he trusted the GPS more than his own eyes. And no one else woke up to the fact until the ship ran aground on a sandbar near Nantucket.

Another, less acute, example of inattention may take place every time someone enters a doctor’s office. As a patient describes his or her symptoms, the doctor or nurse taps them into a computer, and software identifies patterns and warning signs in the process of pinning down the problem. The patient’s history is readily available, too, and can be folded into diagnosis and treatment. The practice is the result of 10 years of technological advances and federal programs aiming to streamline record-keeping and improve care. In 2004, President Bush created the Health Information Technology Adoption Initiative, which would deliver millions of dollars to physicians and hospitals for the digitization of medical records. In 2009, President Obama added $30 billion to the kitty.

“A frenzy of investment ensued,” Carr writes, “as some three hundred thousand doctors and four thousand hospitals availed themselves of Washington’s largesse.”   

Five years later, enthusiasm has waned. Systems were supposed to share information, but proprietary formats and conventions block it, leaving “critical patient data locked up in individual hospitals and doctors’ offices.” Advocates predicted that costs would drop, but they rose sharply, in part because the software automatically recommends tests and procedures that the doctor alone wouldn’t order (because they are unnecessary). Automation also promised to enrich a patient’s history, enabling physicians to record detailed, individualized information about each visit. Instead, doctors’ notes have become more generic, often made up of the same phrases copied and pasted again and again, so that we end up with (in the words of one researcher) “increased stereotyping of patients.” Additionally, software designed to warn physicians against errors—for instance, by signaling a dangerous combination of drugs—has proved to flag so many false or irrelevant dangers that doctors suffer “alert fatigue” and ignore the warnings altogether.

Finally, we have evidence of doctor-patient relations becoming more impersonal, not less. With physicians tasked with transferring a patient’s self-description to a screen, the predictable happens: Attention divides. A study in Israel found doctors looking at screens, and not at patients, 25 to 55 percent of the time, while a Veterans Administration study found patients and doctors agreeing that electronic note-taking makes a consultation “feel less personal.” Worse, Carr adds, a physician who depends too much on the machine loses the empathy and intuition necessary to the art of medicine, especially in complicated cases where a patient’s statements aren’t entirely trustworthy.

The electronic-records program demonstrates a gain-and-loss pattern typical of automation, and all too often unappreciated. Because the benefits outweigh the drawbacks and tend to be more tangible as well—compare the immediate result of using a GPS with the long-term effect of losing mapping skills—emphasizing the harm sounds pessimistic and unimaginative, especially when influential voices echo claims such as Google’s Michael Jones’s assertion that Google tools have given people a 20-point IQ boost. There is a long tradition of automation zeal, and Carr provides revealing examples, including Oscar Wilde’s prediction that “while Humanity will be amusing itself, or enjoying cultivated leisure . . . or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work.”

Nicholas Carr’s warnings run against that pleasing vision, which puts him in a minority of culture-watchers. Wouldn’t life be wonderful if we didn’t have to work so hard and could be saved from human error? Well, of course. But there’s no getting rid of the need for someone to monitor the machines, and if that person’s attention lags and his skills erode, problems will follow. “An ignorant operator is a dangerous operator,” Carr insists.

The future he paints is a dicey one: We may soon reach a point at which automation—in hazardous settings from cockpits to battle zones—allows mistakes to happen less frequently but more catastrophically, because humans are unprepared to resume control. The technophile’s solution is to augment the automation, thereby decreasing the very toil that keeps humans sharp. Better to think more about the human subject, Carr advises—whether it is a pilot flustered at a critical moment or a young cashier who can’t make change after punching the wrong key.

Mark Bauerlein, professor of English at Emory University, is the author, most recently, of The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future.
