
W.J. Astore
When the Challenger blew up thirty years ago this January, I was a young Air Force lieutenant working an exercise in the Cheyenne Mountain Command Center near Colorado Springs, Colorado. I remember the call coming in to the colonel behind me. I heard him say something like, “Is this real world?” In other words, is this really happening, or is it part of the exercise? The answer at the other end was grim, our exercise was promptly cancelled, and we turned on the TV and watched the explosion.
Our initial speculation that day was that an engine had malfunctioned (the explosion appeared to have occurred as the shuttle’s engines were reaching maximum thrust). But it turned out the shuttle had a known technical flaw that had not been adequately addressed. Something similar would happen to the Columbia in 2003: a known technical flaw, inadequately addressed, ended up destroying the shuttle.
When I taught a course on “technology and society” at the collegiate level, I had my students address the non-technical causes of the Challenger and Columbia disasters. Here is the question I put to them in the course syllabus:
NASA lost two space shuttles: the Challenger in 1986 and the Columbia in 2003. Tragically, both these accidents were preventable. Both had clear technical causes. In 1986, faulty O-rings in a solid rocket booster joint allowed hot gas to escape, leading to the rupture of the external fuel tank and the breakup of the Challenger shortly after launch. In 2003, insulation foam that detached from the external tank during liftoff damaged the thermal protection on the Columbia’s left wing; during reentry, superheated gas penetrated the wing and the shuttle broke apart in the atmosphere.
Both accidents also highlighted wider issues involving risk management, institutional culture, and the control of highly complex machinery. Before each accident, NASA engineers had warned managers of preexisting dangers. In the case of the Challenger, it was the risk of launching in low temperatures, as shown by data from previous launches indicating gas leakage at the O-ring seals when air temperatures fell below sixty degrees Fahrenheit. In the case of the Columbia, visual data suggested the shuttle had sustained damage soon after liftoff, a fact that could have been confirmed by cameras and/or a spacewalk. In both cases, managers overruled or disregarded the engineers’ concerns, leading to catastrophe.
Question: What do you think were the key non-technical factors that interacted with the technical flaws? What lessons can we learn from these accidents about controlling complex technical systems?
I wanted my students to focus on issues such as groupthink; management concerns about cost and schedule, and how those might cloud judgment; the difficulty of managing risk; and the possibility of miscommunication among well-intentioned people operating under stress.
I ended the lesson with a quote from Richard Feynman, the Nobel Prize-winning physicist who served on the board of inquiry after the Challenger accident. Feynman’s honest assessment of the critical flaws in NASA’s scheme of management was shunted to an appendix of the official report. It’s available in his book, “What Do You Care What Other People Think?”
This is what Feynman had to say:
“For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”
It was a devastating conclusion – a much-needed one then, and arguably even more needed today.