n5wd wrote: In a 2003 interview with The Times, he recalled that NASA tried to blackball him from the industry, leaving him to spend 17 years as a forensic engineer and a lecturer on engineering ethics.
The obit on NPR said that he, in fact, did leave Thiokol shortly after the Challenger disaster, and after a couple of years of reflection, he and his wife would travel the country at their own expense, offering to lecture at any engineering school that would have them, talking about the ethics of engineering and how important it was to stand up for what you truly believed.
During the course of several telephone meetings between NASA flight managers and Thiokol's engineering managers, NASA said that they would defer to the judgment of Thiokol's engineers. But implied in their choice of words was a threat: that Thiokol could be replaced if they couldn't properly design a booster, and that delays in the launch were very expensive to NASA, not to mention a PR nightmare. During the final meeting at the Thiokol end, with NASA managers teleconferencing in, Boisjoly and another engineer who had actually designed the O-rings in question adamantly refused to come around and say that it was safe to launch. But in the end, they too were faced with an implied threat: that if they didn't fall into line behind their managers and green-light the launch, their days at Thiokol would be numbered. So when the final committee vote was polled, both Boisjoly and the other engineer remained silent, afraid for their careers, but also fairly certain that the outcome would be disastrous. With their silence, the Thiokol managers greenlighted the launch, and Boisjoly's predictions were borne out.
It wasn't too much later that Boisjoly resigned, although the other engineer stayed on to try to be part of redesigning the solid booster motors to avoid future launch explosions. Until the day Boisjoly died, both he and the other engineer were very vocal about A) making sure that the record of managerial malfeasance and incompetence during the Challenger disaster never gets buried, and B) making sure that ongoing manned missions are held to a much higher standard of engineering excellence, with more of an emphasis on crew safety... even when it costs money.
The subsequent Columbia disaster may or may not have been part of that institutional fecklessness. After the Apollo 1 crew of Grissom, White, and Chaffee were burned to death during testing on the launch pad, Astronaut Col. Frank Borman told the investigating committee that it was a "failure of imagination" that killed those three men. Institutionally, NASA had had so much success that they were sure they had the tiger by the tail, and it just never even occurred to them that they might have designed an inherently dangerous system by using a pure oxygen environment in a spacecraft. They had gotten so used to doing so many things right that their imaginations never permitted them to dream up what could go wrong.
NASA was well aware that the big auxiliary fuel tank was shedding pieces of ice and foam insulation during launch every single time. They had been observing the phenomenon on film, in detail, on nearly every launch. But each time it happened, it was mere chance that none of those falling pieces hit the spacecraft itself. They were operating on the principle that, if it hadn't happened yet, it probably wouldn't. (Condition White.) Consequently, it never really occurred to anyone to wonder and worry (the failure of imagination) about what might happen if one of those pieces of ice struck the orbiter at Mach Schnell during a launch, or how they would get the crew home if it were damaged. The actual damage to Columbia's wing was severe enough that it would have been easily observable if anyone had bothered to look at it. A simple unscheduled EVA could have made the final determination.
Nobody imagined that it would be necessary.
I love the idea of the exploration of space, and despite the current trend at NASA toward unmanned flights, I think it is important to keep humans at the forefront of that exploration to the degree that it is technically feasible to do so. The problem is that when an institution gets really good at doing something, it tends to act with a great deal of hubris, which may not be the case at the level of the individual engineer/scientist. Most people acknowledge that riding rockets into space is not without risk, and a certain amount of risk is acceptable. But these things are "risks" exactly because we don't know what we don't know. When you know what you don't know, you have an opportunity to examine your lack of knowledge with a critical eye, using your imagination to try to figure out what might go wrong, and then make a plan for dealing with it. When a risk is known but can't be helped, you can make a decision as to whether it is acceptable or not. But you cannot adjudicate the acceptability of risks of which you are not aware; and the only way to gain awareness is to imagine what could happen.
“Hard times create strong men. Strong men create good times. Good times create weak men. And, weak men create hard times.”
― G. Michael Hopf, "Those Who Remain"
#TINVOWOOT