Ken Natton said: Sensors malfunctioning and giving false readings are not a problem. In systems where failures would have serious consequences, the usual solution is to employ something called triple redundancy. You don't measure it once, you measure it three times. If two sensors agree and one disagrees you believe the two that agree. Techniques like this do tend to reduce the chances of serious failures to acceptable levels.
jarednjames said: You're joking right? You don't believe they just have the one sensor doing the job? That would be madness.
The number of sensors is unimportant. A computer can probably fly a plane more reliably than a human pilot, but it would be unthinkable not to allow the human pilot to override the responses made by the computer.
I don't know the specifics of a nuclear power plant, but for something truly critical, it would be more likely for a computer program to take no action when confronted with conflicting inputs than to act on a majority opinion. Most people just don't have enough faith in computers to allow them to make decisions - an act that's very different from merely responding.
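To make the contrast concrete, here's a minimal sketch of the 2-out-of-3 voting Ken Natton describes, next to a policy that simply refuses to act when the readings conflict. It's purely illustrative - the Python, the 5% agreement tolerance, and the names are my own assumptions, not anything taken from real plant or flight software:

```python
# Illustrative only: a 2-out-of-3 voter versus a "do nothing on disagreement"
# policy. The 5% agreement tolerance is an arbitrary assumption for the sketch.

AGREEMENT_TOLERANCE = 0.05  # readings within 5% of each other are "agreeing"

def agrees(a, b, tol=AGREEMENT_TOLERANCE):
    """Two readings agree if they are within the tolerance of each other."""
    return abs(a - b) <= tol * max(abs(a), abs(b), 1e-9)

def vote_two_of_three(r1, r2, r3):
    """Triple redundancy: believe any two sensors that agree; ignore the odd one out."""
    if agrees(r1, r2):
        return (r1 + r2) / 2
    if agrees(r1, r3):
        return (r1 + r3) / 2
    if agrees(r2, r3):
        return (r2 + r3) / 2
    return None  # no two sensors agree - there is no majority to believe

def decide_action(r1, r2, r3):
    """Fail-safe policy: with conflicting inputs, take no action and flag the operator."""
    value = vote_two_of_three(r1, r2, r3)
    if value is None:
        return "HOLD - conflicting sensor readings, alert human operator"
    return f"act on voted reading {value:.2f}"

print(decide_action(100.1, 99.8, 250.0))   # two agree, the odd sensor is outvoted
print(decide_action(100.0, 180.0, 250.0))  # all three disagree, controller holds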
And even in the role of just responding, humans aren't going to accept being unable to override the computer's responses. Most computer programs are written to respond to anticipated situations, not to new, unique situations that might crop up for some reason.
For example, a satellite normally isn't allowed to make unlimited thruster firings to control its attitude when it thinks it's tumbling wildly out of control. For one thing, there are only a few things that could suddenly send a satellite tumbling wildly out of control: a collision (which would also destroy the satellite and the computer that would do the thruster firings), firing the thrusters wildly for no good reason at all, or turning the electromagnets on and off at random times. So no matter how many sensors say the satellite is tumbling out of control, it's such a low-probability event for a still-living computer that the readings have to be wrong. In fact, to do any kind of large maneuver, a human has to manually disable the satellite's safety switches before firing the thrusters.
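As a rough illustration of that kind of inhibit logic - the names, the pulse limit, and the flag are all assumptions I've made up for the sketch, not any real flight software:

```python
# Rough illustration of a thruster-firing inhibit: even a unanimous "tumbling"
# report from the rate sensors cannot command a large burn unless a human has
# manually disabled the safety switch first. Names and limits are assumptions.

MAX_AUTONOMOUS_BURN_S = 0.5    # small station-keeping pulses are always allowed
SAFETY_INHIBIT_ENGAGED = True  # flight default; only an operator command clears it

def request_burn(duration_s, rate_sensors_say_tumbling, inhibit_engaged=SAFETY_INHIBIT_ENGAGED):
    """Grant or refuse a thruster burn requested by the attitude control loop."""
    if duration_s <= MAX_AUTONOMOUS_BURN_S:
        return "burn granted"  # routine attitude trim, no override needed
    if inhibit_engaged:
        # rate_sensors_say_tumbling is deliberately ignored here: a long burn is
        # refused no matter how many sensors agree the craft is tumbling, because
        # onboard that consensus is treated as more likely a sensing fault than a
        # real event the computer should fix on its own.
        return "burn refused - safety inhibit engaged, operator action required"
    return "burn granted (inhibit manually disabled by operator)"

print(request_burn(0.2, rate_sensors_say_tumbling=True))                           # small pulse: allowed
print(request_burn(30.0, rate_sensors_say_tumbling=True))                          # large burn: refused
print(request_burn(30.0, rate_sensors_say_tumbling=True, inhibit_engaged=False))   # after manual disable
```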
(Admittedly, I did once see the safety switches allow a satellite to tumble out of control during a real attitude control anomaly, but, predictably, the anomaly was caused by an error that took some real brains and creativity to commit, even if the brains and creativity were applied with poor judgement. In this case, inhibiting one method of maneuvering the satellite while allowing a different one just prevented the satellite from countering a rather creative operator error.)
And, for Chernobyl, the engineers in charge of the test stayed, but the day shift crew that had gotten a detailed briefing on how the test should be run and what to expect had already gone home. The crew on duty were pretty much just following the engineers' directions and hoping that some of the unusual things happening were normal for the test being run. In fact, the power-down sequence started during the changeover between the swing shift and the night shift - and wasn't that an exciting way to start a work shift. In other words, they'd deferred responsibility to engineers who were focused on their own test, not on operations (and, for the record, the engineers' test ran perfectly).