Can computer control systems be relied upon for critical processes?

In summary, the use of multiple sensors in critical systems, such as in airplanes and nuclear power plants, helps to reduce the chances of serious failures. However, in situations where human input is necessary, overrides may be implemented to allow for manual control. In other cases, expert software or computer programs may be more reliable and efficient in handling tasks compared to human operators.
  • #1
BobG
Ken Natton said:
Sensors malfunctioning and giving false readings are not a problem. In systems where failures would have serious consequences, the usual solution is to employ something called triple redundancy. You don’t measure it once, you measure it three times. If two sensors agree and one disagrees you believe the two that agree. Techniques like this do tend to reduce the chances of serious failures to acceptable levels.

jarednjames said:
You're joking, right? You don't believe they just have the one sensor doing the job? That would be madness.

The number of sensors is unimportant. A computer can probably fly a plane more reliably than a human pilot, but it would be unthinkable to not allow the human pilot to override the responses made by the computer.

I don't know the specifics of a nuclear power plant, but for something truly critical, it would be more likely for a computer program to take no action when confronted with conflicting inputs than to act on a majority opinion. Most people just don't have enough faith in computers to allow them to make decisions - an act that's much different from merely responding.
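
To make that concrete, here's a minimal sketch of 2-out-of-3 voting that simply refuses to return a value when no two sensors agree - illustrative Python only, with made-up readings and a made-up tolerance, not code from any real system:

```python
def vote_2oo3(readings, tolerance=0.5):
    """Return a value backed by at least two of three sensors, or None.

    readings: three hypothetical sensor values.
    tolerance: maximum spread within which two readings count as agreeing.
    Purely an illustrative sketch, not real safety-system code.
    """
    a, b, c = readings
    # Check each pair for agreement; use the average of the agreeing pair.
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2.0
    return None  # no two sensors agree -> take no action / flag it

# Example: one sensor has drifted; the other two out-vote it.
print(vote_2oo3([101.2, 100.9, 57.3]))  # ~101.05 (average of agreeing pair)
print(vote_2oo3([101.2, 88.0, 57.3]))   # None (no agreement at all)
```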

And, even in the role of just responding, humans just aren't going to accept not being able to override the computer's responses. Most computer programs are written to respond to anticipated situations, not to new, unique situations that might crop up for some reason.

For example, a satellite normally isn't allowed to make unlimited thruster firings to control its attitude when it thinks it's tumbling wildly out of control. For one thing, there are only a few things that could suddenly send a satellite tumbling wildly out of control: a collision (which would also destroy the satellite and the computer that would do the thruster firings), firing the thrusters wildly for no good reason at all, or turning electromagnets on and off at random times. It doesn't matter how many sensors say the satellite is tumbling out of control - it's such a low-probability event for a still-functioning computer that the sensor readings have to be wrong, regardless of how many sensors are saying the same thing. In fact, to do any kind of large maneuver, a human has to manually disable the satellite's safety switches before firing the thrusters.
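
As a toy illustration of that kind of inhibit logic - every name, threshold and unit below is invented, and real spacecraft fault protection is far more involved - the essential point is just that a large firing is refused until an operator clears a flag, no matter what the sensors report:

```python
class AttitudeController:
    """Toy sketch of a safety-inhibit gate on thruster firings.

    All names, thresholds, and units here are invented for illustration.
    """
    MAX_AUTO_FIRING_S = 0.2  # small trims allowed autonomously

    def __init__(self):
        self.safety_inhibit = True  # only a human command clears this

    def operator_clear_inhibit(self):
        self.safety_inhibit = False

    def request_firing(self, duration_s):
        if duration_s <= self.MAX_AUTO_FIRING_S:
            return "fire"  # routine attitude trim
        if self.safety_inhibit:
            # Sensors may insist the satellite is tumbling, but a large
            # maneuver is refused until an operator clears the inhibit.
            return "refused: safety inhibit set"
        return "fire"

ctl = AttitudeController()
print(ctl.request_firing(5.0))   # refused: safety inhibit set
ctl.operator_clear_inhibit()
print(ctl.request_firing(5.0))   # fire
```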

(Admittedly, I did once see the safety switches allow a satellite to tumble out of control during a real attitude control anomaly, but, predictably, the anomaly was caused by an error that took some real brains and creativity to commit, even if the people involved exercised poor judgement in how they applied their brains and creativity. In this case, inhibiting one method of maneuvering the satellite while allowing a different method just prevented the satellite from countering a rather creative operator error.)

And, for Chernobyl, the engineers in charge of the test stayed, but the day shift crew that had gotten a detailed briefing on how the test should be run and what to expect had already gone home. The crew on duty were pretty much just following the direction of the engineers and hoping that some of the unusual things happening were normal for the test being run. In fact, the power-down sequence started during the changeover between the swing shift and the night shift - and wasn't that an incredibly exciting way to start a work shift. In other words, they'd deferred responsibility to engineers who were focused on their own test, not on the operations (and, for the record, the engineers' test ran perfectly).
 
Last edited:
  • #2


BobG said:
The number of sensors is unimportant. A computer can probably fly a plane more reliably than a human pilot, but it would be unthinkable to not allow the human pilot to override the responses made by the computer.

The current Airbus software overrides the pilot's input if it detects the pilot is doing something that would endanger the aircraft (approaching a stall, for example). As far as I'm aware, there aren't overrides and there's no need for them.

It depends on what you're doing. In some cases you need to be able to take control; in others there's just no requirement for it, and leaving the override out can prove the better option. A simple example would be my little fan heater, which, when too hot, cuts the power to the element to prevent a fire. In a case like this (although not strictly a computer) you don't want the user to be able to override it. It's no different in a lot of scenarios involving people and the potential for disaster.
BobG said:
I don't know the specifics of a nuclear power plant, but for something truly critical, it would be more likely for a computer program to take no action when confronted with conflicting inputs than to act on a majority opinion. Most people just don't have enough faith in computers to allow them to make decisions - an act that's much different from merely responding.

As I understand it, nuclear plants require human input every so often, otherwise warnings kick in.

There was another thread here where they were discussing human diagnosis vs expert software, and the results showed the expert software alone was better at diagnosing the illness than the doctors alone, and better than doctors working with the expert software (or a panel of doctors). I'll try to dig it up.

Obviously, you need human operators to put some thought into things as you say, but for the majority of tasks the computers can handle things just fine, if not better than humans could.
 
  • #3


jarednjames said:
The current Airbus software overrides the pilot's input if it detects the pilot is doing something that would endanger the aircraft (approaching a stall, for example). As far as I'm aware, there aren't overrides and there's no need for them.


As I understand it, nuclear plants require human input every so often, otherwise warnings kick in.

There was another thread here where they were discussing human diagnosis vs expert software, and the results showed the expert software alone was better at diagnosing the illness than the doctors alone, and better than doctors working with the expert software (or a panel of doctors). I'll try to dig it up.

Obviously, you need human operators to put some thought into things as you say, but for the majority of tasks the computers can handle things just fine, if not better than humans could.

I could believe this just based on simple probability. It wouldn't work on an episode of "House" where, out of many conflicting symptoms, you can be guaranteed that the least likely and most devastating illness must be what the patient is suffering from.

Computers may be accurate 99.99% of the time, but you need the human that's accurate 99% of the time to make the final decision about whether to trust the computer or not.

This isn't just a simple matter of logic. (For one thing, who can you sue, or at least use as a scapegoat, if computers are given total control?)

Consumer products are the exception. The consumer is assumed to have no training whatsoever. In fact, you're better off assuming the product is being used by a three-year-old (and then putting a warning on the box that the product shouldn't be used by anyone under four years old).

And management problems almost always trump software, regardless of how good or bad the software is: http://www.businessweek.com/globalbiz/content/oct2006/gb20061005_846432.htm . Okay, granted, that's straying off to a whole different topic and has nothing to do with the software actually in the planes, but it's not totally irrelevant. A well-managed project will get you software that flows together into one seemingly seamless package. With a poorly managed project, you can start identifying which team designed which software module and, inevitably, there's the one module that looks like it must have been outsourced to Bzrkstffn. All of the software may work just fine, but it's the sort of thing that destroys the user's confidence when it comes to trusting the software.
 
Last edited by a moderator:
  • #4


BobG said:
Computers may be accurate 99.99% of the time, but you need the human that's accurate 99% of the time to make the final decision about whether to trust the computer or not.

Why?

If a computer is accurate 99.99% of the time and a human only 99% of the time, that means the computer is wrong 0.01% of the time and the human 1%. Which is pretty good odds for the computer for a start.

This means there will be a significant percentage of calls where the human overrides the computer and the human is wrong and the computer is right.

If the human has the final say on every computer decision (and either agrees or disagrees with the computer's call), he will be disagreeing with the computer too much and will disagree with correct calls far more often than with wrong ones (assuming he spots the wrong ones at all).

Remember, the number of incorrect calls from the computer is small to begin with. If the user spots them all, that only accounts for a small proportion of his 1% error rate, leaving a lot of good calls to be overridden incorrectly.

You are then left with a system where the odds of the user a) thinking there's a mistake and b) overriding it incorrectly are greater than if you just left the computer to get on with it.

Let's imagine there are 10,000 calls to shut down the reactor each day. We know the computer makes 1 bad call a day. So one call needs to be overridden but the rest actioned.
The user only works with these 10,000 calls, so we also know he makes 100 bad calls a day. That means the computer will be calling for a shutdown correctly 9,999 times a day, but, assuming the user picks up the computer's 1 bad call, that still leaves 99 calls where the user wrongly overrides the computer - 99 calls the user will have made to keep the plant operating when it should be shut down. That's in comparison to the 1 bad shutdown call the computer would make on its own. So do you want 1 bad call from a computer or 99 bad calls from a user?
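
Just to check the arithmetic in that scenario, here is the calculation written out - a sketch using exactly the numbers assumed above (10,000 calls a day, a 0.01% computer error rate, a 1% human error rate, and the human catching the computer's one bad call):

```python
calls_per_day = 10_000
computer_error_rate = 0.0001  # 0.01%, as assumed above
human_error_rate = 0.01       # 1%, as assumed above

computer_bad_calls = calls_per_day * computer_error_rate  # 1 bad shutdown call
human_bad_calls = calls_per_day * human_error_rate        # 100 bad human calls

# If the human catches the computer's single bad call, the remaining
# bad human calls are wrong overrides of correct shutdown calls.
wrong_overrides = human_bad_calls - computer_bad_calls

print(computer_bad_calls, human_bad_calls, wrong_overrides)  # 1.0 100.0 99.0
```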

Is it really a better system then?

I understand humans have issues with trusting computers and that they like to put the blame on computers first, but it's not something we can afford with matters of this nature.
 
Last edited:
  • #5


jarednjames said:
Why?

If a computer is accurate 99.99% of the time and a human only 99% of the time, that means the computer is wrong 0.01% of the time and the human 1%.

Since the human need only take over in that .01% of the time, that reduces the risk to (1% of 0.01%) = 0.0001%.

Not really, but the gist is right, compared to your idea that they're mutually exclusive probabilities.
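
Written out, that combined figure assumes the human's 1% error rate applies independently, and only in the 0.01% of cases where the computer is wrong - which is exactly the assumption being questioned below:

$$P(\text{combined failure}) = P(\text{computer wrong}) \times P(\text{human misses it}) = 0.0001 \times 0.01 = 10^{-6} = 0.0001\%$$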
 
  • #6


DaveC426913 said:
Since the human need only take over in that .01% of the time, that reduces the risk to (1% of 0.01%) = 0.0001%.

Not really, but the gist is right, compared to your idea that they're mutually exclusive probabilities.

You're working on the assumption that the human can accurately gauge when they are in the .01%.
 
  • #7


This seems like the wrong thread for a discussion of statistical mechanics of humans vs. computers.

Let me add that some of the safety systems are not electronic sensors; like a blowout preventer, they are physically actuated systems. This discussion is off topic, even by P&WA standards, as it's now divorced from the reality of reactor design.
 
  • #8


NeoDevin said:
You're working on the assumption that the human can accurately gauge when they are in the .01%.

Agreed. That's why I said it's not quite right. But the other way isn't either. The two systems are not freely independent; they are interdependent.
 
  • #9


DaveC426913 said:
Agreed. That's why I said it's not quite right. But the other way isn't either. The two systems are not freely independent; they are interdependent.

I disagree.

If the human and the computer perform the "calculation" independently, their conclusions are independent.

I do acknowledge my numbers are a tad exaggerated, but they're nowhere near as low as you presented.

Perhaps this should have its own thread.
 
  • #10


jarednjames said:
I disagree.

If the human and the computer perform the "calculation" independently, their conclusions are independent.

I do acknowledge my numbers are a tad exaggerated, but they're nowhere near as low as you presented.

Perhaps this should have its own thread.

"What if God or Evo or another Mentor was watching us..."

Man I hated that song, but I love the mentors!

OK...

I see it this way: computers are fundamentally less adaptable at this point, and unforeseen complications throw them in a way that they won't throw a human. That is a persistent "x" factor.
 
  • #11


Borek said:
Your wish is granted.

I can change the subject or do some additional editing if necessary.

*bows* Holy master of the green spangles... blessed be your thread split. :wink:
 
  • #12


nismaratwork said:
I see it this way: computers are fundamentally less adaptable at this point, and unforeseen complications throw them in a way that they won't throw a human. That is a persistent "x" factor.

Oh I completely agree, there's always something that will throw a computer, no matter how good you make the software.

But naturally, you need humans trained and capable of noticing when the computer has been thrown.

For me personally, for something like a nuclear plant I think you should have triple-redundancy-style systems, but if any process disagrees with the rest it is flagged for human intervention (plus perhaps a random sample chosen to be checked by a human).
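
Something like this, as a rough sketch of that policy - the names, tolerance and audit rate are all invented and purely illustrative, not real plant logic:

```python
import random

def decide(readings, tolerance=0.5, audit_rate=0.01):
    """Act only when all three channels agree; otherwise escalate to a human.

    All parameters are illustrative assumptions, not real plant settings.
    Returns an (action, value_or_None) pair.
    """
    a, b, c = readings
    agreements = [(x, y) for x, y in ((a, b), (a, c), (b, c))
                  if abs(x - y) <= tolerance]
    if len(agreements) < 3:
        # At least one channel disagrees with the rest: flag for a human.
        return ("flag_for_human", None)
    value = sum(readings) / 3.0
    if random.random() < audit_rate:
        # Random sample routed to a human as a spot check.
        return ("human_spot_check", value)
    return ("act", value)

print(decide([300.1, 300.3, 300.2]))  # usually ("act", ~300.2); sometimes a spot check
print(decide([300.1, 300.3, 312.9]))  # ("flag_for_human", None)
```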
 
  • #13


jarednjames said:
Oh I completely agree, there's always something that will throw a computer, no matter how good you make the software.

But naturally, you need humans trained and capable of noticing when the computer has been thrown.

For me personally, for something like a nuclear plant I think you should have triple-redundancy-style systems, but if any process disagrees with the rest it is flagged for human intervention (plus perhaps a random sample chosen to be checked by a human).

I prefer mechanical fail safes as a backup, such as burst-membranes for shockwaves, or melting a coupling, etc. Computer control, human control, and mechanical backups... AFAIK this is implemented in the USA.

On the other hand, if it could be made to work, clearly computers driving ALL vehicles would be preferable to humans, because it wouldn't take much to beat our safety records in cars. Still, that seems to be as far away as ever...
 
  • #14


nismaratwork said:
I prefer mechanical fail safes as a backup, such as burst-membranes for shockwaves, or melting a coupling, etc. Computer control, human control, and mechanical backups... AFAIK this is implemented in the USA.

And the UK, if my knowledge serves.

My previous post referred solely to the computer side of things and the tasks they perform. Overall, I'd want mechanical systems that can't be overridden (fixed, all-or-nothing devices), mechanical systems we operate manually (emergency stuff, used before the former are required), and then computer controls that handle the day-to-day running.
nismaratwork said:
On the other hand, if it could be made to work, clearly computers driving ALL vehicles would be preferable to humans, because it wouldn't take much to beat our safety records in cars. Still, that seems to be as far away as ever...

Our current tech could do it, but as far as I'm aware it's only recently that more work has been focussed on this area.

I'd say there are a lot more factors involved than just the computers though.
 
  • #15


jarednjames said:
And the UK, if my knowledge serves.

My previous post referred solely to the computer side of things and the tasks they perform. Overall, I'd want mechanical systems that can't be overridden (fixed, all-or-nothing devices), mechanical systems we operate manually (emergency stuff, used before the former are required), and then computer controls that handle the day-to-day running.


Our current tech could do it, but as far as I'm aware it's only recently that more work has been focussed on this area.

I'd say there are a lot more factors involved than just the computers though.

I don't know, you COULD have smart "roads" and cars, but having a computer navigate a car alone has been a rather abject failure... just ask DARPA. The system has to be at least as safe as a human in regular performance, and react equally well to an accident, and have a failsafe. That's a LOT to ask for, and if the technology exists, I haven't heard of a valid demonstration of it all working.
 
  • #16


nismaratwork said:
and if the technology exists, I haven't heard of a valid demonstration of it all working.

Precisely.

They are only now attempting it, so the systems don't exist as of yet. There are plenty of tests being carried out (Google it and you'll see the results).

They have cars that can drive themselves, but they need a lot of work and refinement.
 
  • #17


jarednjames said:
Precisely.

They are only now attempting it, so the systems don't exist as of yet. There are plenty of tests being carried out (Google it and you'll see the results).

They have cars that can drive themselves, but they need a lot of work and refinement.

I agree, but how does that translate into the technology existing yet? I assume much of the challenge lies in the software, and that's quite the technological challenge in and of itself.
 
  • #18
Okay, so now you’ve given this its own thread I can throw my twopenn'orth in and for once, you’re right in my territory. I started working as a controls engineer in the early 1980s when automation was still largely done by relay cabinets. In the mid 1980s PLCs were coming into common use – they had been around for a while by then but that is roughly when they started to really take off. At first our customers were really suspicious of them and we had the really illogical situation of having to have hard wired back-up systems for anything that was PLC controlled, which utterly defeated the advantage of the PLC. But we soon got past that and we started to see large installations that were entirely computer controlled. But it was always clear that emergency stop circuits and other safety circuits had, by law, to be hard wired.

Not any more. You can now get safety rated PLCs and you can even connect large emergency stop circuits over Ethernet. And it is all guaranteed perfectly safe. I saw a demonstration by one prominent industrial control computer manufacturer that involved one hundred discs, each controlled by an individual servo control axis. Each disc had a small aperture and behind each disc was a flashing LED. With the axes all connected on some conventional comms protocol, the axes were mis-synced, and random flashes of light appeared in unpredictable locations. They then switched to a connection on industrial Ethernet and suddenly all the axes were perfectly synced, with a wall of flashing lights all absolutely together. Oh yes, and part of the point was that this Ethernet network was simultaneously supporting several other connections between various other communicating devices. Industrial Ethernet uses the concept of an ‘ambulance’ lane, where urgent messages can bypass the general traffic.

I can anticipate that this will not convince you. I can only say that the question you have titled this thread with seems out-of-date to me.
 
  • #19


nismaratwork said:
I agree, but how does that translate into the technology existing yet? I assume much of the challenge lies in the software, and that's quite the technological challenge in and of itself.

The setup I've seen (well, my favourite) consisted of a radar device on the top of the car constantly scanning the environment and providing feedback which the computer could react to.

The technology to do this exists - you can see videos of it in action.

However, the systems (I'm talking about usable systems, not just massive radar and computer rigs bolted to cars) don't exist. They haven't scaled it down to the point where it's practical.

The software is the key though, getting it to react correctly to given situations.
 
  • #20
Ken Natton said:
I can anticipate that this will not convince you. I can only say that the question you have titled this thread with seems out-of-date to me.

Completely agree with what you've said there.

I'd also add that a computer doing a task a million times will be consistent where a human will not (or has a greater risk of fouling up). Not to mention that computers can react a heck of a lot faster.

Frankly, considering the premise of this thread is that a computer is right 99.99% of the time and a human only 99%, I'd say it's self-defeating. In that situation, it's obvious the computer is far more trustworthy in providing an answer than a human, given the computer is far less likely to make a mistake.
 
  • #21
More trustworthy within the rigid confines of its programming - see Watson's notable failures, which no human would make. Automation of major systems can't allow for that kind of inflexibility, so sub-systems are automated instead.

I'd add, when someone tells you that ANYTHING is perfectly safe, duck.
 
  • #22
nismaratwork said:
I'd add, when someone tells you that ANYTHING is perfectly safe, duck.

Oh yeah. It's the worst thing anyone can say to me.

I see that as a personal challenge.
 
  • #23
jarednjames said:
Not to mention that computers can react a heck of a lot faster.

False. Even at two billion operations per second - enough to count every man woman and child on the face of the planet many times over in the time it took you to read this - my Windoze computer will happily let me grow old and die while it tries to do the Herculean task of deleting a couple of files.
 
  • #24
jarednjames said:
Oh yeah. It's the worst thing anyone can say to me.

I see that as a personal challenge.

:biggrin:

Well then, Ken has challenged you! Avant!
 
  • #25
DaveC426913 said:
False. Even at two billion operations per second - enough to count every man woman and child on the face of the planet many times over in the time it took you to read this - my Windoze computer will happily let me grow old and die while it tries to do the Herculean task of deleting a couple of files.

Someone doesn't like MS... :wink:

Come to the *nix side... the water's fine...
 
  • #26
nismaratwork said:
Come to the *nix side... the water's fine...

I'm tempted. But don't you need to know what you're doing? A system with Linux is not really a low-maintenance / low-learning-curve tool for someone who doesn't want computer admin as a hobby, is it?
 
  • #27
DaveC426913 said:
I'm tempted. But don't you need to know what you're doing? A system with Linux is not really a low-maintenance / low-learning-curve tool for someone who doesn't want computer admin as a hobby, is it?

It depends how nerdy you are, but if you still remember DOS it's really not a leap. In fact, with some modern packages it doesn't even take that much. Really, you just have to keep up with security updates, but unlike MS they actually work.

There is also the joy of 'uptime', but I admit I wouldn't have learned unix/linux on a lark; it was for network administration and... stuff.

Still, I'm sure there are folks here who would be happy and able to walk you through it... *nix users are often quite... well... religious about it.
 
  • #28
Try something like Ubuntu in a VM.

It's true there are a lot of areas where you can totally mess up (for example, as soon as I got my *nix account in my lab, I broke X Windows and spent the rest of the day fixing it...:rolleyes:...luckily I had backed up xorg.conf), but modern Linux distributions are actually pretty user-friendly. I dual boot my laptop with Windows 7 and OpenSUSE 11.4, and if it weren't for needing Windows for AutoCAD and SolidWorks I'd go *nix full time.
 
  • #29
Oh, and just being able to set up your own shells and BNCs is worth it frankly...

edit: LEGAL ones.
 
  • #30
jhae2.718 said:
Try something like Ubuntu in a VM.

It's true there are a lot of areas where you can totally mess up (for example, as soon as I got my *nix account in my lab, I broke X Windows and spent the rest of the day fixing it...:rolleyes:...luckily I had backed up xorg.conf), but modern Linux distributions are actually pretty user-friendly. I dual boot my laptop with Windows 7 and OpenSUSE 11.4, and if it weren't for needing Windows for AutoCAD and SolidWorks I'd go *nix full time.

Can you not run them in a VM? My Ubuntu setup lets me run them fine through one.
 
  • #31
jarednjames said:
Can you not run them in a VM? My Ubuntu setup lets me run them fine through one.

I plan on doing a complete move to Linux over the summer, so I'll be doing that.
 
  • #32
jhae2.718 said:
I plan on doing a complete move to Linux over the summer, so I'll be doing that.

I will warn you that I have trouble with it on my laptop (the desktop runs them fine).

So as long as the computer is up to the task (at least dual core) you should be fine.

I've also found XP runs better than 7 in a VM, but that's down to personal choice.
 
  • #33
nismaratwork said:
I'd add, when someone tells you that ANYTHING is perfectly safe, duck.

jarednjames said:
Oh yeah. It's the worst thing anyone can say to me.

I see that as a personal challenge.

nismaratwork said:
:biggrin:

Well then, Ken has challenged you! Avant!


Okay, I’ll rise to that challenge. Well, I have to begin by making a concession. By a literal definition of the term ‘perfect’, I overstated the case when I said ‘perfectly’ safe. As you pointed out nismar, nothing is ever ‘perfectly’ safe. I’m sure that none of us are about to get involved in a pointless discussion about the definition of the word ‘perfect’, but I am going to contend that what I meant by saying that computer controlled safety circuits are ‘perfectly safe’ was ‘safe within reasonable limits’. On that basis I can rise to the challenge to defend that assertion.

If a formula 1 racing driver is killed in a crash during a race or during practice, there are inevitable cries that motor racing is unacceptably dangerous and should be banned. Then someone with a calmer head points out the simple truth that far more people are killed participating in some other, apparently much more innocuous, activity than are killed racing cars. Nobody seriously doubts that motor racing is dangerous. But most rational people accept the risks fall well within the bounds of acceptable levels.

Similarly, all industrial processes carry some level of risk. If you are going to fill a plant with machinery that whizzes round at great speed, with all manner of pushing, pulling, stamping, crushing, whirring, whizzing motions there are going to be significant dangers. We can draw the line of acceptable risk at absolutely no accident whatever, but then we had better close every industrial process in the world right now. Alternatively, we can accept the reality that we have to draw the line of acceptable risk somewhere above zero, and recognise that does mean that some will have to pay the price with their life, with their limbs or otherwise with their general health and well-being.

But that does not, of course, mean that when an industrial accident occurs we just say ‘meh, acceptable risk’. Modern industrial organisations employ significant numbers of people whose responsibility it is to monitor safety standards and ensure that all processes are kept as safe as they possibly can be. When an industrial accident involving significant injury occurs, an investigation into what occurred is mandatory, with a particular view to establishing whether anyone bypassed the safety standards in any way. And even when, as is commonly the case, it is found that the only person who bypassed the safety measures was the victim of the accident, questions are asked about what could have been done to make it impossible for that person to have bypassed them.

And of course it is not left to the personal judgement of a control engineer like me whether or not the fundamental design is ‘perfectly safe’. These days, not only do we have to perform risk assessments before the design phase, we also have to produce documentation after the fact demonstrating what measures were implemented to mitigate those risks. And on the matter of emergency stop circuits and other safety circuits, there are clear rules supported by the weight of law.

So, having lived some years with the accepted wisdom that safety circuits and emergency stop circuits should be hard wired, I, like my colleagues, was very sceptical when representatives of PLC manufacturers first started to talk to us about safety PLCs. They had to work hard to convince us to take the notion seriously. But ultimately, their strongest argument was that the safety authorities had reviewed them and deemed them to meet all the existing safety standards.

So in answer to the question ‘can computers be trusted’, the answer is that they already are in a wide variety of situations, and they invariably prove themselves to be fully worthy of that trust. And when I say computer controlled safety systems are perfectly safe, feel free to duck, but it is clear there is no rational basis to do so.

I did warn you that you were on my territory.
 
Last edited:
  • #34
For the record, I trust computers more than I trust humans.

I don't believe anything is perfectly safe, but I agree with Ken that it is "within reasonable limits".
 
  • #35
No they can't. We should go back to using pencils, paper and slide rules.



On another note, this thread is a bit odd: computers do exactly what they are told to do. Trust and trustworthiness imply that computers can be dishonest.
 