Can computer control systems be relied upon for critical processes?

  • Thread starter: BobG
  • Tags: Computers
In summary, the use of multiple sensors in critical systems, such as in airplanes and nuclear power plants, helps to reduce the chances of serious failures. However, in situations where human input is necessary, overrides may be implemented to allow for manual control. In other cases, expert software or computer programs may be more reliable and efficient in handling tasks compared to human operators.
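As a rough illustration of the redundancy idea in that summary, here is a minimal Python sketch of two-out-of-three (2oo3) sensor voting; the readings and the trip threshold are invented for illustration and not taken from any particular plant.

```python
# Minimal sketch of two-out-of-three (2oo3) voting over redundant sensors,
# the kind of arrangement the summary above alludes to. The threshold and
# the readings are hypothetical.

def trip_required(readings, threshold=350.0):
    """Return True if at least two of the three redundant readings exceed the threshold."""
    votes = sum(1 for r in readings if r > threshold)
    return votes >= 2

# A single faulty sensor can neither trip the plant on its own
# nor mask a genuine excursion seen by the other two.
print(trip_required([352.1, 349.8, 351.4]))  # True: two sensors agree it's high
print(trip_required([999.9, 340.2, 338.7]))  # False: the lone outlier is outvoted
```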
  • #36
xxChrisxx said:
No they can't. We should go back to using pencils, paper and slide rules.

The slide rule is a foreign concept to me, one of those things I've seen but never thought to have a go with.
On another note, this thread is a bit odd: computers do exactly what they are told to do. Trust and trustworthiness imply that computers can be dishonest.

That was actually something I was going to put in my previous post.

I was going to comment on the fact computers can't lie to you. They tell you exactly what they're supposed to.

The real question is whether we can trust the programmers to create suitable software capable of the job, and whether the hardware is up to the task.
 
  • #37
xxChrisxx said:
Trust and trustworthiness imply that computers can be dishonest.
No it doesn't. Can cheap brake pads on your car be trusted? Does that make them dishonest?

It simply means that, at some point, the behaviour of a system is more complex (more possible or unforeseen outcomes) than it seems on the surface. Computers are particularly prone to this.
 
  • #38
DaveC426913 said:
No it doesn't. Can cheap brake pads on your car be trusted? Does that make them dishonest?

Trusted to do what?

This isn't trust, it's expectation of function. Cheap pads will stop your car. Not as well as good ones, but you get what you pay for. 'Working as intended' is the phrase that sums it up.

A computer works blindly, so any lack of confidence comes from whoever designs the system. It's not the computer's fault per se. Trust is a legitimate word to use in this context, I suppose.

It just seems odd to me, as computers are just big calculators, and I'd never use the phrase 'I do/don't trust my calculator'.
 
Last edited:
  • #39
xxChrisxx said:
'I [STRIKE]do/[/STRIKE]don't trust my calculator'.

You would if you had my last one.
 
  • #40
jarednjames said:
You would if you had my last one.

A bad/good workman? :biggrin:


I keep thinking about this and wanting to add more regarding what Dave said about complexity. Complex systems of course introduce more possibility of errors. However, we can't just take 'do I trust a computer to do this job?' at face value. The computer's ability to do a job must be compared to a human's ability to do the same job.

I would 'trust' a human more in a scenario where something unexpected is likely to happen, as our brains are naturally more adaptable than a computer's hard programming.

I would 'trust' a computer more to control something that requires the processing of vast amounts of data, or something that needs to be done quickly.

I assume this thread was born from a discussion of the Fukushima thing, or some other disaster where computer control is used extensively. Computer control adds a level of safety that simply could not be achieved by a human, so even if they fail every now and again, it's certainly safer to put computers in control with a man in control of the computer.
 
Last edited:
  • #41
xxChrisxx said:
A bad/good workman? :biggrin:

I wish it was, and I'm normally the first to go there.

Unfortunately this was a true glitch. You could put the same thing in and get different answers each time (and we're talking simple operations).
I keep thinking about this and wanting to add more regarding what Dave said about complexity. Complex systems of course introduce more possibility of errors. However, we can't just take 'do I trust a computer to do this job?' at face value. The computer's ability to do a job must be compared to a human's ability to do the same job.

I would 'trust' a human more in a scenario where something unexpected is likely to happen, as our brains are naturally more adaptable than a computer's hard programming.

I would 'trust' a computer more to control something that requires the processing of vast amounts of data, or something that needs to be done quickly.

I assume this thread was born from a discussion of the Fukushima thing, or some other disaster where computer control is used extensively. Computer control adds a level of safety that simply could not be achieved by a human, so even if they fail every now and again, it's certainly safer to put computers in control with a man in control of the computer.

Completely agree. I would add to your second point that I'd trust a computer more with repetitive tasks as well - particularly those involving hundreds / thousands / millions of repetitions.
 
  • #42
xxChrisxx said:
Trusted to do what?

This isn't trust, it's expectation of function. Cheap pads will stop your car. Not as well as good ones, but you get what you pay for. 'Working as intended' is the phrase that sums it up.

A computer works blindly, so any lack of confidence comes from whoever designs the system. It's not the computer's fault per se. Trust is a legitimate word to use in this context, I suppose.

It just seems odd to me, as computers are just big calculators, and I'd never use the phrase 'I do/don't trust my calculator'.

The choice between 'trusted' and 'expectation of function' is semantic. The fact remains that distrusting an inanimate object does not imply any element of dishonesty.
 
  • #43
Ken Natton said:
Okay, I’ll rise to that challenge. Well, I have to begin by making a concession. By a literal definition of the term ‘perfect’, I overstated the case when I said ‘perfectly’ safe. As you pointed out nismar, nothing is ever ‘perfectly’ safe. I’m sure that none of us are about to get involved in a pointless discussion about the definition of the word ‘perfect’, but I am going to contend that what I meant by saying that computer controlled safety circuits are ‘perfectly safe’ was ‘safe within reasonable limits’. On that basis I can rise to the challenge to defend that assertion.

If a Formula 1 racing driver is killed in a crash during a race or during practice, there are inevitable cries that motor racing is unacceptably dangerous and should be banned. Then someone with a calmer head points out the simple truth that far more people are killed participating in some apparently much more innocuous activity than are killed racing cars. Nobody seriously doubts that motor racing is dangerous. But most rational people accept that the risks fall well within the bounds of acceptable levels.

Similarly, all industrial processes carry some level of risk. If you are going to fill a plant with machinery that whizzes round at great speed, with all manner of pushing, pulling, stamping, crushing, whirring, whizzing motions there are going to be significant dangers. We can draw the line of acceptable risk at absolutely no accident whatever, but then we had better close every industrial process in the world right now. Alternatively, we can accept the reality that we have to draw the line of acceptable risk somewhere above zero, and recognise that does mean that some will have to pay the price with their life, with their limbs or otherwise with their general health and well-being.

But that does not, of course, mean that when an industrial accident occurs we just say ‘meh, acceptable risk’. Modern industrial organisations employ significant numbers of people whose responsibility it is to monitor safety standards and ensure that all processes are kept as safe as they possibly can be. When an industrial accident involving significant injury occurs, investigations into what occurred, with a particular view to establishing whether anyone bypassed the safety standards in any way, are mandatory. And even when, as is commonly the case, it is found that the only person who bypassed the safety measures was the victim of the accident, questions are asked about what could have been done to have made it impossible for that person to have bypassed the safety measures.

And of course it is not left to the personal judgement of a control engineer like me whether or not the fundamental design is ‘perfectly safe’. These days, not only do we have to perform risk assessments before the design phase, we also have to produce documentation after the fact demonstrating what measures were implemented to mitigate those risks. And on the matter of emergency stop circuits and other safety circuits, there are clear rules supported by the weight of law.

So, having lived some years with the accepted wisdom that safety circuits and emergency stop circuits should be hard wired, I, like my colleagues, was very sceptical when representatives of PLC manufacturers first started to talk to us about safety PLCs. They had to work hard to convince us to take the notion seriously. But ultimately, their strongest argument was that the safety authorities had reviewed them and deemed them to meet all the existing safety standards.

So in answer to the question ‘can computers be trusted?’, the answer is that they already are in a wide variety of situations, and they invariably prove themselves to be fully worthy of that trust. And when I say computer controlled safety systems are perfectly safe, feel free to duck, but it is clear there is no rational basis to do so.

I did warn you that you were on my territory.

No arguments here, and no desire or need to bicker over "perfect". Thanks for taking the challenge so well!
 
  • #44
DaveC426913 said:
The choice between 'trusted' and 'expectation of function' is semantic. The fact remains that distrusting an inanimate object does not imply any element of dishonesty.

"...Other times he would accuse chestnuts of being lazy..." (Dr. Evil)
 
  • #45
xxChrisxx said:
computers do exactly what they are told to do

Of course, what we actually tell them to do may not be what we intend to tell them to do. Any programmer can tell lots of stories about this, from personal experience. :wink:
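A tiny, hypothetical example of that gap between what we tell a computer and what we meant; the numbers are invented, but the truncating-division slip is the sort of thing jtbell is describing.

```python
# The intent: average three readings.
readings = [7, 8, 8]

average_intended = sum(readings) / len(readings)   # 7.666..., what the author meant
average_as_told  = sum(readings) // len(readings)  # 7, because '//' truncates

print(average_intended, average_as_told)  # 7.666666666666667 7
# The computer did exactly what it was told; it just wasn't told what was meant.
```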
 
  • #46
xxChrisxx said:
No they can't. We should go back to using pencils, paper and slide rules.



On another note, this thread is a bit odd: computers do exactly what they are told to do. Trust and trustworthiness imply that computers can be dishonest.

I don't recall asking for a BSOD once, or for my old Pentium 90 to burn. :wink:
 
  • #47
jtbell said:
Of course, what we actually tell them to do may not be what we intend to tell them to do. Any programmer can tell lots of stories about this, from personal experience. :wink:

Skynet! iRobot! DOOOOooooooOOOOooooOOOoooom! :wink:

And yes, I'm kidding!
 
  • #48
xxChrisxx said:
...so even if they fail every now and again, it's certainly safer to put computers in control with a man in control of the computer.
Yes but this point was brought up before (when dealing with probability of error).

If you put a (fallible) person in charge of the computer, then they may override the computer's decision when it should not be overridden.

Put another way: if a computer always said "I am failing now," then it would be easy for a human to know when to step in. The issue comes when the (1% fallible) human erroneously thinks the computer is failing (making a bad decision).

Put a third way: if you put a less reliable system (1% failure) in charge of a more reliable system (0.01% failure), then the whole system is only as reliable as the less reliable system (1% failure). So no, not necessarily safer.
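A back-of-the-envelope sketch of that arithmetic, using the failure rates quoted above and assuming, for simplicity, that the two failure modes are independent:

```python
# If either the computer errs (0.01%) or the human wrongly overrides a good
# decision (1%), the combined system has failed. Treating the two as
# independent events:

p_computer = 0.0001  # computer makes a bad decision
p_human    = 0.01    # human erroneously overrides a good decision

p_fail = 1 - (1 - p_computer) * (1 - p_human)
print(f"combined failure probability: {p_fail:.4%}")  # ~1.01%, dominated by the human
```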
 
Last edited:
  • #49
This meme that 'a computer can only do what its programmer tells it to do' is fallacious. It is ignorant of the phenomenon of emergent behaviour.
 
  • #50
DaveC426913 said:
This claim that 'a computer can only do what its programmer tells it to do' is fallacious. It is ignorant of the phenomenon of emergent behaviour.

...And it shows a real lack of Asimovian training...

Seriously, fiction or no, that's one hell of a loophole to plug, and that's just ONE for a highly sophisticated AI.

I'd add, when a computer fails it doesn't always crash... how much worse is a failure that misleads rather than prompting an override?

edit: Still, they seem to do the job in quite a few situations, and you can put redundancy into place that is not possible with humans.

One practical concern: in a highly automated society, those electronics had best be WELL shielded, or suddenly an EMP of even crude design becomes a weapon like no other. Hell, it already is.
 
  • #51
DaveC426913 said:
Put a third way: if you put a less reliable system (1% failure) in charge of a more reliable system (0.01% failure), then the whole system is only as reliable as the less reliable system (1% failure). So no, not necessarily safer.

Argument for argument's sake.

Closed-loop control with a man on the override button is distinctly safer than manual control of a complex system. Automated control gives the man less opportunity to make errors because he has fewer actions to perform.

Risk = probability * number of events * outcome.

If he has to do 1000 operations manually with a 1% error rate, that's 10 errors.
With the computer taking control, he only does 100 operations, so that's 1 error.
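The same arithmetic as a small sketch, using the 1% error rate per manual action from the post:

```python
error_rate = 0.01      # 1% chance of a mistake per manual action

manual_ops = 1000      # everything done by hand
supervised_ops = 100   # the computer handles the rest; the human performs 100 actions

print(error_rate * manual_ops)      # 10.0 expected errors
print(error_rate * supervised_ops)  # 1.0 expected error
```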
 
  • #52
xxChrisxx said:
Argument for argument's sake.

Closed-loop control with a man on the override button is distinctly safer than manual control of a complex system. Automated control gives the man less opportunity to make errors because he has fewer actions to perform.

Risk = probability * number of events * outcome.

If he has to do 1000 operations manually with a 1% error rate, that's 10 errors.
With the computer taking control, he only does 100 operations, so that's 1 error.

Chernobyl.
 
  • #53
xxChrisxx said:
Argument for argument's sake.

Closed-loop control with a man on the override button is distinctly safer than manual control of a complex system. Automated control gives the man less opportunity to make errors because he has fewer actions to perform.

Risk = probability * number of events * outcome.

If he has to do 1000 operations manually with a 1% error rate, that's 10 errors.
With the computer taking control, he only does 100 operations, so that's 1 error.
Agreed. There's an interplay. I was just pointing out that it's not as ideal as a human overriding a device only when the device announces it is failing.
 
  • #54
nismaratwork said:
Chernobyl.

Nothing in the world is truly idiot proof. We should also make a distinction between error and blunder.
 
  • #55
DaveC426913 said:
This meme that 'a computer can only do what its programmer tells it to do' is fallacious. It is ignorant of the phenomenon of emergent behaviour.

Not arguing with you, but could you give some examples before I rush off into the wide expanse that is Google? (I'm really interested in this sort of thing.)
 
  • #56
jarednjames said:
Not arguing with you, but could you give some examples before I rush off into the wide expanse that is Google? (I'm really interested in this sort of thing.)
Um.

Can Conway's Game of Life be trusted to generate patternless iterations that do not lend themselves to analysis and comparison to life?

Should the programmer, when he writes the half dozen or so lines it requires to invoke CGoL, be held accountable for the behaviour of structures like the glider gun (http://en.wikipedia.org/wiki/Gun_(cellular_automaton))?

Is it meaningful to say that this computer program is "only doing what its programmer told it to do"?


If so, then the principle can be scaled up to cosmic proportions. The universe exhibits predictable and trustworthy behaviour at all times because it is only doing what the laws of physics allow it to do.


The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that "design" and "organization" can spontaneously emerge in the absence of a designer. For example, philosopher and cognitive scientist Daniel Dennett has used the analogue of Conway's Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws governing our own universe.
http://en.wikipedia.org/wiki/Conway's_Game_of_Life#Origins
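For anyone who wants to see it rather than Google it, here is a minimal Python sketch of Conway's Game of Life; the set-based representation and the starting glider are arbitrary choices, but it is enough to watch behaviour emerge that the handful of rule lines never mention.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A single glider. Nothing in `step` mentions movement, yet the pattern
# crawls one cell diagonally every four generations.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(12):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted by (+3, +3)
```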
 
Last edited by a moderator:
  • #57
xxChrisxx said:
Nothing in the world is truly idiot proof. We should also make a distinction between error and blunder.

Fair enough, but blunder is in our nature as well, and once we stop trusting our systems (a form of DaveC's example) we're in trouble. It's not a simple thing, even with mechanical safeties.

@DaveC: there is another thing: you can't yet hack people, but you can hack a computer. That is something that undermines all of this.
 
  • #58
DaveC426913 said:
Um.

Can Conway's Game of Life be trusted to generate patternless iterations that do not lend themselves to analysis and comparison to life?

Should the programmer, when he writes the half dozen or so lines it requires to invoke CGoL, be held accountable for the behaviour of structures like the glider gun (http://en.wikipedia.org/wiki/Gun_(cellular_automaton))?

Is it meaningful to say that this computer program is "only doing what its programmer told it to do"?


If so, then the principle can be scaled up to cosmic proportions. The universe exhibits predictable and trustworthy behaviour at all times because it is only doing what the laws of physics allow it to do.


http://en.wikipedia.org/wiki/Conway's_Game_of_Life#Origins

Thanks.

@Nismar: People can be bribed.
 
Last edited by a moderator:
  • #59
nismaratwork said:
@DaveC: there is another thing: you can't yet hack people...
Of course you can. Consider the essence of hacking. Anything you can do to a computer could be done to a human easily enough.

Alter his programming? Sure. Give him alcohol. (With the same input, we now get different output.)

Insert a pernicious subprogram? Sure. Shower him with propaganda, changing his political values (his output may change to something covert that does not benefit the system, and may hurt it).
 
  • #60
DaveC426913 said:
(With the same input, we now get different output.)

And usually a far more reliable and honest output than you'd get otherwise. :wink:
 
  • #61
jarednjames said:
And usually a far more reliable and honest output than you'd get otherwise. :wink:

That's because this alteration breaks other subprograms, such as Inhibitions and ThingsBestLeftUnsaid. :smile:
 
  • #62
DaveC426913 said:
That's because this alteration breaks other subprograms, such as Inhibitions and ThingsBestLeftUnsaid. :smile:

:smile:
 
  • #63
DaveC426913 said:
Of course you can. Consider the essence of hacking. Anything you can do to a computer could be done to a human easily enough.

Alter his programming? Sure. Give him alcohol. (With the same input, we now get different output.)

Insert a pernicious subprogram? Sure. Shower him with propaganda, changing his political values (his output may change to something covert that does not benefit the system, and may hurt it).

It's not the same, not as easy, not as reliable... just ask the CIA and every military in the modern world... people are too variable.

Yeah, stick them with amphetamines and barbiturates, or Versed and scopolamine, and you'll get something (who knows what), and you can go 'Clockwork Orange' on them, but really it's not that simple.

In a few minutes many people here could insert a routine into these forums to cause a temporary breakdown, or gain administrative privileges. There is no equivalent for humans that isn't M.I.C.E., and it takes time and has uncertain outcomes.

*bribery falls under M.I.C.E.
 
  • #64
nismaratwork said:
It's not the same, not as easy, not as reliable... just ask the CIA and every military in the modern world... people are too variable.

Yeah, stick them with amphetamines and barbiturates, or Versed and scopolamine, and you'll get something (who knows what), and you can go 'Clockwork Orange' on them, but really it's not that simple.

In a few minutes many people here could insert a routine into these forums to cause a temporary breakdown, or gain administrative privileges. There is no equivalent for humans that isn't M.I.C.E., and it takes time and has uncertain outcomes.

*bribery falls under M.I.C.E.

But you're bifurcating bunnies and missing the point.

Simply put, humans are, like computers, susceptible to alterations in their expected tasks.


(I just heard on the news about a Washington Airport Tower Controller that "crashed" without a "failover system" in place. :biggrin:
http://www.suite101.com/content/air-traffic-controller-sleeps-while-jets-race-toward-white-house-a361811 )
 
Last edited by a moderator:
  • #65
DaveC426913 said:
But you're bifurcating bunnies and missing the point.

Simply put, humans are, like computers, susceptible to alterations in their expected tasks.


(I just heard on the news about a Washington Airport Tower Controller that "crashed" without a "failover system" in place. :biggrin:
http://www.suite101.com/content/air-traffic-controller-sleeps-while-jets-race-toward-white-house-a361811 )

Oh, don't get me wrong, humans fail, but consider what Stuxnet did compared to what it would take human agents to accomplish.

Hacking is a big deal: it affords precise control, or at least a range of precision options that can be covertly and rapidly implemented from a distance. A person can fall asleep (ATC), or be drunk, or even crooked, but they will show signs of this and a good observer can catch it. It is far easier to program something malicious than it is to induce a human to commit massive crimes in situ, with no hope of escape.

edit: "bifurcating bunnies" :smile: Sorry, I forgot to acknowledge that. Ever see a show called 'Father Ted'? Irish program, and one episode involves a man who is going to LITERALLY split hares...
*he doesn't, the bunnies live to terrorize a bishop
 
Last edited by a moderator:
  • #66
nismaratwork said:
It is far easier to program something malicious than it is to induce a human to commit massive crimes in situ, with no hope of escape.
It's just a matter of scale. Same principle, different effort. It doesn't change the things that need to be in place to prevent it (like having a failover for the human! http://news.yahoo.com/s/ap/20110324/ap_on_bi_ge/us_airport_tower :eek:).
 
Last edited by a moderator:
  • #67
DaveC426913 said:
It's just a matter of scale. Same principle, different effort. It doesn't change the things that need to be in place to prevent it (like having a failover for the human! http://news.yahoo.com/s/ap/20110324/ap_on_bi_ge/us_airport_tower :eek:).

Call me impressed by scale. :-p


Still... ATC's are stupidly overworked...
 
Last edited by a moderator:
  • #68
This is not really apropos of anything that is currently being said, but a thought relating to this topic did occur to me, relating to this issue of ‘trust’ and BobG’s original question, which was about trusting the computer to the point of making no provision for human override. What I was just remembering is that all this computer technology is usually described as a spin-off of the space race, and the point is that there was significant computer control on the Apollo missions. Doubtless BobG would point out that the missions were flown by human intelligence. But there were significant and vital systems that were computer controlled. A former boss of mine from many years ago, when we were first getting to grips with computer controlled systems, would point out, if one of us was a little too insistent with the objection ‘but what if it fails?’, that if one of those working on the Apollo missions had said ‘but what if it fails?’ the answer would have been ‘it mustn’t fail’.

And, in point of fact, the issue with industrial control systems is not actually just one of safety. The key issue really is reliability. Industrial plants usually calculate their efficiency in terms of actual output against projected capacity, and in the West certainly, for the most part, efficiencies well in excess of 90% are expected. If computer control systems were that unreliable, or that prone to falling over, production managers would have no compunction whatever about depositing them in the nearest skip. The major imperative for using computer control systems is, of course, reduced labour costs. But they would not have found such widespread use if they were anything like as vulnerable to failure as some contributors to this thread seem to believe they are.
 