The sequence of events at Chernobyl and the effects of radiation

In summary, the Chernobyl disaster was caused by a poorly designed reactor combined with the actions of its operators. The operators attempted to run an experiment on the reactor, bypassing safety systems while a neutron poison built up in the core. The result was a rapid power excursion and a steam explosion that blew the reactor open. Because the reactor was graphite moderated, the burning graphite contributed to the release of radiation into the atmosphere. The disaster had devastating effects on both the immediate area and the surrounding countries, and proper safety protocols were not followed.
  • #71
Morbius said:
I sincerely hope that operators today are better than the boneheads at Three Mile Island who just about killed an entire industry with their stupidity.

I'm not sure it is their fault. After all, TMI finally proved the robustness of Western nuclear power plant design: the worst accident in decades of Western operation did not produce a single victim - something other industries cannot claim.
The counterexample was Chernobyl, which showed that if you make a stupid design and do stupid things with it, things really can go very sour - though not as sour as some fantasies claimed.

So objectively, TMI should have reassured the public that nuclear power, at least in the West, is rather safe. It didn't. I think several factors played a role at the same time. There was an ideological movement in the '70s (which later partly became the green movement) that directed its anarchist-inspired ideological battle against everything that was technological, state-driven, or linked in origin to the military-industrial complex. Nuclear power was of course a perfect target.
But there was also the opaque side of the nuclear industry, partly through its link to the military, and a kind of justified mistrust by the public of the scientists and leaders who had oversold several aspects of technology instead of giving a more moderate and accurate view of things.

All this meant that the public's confidence was lost.
 
  • #72
vanesch said:
After all, TMI finally proved the robustness of Western nuclear power plant design: the worst accident in decades of Western operation did not produce a single victim - something other industries cannot claim.
vanesch,

I agree wholeheartedly. One of the problems for the nuclear industry is, paradoxically, its safety record.

Without a "modest number" of accidents [ whatever that means ], the public doesn't see the safety.

For example, take the airline industry. There's a crash every few years, roughly one every million or so flights, so people get an idea of what the risk of flying is.

For the nuclear industry, there were no accidents for many years. However, the anti-nukes successfully promulgated the idea that there was a BIG, BAD accident just waiting to happen.

Since the public hadn't experienced an actual accident, the public perception of an accident was formed by the wild fantasies of the anti-nukes. The nuclear industry couldn't dispel that with evidence, since there hadn't been an accident.

Then along comes Three Mile Island. Because of the above, it scared many people for a week.
The analysis of the actual consequences that came later did not percolate into the public's mind.
How many people have actually read the Rogovin Report?

So the public came away with the perception that nuclear accidents can happen, and that we got "lucky" and "dodged a bullet" with Three Mile Island.

So not having any accidents, or having just a single accident, doesn't help "normalize" the public's perception of the risks of nuclear power. The fantasies can run free.

Additionally, I believe it was Henry Kendall of the UCS who rejected the argument that Three Mile Island shows how safe nuclear power is. His statement was that you don't prove how safe nuclear power is by having accidents.

My take is somewhere in the middle. The accident at Three Mile Island bolsters confidence in the things that worked, and detracts from the things that didn't work. Clearly, the wisdom of having a containment building is shown by Three Mile Island, especially in comparison to Chernobyl.

Much of the equipment - the containment building and the systems that served to mitigate the accident - performed well and was shown to be worthy of the confidence we place in it.

The things that didn't work were the stuck valve and the operators. So the valves and the operators were shown to be less reliable than one would have hoped.

On balance, I agree with you: Three Mile Island was a non-event as far as public safety is concerned.

Unfortunately, I don't think that's how the public perceives it.

Dr. Gregory Greenman
Physicist
 
  • #73
What keeps coming up is that Morbius calls the operators stupid, yet they clearly were not stupid enough to fail the operator training that was in place. So is the underlying cause of the accidents not simply that the operators were incompetent, but rather that the system which allowed them to become operators was incompetent? And is that why certain measures have since been introduced to stop operators over-riding safety mechanisms? Apologies if that was what you were getting at all along, but the language used obscured it if so.

Incidentally, in my country - which I'm pretty sure was actually the first country to introduce nuclear regulation - operators are not tested by the regulator. The training is intense but in no way comparable to a technical university course; otherwise we would never find anyone to fill the roles, which doesn't sound very good! The exception is marine nuclear power training, which does indeed equal or exceed university-level training. It is also common here for ex-naval staff to go into regulation rather than the regular industry jobs - probably a power thing!
 
  • #74
I had an interesting course on Human Error, and I found the article below to be a real eye-opener. It's a bit of a read, but worth it, as it gives "finger pointers" a deeper view to consider. It's a document from a human factors specialist, in three parts:

Thoughts on the New View of Human Error Part I: Do Bad Apples Exist?
by Heather Parker, Human Factors Specialist, System Safety, Civil Aviation, Transport Canada
The following article is the first of a three-part series describing some aspects of the “new view” of human error (Dekker, 2002). This “new view” was introduced to you in the previous issue of the Aviation Safety Letter (ASL) with an interview by Sidney Dekker. The three-part series will address the following topics:

Thoughts on the New View of Human Error Part I: Do Bad Apples Exist?
Thoughts on the New View of Human Error Part II: Hindsight Bias
Thoughts on the New View of Human Error Part III: “New View” Accounts of Human Error
http://www.tc.gc.ca/CivilAviation/publications/tp185/4-06/Pre-flight.htm#HumanError
Before debating if bad apples exist, it is important to understand what is meant by the term “bad apple.” Dekker (2002) explains the bad apple theory as follows: “complex systems would be fine, were it not for the erratic behaviour of some unreliable people (bad apples) in them; human errors cause accidents—humans are the dominant contributor to more than two-thirds of them; failures come as unpleasant surprises—they are unexpected and do not belong in the system; failures are introduced to the system only through the inherent unreliability of people.”
The application of the bad apple theory, as described above by Dekker (2002) makes great, profitable news, and it is also very simple to understand. If the operational errors are attributable to poor or lazy operational performance, then the remedy is straightforward—identify the individuals, take away their licences, and put the evil-doers behind bars. The problem with this view is that most operators (pilots, mechanics, air traffic controllers, etc.) are highly competent and do their jobs well. Punishment for wrongdoing is not a deterrent when the actions of the operators involved were actually examples of “right-doing”—the operators were acting in the best interests of those charged to their care, but made an “honest mistake” in the process; this is the case in many operational accidents.

Can perfect pilots and perfect AMEs function in an imperfect system?
This view is a more complex view of how humans are involved in accidents. If the operational errors are attributable to highly competent operational performance, how do we explain the outcome and how do we remedy the situation? This is the crux of the complex problem—the operational error is not necessarily attributable to the operational performance of the human component of the system—rather the operational error is attributable to, or emerges from, the performance of the system as a whole.
The consequences of an accident in safety-critical systems can be death and/or injury to the participants (passengers, etc.). Society demands operators be superhuman and infallible, given the responsibility they hold. Society compensates and cultures operators in a way that demands they perform without error. This is an impossibility—humans, doctors, lawyers, pilots, mechanics, and so on, are fallible. It should be the safety-critical industry’s goal to learn from mistakes, rather than to punish mistakes, because the only way to prevent mistakes from recurring is to learn from them and improve the system. Punishing mistakes only serves to strengthen the old view of human error; preventing true understanding of the complexity of the system and possible routes for building resilience to future mistakes.
To learn from the mistakes of others, accident and incident investigations should seek to investigate how people’s assessments and actions would have made sense at the time, given the circumstances that surrounded them (Dekker, 2002). Once it is understood why their actions made sense, only then can explanations of the human–technology–environment relationships be discussed, and possible means of preventing recurrence can be developed. This approach requires the belief that it is more advantageous to safety if learning is the ultimate result of an investigation, rather than punishment.
In the majority of accidents, good people were doing their best to do a good job within an imperfect system. Pilots, mechanics, air traffic controllers, doctors, engineers, etc., must pass rigorous work requirements. Additionally, they receive extensive training and have extensive systems to support their work. Furthermore, most of these people are directly affected by their own actions, for example, a pilot is onboard the aircraft they are flying. This infrastructure limits the accessibility of these jobs to competent and cognisant individuals. Labelling and reprimanding these individuals as bad apples when honest mistakes are made will only make the system more hazardous. By approaching these situations with the goal of learning from the experience of others, system improvements are possible. Superficially, this way ahead may seem like what the aviation industry has been doing for the past twenty years. However, more often than not, we have only used different bad apple labels, such as complacent, inattentive, distracted, unaware, to name a few; labels that only seek to punish the human component of the system. Investigations into incidents and accidents must seek to understand why the operator’s actions made sense at the time, given the situation, if the human performance is to be explained in context and an understanding of the underlying factors that need reform are to be identified. This is much harder to do than anticipated.
In Part II, the “hindsight bias” will be addressed; a bias that often affects investigators. Simply put, hindsight means being able to look back, from the outside, on a sequence of events that lead to an outcome, and letting the outcome bias one’s view of the events, actions and conditions experienced by the humans involved in the outcome (Dekker, 2002). In Part III, we will explore how to write accounts of human performance following the “new view” of human error.
Part II:
Hindsight Bias
Have you ever pushed on a door that needed to be pulled, or pulled on a door that needed to be pushed—despite signage that indicated to you what action was required? Now consider this same situation during a fire, with smoke hampering your sight and breathing. Why did you not know which way to move the door? There was a sign; you’ve been through the door before. Why would you not be able to move the door? Imagine that because of the problem moving the door, you inhaled too much smoke and were hospitalized for a few days. During your stay in the hospital, an accident investigator visits you. During the interview, the investigator concludes you must have been distracted, such that you did not pay attention to the signage on the door, and that due to your experience with the door, he cannot understand why you did not move the door the right way. Finally, he concludes there is nothing wrong with the door; that rather, it was your unexplainable, poor behaviour that was wrong. It was your fault.
The investigator in this example suffered from the hindsight bias. With a full view of your actions and the events, he can see, after the fact, what information you should have paid attention to and what experience you should have drawn from. He is looking at the scenario from outside the situation, with full knowledge of the outcome. Hindsight means being able to look back, from the outside, on a sequence of events that lead to an outcome you already know about; it gives you almost unlimited access to the true nature of the situation that surrounded people at the time; it also allows you to pinpoint what people missed and shouldn’t have missed; what they didn’t do but should have done (Dekker, 2002).
Thinking more about the case above, put yourself inside the situation and try to understand why you had difficulty exiting. In this particular case, the door needed to be pulled to exit because it was an internal hallway door. Despite a sign indicating the need to pull the door open (likely put there after the door was installed) the handles of the door were designed to be pushed—a horizontal bar across the middle of the door. Additionally, in a normal situation, the doors are kept open by doorstops to facilitate the flow of people; so you rarely have to move the door in your normal routine. In this particular case, it was an emergency situation, smoke reduced your visibility and it is likely you were somewhat agitated due to the real emergency. When looking at the sequence of actions and events from inside the situation, we can explain why you had difficulty exiting safely: a) the design of the door, b) the practice of keeping the fire doors open with doorstops, c) the reduced visibility, and d) the real emergency, are all contributing and underlying factors that help us understand why difficulty was encountered.
According to Dekker (2002), hindsight can bias an investigation towards conclusions that the investigator now knows (given the outcome) that were important, and as a result, the investigator may assess people’s decisions and actions mainly in light of their failure to pick up the information critical to preventing the outcome. When affected by hindsight bias, an investigator looks at a sequence of events from outside the situation with full knowledge of the events and actions and their relationship to the outcome (Dekker, 2002).
The first step in mitigating the hindsight bias is to work towards the goal of learning from the experience of others to prevent recurrence. When the goal is to learn from an investigation, understanding and explanation is sought. Dekker (2002) recommends taking the perspective from “inside the tunnel,” the point of view of people in the unfolding situation. The investigator must guard him/herself against mixing his/her reality with the reality of the people being investigated (Dekker, 2002). A quote from one investigator in a high-profile accident investigation states: “…I have attempted at all times to remind myself of the dangers of using the powerful beam of hindsight to illuminate the situations revealed in the evidence. Hindsight also possesses a lens which can distort and can therefore present a misleading picture: it has to be avoided if fairness and accuracy of judgment is to be sought.” (Hidden, 1989)
Additionally, when writing the investigation report, any conclusions that could be interpreted as coming from hindsight must be supported by analysis and data; a reader must be able to trace through the report how the investigator came to the conclusions. In another high-profile accident, another investigator emphatically asked: “Given all of the training, experience, safeguards, redundant sophisticated electronic and technical equipment and the relatively benign conditions at the time, how in the world could such an accident happen?” (Snook, 2000). To mitigate the tendency to view the events with hindsight, this investigator ensured all accounts in his report clearly stated the goal of the analyses: to understand why people made the assessments or decisions they made—why these assessments or decisions would have made sense from the point of view of the people inside the situation. Learning and subsequent prevention or mitigation activities are the ultimate goals of accident investigation—having agreement from all stakeholders on this goal will go a long way to mitigating the hindsight bias.
Dekker, S., The Field Guide to Human Error Investigations, Ashgate, England, 2002.
Dekker, S., The Field Guide to Understanding Human Error, Ashgate, England, 2006.
Hidden, A., Investigation into the Clapham Junction Railway Accident, Her Majesty’s Stationery Office, London, England, 1989.
Snook, S. A., Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq, Princeton University Press, New Jersey, 2000.
 
  • #75
Part III

“New View” Accounts of Human Error
The “old view” of human error has its roots in human nature and the culture of blame. We have an innate need to make sense of uncertainty, and find someone who is at fault. This need has its roots in humans needing to believe “that it can’t happen to me.” (Dekker, 2006)
The tenets of the “old view” include (Dekker, 2006):
Human frailties lie behind the majority of remaining accidents. Human errors are the dominant cause of remaining trouble that hasn’t been engineered or organized away yet.
Safety rules, prescriptive procedures and management policies are supposed to control this element of erratic human behaviour. However, this control is undercut by unreliable, unpredictable people who still don’t do what they are supposed to do. Some bad apples keep having negative attitudes toward safety, which adversely affects their behaviour. So not attending to safety is a personal problem; a motivational one; an issue of mere individual choice.
The basically safe system, of multiple defences carefully constructed by the organization, is undermined by erratic people. All we need to do is protect it better from the bad apples.
What we have learned thus far, though, is that the “old view” is deeply counterproductive. It has been tried for over two decades without noticeable effect (e.g. the Flight Safety Foundation [FSF] still identifies 80 percent of accidents as caused by human error), and it assumes the system is safe and that, by removing the bad apples, the system will continue to be safe. The fundamental attribution error is the psychological way of describing the “old view”: all humans have a tendency, when examining the behaviour of other people, to overestimate the degree to which their behaviour results from permanent characteristics, such as attitude or personality, and to underestimate the influence of the situation.
“Old view” explanations of accidents can include things like: somebody did not pay enough attention; if only somebody had recognized the significance of this indication, of that piece of data, then nothing would have happened; somebody should have put in a little more effort; somebody thought that making a shortcut on a safety rule was not such a big deal, and so on. These explanations conform to the view that human error is a cause of trouble in otherwise safe systems. In this case, you stop looking any further as soon as you have found a convenient “human error” to blame for the trouble. Such a conclusion and its implications are thought to get to the causes of system failure.

“Old view” investigations typically single out particularly ill-performing practitioners; find evidence of erratic, wrong or inappropriate behaviour; and bring to light people’s bad decisions, their inaccurate assessments, and their deviations from written guidance or procedures. They also often conclude how frontline operators failed to notice certain data, or did not adhere to procedures that appeared relevant only after the fact. If this is what they conclude, then it is logical to recommend the retraining of particular individuals, and the tightening of procedures or oversight.

Why is it so easy and comfortable to adopt the “old view”? First, it is cheap and easy. The “old view” believes failure is an aberration, a temporary hiccup in an otherwise smoothly-performing, safe operation. Nothing more fundamental, or more expensive, needs to be changed. Second, in the aftermath of failure, pressure can exist to save public image; to do something immediately to return the system to a safe state. Taking out defective practitioners is always a good start to recovering the perception of safety. It tells people that the mishap is not a systemic problem, but just a local glitch in an otherwise smooth operation. You are doing something; you are taking action. The fundamental attribution error and the blame cycle are alive and well. Third, personal responsibility and the illusions of choice are two other reasons why it is easy to adopt this view. Practitioners in safety-critical systems usually assume great personal responsibility for the outcomes of their actions. Practitioners are trained and paid to carry this responsibility. But the flip side of taking this responsibility is the assumption that they have the authority, and the power, to match the responsibility. The assumption is that people can simply choose between making errors and not making them—independent of the world around them. In reality, people are not immune to pressures, and organizations would not want them to be. To err or not to err is not a choice. People’s work is subject to and constrained by multiple factors.

To actually make progress on safety, Dekker (2006) argues that you must realize that people come to work to do a good job. The system is not basically safe—people create safety during normal work in an imperfect system. This is the premise of the local rationality principle: people are doing reasonable things, given their point of view, focus of attention, knowledge of the situation, objectives, and the objectives of the larger organization in which they work. People in safety-critical jobs are generally motivated to stay alive and to keep their passengers and customers alive. They do not go out of their way to fly into mountainsides, to damage equipment, to install components backwards, and so on. In the end, what they are doing makes sense to them at that time. It has to make sense; otherwise, they would not be doing it. So, if you want to understand human error, your job is to understand why it made sense to them, because if it made sense to them, it may well make sense to others, which means that the problem may show up again and again. If you want to understand human error, you have to assume that people were doing reasonable things, given the complexities, dilemmas, tradeoffs and uncertainty that surrounded them. Just finding and highlighting people’s mistakes explains nothing. Saying what people did not do, or what they should have done, does not explain why they did what they did.

The “new view” of human error was born out of recent insights in the field of human factors, specifically the study of human performance in complex systems and normal work. What is striking about many mishaps is that people were doing exactly the sorts of things they would usually be doing—the things that usually lead to success and safety. People were doing what made sense, given the situational indications, operational pressures, and organizational norms existing at the time. Accidents are seldom preceded by bizarre behaviour.

To adopt the “new view,” you must acknowledge that failures are baked into the very nature of your work and organization; that they are symptoms of deeper trouble or by-products of systemic brittleness in the way you do your business. (Dekker, 2006) It means having to acknowledge that mishaps are the result of everyday influences on everyday decision making, not isolated cases of erratic individuals behaving unrepresentatively. (Dekker, 2006) It means having to find out why what people did back there actually made sense, given the organization and operation that surrounded them. (Dekker, 2006)

The tenets of the “new view” include (Dekker, 2006):

Systems are not basically safe. People in them have to create safety by tying together the patchwork of technologies, adapting under pressure, and acting under uncertainty.

Safety is never the only goal in systems that people operate. Multiple interacting pressures and goals are always at work. There are economic pressures, and pressures that have to do with schedules, competition, customer service, and public image. Trade-offs between safety and other goals often have to be made with uncertainty and ambiguity. Goals other than safety are easy to measure; however, how much people borrow from safety to achieve those goals is very difficult to measure. Trade-offs between safety and other goals enter, recognizably or not, into thousands of little and larger decisions and considerations that practitioners make every day. These trade-offs are made with uncertainty, and often under time pressure.

The “new view” does not claim that people are perfect, that goals are always met, that situations are always assessed correctly, etc. In the face of failure, the “new view” differs from the “old view” in that it does not judge people for failing; it goes beyond saying what people should have noticed or could have done. Instead, the “new view” seeks to explain “why.” It wants to understand why people made the assessments or decisions they made—why these assessments or decisions would have made sense from their point of view, inside the situation. When you see people’s situation from the inside, as much like these people did themselves as you can reconstruct, you may begin to see that they were trying to make the best of their circumstances, under the uncertainty and ambiguity surrounding them. When viewed from inside the situation, their behaviour probably made sense—it was systematically connected to features of their tools, tasks, and environment.

“New view” explanations of accidents can include things like: why did it make sense to the mechanic to install the flight controls as he did? What goals was the pilot considering when he landed in an unstable configuration? Why did it make sense for that baggage handler to load the aircraft from that location? Systems are not basically safe. People create safety while negotiating multiple system goals. Human errors do not come unexpectedly. They are the other side of human expertise—the human ability to conduct these negotiations while faced with ambiguous evidence and uncertain outcomes.

“New view” explanations of accidents tend to have the following characteristics:

Overall goal: In “new view” accounts, the goal of the investigation and accompanying report is clearly stated at the very beginning of each report: to learn.

Language used: In “new view” accounts, contextual language is used to explain the actions, situations, context and circumstances. Judgment of these actions, situations, and circumstances is not present. Describing the context, the situation surrounding the human actions, is critical to understanding why those human actions made sense at the time.

Hindsight bias control employed: The “new view” approach demands that hindsight bias be controlled to ensure investigators understand and reconstruct why things made sense at the time to the operational personnel experiencing the situation, rather than saying what they should have done or could have done.

Depth of system issues explored: “New view” accounts are complete descriptions of the accidents from the one or two human operators whose actions directly related to the harm, including the contextual situation and circumstances surrounding their actions and decisions. The goal of “new view” investigations is to reform the situation and learn; the circumstances are investigated to the level of detail necessary to change the system for the better.

Amount of data collected and analyzed: “New view” accounts often contain significant amounts of data and analysis. All sources of data necessary to explain the conclusions are to be included in the accounts, along with supporting evidence. In addition, “new view” accounts often contain photos, court statements, and extensive background about the technical and organizational factors involved in the accidents. “New view” accounts are typically long and detailed because this level of analysis and detail is necessary to reconstruct the actions, situations, context and circumstances.

Length and development of arguments (“leave a trace”): “New view” accounts typically leave a trace throughout the report from data (sequence of events), analysis, findings, conclusions and recommendations/corrective actions. As a reader of a “new view” account, it is possible to follow from the contextual descriptions to the descriptions of why events and actions made sense to the people at the time, and, in some cases, to conceptual explanations. By clearly outlining the data, the analysis, and the conclusions, the reader is made fully aware of how the investigator drew their conclusions.

“New view” investigations are driven by one unifying principle: human errors are symptoms of deeper trouble. This means a human error is a starting point in an investigation. If you want to learn from failures, you must look at human errors as:

a window on a problem that every practitioner in the system might have;
a marker in the system’s everyday behaviour; and
an opportunity to learn more about organizational, operational and technological features that create error potential.

Reference: Dekker, S., The Field Guide to Understanding Human Error, Ashgate, England, 2006.
 
