- #1
Panda
First, a bit of background. I can't go into too much detail as there are huge commercial implications to the project, but I'll outline what we're trying to do in general terms. And please remember I'm a rocket scientist, not a mathematician, so please go easy on me.
My department hypothesised that when designing vehicles to operate in hostile environments, rather than using specially designed components and subsystems throughout the vehicle, some could be everyday off-the-shelf items that didn't even need modification. We did some tests and showed that you could apply rules of thumb on how certain items would survive.
We have since been doing more exhaustive testing, where we take a component or subsystem (anything from a solenoid pack from a toy car to a network hub) that does a similar job to the specially designed units in our vehicles, subject it to a well-established test that replicates a specific environment, and then see how it fails.
This is where the problems start. The test set-up means that whilst there are a dozen parameters that may affect failure, we can only control the nominal intensity of the test and the number of events in the test.
We know that two identical components will also vary according to a number of parameters, most of which we can't control.
So the data we get out for each component is the number of events it experienced at a nominal intensity before a fault condition occurred. If the fault can be rectified we continue testing; otherwise we get a new unit.
We then want to calculate the statistical likelihood that a device will survive, or will probably fail, during a mission life in which it experiences N events. The implication being that we could send five vehicles costing £500K each rather than one costing £50M and get the same probability of success.
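Just to put rough numbers on that implication (treating the vehicle survivals as independent, which is my assumption rather than something we've demonstrated), a quick sanity check in Python:

# Rough illustration only; independence of the five vehicles is assumed.
q_cheap = 0.60                       # survival probability of one cheap vehicle (made-up figure)
p_fleet = 1 - (1 - q_cheap) ** 5     # probability that at least one of five survives
print(p_fleet)                       # ~0.99, i.e. comparable to a single very reliable vehicle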
When we try to analyse the data, because every test is different from every other test, we end up with massive uncertainties in the results. Analysis indicates that failures occur randomly according to event intensity rather than accumulating with exposure, and life-stress analysis does produce results that roughly fit our data.
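If the failures really are memoryless like that, the simplest model I can think of is a constant per-event failure probability p at a given nominal intensity, so the chance of surviving N events is (1 - p)^N. A rough sketch with invented numbers (both the data and the geometric model are my assumptions, not real results):

# Assumed model: constant per-event failure probability at a fixed nominal intensity.
events_at_failure = [12, 30, 7, 19]                        # made-up data: event number on which each unit first faulted
p_hat = len(events_at_failure) / sum(events_at_failure)    # maximum-likelihood estimate of p
N = 25                                                     # events expected during a mission
survival = (1 - p_hat) ** N                                # estimated probability of lasting the mission
print(p_hat, survival)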
The key is that, if this is to work, we need to do only a small number of tests, as the time involved in testing components can easily remove any cost benefit over using a specialist device.
The data we get is a pass/fail after exposure to a number of events of a given intensity, rather than the exact intensity required to cause failure.
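For the pass/fail case, my (possibly naive) understanding is that each unit can be treated as a binomial trial, and a Bayesian treatment at least gives honest uncertainty bands from a handful of units. A sketch with invented data, assuming a flat prior on the per-test survival probability:

from scipy.stats import beta

# Made-up results at one nominal intensity: did each unit survive n_events events?
n_events = 20
results = ["pass", "pass", "fail", "pass"]

# With a flat Beta(1, 1) prior on the probability q of passing such a test,
# the posterior after the observed passes and fails is Beta(passes + 1, fails + 1).
passes = results.count("pass")
fails = results.count("fail")
post = beta(passes + 1, fails + 1)
lo, hi = post.ppf(0.05), post.ppf(0.95)      # 90% credible interval on q
print(f"P(survive {n_events} events): {post.mean():.2f} (90% interval {lo:.2f}-{hi:.2f})")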
Apart from employing a full-time statistician, what do we need to do to determine what the individual component reliability is likely to be, based on a small number of tests with poor control over the parameters?