Why Do MCNP5 Statistical Tests Fail Despite Increasing Particle Numbers?

  • #1
Van525
Can anyone tell me how to solve the problem of the statistical tests done by MCNP5 (relative error, VOV, figure of merit, slope) not being passed? I tried increasing the number of particles generated in the hope of passing the tests, but it did not work.
 
  • #2
The way to get the statistics depends on the details of the problem. For example, the nature of problems involving heavy shielding has one set of methods. Problems involving kcode have a different set of methods.

So, if you can post your code, that might help. If you cannot (maybe there are proprietary or confidential items in it) then at least can you describe the general nature of your problem?
 
  • #3
Here you can find my output file. You can see that for all my F5 tallies, the statistical tests are not passed after 100,000,000 particles.
 

Attachments

  • note.txt
    281.9 KB
  • #4
Sigh. A post that contained absolutely none of the description of your input, just quoted more detail about the stat tests that failed.

Nope. If you can't post your input, and you can't describe the basic nature of the problem, not going to help you.
 
  • #5
The problem I have at the moment is that the basic statistical tests done by MCNP5 are not passing.
The idea of my simulation is to place F5 point-detector tallies at different angles to the direction of emission of the primary X-ray beam. The tallies I define all have a 1 cm radius (in order to simulate a point-like CZT detector) and are all placed 2 m from the source. It seems logical to increase the number of particles as much as possible in the hope of passing the statistical tests (the relative error falls as one over the square root of the number of particle histories, etc.), but in my case some tests are still not passed by MCNP. Would you have any clues to solve this problem? Thank you very much.
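The 1/sqrt(N) behavior mentioned above can be made concrete with a small Python sketch. It treats each history as a Bernoulli trial with a small hit probability p; the value p = 1/160,000 used here is the geometric fraction r²/(4R²) for a 1 cm radius detector 2 m from an isotropic source, chosen purely for illustration and not taken from the actual input deck:

```python
import math

def rel_error_bernoulli(p, n):
    """Relative error of the sample mean of n Bernoulli(p) scores:
    R = sigma / (mean * sqrt(n)) = sqrt((1 - p) / (p * n))."""
    return math.sqrt((1.0 - p) / (p * n))

p = 1.0 / 160_000   # hypothetical per-history hit probability
for n in (1e6, 1e7, 1e8):
    print(f"N = {n:.0e}: R = {rel_error_bernoulli(p, n):.3f}")
```

Even at N = 1e8 the relative error in this toy model is only about 0.04, barely under the 0.05 acceptance criterion, which is why simply adding more histories converges so slowly.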

Here is an example of the statistical tests (VOV and PDF slope) for tally 85 that are not passed.
===================================================================================================================================

results of 10 statistical checks for the estimated answer for the tally fluctuation chart (tfc) bin of tally 85

 tfc bin   --mean--   ---------relative error---------   ----variance of the variance----   --figure of merit--   -pdf-
 behavior  behavior   value    decrease  decrease rate   value    decrease  decrease rate   value     behavior    slope

 desired   random     <0.05    yes       1/sqrt(nps)     <0.10    yes       1/nps           constant  random      >3.00
 observed  random     0.00     yes       yes             0.27     yes       yes             constant  random      2.66
 passed?   yes        yes      yes       yes             no       yes       yes             yes       yes         no

===================================================================================================================================
 
  • #6
If you are throwing 100,000,000 particles at a problem and your tallies are not statistically significant, the issue is probably in the design. If your source is isotropic and 2 m from your detector, and your detector has a 1 cm radius, then the detector subtends only about π·(1 cm)² / (4π·(200 cm)²) = 1/160,000 of the sphere around the source. So almost all of the computer time is wasted.
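That geometric fraction can be checked in a couple of lines of Python (a sketch of the small-angle estimate, not an MCNP calculation):

```python
# Fraction of an isotropic source's particles that can hit a disk
# detector of radius r at distance R (small-angle approximation):
#   f ≈ (pi * r^2) / (4 * pi * R^2) = r^2 / (4 * R^2)
def hit_fraction(r_cm, dist_cm):
    return r_cm**2 / (4.0 * dist_cm**2)

f = hit_fraction(1.0, 200.0)                        # 1 cm detector, 2 m away
print(f"fraction: 1/{1 / f:,.0f}")                  # 1/160,000
print(f"direct hits per 1e8 histories: {1e8 * f:,.0f}")
```

Only a few hundred of the 100,000,000 histories can ever score directly in the detector, so nearly all of the run contributes nothing to the tally.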
 
  • #7
Well, maybe it's a language problem. I will give it one more try.

When I say "please post your problem", I mean: please describe the system you are trying to solve. What material is between your source and detector? Are you doing a source calculation or a kcode calculation? That sort of thing. Just saying your stats are bad does not let me help you. I need to know about the system you want to analyze.

You have now posted some vague hints. It seems like you have an x-ray source and some detectors.

For MCNP 6.2, the user manual section 3.3.6 is where you want to start reading.

When you have a source there are a couple reasons the stats may be bad.

One is simple geometry. As you get farther from the source, most particles go somewhere other than the detector. You can deal with this through a variety of biasing methods: you make your source emit preferentially toward the detector, then adjust the particle weights to account for the biasing. For example, a spherical source could be adjusted to send particles only into a tiny cone, with each particle's starting weight reduced by the ratio of the cone's solid angle to the full sphere.
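The weight arithmetic for that cone bias looks like the following Python sketch (bookkeeping only; in MCNP itself the bias is specified on the source definition card, and the 1-degree half-angle here is a made-up example value):

```python
import math

def cone_weight(half_angle_rad):
    """Starting weight for a particle forced into a cone of the given
    half-angle: the cone's solid angle, 2*pi*(1 - cos(theta)), divided
    by the full sphere's 4*pi, so the biased source stays unbiased."""
    return (1.0 - math.cos(half_angle_rad)) / 2.0

theta = math.radians(1.0)        # hypothetical 1-degree half-angle
print(f"start weight: {cone_weight(theta):.3e}")
```

Every particle now heads toward the detector, but each carries a weight of roughly 8e-5, so the tally mean is unchanged while far more histories actually score.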

Another method for geometry is the DXTRAN sphere. Before you use this one you should read the manual VERY carefully. You can get misleading results very easily if you do things wrong.

There are also a variety of detector tallies that are semi-deterministic. Again, you need to read the manual on these VERY carefully.

Another reason you may be getting bad stats is shielding. If there is a lot of material between your source and detector then nearly all the particles get absorbed before they get to your detector. There are a bunch of things you can do in that case. The simplest is adjusting the importance of various parts of the system. The goal is to have the number of particles at your detector be larger. There are several methods with additional levels of sophistication, but also additional effort required.

In very broad outline, importance adjusting goes like so. Consider a source, shielding material, and a detector on the other side.

source | shielding | shielding| shielding| shielding| detector

By the time particles get through there are very few left. Most get absorbed. The basic idea is you set the importance higher and higher as you get closer to the detector.

source | shielding | shielding | shielding | shielding | detector
imp:1  | imp:1     | imp:2     | imp:4     | imp:8     | imp:16

The idea is, when importance changes from 1 to 2, the code will change one particle of weight 1 to two particles of weight 1/2. The ideal is to keep the number of particles roughly constant across each layer. That way you can get good stats at the detector.
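That splitting rule can be sketched in a few lines of Python (pure illustration of the weight bookkeeping, not MCNP's actual implementation):

```python
def split(weight, imp_from, imp_to):
    """Geometry splitting at an importance boundary: a particle of weight w
    entering a region of higher importance becomes imp_to/imp_from copies,
    each carrying weight w * imp_from/imp_to, so total weight is conserved."""
    n = imp_to // imp_from              # assume an integer ratio, as in 1, 2, 4, 8, 16
    return [weight * imp_from / imp_to] * n

tracks = [1.0]                          # one source particle of weight 1
for imp_in, imp_out in [(1, 2), (2, 4), (4, 8), (8, 16)]:
    tracks = [w for t in tracks for w in split(t, imp_in, imp_out)]

print(len(tracks), sum(tracks))         # 16 tracks, total weight 1.0
```

The population grows toward the detector (better statistics there) while the total weight, and hence the unbiased tally estimate, stays exactly 1.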

For simple systems you can apply this by hand. For more complicated systems there are some automated utilities in MCNP. They are fairly complicated and require a lot of careful reading of the user manual.
 

FAQ: Why Do MCNP5 Statistical Tests Fail Despite Increasing Particle Numbers?

Why do MCNP5 statistical tests fail despite increasing particle numbers?

MCNP5 statistical tests can fail for several reasons, such as insufficient sampling of certain tallies, improper variance reduction techniques, or inherent statistical fluctuations. Even with increased particle numbers, these factors can still cause test failures.

How does variance reduction affect MCNP5 statistical tests?

Variance reduction techniques are designed to improve the efficiency of Monte Carlo simulations. However, improper application of these techniques can lead to biased results or increased variance in certain regions, causing statistical tests to fail.

Can geometry and material definitions impact MCNP5 statistical tests?

Yes, inaccuracies or complexities in geometry and material definitions can lead to improper particle tracking and scoring, which affects the statistical reliability of the results, potentially causing test failures.

What role do tally settings play in MCNP5 statistical test failures?

Tally settings, including the type of tally, energy bins, and spatial bins, can significantly impact the statistical quality of the results. Incorrect or suboptimal settings can lead to insufficient sampling and increased variance, causing statistical tests to fail.

How can one improve the reliability of MCNP5 statistical tests?

To improve the reliability of MCNP5 statistical tests, one should ensure accurate geometry and material definitions, use appropriate variance reduction techniques, optimize tally settings, and run simulations with sufficiently high particle numbers to achieve the desired statistical precision.
