Monte Carlo Simulation vs ML Models

  • #1
fog37
Hello,

I have become familiar with ML and a number of ML models (supervised and unsupervised). I would now like to learn about Monte Carlo simulations, since they are used in so many fields.

When would we choose to do a Monte Carlo simulation instead of building a ML model (supervised, unsupervised, reinforcement learning)?
What kinds of problems are more suitable for an MC simulation?

Thank you!
 
  • #2
Others will give better answers, but I can give you an example where Monte Carlo simulation is very important. When EEs design analog circuits, you can get some very subtle interactions between slight variations in the values of the resistors, capacitors, inductors, transformers, etc. There are tolerances associated with the parameters for each of those components (and also temperature coefficients for those values), and the worst-case performance of the circuit will not necessarily be when all components are at their upper tolerance limit (or lower) at the same time.

So we use Monte Carlo SPICE simulations to vary the values of the components randomly within their tolerance bands, and look at the families of plots for the simulations to see if the topology we are using and the tolerances we are specifying will meet the overall performance specs for the circuit.
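To make the idea concrete, here is a minimal sketch (in Python rather than SPICE, with made-up nominal values and tolerances, not from the post): randomly vary the R and C of a first-order RC low-pass filter within their tolerance bands and look at the spread of cutoff frequencies across many trials.

```python
import math
import random

def mc_cutoff_spread(r_nom=1.0e3, c_nom=15.9e-9, tol=0.05, n=10_000, seed=0):
    """Vary R and C uniformly within +/-tol and return the (min, max) cutoff
    frequency of a first-order RC low-pass, f_c = 1 / (2*pi*R*C)."""
    rng = random.Random(seed)
    cutoffs = []
    for _ in range(n):
        r = r_nom * (1 + rng.uniform(-tol, tol))
        c = c_nom * (1 + rng.uniform(-tol, tol))
        cutoffs.append(1.0 / (2 * math.pi * r * c))
    return min(cutoffs), max(cutoffs)

lo, hi = mc_cutoff_spread()
print(f"cutoff spans {lo:.0f} Hz to {hi:.0f} Hz across the tolerance band")
```

The interesting output is the min/max spread, not the average: with both parts 5% off in the same direction, the cutoff shifts by nearly 10%, which a nominal-value simulation would never show.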

I can tell you from personal experience that it is very non-trivial to design a high-performance analog filter that meets the specifications you want (filter passband ripple, cutoff frequency, stopband ripple, etc.) when you are varying so many component values all at once.

Good times. :smile:
 
  • #3
berkeman said:
Others will give better answers, but I can give you an example where Monte Carlo simulation is very important. When EEs design analog circuits, you can get some very subtle interactions between slight variations in the values of the resistors, capacitors, inductors, transformers, etc. There are tolerances associated with the parameters for each of those components (and also temperature coefficients for those values), and the worst-case performance of the circuit will not necessarily be when all components are at their upper tolerance limit (or lower) at the same time.

So we use Monte Carlo SPICE simulations to vary the values of the components randomly within their tolerance bands, and look at the families of plots for the simulations to see if the topology we are using and the tolerances we are specifying will meet the overall performance specs for the circuit.

I can tell you from personal experience that it is very non-trivial to design a high-performance analog filter that meets the specifications you want (filter passband ripple, cutoff frequency, stopband ripple, etc.) when you are varying so many component values all at once.

Good times. :smile:
Thank you. I see how you are trying to optimize something (some output variable) that depends on many other input variables, and you use randomization to get there.

I guess ML models are made for prediction (regression or classification) or clustering. Reinforcement learning is about making the right decisions inside a particular environment after some trial and error learning.

So would MC simulations instead be about running a model many, many times, randomly assigning specific values to several inputs at each iteration, so that we get many generally different outputs, which we then average together?
 
  • #4
fog37 said:
So would MC simulations instead be about running a model many, many times, randomly assigning specific values to several inputs at each iteration, so that we get many generally different outputs, which we then average together?
We don't average the outputs of the MC SPICE simulations -- we use them to see what the worst-case performance can be. For example, when my first simulations of a particular anti-alias filter for an audio application showed that my passband ripple was too large, I explored other polynomials for the filter and ended up picking one that had fewer terms and was not as sharp at cutoff, but had much better passband ripple performance. There have been other times when I was not happy with what my simulations were showing me about another analog filter circuit, so I opted instead to do simple filtering with the analog front end, and digitize the waveform to implement the rest of the filtering digitally.

Without using MC simulations, I doubt I would have seen how bad the passband ripple could get for that first circuit, and my first prototypes may have worked just fine. But if you are designing a circuit for volume production, all those tolerance variations will show up in some of the production units, which can cause performance problems in the field.
 
  • #5
fog37 said:
So would MC simulations instead be about running a model many, many times, randomly assigning specific values to several inputs at each iteration, so that we get many generally different outputs, which we then average together?
That is the idea, except the data is not always averaged. A complicated MC simulation can generate a massive amount of detailed data. What you do with that data depends on what question you are asking about the problem. You could be looking for averages, variation, extreme examples, etc.
It is often true that simple, standard analysis problems become much more difficult with the addition of a couple of "if ... then" conditions in the problem statement. The analysis can be a lot trickier than a simple MC model, where all you have to do is add the "if ... then" condition to the simulation code. There have been threads in this forum where the mathematical analysis of a problem required a long, complicated discussion, and the easiest way to determine whether the analysis was correct was to verify it with a relatively simple MC simulation.
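As a toy illustration of that last point (my example, not one from the thread): suppose we roll a die, and if the roll is even we flip one coin, otherwise two, and we want the probability of at least one head. The analytical answer is 0.5·0.5 + 0.5·0.75 = 0.625, and a few lines of simulation, with the "if ... then" condition written directly into the code, confirm it:

```python
import random

def p_at_least_one_head(n=200_000, seed=1):
    """Roll a die; if even, flip one coin, else flip two.
    Estimate P(at least one head) by Monte Carlo."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        flips = 1 if rng.randint(1, 6) % 2 == 0 else 2
        if any(rng.random() < 0.5 for _ in range(flips)):
            hits += 1
    return hits / n

# With 200,000 trials the estimate should land very close to 0.625.
print(p_at_least_one_head())
```

The conditional branch that complicates the pencil-and-paper analysis costs exactly one line of simulation code, which is the point being made above.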
 

FAQ: Monte Carlo Simulation vs ML Models

What is the difference between Monte Carlo Simulation and Machine Learning Models?

Monte Carlo Simulation is a statistical technique that uses random sampling to model and analyze complex systems. It is used to estimate the likelihood of different outcomes by running multiple simulations. On the other hand, Machine Learning Models are algorithms that learn from data and make predictions or decisions without being explicitly programmed. They can be used for tasks such as classification, regression, and clustering.

When should I use Monte Carlo Simulation over Machine Learning Models?

Monte Carlo Simulation is typically used when the underlying system is too complex to be modeled analytically or when there is uncertainty in the input parameters. It is particularly useful for risk analysis, optimization, and decision-making under uncertainty. Machine Learning Models, on the other hand, are more suitable for tasks that involve pattern recognition, prediction, and classification based on historical data.
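For instance, a common risk-analysis use (a hypothetical sketch, with invented task estimates): sample uncertain task durations from triangular distributions and read off a high percentile of the total project duration, a quantity with no convenient closed form:

```python
import random

def project_duration_p90(tasks, n=50_000, seed=2):
    """tasks: list of (low, mode, high) duration estimates in days.
    Returns the 90th-percentile total duration from n Monte Carlo trials."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n)
    )
    return totals[int(0.9 * n)]

# Hypothetical (low, mode, high) estimates for three tasks, in days.
tasks = [(2, 3, 6), (1, 2, 4), (3, 5, 10)]
print(f"P90 total duration: {project_duration_p90(tasks):.1f} days")
```

The P90 figure answers a decision-making question ("what duration are we 90% confident of beating?") that the individual point estimates cannot.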

Can Monte Carlo Simulation be used in conjunction with Machine Learning Models?

Yes, Monte Carlo Simulation can be used in conjunction with Machine Learning Models to evaluate the uncertainty in the predictions made by the models. By running multiple simulations with varying input parameters or assumptions, Monte Carlo Simulation can provide a more comprehensive understanding of the potential outcomes and associated risks of the Machine Learning Models.
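A minimal sketch of that combination (using a toy linear function as a stand-in for a trained model, since no specific model is named here): sample perturbed inputs around the measured value, push each sample through the model, and summarize the spread of the predictions:

```python
import random
import statistics

def predict(x):
    """Stand-in for a trained ML model (here a toy linear fit, y = 2x + 1)."""
    return 2.0 * x + 1.0

def mc_prediction_spread(x, x_uncertainty, n=20_000, seed=3):
    """Propagate Gaussian input uncertainty through the model by Monte Carlo;
    return the mean and standard deviation of the resulting predictions."""
    rng = random.Random(seed)
    preds = [predict(rng.gauss(x, x_uncertainty)) for _ in range(n)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, sd = mc_prediction_spread(x=5.0, x_uncertainty=0.1)
print(f"prediction: {mean:.2f} +/- {sd:.2f}")
```

For a real model, `predict` would be the trained estimator's inference call; the MC wrapper around it is unchanged.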

Which approach is more computationally intensive, Monte Carlo Simulation or Machine Learning Models?

Monte Carlo Simulation is generally more computationally intensive than Machine Learning Models because it involves running multiple simulations to estimate the likelihood of different outcomes. The number of simulations required can vary depending on the complexity of the system and the level of accuracy desired. Machine Learning Models, on the other hand, typically require training on large datasets but can make predictions quickly once trained.

Are there any limitations to using Monte Carlo Simulation or Machine Learning Models?

Both Monte Carlo Simulation and Machine Learning Models have their limitations. Monte Carlo Simulation can be computationally expensive and may not always be feasible for real-time decision-making. It also relies on assumptions about the underlying system, which may introduce bias or uncertainty. Machine Learning Models, on the other hand, require large amounts of high-quality data for training and may not perform well when faced with novel or unforeseen situations. It is important to carefully consider the strengths and limitations of each approach when deciding which to use for a particular problem.
