Dario56
- TL;DR Summary
- Discussion about the least-squares method used to find the parameters of a physical model
A common method of fitting experimental data to a model is the least-squares method.
The goal is to find parameters such that the sum of squared differences between experimental and model values (the objective function) is minimized.
The objective function is commonly differentiable with respect to the model parameters, so gradient-based methods such as Levenberg-Marquardt can be used to obtain the parameters.
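As a concrete sketch of this workflow (the exponential-decay model, the noise level, and the true parameter values here are made-up illustrations, not anything specific to my problem), a Levenberg-Marquardt fit via SciPy might look like:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model: exponential decay y = a * exp(-b * t)
def model(params, t):
    a, b = params
    return a * np.exp(-b * t)

def residuals(params, t, y_obs):
    # Differences between experimental and model values
    return y_obs - model(params, t)

# Synthetic "experimental" data with known true parameters
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
true_params = np.array([2.5, 1.3])
y_obs = model(true_params, t) + 0.01 * rng.standard_normal(t.size)

# method='lm' selects Levenberg-Marquardt, which minimizes
# the sum of squared residuals starting from the initial estimate x0
result = least_squares(residuals, x0=[1.0, 1.0], args=(t, y_obs), method='lm')
print(result.x)  # estimates close to [2.5, 1.3]
```

With low noise and a good starting point the recovered parameters land close to the true ones, but nothing in the algorithm itself guarantees this in general, which is exactly the concern below.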
However, as with every numerical method, there is no certainty that the parameters obtained are the actual values corresponding to their physical meaning and interpretation. In other words, they might give objective function values close to zero, yet still be quite far off from the actual values.
I have a few questions:
1. Does the global minimum of the objective function actually give the real values of the parameters? Why should the real values necessarily minimize the sum of squares? It's just the method we're using to obtain them, and I don't see why a certain combination of parameters (the one which minimizes the objective function) should reflect something physical. It's not completely unreasonable, of course, but I'm pondering whether a meaningful link between the math and the physical interpretation of the parameters can be made, so that we can infer that the global minimum reflects the actual values.
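For context, the standard link (which rests on statistical assumptions, not on the fitting method itself) is via maximum likelihood: if the measurements are the true model plus independent Gaussian noise of constant variance,

$$
y_i = f(x_i; \theta) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
$$

then the log-likelihood of the data is

$$
\ln L(\theta) = -\frac{n}{2}\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\bigl(y_i - f(x_i;\theta)\bigr)^2,
$$

so maximizing the likelihood is exactly minimizing the sum of squares. If the noise is non-Gaussian, correlated, or has non-constant variance, this equivalence breaks down and the least-squares minimum need not be the most probable parameter set.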
If we presume that the global minimum does reflect the real values, we can move on to the second question.
2. If the global minimum is the measure of success, we need to pick the parameters with the lowest value of the objective function. To increase the chance of finding the global minimum, we can try a plethora of initial estimates. After the algorithm converges to a minimum for every initial estimate, we take the lowest of all the minima the algorithm converged to. When we take these parameter values and they look reasonable, is there any way to quantify the confidence in them (how confident can we be that this minimum is in fact global)?
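The multi-start strategy in question 2 can be sketched as follows (the sinusoidal model, the sampling range for the starts, and the number of starts are all illustrative assumptions; a sine frequency is used because it gives the objective function several local minima):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model with multiple local minima in parameter space:
# y = sin(a * t) + b, fitted for frequency a and offset b
def residuals(params, t, y_obs):
    a, b = params
    return y_obs - (np.sin(a * t) + b)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 40)
y_obs = np.sin(2.0 * t) + 0.5 + 0.01 * rng.standard_normal(t.size)

# A plethora of initial estimates drawn from a plausible parameter range
starts = rng.uniform(low=[0.1, -2.0], high=[6.0, 2.0], size=(30, 2))

# Run a local least-squares fit from every start
fits = [least_squares(residuals, x0, args=(t, y_obs)) for x0 in starts]

# Keep the fit with the lowest objective value (result.cost is half the
# sum of squared residuals in SciPy's convention)
best = min(fits, key=lambda r: r.cost)
print(best.x)  # estimates close to [2.0, 0.5]
```

Many of the 30 starts converge to spurious local minima at other frequencies; taking the lowest-cost result recovers the basin around the true parameters, but this still gives no formal certificate that the best minimum found is global, which is the heart of the question.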