BillKet
Hello! I have a function of several variables (for this question I assume it is only 2 variables), ##y = f(x_1,x_2)##. I want to learn this function from simulated data (i.e. generated triplets ##(x_1,x_2,y)##) and then use it to predict ##y## from measured ##(x_1,x_2)##. There is no theoretically motivated formula for this function, so I would like to use a general-purpose function for the fit. I tried a multi-dimensional linear interpolator, and the fit seems to work quite well (I could also try a neural network). However, I am not sure how to do the error propagation when using real data. For the uncertainties on ##(x_1,x_2)##, assuming they are independent, I can do standard first-order error propagation, i.e. ##Var(y) \approx (\frac{\partial f}{\partial x_1})^2 Var(x_1) + (\frac{\partial f}{\partial x_2})^2 Var(x_2)##. However, the interpolator does not give me any uncertainty associated with its parameters, and I feel I should somehow account for the fact that the model is not perfect: besides the measurement error on ##(x_1,x_2)##, I should add some model uncertainty. Can someone advise me whether that is the case, and what would be the best way to add that extra model uncertainty? Thank you!
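For concreteness, here is a minimal sketch of the setup described above: fit a 2-D linear interpolator to simulated triplets, propagate the ##(x_1,x_2)## variances with numerical partial derivatives, and estimate a rough model-uncertainty term from residuals on held-out points. The "true" function, sample sizes, and variances here are illustrative assumptions, not part of the original question.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)

# Simulated training triplets (x1, x2, y); in practice f is unknown,
# so this sine/quadratic form is just a stand-in for demonstration.
x1 = rng.uniform(0.0, 1.0, 2000)
x2 = rng.uniform(0.0, 1.0, 2000)
y = np.sin(3 * x1) + x2**2
f_hat = LinearNDInterpolator(np.column_stack([x1, x2]), y)

def propagate(f, p1, p2, var1, var2, h=1e-4):
    """First-order error propagation, with the partial derivatives of
    the fitted model taken by central finite differences."""
    df_dx1 = (f(p1 + h, p2) - f(p1 - h, p2)) / (2 * h)
    df_dx2 = (f(p1, p2 + h) - f(p1, p2 - h)) / (2 * h)
    return df_dx1**2 * var1 + df_dx2**2 * var2

# Propagated measurement variance at one point, assuming independent
# uncertainties of 0.01 and 0.02 on x1 and x2 (illustrative numbers).
var_y = propagate(f_hat, 0.5, 0.5, 0.01**2, 0.02**2)

# One simple handle on model uncertainty: the residual spread of the
# interpolator on held-out points, added in quadrature.
x1_t = rng.uniform(0.05, 0.95, 500)
x2_t = rng.uniform(0.05, 0.95, 500)
resid = f_hat(x1_t, x2_t) - (np.sin(3 * x1_t) + x2_t**2)
var_model = np.nanvar(resid)  # NaN for points outside the convex hull
total_var = var_y + var_model
```

The held-out-residual variance is only a crude global estimate of the interpolation error; it ignores that the model error varies across the input space, which is part of what the question is asking about.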