Uncertainties from linear interpolation

In summary, the conversation discusses using simulated data to learn a function for which there is no theoretically motivated formula. The original poster wants to fit a general function, has tried a multi-dimensional linear interpolator and is considering a neural network, and is unsure how to account for model uncertainty when applying the fit to real data. Respondents suggest using a statistics package such as R to obtain confidence intervals for regression coefficients; the poster prefers piecewise linear interpolation over a single polynomial because of the large number of data points, since the interpolated values at a few ##(x_1,x_2)## points are all that is needed.
  • #1
BillKet
Hello! I have a function of several variables (for this question I assume it is only 2 variables), ##y = f(x_1,x_2)##. I want to learn this function using simulated data (i.e. generated triplets ##(x_1,x_2,y)##) and then use that function to get ##y## from measured ##(x_1,x_2)##. There is no theoretically motivated formula for this function, so I would like to use a general function for the fit. I tried a multi-dimensional linear interpolator, and it seems that the fit works quite well (I could also try a neural network). However, I am not sure how to do the error propagation when using real data. For the uncertainties on ##(x_1,x_2)##, assuming they are independent, I can do normal error propagation, i.e. ##\left(\frac{\partial f}{\partial x_1}\right)^2 \mathrm{Var}(x_1) + \left(\frac{\partial f}{\partial x_2}\right)^2 \mathrm{Var}(x_2)##. However, for the parameters of ##f##, I don't really get any uncertainty associated with them. But I feel like I should somehow account for the fact that the model is not perfect, i.e. besides the measurement error on ##(x_1,x_2)##, I should add some model uncertainty. Can someone advise me if that is the case, and what would be the best way to add that extra model uncertainty? Thank you!
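For readers following along in Python: a minimal sketch of this first-order error propagation applied to a fitted interpolator, with the partial derivatives estimated by central finite differences. The training data, the step size `h`, and the example variances below are made up purely for illustration.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical training triplets (x1, x2, y) standing in for the simulation output.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 2))
y = np.sin(3 * pts[:, 0]) + pts[:, 1] ** 2        # stand-in for the unknown f
f_hat = LinearNDInterpolator(pts, y)

def f_eval(x1, x2):
    """Evaluate the fitted interpolator at a single (x1, x2) point."""
    return float(f_hat(np.array([[x1, x2]]))[0])

def propagate(x1, x2, var_x1, var_x2, h=1e-3):
    """First-order propagation: (df/dx1)^2 Var(x1) + (df/dx2)^2 Var(x2),
    with the partial derivatives estimated by central differences."""
    df_dx1 = (f_eval(x1 + h, x2) - f_eval(x1 - h, x2)) / (2 * h)
    df_dx2 = (f_eval(x1, x2 + h) - f_eval(x1, x2 - h)) / (2 * h)
    return df_dx1**2 * var_x1 + df_dx2**2 * var_x2

print(propagate(0.5, 0.5, var_x1=0.01**2, var_x2=0.02**2))
```

This only propagates the measurement uncertainty on ##(x_1,x_2)##; the model uncertainty asked about in the thread is a separate contribution.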
 
  • #2
With some statistics packages, you can get confidence intervals for the coefficients. For the R statistics package, you can use the confint function.
 
  • #3
FactChecker said:
With some statistics packages, you can get confidence intervals for the coefficients. For the R statistics package, you can use the confint function.
Thank you (I have actually never used R; I am doing this in Python)! For that R package, can the confidence interval be obtained using a built-in linear interpolator? In principle there are ways to get confidence intervals in Python if you define a function yourself; I just didn't manage to do it while using the available packages for n-dimensional linear interpolation (I wouldn't really want to write that function myself).
 
  • #4
FactChecker said:
With some statistics packages, you can get confidence intervals for the coefficients. For the R statistics package, you can use the confint function.
Actually I am not even sure if a normal fitting package would work. For example, thinking about the 1D case, if I have 3 points from the simulation, the linear interpolator would basically be a sum of 2 straight lines, connected at a point. But each of those 2 lines is the result of a fit to 2 points, so the fit wouldn't have any uncertainty associated with it. Am I misunderstanding how linear interpolation works?
 
  • #5
BillKet said:
There is no theoretically motivated formula for this function, so I would like to use a general function for the fit.
This part confuses me. If you don't care what sort of function, how about a polynomial? A first-order polynomial perfectly fits 2 points, a second-order polynomial perfectly fits 3 points, etc.
 
  • #6
BillKet said:
I want to learn this function using simulated data (i.e. generated triplets (x1,x2,y))
I don’t understand this part. How can you simulate data without already having ##f##? If you already have ##f## then what do you mean about learning it?

The usual approach is that we have some model in mind: ##y=f(\mathbf{x}, \mathbf{\theta})## where ##\mathbf{x}## is your list of independent variables and ##\mathbf{\theta}## is some list of parameters. Then you would acquire some real data, ##(y,\mathbf{x})##, use that to learn ##\mathbf{\theta}## and then possibly predict unmeasured values of ##y##.
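As an illustration of this usual approach, here is a minimal Python sketch of fitting the parameters ##\mathbf{\theta}## of a parametric model and reading off their uncertainties from the fit covariance. The model form and the data below are invented for the example only.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, a, b, c):
    """Hypothetical parametric model y = f(x1, x2; a, b, c)."""
    x1, x2 = X
    return a * x1 + b * x2**2 + c

# Made-up data standing in for measured (x1, x2, y).
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, 200)
x2 = rng.uniform(0, 1, 200)
y = model((x1, x2), 2.0, -1.0, 0.5) + rng.normal(0, 0.05, 200)

theta, cov = curve_fit(model, (x1, x2), y)
theta_err = np.sqrt(np.diag(cov))   # 1-sigma uncertainties on a, b, c
print(theta, theta_err)
```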
 
  • #7
BillKet said:
Am I misunderstanding how linear interpolation works?
Not only that, you are misunderstanding what it is. When you find the best fitting straight line for a set of points you are doing linear regression, not interpolation.
 
  • #8
BillKet said:
Thank you (I have actually never used R; I am doing this in Python)! For that R package, can the confidence interval be obtained using a built-in linear interpolator?
I don't know if you noticed that my post had a link (see this). If you read the example in that link, that is exactly what they do. They apply the R linear regression, lm, and then apply confint to the result.
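Since the OP is working in Python, a rough analogue of the R `lm` + `confint` workflow is ordinary least squares with `statsmodels`. Note this is ordinary linear regression, not interpolation, and the data below are made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Made-up (x1, x2, y) data for illustration.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(100, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.1, 100)

X_design = sm.add_constant(X)          # add an intercept term, as R's lm does
fit = sm.OLS(y, X_design).fit()
print(fit.conf_int(alpha=0.05))        # 95% confidence intervals, like confint
```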
 
  • #9
pbuk said:
Not only that, you are misunderstanding what it is. When you find the best fitting straight line for a set of points you are doing linear regression, not interpolation.
I am not sure what you mean. I am not trying to fit a line to all the data points. I am trying to fit a sum of line segments to consecutive pairs of points. I assume that was linear interpolation, but I might be wrong (in particular, I am using this Python package right now).
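For concreteness, a minimal 1D sketch of this piecewise linear construction using NumPy (not necessarily the same package as in the post); the three data points are invented:

```python
import numpy as np

# Three known points: the interpolant is two line segments joined at x = 1.
xp = np.array([0.0, 1.0, 2.0])
fp = np.array([0.0, 0.8, 0.9])

x_new = np.array([0.5, 1.0, 1.5])
print(np.interp(x_new, xp, fp))   # [0.4, 0.8, 0.85] -- exact at the knot, linear in between
```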
 
  • #10
DaveE said:
This part confuses me. If you don't care what sort of function, how about a polynomial? A first-order polynomial perfectly fits 2 points, a second-order polynomial perfectly fits 3 points, etc.
You are right. In principle a polynomial would work, too. The reason I am doing piecewise linear interpolation is that I have a few hundred thousand points, so I would need a polynomial of a comparable order. That might not be impossible to implement, but there are linear interpolation packages already available, so this was more of a convenience issue (also, if I change the number of points I fit to, I don't need to change anything for a linear interpolator, but for a polynomial fit I would need to change the order).

Also, assuming I fit a second-degree polynomial to 3 data points, that still doesn't answer my question of how to estimate the uncertainty for a new point. I know my function is not a second-degree polynomial (I don't know what it is), so most probably the prediction for the new point will be wrong, and I would like to have a way to estimate that error.
 
  • #11
Dale said:
I don’t understand this part. How can you simulate data without already having ##f##? If you already have ##f## then what do you mean about learning it?

The usual approach is that we have some model in mind: ##y=f(\mathbf{x}, \mathbf{\theta})## where ##\mathbf{x}## is your list of independent variables and ##\mathbf{\theta}## is some list of parameters. Then you would acquire some real data, ##(y,\mathbf{x})##, use that to learn ##\mathbf{\theta}## and then possibly predict unmeasured values of ##y##.
Hello! So I am using a (quite complex) piece of software that generates, for each ##y##, the ##(x_1,x_2)## pairs (I don't know the actual function for this either, which is why I need that numerical software, and it is quite time-consuming). Then what I would like to do is the inverse problem: given ##(x_1,x_2)## (from my simulation), predict ##y## (also from the simulation, which I used as input to generate ##(x_1,x_2)##). This inverse function should be very complicated, so I want something that approximates it as well as possible, which is why I am trying to use something general, such as piecewise linear interpolation or a neural network. But I am not sure how to estimate the error on new data points after fitting this function.
 
  • Like
Likes Dale
  • #12
FactChecker said:
I don't know if you noticed that my post had a link (see this). If you read the example in that link, that is exactly what they do. They apply the R linear regression, lm, and then apply confint to the result.
But what I need to do is not linear regression, it is piecewise linear interpolation. Can that be done with that R package, too? (I need to look a bit more into understanding the details of the code)
 
  • #13
BillKet said:
I am not sure what you mean. I am not trying to fit a line to all the data points. I am trying to fit a sum of line segments to consecutive pair of points. I assume that was linear interpolation, but I might be wrong (in particular I am using this Python package right now.
Ah, this is piecewise linear interpolation: the word piecewise is critical. With piecewise interpolation the fit is perfect and there is no uncertainty in the coefficients.
 
  • Like
Likes FactChecker
  • #14
BillKet said:
But what I need to do is not linear regression, it is piecewise linear interpolation. Can that be done with that R package, too? (I need to look a bit more into understanding the details of the code)
IMO, if you use interpolation rather than regression, you are implicitly saying that there are no errors in any of the numbers or in the model parameters.
 
  • Like
Likes DaveE
  • #15
pbuk said:
Ah, this is piecewise linear interpolation: the word piecewise is critical. With piecewise interpolation the fit is perfect and there is no uncertainty in the coefficients.
Ah, sorry about that, I haven't used it before. So there is no way to predict uncertainties for new points (it seems like quite a useless model if that is the case, no? or am I missing something)? For example, for a neural network, what people do is train several of them on the same training set and use the spread in their predictions as an estimate of the uncertainty for new points (not ideal, but better than nothing), given that each one has a different initialization of its parameters.
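A minimal sketch of that ensemble idea with scikit-learn's `MLPRegressor`; the network architecture, ensemble size, and training data below are arbitrary choices for illustration, not the OP's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(1000, 2))      # stand-in training inputs (x1, x2)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # stand-in targets

# Train several networks that differ only in their random initialisation.
nets = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=seed).fit(X, y) for seed in range(5)]

x_new = np.array([[0.4, 0.6]])
preds = np.array([net.predict(x_new)[0] for net in nets])
print(preds.mean(), preds.std())           # spread ~ rough model uncertainty
```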
 
  • #16
FactChecker said:
IMO, if you use interpolation rather than regression, you are implicitly saying that there are no errors in any of the numbers or in the model parameters.
I am not sure I understand this. I know my function is not linear. But I also have no idea at all what it is, so I thought that linear interpolation would be the most general approach in this case. Is there a way to fit some general function to the data and still be able to assign an uncertainty to new points?
 
  • #17
FactChecker said:
IMO, if you use interpolation rather than regression, you are implicitly saying that there are no errors in any of the numbers or in the model parameters.
And additionally that the function ##y \to (x_1, x_2)## is not in general smooth at the data points but is exactly linear between them. This is clearly unrealistic.
 
  • Like
Likes FactChecker
  • #18
BillKet said:
So there is no way to predict uncertainties for new points
"new points" could be anywhere. You need to state something about the probabilities, like assuming gaussian data for example. You need to specify a function and fit it using some specified metric, and also assume something about the "new data points" for this to make sense.
 
  • Like
Likes pbuk
  • #19
If you know some bounds on the first or higher derivatives, you can calculate limits on the error of the interpolated value. There are several methods of piecewise interpolation, which have different characteristics. Some popular and relatively well-behaved methods are spline functions of various orders. If you specify the algorithm that you used, maybe someone can help you with the theory of that numerical method. I am not expert enough to be of any more help.

CORRECTION: Sorry, I forgot that the OP specifies that linear interpolation was used.
 
  • Like
Likes pbuk
  • #20
BillKet said:
so I thought that linear interpolation would be the most general approach
The most general approach to what? It is still unclear to me what you are trying to do.
What are you trying to accomplish (in simple terms)?
 
  • #21
pbuk said:
And additionally that the function ##y \to (x_1, x_2)## is not in general smooth at the data points but is exactly linear between them. This is clearly unrealistic.
The function ##y \to (x_1, x_2)## is smooth.
 
  • #22
hutchphd said:
The most general approach to what? It is still unclear to me what you are trying to do.
What are you trying to accomplish (in simple terms)?
Sorry, I'll try to give more details about what I need. For example, in machine learning, when people want to classify cat and dog images, they build a neural network (NN) that takes an image as input and outputs a 0 or 1. One can think of this NN as a function from an ##n \times n## space (the number of pixels) to ##\{0, 1\}##. Of course there is no way to write down an analytical form for this function, which is why a general approximator, in this case a neural network, is used. This is basically what I need, too, except that the input space is not that big (and it is not an image) and the output is a continuous variable in 1D. As in the cats-and-dogs case, I know nothing about the mapping function, but I would like to fit a function to my data (which in the above case was a neural network) that can provide ##y## given ##(x_1,x_2)## as well as possible. Of course I could also use a neural network, but I was wondering if there is something simpler, that would be easier to understand mathematically (hence the idea of a linear interpolator).
 
  • #23
BillKet said:
Hello! So I am using a (quite complex) piece of software that generates, for each ##y##, the ##(x_1,x_2)## pairs (I don't know the actual function for this either, which is why I need that numerical software, and it is quite time-consuming). Then what I would like to do is the inverse problem: given ##(x_1,x_2)## (from my simulation), predict ##y## (also from the simulation, which I used as input to generate ##(x_1,x_2)##). This inverse function should be very complicated, so I want something that approximates it as well as possible, which is why I am trying to use something general, such as piecewise linear interpolation or a neural network. But I am not sure how to estimate the error on new data points after fitting this function.
This doesn't make any sense. Are you saying that every y defines a unique ordered pair ## (x_1(y), x_2(y)) ##? If this is the case, how do you know that any given ## (x_1, x_2) ## can be mapped to a ## y ## at all, or if it can that it can be mapped to a unique ## y ##?

I can't remember ever coming across a data set like this before so in order that anyone can understand what you are talking about I think you need to explain more exactly what this "numerical software that is quite time consuming" is, what ## x_1, x_2 \text{ and } y ## represent and provide some sample data.
 
  • #24
pbuk said:
This doesn't make any sense. Are you saying that every y defines a unique ordered pair ## (x_1(y), x_2(y)) ##? If this is the case, how do you know that any given ## (x_1, x_2) ## can be mapped to a ## y ## at all, or if it can that it can be mapped to a unique ## y ##?

I can't remember ever coming across a data set like this before so in order that anyone can understand what you are talking about I think you need to explain more exactly what this "numerical software that is quite time consuming" is, what ## x_1, x_2 \text{ and } y ## represent and provide some sample data.
Maybe I didn't explain the problem well. Here is a simplified version. Say I have a particle starting at a point ##x## (assume it is 1D, not 3D), and this particle is then guided by some electrostatic fields to a different point. For this final point I know the 3D location and also the time it took to get there. For a given initial position, there is a unique combination of final position and time that the particle can have. Now, given a new final point (not in the training set), i.e. a new final position and the time it took the particle to get there, there is a unique starting point. I could try to run the simulation many times until I match that final position and time, but that would take very long. Instead I want to find an approximate mapping from this final position and time to the initial position.
 
  • Like
Likes pbuk
  • #25
Thank you, I understand now.

For the 1D initial position there exists an unknown function ## x_0 \to (x, y, z, t) ##, and for the 3D initial position a function ## (x_0, y_0, z_0) \to (x, y, z, t) ##. You have a number of sample data points mapping initial positions to final positions and times, and you want to construct a model that enables you to estimate the initial position given the final position and time.

For a 1D initial position I think piecewise linear interpolation (or possibly cubic spline) would be fine, but for 3D there are a number of more complicated approaches that would probably provide better results: I would consider a convolutional neural network or perhaps tricubic interpolation (you could treat the final position as a vector field and separately model the time as a separate scalar field).

Is this an undergraduate research project? Have you looked at what techniques the relevant research group are using?
 
  • #26
pbuk said:
Thank you, I understand now.

For the 1D initial position there exists an unknown function ## x_0 \to (x, y, z, t) ##, and for the 3D initial position a function ## (x_0, y_0, z_0) \to (x, y, z, t) ##. You have a number of sample data points mapping initial positions to final positions and times, and you want to construct a model that enables you to estimate the initial position given the final position and time.

For a 1D initial position I think piecewise linear interpolation (or possibly cubic spline) would be fine, but for 3D there are a number of more complicated approaches that would probably provide better results: I would consider a convolutional neural network or perhaps tricubic interpolation (you could treat the final position as a vector field and separately model the time as a separate scalar field).

Is this an undergraduate research project? Have you looked at what techniques the relevant research group are using?
Actually, in my case the output of the function (of the inverse problem) is just one number (so it is actually 1D). However, even if I am to use piecewise linear interpolation, I still don't have a way to estimate the uncertainty on new data points. I know that any point not in the training set will be close to the ones I use for training, and the more points I use for training, the better the linear interpolation approximation will be, but I am not sure how to quantify that. Maybe I can generate some extra points not in the training set, get the RMSE of those points and use that somehow (on top of the error associated with the actual measurements of the final position and time)?
 
  • #27
BillKet said:
I am to use piecewise linear interpolation, I still don't have a way to estimate the uncertainty on new data points.
Since interpolation is very computationally inexpensive you should just use leave-one-out cross validation. You leave the first data point out, interpolate the rest, and calculate the error for the left out point. Then repeat for the next point and so on. That will give you an estimate of the distribution of residuals. You can even try different interpolation schemes to see which is better in this sense.

This approach may not be feasible for a neural net, depending on how expensive it is to train the net.
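A minimal sketch of this leave-one-out procedure for 1D piecewise linear interpolation; the data below are synthetic stand-ins for the simulated training set.

```python
import numpy as np

# Stand-in 1D training data from the simulation (sorted in x).
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.01 * rng.normal(size=x.size)

# Leave-one-out: drop each interior point, interpolate the rest linearly,
# and record the error at the dropped point.
residuals = []
for i in range(1, x.size - 1):            # skip the endpoints (no bracketing pair)
    x_rest = np.delete(x, i)
    y_rest = np.delete(y, i)
    y_pred = np.interp(x[i], x_rest, y_rest)
    residuals.append(y[i] - y_pred)

residuals = np.array(residuals)
print(residuals.std())                     # rough estimate of the interpolation uncertainty
```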
 
  • Like
  • Informative
Likes Twigg, hutchphd, pbuk and 1 other person
  • #28
Dale said:
Since interpolation is very computationally inexpensive you should just use leave-one-out cross validation. You leave the first data point out, interpolate the rest, and calculate the error for the left out point. Then repeat for the next point and so on. That will give you an estimate of the distribution of residuals. You can even try different interpolation schemes to see which is better in this sense.

This approach may not be feasible for a neural net, depending on how expensive it is to train the net.
Thank you! That is a very good idea!
 
  • #29
Dale said:
Since interpolation is very computationally inexpensive you should just use leave-one-out cross validation.
Alternatively, you could use a higher-order interpolation, e.g. a cubic spline, and estimate the magnitude of the error as the difference between that and the linear interpolation.

1D inputs make everything much simpler and more accurate: although it is stating the obvious, ##n## linearly spaced sample points are much denser than ##n## vertices of a 3D mesh.
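A minimal sketch of that spline-versus-linear comparison as a rough per-point error estimate; the sample function here is invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stand-in 1D training data (sorted in x).
x = np.linspace(0, 10, 50)
y = np.sin(x)

spline = CubicSpline(x, y)
x_new = np.linspace(0.2, 9.8, 500)
y_lin = np.interp(x_new, x, y)       # piecewise linear prediction
y_spl = spline(x_new)                # cubic spline prediction

err_est = np.abs(y_spl - y_lin)      # rough magnitude of the interpolation error
print(err_est.max())
```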
 
  • Like
Likes BillKet and Dale
  • #30
pbuk said:
Alternatively, you could use a higher-order interpolation, e.g. a cubic spline, and estimate the magnitude of the error as the difference between that and the linear interpolation.
You could also look at using a kriging technique to provide more sophisticated error bounds (I have no practical experience of this).
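A minimal sketch of a kriging-style approach using scikit-learn's Gaussian process regressor, which returns a predictive standard deviation alongside the mean; the kernel choice and data below are arbitrary assumptions, not a vetted setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(200, 2))        # stand-in (x1, x2) training inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # stand-in targets

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(X, y)

x_new = np.array([[0.4, 0.6]])
mean, std = gp.predict(x_new, return_std=True)   # prediction with its uncertainty
print(mean[0], std[0])
```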
 
  • #31
I would suggest that you make a simple model and use your techniques to solve it, thereby learning the techniques.
 

FAQ: Uncertainties from linear interpolation

What is linear interpolation?

Linear interpolation is a method used to estimate values between two known data points. It assumes that the relationship between the data points is linear, and uses this assumption to calculate an estimated value at a point within the data range.
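For example, in one dimension, for ##x## between two known points ##(x_i, y_i)## and ##(x_{i+1}, y_{i+1})##, the interpolated value is ##y = y_i + (y_{i+1} - y_i)\,\frac{x - x_i}{x_{i+1} - x_i}##.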

How is linear interpolation used to calculate uncertainties?

A common approach is to compare the piecewise linear estimate at the point of interest against either left-out data points or a higher-order interpolant (such as a cubic spline). The size of that difference gives a rough measure of the range of plausible values, and hence of the interpolation uncertainty.

What are the limitations of linear interpolation in calculating uncertainties?

Linear interpolation assumes a linear relationship between data points, which may not always be accurate. It also does not take into account any potential errors or uncertainties in the known data points, which can affect the accuracy of the estimated value and uncertainty.

How can I improve the accuracy of uncertainties calculated using linear interpolation?

To improve the accuracy of uncertainties calculated using linear interpolation, it is important to have a sufficient number of data points and to ensure that the data points are evenly spaced. It is also important to consider any potential errors or uncertainties in the known data points and to use multiple interpolation methods to compare results.

Can linear interpolation be used for non-linear data?

Simple linear interpolation assumes the relationship is approximately linear between neighbouring points, so for strongly non-linear data its accuracy depends on how densely the data points sample the curvature. For smooth non-linear data, higher-order methods such as polynomial or spline interpolation usually give smaller interpolation errors.
