Curve fitting with errors-in-variables

  • #1
Phy73
Hi,
I need some direction on a problem that has been bothering me quite a lot; even a few links or a short explanation would help. Thank you in advance!

I have a huge dataset Ω (unknown) of experimental values {[itex]e_i[/itex]} that should approximate (with noise in both the value and the variable) an unknown curve f, subject to the condition that f is non-decreasing.

I take a randomly chosen subset of Ω, the "training subset", which should therefore reflect the same characteristics as Ω in distribution and other geometric measures. In the following image the subset is Ω' = {[itex]x_1,...,x_{18}[/itex]}, just to explain it with an intuitive drawing:

[Figure: training points [itex]x_1, \dots, x_{18}[/itex] with errors in both value and variable, a best-fit curve, and orange and green confidence curves around it.]

Now, what I would like is to find the "best" curve fitting the data and, if possible, two confidence curves, like the orange and green ones in the image above, expressing some percentile confidence that the true curve lies between them.

I see some different issues:
  1. what does "best" curve mean? Well, if we have many data points with overlapping confidence intervals in the variable (as between [itex]x_5[/itex] and [itex]x_{15}[/itex]), it should mean the most probable value; the more points I add to Ω', the more "precise" I expect the estimated curve to be;
  2. since there are errors in the variable, can it make sense to evaluate [itex]f[/itex] only at a finite number of points with some algorithm?
  3. where points are dense, I expect to almost see the error distribution directly, and the two confidence curves should lie closer to the "best" curve; where points are sparse, the curve should be highly imprecise (confidence curves far away). Should these two cases be treated separately?
  4. the monotonicity condition on f imposes some limits on the confidence curves even in sparse intervals, since I expect the confidence curves to be monotone too (am I right?); but what does this mean for the construction of the "best" curve fitting the data?
  5. the data errors can be considered random Gaussian errors, with σ to be determined; while the error distribution in the value can be estimated within a dense interval, how does one estimate the error distribution in the variable?


I would really like to understand how to address this kind of problem, even with a computational algorithm, but I am having difficulty finding the proper terms to search for, or a hint on where to start.

thank you in advance!
 

  • #2
First, let's consider the problem of fitting curves that are non-decreasing. One search phrase for this is "monotone approximation".

There are families of functions that are non-decreasing. For example, the square of a real number is always non-negative. So if [itex] g(s) [/itex] is any real-valued function then [itex] h(s) = (g(s))^2 [/itex] is non-negative. The integral of a non-negative function is a non-decreasing function of the upper limit of integration. So [itex] f(x) = \int_0^x h(s)\, ds [/itex] is a non-decreasing function of [itex] x [/itex].

So you can define a family of non-decreasing curves by taking [itex] g(s) [/itex] from some family of functions defined by parameters, such as [itex] g(s) = As + B [/itex]. Then you can work out [itex] h(s) [/itex] and do the integration. You'll get a family of cubic polynomials, but it won't be as general as the family of all cubic polynomials.

To fit such a family to data by least squares, you need a software package that lets you specify the family. I think such packages exist, but I'm not familiar with current curve-fitting programs.
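That said, a general-purpose least-squares routine is enough here. A minimal sketch (my own illustration, assuming SciPy): with [itex] g(s) = As + B [/itex] the integral has a closed form, and I add a constant of integration C so the curve need not pass through the origin.

[code]
# A minimal sketch, assuming the family g(s) = A*s + B described above.
# Then h(s) = g(s)^2 and
#   f(x) = C + integral_0^x (A*s + B)^2 ds
#        = C + (A^2/3)*x^3 + A*B*x^2 + B^2*x,
# which is non-decreasing in x for any real A, B.
import numpy as np
from scipy.optimize import least_squares

def f(params, x):
    A, B, C = params
    return C + (A**2 / 3.0) * x**3 + A * B * x**2 + B**2 * x

def residuals(params, x, y):
    return f(params, x) - y

# Synthetic, noisy data from an increasing curve (illustration only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 40)
y = 0.5 * x**3 + x + rng.normal(scale=0.3, size=x.size)

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], args=(x, y))
A, B, C = fit.x  # the fitted curve is non-decreasing by construction
[/code]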

If, instead of a single function, you want to model the data with splines, there are also methods for creating non-decreasing spline functions. In fact, "non-decreasing spline" might turn out to be a better search phrase than "monotone approximation", because "monotone approximation" can be a very abstract topic.
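As one concrete illustration of the spline route (my own sketch, using standard Python libraries): isotonic regression produces the best non-decreasing least-squares fit as a step function, and a PCHIP interpolant through it preserves monotonicity.

[code]
# A two-step recipe: isotonic regression (best non-decreasing
# least-squares fit, a step function) followed by a PCHIP interpolant,
# which preserves the monotonicity of its input data.
import numpy as np
from scipy.interpolate import PchipInterpolator
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 30))
y = np.log1p(x) + rng.normal(scale=0.1, size=x.size)

iso = IsotonicRegression(increasing=True)
y_iso = iso.fit_transform(x, y)  # non-decreasing step values at x

# Interpolating the (deduplicated) isotonic fit keeps it non-decreasing.
x_u, idx = np.unique(x, return_index=True)
spline = PchipInterpolator(x_u, y_iso[idx])
x_grid = np.linspace(x.min(), x.max(), 200)
y_smooth = spline(x_grid)  # smooth, monotone curve
[/code]

A smoothing variant would fit a penalized spline under monotonicity constraints, but the two-step recipe above is the simplest thing to try.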
 
  • #3
Thank you, Stephen.
One thing I would like to understand is how to find this "best" curve without choosing a curve family to fit. Otherwise every choice of family imposes the specific character of the family rather than of the data to be fitted (I don't need the curve to be smooth; for example, fitting with a non-decreasing polynomial will produce artifacts, and so on...).

Imagine there were no "errors", that I simply got the points [itex]f(x_i)[/itex] for [itex]x_i \in Ω'[/itex]; then I believe a good approximation is the piecewise-linear function passing through the points [itex](x_i,f(x_i))[/itex]. Why not? To prefer another approximation (a spline or something else) I would need some additional information telling me why I prefer the spline to the piecewise-linear function. Am I right?

Now, if the point x is not a point but a probability distribution for the position, it is as if I am spreading the value [itex]f(x)[/itex] over an interval (in high school it was [itex][x_i-\epsilon, x_i+\epsilon][/itex]); let's assume the measurement errors in the variable x are random, with a Gaussian distribution. What does this information give us for a generic x in the interval? For example, in the interval [itex][x_5,x_{15}][/itex] I expect the curve not to look like a sawtooth, because the error in x does not allow that kind of resolution (having errors in the variable seems to me a bit like filtering the high frequencies out of a Fourier spectrum; see the toy simulation below).
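Here is a quick toy simulation of this intuition (my own numbers, purely illustrative):

[code]
# Toy simulation: a sawtooth "true" curve sampled with Gaussian error
# in x only. The conditional mean of y given the *observed* x is close
# to f convolved with the x-error density, so the sawtooth washes out.
import numpy as np

def f(x):
    return x % 1.0  # sawtooth with period 1

rng = np.random.default_rng(2)
x_true = rng.uniform(0.0, 5.0, 200_000)
x_obs = x_true + rng.normal(scale=0.5, size=x_true.size)  # error in x
y_obs = f(x_true)  # exact values, to isolate the effect of x-error

# Binned estimate of E[y | x_obs] over the interior of the range.
bins = np.linspace(0.5, 4.5, 40)
which = np.digitize(x_obs, bins)
cond_mean = np.array([y_obs[which == k].mean() for k in range(1, bins.size)])
print(cond_mean.round(2))  # nearly flat near 0.5: the detail is gone
[/code]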

Maybe the issue of conditions on f (like non-decreasing, but there can be others in similar problems) can be addressed later, once we understand how to deal with the problem without conditions. For example, in the drawing [itex]f(x_5)>f(x_6)[/itex], but this is normal because we also have errors in the value (in high school it was [itex][f(x_i)-\epsilon_f, f(x_i)+\epsilon_f][/itex]); let's again assume Gaussian errors. How do we combine the condition (f non-decreasing) with the distributions of the value and of the position?

Thank you for your support in advance!
 
  • #4
Phy73 said:
One thing I would like to understand is how to find this "best" curve without choosing a curve family to fit. Otherwise every choice of family imposes the specific character of the family rather than of the data to be fitted

If your data has n distinct values of x and you insist on treating each value of x in isolation, then there is no justification for creating a curve at values that are not in the data. There is no justification for estimating the value y at each x and then joining the points (x,y) with straight lines.

Unless you only want to estimate y at exactly the values of x that are in the data, you must make some assumption that establishes a relation between the values of x in the data and those that are not. For example, if you use Fourier analysis, you are using a family of functions, namely those that can be represented as Fourier series. Each term in a Fourier series is a periodic function. When you estimate the coefficient of that function, you are letting the data at different values of x affect the estimate at all other values of x.

Phy73 said:
Imagine there were no "errors", that I simply got the points [itex]f(x_i)[/itex] for [itex]x_i \in Ω'[/itex]; then I believe a good approximation is the piecewise-linear function passing through the points [itex](x_i,f(x_i))[/itex]. Why not?

A better question is "why?". You can believe what you want, but such a belief is not a mathematical justification.

Phy73 said:
To prefer another approximation (a spline or something else) I would need some additional information telling me why I prefer the spline to the piecewise-linear function. Am I right?

Essentially, no. You need "more information" even to justify a piecewise-linear approximation. The "more information" you need consists of the model you assume for how the data are generated, including the errors in measurement. If you don't have such information, you are asking a question with insufficient information for a definite mathematical answer. It's like asking for the sides and angles of a triangle when given only one side and one angle.

Phy73 said:
Now, if the point x is not a point but a probability distribution for the position, it is as if I am spreading the value [itex]f(x)[/itex] over an interval (in high school it was [itex][x_i-\epsilon, x_i+\epsilon][/itex]); let's assume the measurement errors in the variable x are random, with a Gaussian distribution. What does this information give us for a generic x in the interval? For example, in the interval [itex][x_5,x_{15}][/itex] I expect the curve not to look like a sawtooth, because the error in x does not allow that kind of resolution (having errors in the variable seems to me a bit like filtering the high frequencies out of a Fourier spectrum).

To justify a particular filtering method, you still need assumptions or information about how the data is generated. You seem to be afraid to make specific assumptions about this and you are hoping that "math" can provide a specific answer. "Math" doesn't have an answer unless it has more to go on.
 
  • #5
Hi Stephen,
you are right, and your words were very, very useful to me. I have spent almost all of the last few days playing with these concepts and finding very interesting things on the topic.

What I would like in this specific problem is not to limit the family of functions to a subspace of very few dimensions (as in the case of polynomials of degree n), but instead to work with a bigger family (say, continuous functions) and to have the assumptions come from other sources (say, how [itex](x_i,f(x_i))[/itex] influences nearby values [itex](x,f(x))[/itex], or bounds on the derivatives f', f'', ...).

I've found something around the name "Bayesian non-parametric models" and have tried to play a bit with some of the ideas... but I still don't have a clear view of the topic at all.
...do you have any good ideas of where I can continue searching or find a good summary of the topic? (Of course I'm not demanding it, but if you already know the answer and have time...)

Thank you, your reply was VERY useful to me!
 
  • #6
Phy73 said:
I've found something around the name "Bayesian non-parametric models" and have tried to play a bit with some of the ideas... but I still don't have a clear view of the topic at all.

I hadn't heard of that type of model before. I'm glad you mentioned it. It looks like the kind of thing that suits your proclivities!

...do you have any good ideas of where I can continue searching or find a good summary of the topic?

We have to decide whether you have stated your goal precisely. It's not advisable to use the word "confidence" as a synonym for "probability". If your goal is a result like "There is a 95% probability that a future measurement will fall between these two curves", or even the less ambitious "There is a 93% probability that there is a 95% probability that a future measurement will fall between these two curves", you should consider only Bayesian approaches of some sort. Non-Bayesian (frequentist) confidence intervals do not give you such results about "probability". They give you results about "confidence", and the technical meaning of "confidence" does not provide such guarantees about "probability" (even though laymen's misinterpretations of confidence intervals claim such guarantees). I think what you want is a Bayesian "credible interval", not a "confidence interval".

As to further research on Bayesian non-parametric models, I suggest you search on "Bayesian nonparametric models thesis", because in newly developed mathematics someone's PhD thesis is often the best place to find a summary of current work. PhD candidates are required to present an exposition of the important known results and definitions in their specialty, and these are often excellent pieces of writing. I think you must decide which of the Bayesian non-parametric model scenarios fits your problem. Just glancing at some online sources, it appears the models are classified by the parameter space and the method of assigning a prior distribution on it. I haven't found a thesis that specializes in the space of continuous functions with a Gaussian process as a prior, but most of the theses I saw mentioned that as one type of model.
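If you want something concrete to experiment with, here is a minimal Gaussian-process sketch of my own using scikit-learn. Two caveats: a plain GP models noise in y only (not errors-in-variables), and it does not enforce monotonicity, so both aspects of your problem need extensions beyond this sketch.

[code]
# A minimal Gaussian-process sketch: an RBF prior on continuous
# functions plus a white-noise term for the observation error in y.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 10.0, 25))
y = np.log1p(x) + rng.normal(scale=0.1, size=x.size)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x[:, None], y)

x_grid = np.linspace(0.0, 10.0, 200)
mean, std = gp.predict(x_grid[:, None], return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # ~95% credible band
[/code]

Consistent with your earlier point about dense and sparse regions, the band from such a model automatically narrows where data are dense and widens where they are sparse.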
 

Related to Curve fitting with errors-in-variables

1. What is curve fitting with errors-in-variables?

Curve fitting with errors-in-variables is a statistical method used to estimate the relationship between two variables when there is uncertainty or error in the measurements of both variables. It is commonly used in scientific research to analyze data and make predictions.

2. How does curve fitting with errors-in-variables differ from regular curve fitting?

In regular curve fitting, the independent variable is assumed to be measured without error, while in curve fitting with errors-in-variables, both the independent and dependent variables have measurement errors. This makes the estimation of the curve more challenging and requires the use of specialized techniques.

3. What types of errors can occur in curve fitting with errors-in-variables?

There are two types of errors that can occur in curve fitting with errors-in-variables: measurement error and specification error. Measurement error refers to errors in the actual measurements of the variables, while specification error refers to errors in the model used to describe the relationship between the variables.

4. How do you account for errors-in-variables in curve fitting?

To account for errors-in-variables, specialized techniques such as Deming regression, total least squares, and errors-in-variables regression can be used. These techniques take the errors in both variables into account and provide more accurate estimates of the relationship between them.
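As a hedged illustration, SciPy's odr module implements orthogonal distance regression, one standard errors-in-variables technique; the sx and sy values below are assumed measurement standard deviations, not estimates from data.

[code]
# A minimal sketch using scipy.odr (orthogonal distance regression),
# which accounts for errors in both x and y.
import numpy as np
from scipy import odr

def linear(beta, x):
    return beta[0] * x + beta[1]  # slope, intercept

rng = np.random.default_rng(4)
x_true = np.linspace(0.0, 10.0, 50)
x = x_true + rng.normal(scale=0.2, size=x_true.size)              # error in x
y = 2.0 * x_true + 1.0 + rng.normal(scale=0.3, size=x_true.size)  # error in y

data = odr.RealData(x, y, sx=0.2, sy=0.3)  # assumed error std devs
result = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
print(result.beta)     # fitted [slope, intercept]
print(result.sd_beta)  # standard errors of the estimates
[/code]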

5. What are the limitations of curve fitting with errors-in-variables?

Curve fitting with errors-in-variables assumes that the errors in the measurements of the variables are random and normally distributed. If this assumption is violated, the results of the curve fitting may be biased. Additionally, the accuracy of the estimates depends on the amount of error in the measurements, so it is important to have as accurate measurements as possible.
