Fit to (orthgonal?) polynomial function

In summary, the poster is seeking advice on a project that involves finding a function to fit experimental data whose behaviour changes as some parameters are modified. They have decided to try a polynomial fit, but are concerned that the coefficients change whenever the degree changes, and wonder whether a single-variable polynomial can accurately predict results that depend on multiple parameters. They are considering orthogonal polynomials, but are unsure whether this will solve the problem; other candidate bases, such as the Legendre polynomials or trigonometric functions, are also mentioned. The replies discuss when it is appropriate to represent a real-life phenomenon as a sum of functions, and whether the parameter changes can be viewed as a transformation. The poster is open to suggestions and resources on how to approach the problem.
  • #1
Tojur
Hi all. I need some advice on a project I'm working on.

I have some experimental (simulation) data and I need to find a function that fits it. The behaviour of the experimental data changes when I modify some parameters I have. My goal is, from that single function, to be able to predict how the experimental data will change according to the parameters: I mean, to be able to find an analytical expression that represents all the information I have.

Given the characteristics of my problem, I've decided to try a polynomial function. To see how the coefficients vary, I've done some fits with Mathematica. However, every time I change the polynomial degree, not only do the coefficients change value, but their signs can also change (for example, the coefficient of the quadratic term is positive in a 4th-degree polynomial fit and negative in a 5th-degree fit). I'm aware that there is no unique function that can represent some data. However, this behaviour is a big problem for my goal, since it breaks down any attempt to truly generalize my solution.
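The coefficient instability described above is easy to reproduce; here is a minimal Python/NumPy sketch (the Gaussian-shaped sample data are just a hypothetical stand-in for the simulation output):

```python
import numpy as np

# Hypothetical stand-in for the simulation output: a smooth Gaussian-like curve
x = np.linspace(0.0, 2.0, 50)
y = np.exp(-x**2)

# Ordinary (monomial-basis) polynomial fits of degree 4 and 5
c4 = np.polyfit(x, y, 4)  # coefficients, highest degree first
c5 = np.polyfit(x, y, 5)

# The quadratic coefficient is not stable under a change of degree, because
# the monomials 1, x, x^2, ... are not orthogonal on the sample points
quad4 = c4[-3]  # x^2 coefficient of the degree-4 fit
quad5 = c5[-3]  # x^2 coefficient of the degree-5 fit
print(quad4, quad5)
```

The two printed values differ noticeably, even though both fits describe the same data well.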

As far as I can tell, a possible solution to this problem is to make the fit in a set of orthogonal functions: in particular, orthogonal polynomials. However, I would like to know your opinion on this. Two aspects of this approach worry me in particular: the first is whether making my fit in orthogonal polynomials (or orthogonal functions in general) would really solve my problem of the coefficients changing with the degree of the fit. The second is that some of these sets (the few I know), like the Legendre polynomials, have, even in their high-degree members, terms with quadratic dependence: I wonder whether this would be a problem, since I believe the data have a strong dependence on this term (this is more an observational hunch, nothing rigorous).

I hope I've made myself clear. Any suggestion or advice would be really appreciated. Also, any bibliography on developing the orthogonal fit would be nice (if it is still a good idea, of course).
 
  • #2
Are you using single-variable polynomials? If you expect to predict some result when you vary parameters (plural), how would you expect to do this with a polynomial in one variable? Are you hoping to find how the coefficients of a single-variable polynomial depend on the several parameters?
 
  • #3
No, of course not. The function I'm looking for is a polynomial in a single principal variable "x". The dependence on the other variables (which I call parameters) is something that will be inserted into the coefficients of the polynomial expansion (or this is what I hope to be able to do later). But before doing that, I need a fit with respect to the variable "x" whose coefficients do not depend on the degree of the polynomial; once I have those invariant coefficients, I can analyze how they depend on the parameters.

I'm sorry if I wasn't very clear the first time (I hope I have been clear enough this time).

Sorry, I misread: "Are you hoping to find how the coefficients of a single variable polynomial depend on the several parameters?" The answer to that is yes :D
 
Last edited:
  • #4
Let me restate your question, as I understand it:

There is a simulation [itex] S(P) [/itex] whose output is a set of data points [itex] (x_i, y_i) [/itex] and whose input is the vector of parameters [itex] P = (p_1,p_2,..p_n) [/itex].

Your goal is to find an approximation of the output of the simulation that has the form:
[itex] y = \sum_{i=1}^M c_i(P) f_i(x) [/itex] where the [itex] f_i [/itex] are a set of functions that do not depend on the parameter vector [itex] P [/itex] and the [itex] c_i [/itex] are a set of functions that may depend on the parameter vector [itex] P [/itex] but do not depend on the variable [itex] x [/itex].

You will consider such an approximation to be successful if small changes in the [itex] P [/itex] vector produce only small changes in the functions [itex] c_i [/itex].

-----

I'm sure most mathematicians would first try a set of functions [itex] f_i [/itex] that are orthogonal since a representation using a non-orthogonal set of functions can be re-written as a representation using orthogonal basis functions.

Is the simulation you mentioned deterministic or stochastic? If the simulation is stochastic then I think fitting a polynomial to the output curve is not a good approach.

Representing a function as a sum of other functions (such as orthogonal polynomials or trigonometric functions) is appropriate when the function represents something that is (or can be conceptualized as) literally a "sum", i.e. an arithmetic addition of other functions. (This is more specific than saying the function is the "combined effect" of other functions.)

Often a function representing a real-life phenomenon cannot be exactly represented by a finite sum of orthogonal polynomials. In such a case, a useful representation will have coefficients that are large on a few of the component polynomials and small on the rest of them.
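As a concrete (hypothetical) illustration of this decay: expanding a smooth curve such as a Gaussian in Legendre polynomials on [-1, 1] concentrates the weight on the first few even-degree coefficients. A Python/NumPy sketch:

```python
import numpy as np

# A smooth, even test function sampled on [-1, 1] (hypothetical data)
x = np.linspace(-1.0, 1.0, 200)
y = np.exp(-x**2)

# Least-squares fit in the Legendre basis; coef[k] multiplies P_k(x)
coef = np.polynomial.legendre.legfit(x, y, deg=6)
print(np.round(coef, 4))
```

The even-degree coefficients dominate and fall off quickly (the odd ones vanish by symmetry), which is the "large on a few, small on the rest" pattern described above.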

Without knowing what you are simulating, I can't say whether it is plausible that your output curve is an arithmetic sum of functions. I also don't know if representing it as orthogonal polynomials or orthogonal trigonometric functions (i.e. Fourier analysis) would be useful.

Can the changes you want to perform on the parameters [itex] P [/itex] be visualized as any sort of transformation that couples the change in one component of [itex] P [/itex] to a change in other components of it? For example, if [itex] p_1 [/itex] is the pressure in a pipe and [itex] p_2 [/itex] is the rate of flow of some chemical through the same pipe, then plausible changes in the [itex] P [/itex] vector may not be changes that vary [itex] p_1 [/itex] and [itex] p_2 [/itex] in a completely independent manner. The change in [itex] p_2 [/itex] may not be a function only of the change in [itex] p_1 [/itex], but physical constraints (i.e. adjusting various valves) may imply that a change in [itex] p_2 [/itex] is also not entirely independent of a change in [itex] p_1 [/itex].
 
  • #5
First of all, thanks for taking the time to answer.

Your explanation of my problem is correct. The simulation I'm using is in fact deterministic. The data I obtain from the simulation are (as far as I can assume, based on the information I have) some kind of hybrid of known functions like the Gaussian function, the Bessel function (0-order of the first kind), and, from what I see, a linear function and probably others; put another way, I would say that my data represent, in some manner, transition functions between those, or at least between similar functions (with similar behaviour). From the series expansions of these functions, I thought that my answer could be written as a polynomial expansion. My idea is that, depending on the parameters, it would transform into one function or another, and could predict the middle states, which is what my data represent. So, do you think it would be useful to try an orthogonal polynomial expansion?

Now, going a step further: once I have the fits, I will try to find how the expansion coefficients "c" depend on the parameters P. These parameters depend on each other: they are not independent, but this is expected from the characteristics of my problem. In fact, these parameters represent measurable quantities.
 
  • #6
Without knowing the details of your investigation, I can't guess what you should try first.

I agree that the approach you describe may work. However, it won't be simple to see the results of that approach as a blending of known functions unless the functions themselves appear as members of the basis functions [itex] f_i [/itex]. For example, if the [itex] f_i [/itex] are polynomials and the "natural" output of the simulation is a blend of Gaussians and Bessel functions, it won't be simple to recognize how a sum of polynomials can be represented as a sum of a Gaussian plus a Bessel function.

If you are convinced that the output of the simulation is a blend of certain functions, you can seek to represent it by letting the [itex] f_i [/itex] be those functions. If you literally want it to be a "mixture" of those functions, you can add the constraint [itex] \sum_{i=1}^M c_i(P) = 1 [/itex]. Suppose you choose functions [itex] f_i [/itex] that are not orthogonal. You must find a way to compute the coefficients [itex] c_i(P) [/itex] when given a particular set of data [itex] (x_i,y_i) [/itex]. One approach is to define the fit as a least squares problem:

We are given the data set of [itex] N [/itex] points [itex] (x_j,y_j)[/itex]. We are given [itex] M [/itex] known functions [itex] f_i [/itex]. Find the constants [itex] c_i [/itex] that minimize the sum of squared errors given by [itex] \sum_{j=1}^N ( y_j - \sum_{i=1}^M c_i f_i(x_j))^2 [/itex].

With orthogonal functions we expect to find a unique solution for the [itex] c_i [/itex], and a solution to the minimization can be estimated by "taking the inner product of the data with each basis function", so to speak. With non-orthogonal functions, there might be more than one solution for the [itex] c_i [/itex] and it may be hard to solve the minimization problem. How hard would it be to solve the minimization problem with your data?
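A minimal sketch of this least-squares setup in Python, assuming (purely as an example, echoing the functions mentioned earlier in the thread) a basis containing a Gaussian and a Bessel J0 term:

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order 0

# Assumed, non-orthogonal basis functions f_i, stacked as columns of the
# design matrix A
def basis(x):
    return np.column_stack([np.exp(-x**2), j0(x), x, np.ones_like(x)])

# Synthetic data: an exact blend of two of the basis functions
x = np.linspace(0.0, 5.0, 100)
y = 0.7 * np.exp(-x**2) + 0.3 * j0(x)

# Minimize sum_j (y_j - sum_i c_i f_i(x_j))^2; lstsq solves this via the SVD,
# which also copes with a nearly rank-deficient (ill-conditioned) basis
A = basis(x)
c, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(np.round(c, 6))  # recovers approximately [0.7, 0.3, 0, 0]
```

With an orthogonal basis the normal-equations matrix would be diagonal and each coefficient could be read off by an inner product; with a non-orthogonal basis the SVD-based solve above does the equivalent work.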
 
  • #7
Hi, thanks for your suggestion. I didn't answer before because I've been reading about the subject, in particular about fits in orthogonal polynomials, and trying to carry one out. My idea is to change the usual range of orthogonality of the polynomials (which is from -1 to 1 for Chebyshev polynomials, for example) so that they span the interval containing my data. This can be done with a simple change of variable. However, there is a problem: I want to find the Chebyshev polynomial expansion of my discrete set of points

[itex] f(x) \approx \sum\limits_{j=1}^N c_jT_{j-1}(x) [/itex]

On the other hand, the discrete condition of orthogonality for chebyshev is

[itex]\sum\limits_{k=1}^m T_i(x_k)T_j(x_k)= \left\{ \begin{array}{c l}
0\ \ i \neq j\\
m/2\ \ i=j \neq 0\\
m\ \ i=j=0
\end{array}\right.[/itex]

only if

[itex]x_k=\cos \left( \dfrac{\pi (k-1/2)}{m} \right) \ \ \ k=1,2,...,m [/itex]

From this condition the coefficients can be found:

[itex]c_j=\dfrac{2}{N}\sum\limits_{k=1}^N f(x_k)T_{j-1}(x_k)[/itex]

This seems easy to calculate. The problem is that to do this, I need to know the value of [itex] f(x) [/itex] at the points [itex]x_k [/itex], and it could happen that only a few of my data points fall exactly on these points: most of them could lie in between. I know that if I use a really high degree in the expansion, I will find some [itex]x_k [/itex] that correspond to my data. However, my goal is to make the expansion without having to use an extremely large one. I have searched for an example of a fit over discrete data with orthogonal polynomials, but I couldn't find anything (only with continuous functions).
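For what it's worth, a discrete least-squares Chebyshev fit does not require the data to sit at the Chebyshev nodes: after mapping the data interval onto [-1, 1] (the change of variable mentioned above), one can simply solve the least-squares problem in the Chebyshev basis at whatever points the data happen to occupy. NumPy's Chebyshev.fit does both steps; the sample data below are hypothetical:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Hypothetical data on an arbitrary interval, at arbitrary (non-node) points
x = np.linspace(2.0, 7.0, 80)
y = np.exp(-(x - 4.0)**2)

# Chebyshev.fit maps [2, 7] to [-1, 1] internally and solves the discrete
# least-squares problem at the given x's; no Chebyshev nodes are needed
fit = Chebyshev.fit(x, y, deg=12)

print(fit.coef)                    # Chebyshev coefficients on the mapped interval
print(np.max(np.abs(fit(x) - y)))  # maximum residual at the data points
```

This sidesteps the node condition entirely: the discrete orthogonality relation above is what makes the coefficients computable by a simple sum at the special points [itex]x_k[/itex], but an ordinary least-squares solve gives the expansion coefficients for any point set.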

Any idea, advice or useful bibliography (with an example would be great) would help me a lot.
 

Related to Fit to (orthogonal?) polynomial function

What is a "Fit to Orthogonal Polynomial Function"?

A "Fit to Orthogonal Polynomial Function" is a mathematical model used to approximate a relationship between two variables by fitting a polynomial curve to the data points. The term "orthogonal" refers to the use of orthogonal polynomials, which are a set of polynomials that are mutually orthogonal with respect to a chosen inner product.

Why are orthogonal polynomial functions used?

Orthogonal polynomial functions are used because they offer several advantages over traditional polynomial functions. They can better represent complex relationships between variables, they are more numerically stable, and they can reduce the effects of multicollinearity in regression models.

How is a fit to orthogonal polynomial function calculated?

The process of calculating a fit to an orthogonal polynomial function involves finding the polynomial curve that minimizes the sum of squared errors between the data points and the curve. This is typically done using least squares regression; because the model is linear in its coefficients, the best-fit coefficients can be computed directly by solving a linear system (via the normal equations or, more stably, a QR decomposition).

What are some common applications of fit to orthogonal polynomial functions?

Fit to orthogonal polynomial functions are commonly used in fields such as statistics, engineering, and data analysis. They are particularly useful for modeling nonlinear relationships, such as in time series analysis, signal processing, and curve fitting.

Are there any limitations to using fit to orthogonal polynomial functions?

While fit to orthogonal polynomial functions offer many advantages, they also have some limitations. They may not be appropriate for all types of data, and the choice of polynomial degree can greatly affect the results. Additionally, they may not accurately model relationships that are highly nonlinear or have outliers in the data.
