Determine optimal vectors for least squares

In summary: to fit a linear model to a set of measurements, the parameters are computed as θ=(H^TH)^{-1}H^Tx. The question is then how to choose the subset of N columns of H (say H_N) that gives the best least-squares fit; the approaches discussed below include PCA, order-recursive (stepwise) least squares, and an exhaustive search over subsets.
  • #1
Pete99
Hello all,

I have a set of measurements that I want to fit to a linear model with a bunch of parameters. I do this as [itex]\theta=(H^TH)^{-1}H^Tx[/itex], where θ are the parameters and x is the set of measurements. The problem is that I'd like to reduce the number of parameters in the fit. I'd like to choose the subset of N parameters that gives the best fit, such that no other combination of N parameters works better.
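For reference, this is what I do, as a rough numpy sketch (the matrix sizes and data below are just placeholders, not my actual measurements):

[code]
import numpy as np

# Placeholder data: m measurements, p candidate parameters.
rng = np.random.default_rng(0)
m, p = 100, 25
H = rng.normal(size=(m, p))     # design matrix, one column per parameter
x = rng.normal(size=m)          # measurement vector

# theta = (H^T H)^{-1} H^T x via the normal equations...
theta = np.linalg.solve(H.T @ H, H.T @ x)

# ...or, numerically more stable, via lstsq (same fit).
theta_lstsq, *_ = np.linalg.lstsq(H, x, rcond=None)
[/code]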

Is there any way I can determine what is the best subset of N parameters without having to try all of them? I have seen that with order recursive least squares I can add parameters sequentially to improve the fit, but this approach does not guarantee that the N parameters that I have selected are the best combination.

Thank you very much for any help,
 
  • #2
Pete99 said:
Hello all,

I have a set of measurements that I want to fit to a linear model with a bunch of parameters. I do this as [itex]\theta=(H^TH)^{-1}H^Tx[/itex], where θ are the parameters and x is the set of measurements. The problem is that I'd like to reduce the number of parameters in the fit. I'd like to choose the subset of N parameters that gives the best fit, such that no other combination of N parameters works better.

Is there any way I can determine what is the best subset of N parameters without having to try all of them? I have seen that with order recursive least squares I can add parameters sequentially to improve the fit, but this approach does not guarantee that the N parameters that I have selected are the best combination.

Thank you very much for any help,

Hey Pete99.

If you want to choose the best fit for, say, N parameters, where N is greater than zero but no more than the total number of parameters, then it sounds like you need to either project your inputs down to an appropriate sub-space and do least squares on that, or do the full least squares first and then project the calculated parameters down to a reduced form.

In other words, this boils down to taking your vector x and projecting it down to some sub-space, in the same way we project an arbitrary point in R^3 onto a two-dimensional plane.

The nitty-gritty you will have to figure out is the projection itself, and this will depend on exactly what you call an 'optimal' configuration of parameters.

I would start by doing the least squares and then projecting your parameters down to some sub-space, rather than projecting your vector down before you do least squares.

If you are trying to fit a linear model to data as in a statistical analysis (like a regression), though, I would not use this method but instead use what is called a Principal Components Analysis (PCA).

PCA is an old, well-understood technique and comes as a feature or a library in many statistical applications. It works by creating a basis of uncorrelated variables, ordered from the basis vector that contributes the most variance down to the one that contributes the least.

Thus if you want a model for N parameters, you pick the first N basis components of the PCA output and use these basis vectors as your regression model.

I'd strongly recommend thinking about PCA if you are trying to fit a multi-dimensional linear model: the calculation is very quick, fitting the linear model is just as quick, and you can see for yourself how good the fit is.
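To make the idea concrete outside of any particular package, here is a rough numpy sketch (the function and variable names are mine, just for illustration):

[code]
import numpy as np

def pca_fit(H, x, N):
    """Regress x on the first N principal components of the columns of H."""
    # Center the columns of H and x (PCA works on centered data; an explicit
    # intercept term could be used instead).
    Hc = H - H.mean(axis=0)
    xc = x - x.mean()
    # SVD of the centered design matrix: rows of Vt are the principal
    # directions, ordered from most explained variance to least.
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
    scores = Hc @ Vt[:N].T                      # data projected onto the first N components
    theta_pc, *_ = np.linalg.lstsq(scores, xc, rcond=None)
    fitted = scores @ theta_pc + x.mean()
    return theta_pc, fitted                     # component coefficients and fitted values
[/code]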

In R it should take roughly 30 minutes to an hour if you are familiar with the language, and longer if you are not; if you know the major packages you can probably just read the documentation for this.
 
  • #3
Thank you chiro for your detailed response.

Sorry if the notation is not very rigorous, and correct me if something I say is plainly wrong. I forgot to mention that the vectors (columns of H) that I want to use are already defined and have some physical meaning. As far as I know PCA does not let me use these particular vectors, but the idea is exactly what I want to do.

I would like to choose a subset of N columns from the matrix H (say [itex]H_N[/itex]), such that no other subset of N columns of H gives a better fit in the least-squares sense.

In PCA I would get a set of orthonormal vectors ([itex]v_1,v_2,\dots[/itex]) that I can use for my fit. And since they are orthogonal, if [itex]H_1=v_1[/itex] is the best "single-vector" fit to my data, the best "two-vector" fit will be [itex]H_2=[v_1,v_2][/itex], etc...

In my case, since the vectors in H are not orthogonal, then assuming the best "single-vector" is [itex]H=h_1[/itex], there is no guarantee that the best "two-vector" subset contains [itex]h_1[/itex]; it could be any other pair of vectors (for instance [itex]H_2=[h_3,h_{24}][/itex]). My problem is that I don't know how to choose these two vectors other than by trying all possible combinations of two vectors.
 
  • #4
So what is the criterion exactly? Do you want to rank some variables over others in the selection process? For example, do you always want a model that captures a particular kind of variable even if it doesn't contribute much to the actual regression model?

You could also take a variable out when you do the PCA, see what it produces, and then look at what has been calculated in the output components.

There are also routines that search for the best subset of variables for a regression with N variables, in contrast to the PCA approach. You might want to look at the step() routine in R and other similar routines.
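As a rough illustration of the sequential idea (this is not the step() algorithm itself, which selects by AIC, just a simplified stand-in), a greedy forward-selection sketch in numpy might look like this:

[code]
import numpy as np

def forward_select(H, x, N):
    """Greedily pick N columns of H, at each step adding the column that
    most reduces the least-squares residual. Cheap, but not guaranteed to
    find the globally best subset (as noted in this thread)."""
    chosen = []
    remaining = list(range(H.shape[1]))
    for _ in range(N):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = chosen + [j]
            theta, *_ = np.linalg.lstsq(H[:, cols], x, rcond=None)
            rss = np.sum((x - H[:, cols] @ theta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen
[/code]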
 
  • #5
So what is the criterion exactly? Do you want to rank some variables over others in the selection process? For example, do you always want a model that captures a particular kind of variable even if it doesn't contribute much to the actual regression model?

Not exactly. I do want to use the variables that contribute the most to the actual fit, but I want to choose them from a set of parameters that have some physical meaning in my problem.

Let's say that my model has three physical parameters that contribute to the output as [itex]x=[h_1\ h_2\ h_3][\theta_1\ \theta_2\ \theta_3]^T[/itex], where the h's are column vectors and the θ's are the parameters.

Say that my measurement is the vector [itex]x=[1, 1, 0]^T[/itex]. And that the h vectors in my model are [itex]h_1=[1, 0, 0]^T[/itex], [itex]h_2=[0, 1, 0]^T[/itex], and [itex]h_3=[0.9, 0.9, 0.1]^T[/itex].

If I want to use only 1 of the 3 possible parameters, I would choose [itex]\theta_3[/itex], because the vector [itex]h_3[/itex] is very close to [itex]x[/itex]. This is easy to find, because I just have to try the three possibilities. However, if I want to use two parameters, the best choice is [itex]\theta_1[/itex] and [itex]\theta_2[/itex], since the vector x lies in the plane spanned by [itex]h_1[/itex] and [itex]h_2[/itex].
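A throwaway numpy snippet to check that example by trying every subset:

[code]
import numpy as np
from itertools import combinations

x = np.array([1.0, 1.0, 0.0])
H = np.column_stack([[1.0, 0.0, 0.0],      # h1
                     [0.0, 1.0, 0.0],      # h2
                     [0.9, 0.9, 0.1]])     # h3

def rss(cols):
    """Residual sum of squares of the least-squares fit using the given columns."""
    cols = list(cols)
    theta, *_ = np.linalg.lstsq(H[:, cols], x, rcond=None)
    return np.sum((x - H[:, cols] @ theta) ** 2)

print(min(combinations(range(3), 1), key=rss))   # -> (2,)   best single vector is h3
print(min(combinations(range(3), 2), key=rss))   # -> (0, 1) best pair is {h1, h2}
[/code]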

In my problem I have ~25 parameters, and I would like to use no more than ~10 to fit the data (because of restrictions on the processing that I have to do later). My problem is: how can I choose those ~10 parameters, out of the total of 25, that provide the best fit to my data in the least-squares sense?
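If nothing smarter turns up, the brute-force search itself is straightforward to write down (a sketch only; with 25 candidates and subsets of size 10 it means about 3.3 million small least-squares fits, which may or may not be acceptable for my problem):

[code]
import numpy as np
from itertools import combinations

def best_subset(H, x, N):
    """Exhaustively search all N-column subsets of H for the smallest
    least-squares residual. Guaranteed optimal, but combinatorial in cost."""
    best_cols, best_rss = None, np.inf
    for cols in combinations(range(H.shape[1]), N):
        cols = list(cols)
        theta, *_ = np.linalg.lstsq(H[:, cols], x, rcond=None)
        rss = np.sum((x - H[:, cols] @ theta) ** 2)
        if rss < best_rss:
            best_cols, best_rss = cols, rss
    return best_cols, best_rss
[/code]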

I am not familiar with R, so I am not sure what step() does, but I will take a look to see if it can help me.
 

Related to Determine optimal vectors for least squares

1. What is the purpose of determining optimal vectors for least squares?

The purpose of determining optimal vectors for least squares is to find the best fit line or plane for a set of data points. This is useful in various fields such as statistics, data analysis, and machine learning.

2. What are the key factors in determining optimal vectors for least squares?

The key factors in determining optimal vectors for least squares are the number of data points, the complexity of the model, and the chosen error metric. These factors affect the accuracy and precision of the resulting optimal vectors.

3. How do you calculate the optimal vectors for least squares?

The optimal vectors for least squares can be calculated using various methods such as the normal equation method or the gradient descent algorithm. These methods involve minimizing the sum of squared errors between the data points and the model.

4. How does the number of data points affect the determination of optimal vectors for least squares?

The number of data points affects the determination of optimal vectors for least squares as it can impact the accuracy of the resulting model. With more data points, the model may be more accurate and less prone to overfitting.

5. What are some applications of determining optimal vectors for least squares?

Determining optimal vectors for least squares has various applications such as linear regression, curve fitting, and data interpolation. It is also used in fields such as economics, engineering, and social sciences.
