Optimizing Linear Least Squares with Multiple Unknowns: A Partitioned Approach

In summary, the problem is to find x and w that minimize the squared 2-norm of the vector Gx - Mw, subject to some components of x being held at constant values. If we do not have the constant-component constraints then we can choose x = w = 0 and achieve this minimum; with the constraints, the problem can be partitioned into a standard least squares problem in the remaining unknowns.
  • #1
slylock
Hi there,

Could someone please tell me how I can solve this linear least squares problem?

[tex]x^*, w^* = \operatorname{argmin}_{x,w} \; \| Gx - Mw \|_2^2[/tex]

subject to [tex]v_k = c_k, \quad k = 1, \dots, p[/tex]

[tex]v_k[/tex] is a vertex (a component of x)
[tex]c_k[/tex] is a given constant

Here x and w are the unknowns.

G - is a 9m x 3n matrix - the coefficients here are known
x - is a 3n x 1 vector of unknown vertices
M - is a 9m x 4 matrix - the values here are known as well
w - is a 4 x 1 vector of unknown values

I have to solve for x and w. Any help would be really appreciated.
Thanks
 
  • #2
How are v and c related to the problem?
 
  • #3
[tex]x = [v^1 \;\; v^2 \;\; \cdots \;\; v^k]^{\mathsf T}[/tex]
c holds some constant values - basically components of x that we do not want to solve for, or in other words want to remain constant.
 
  • #4
As I see it, you can partition G based on the components of x that are fixed:

[tex]Gx = [G_1 \;\; G_2]\begin{bmatrix} v \\ x_2 \end{bmatrix} = G_1 v + G_2 x_2[/tex]

Now [tex]p = G_1 v[/tex] is a constant vector and the problem becomes

[tex]\min_{x_2,\,w} \left\| [G_2 \;\; -M]\begin{bmatrix} x_2 \\ w \end{bmatrix} - (-p) \right\|_2[/tex]

which is a standard least squares problem with matrix [tex][G_2 \;\; -M][/tex] and right-hand side [tex]-p[/tex].

Hope the above makes sense. The multiplying vector in all the above expressions is a column vector, though I have left out the transposes etc.
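A minimal sketch of this partitioned solve in Python/NumPy. The names fixed_idx (indices of the constrained components of x) and c (their constant values) are assumptions introduced just for this example, not part of the original problem statement:

[code]
import numpy as np

def partitioned_lstsq(G, M, fixed_idx, c):
    """Solve min_{x,w} ||G x - M w||_2 with x[fixed_idx] held at the values c."""
    n_cols = G.shape[1]
    fixed_idx = np.asarray(fixed_idx)
    free_idx = np.setdiff1d(np.arange(n_cols), fixed_idx)

    G1 = G[:, fixed_idx]   # columns multiplying the fixed components (v)
    G2 = G[:, free_idx]    # columns multiplying the free components (x_2)
    p = G1 @ c             # constant vector coming from the fixed part

    # Stack the remaining unknowns [x_2; w] and solve the standard problem
    #   min || [G2  -M] [x_2; w] - (-p) ||_2
    A = np.hstack([G2, -M])
    sol, *_ = np.linalg.lstsq(A, -p, rcond=None)

    x2, w = sol[:free_idx.size], sol[free_idx.size:]

    x = np.empty(n_cols)
    x[fixed_idx] = c
    x[free_idx] = x2
    return x, w
[/code]

The call to lstsq handles the standard least squares problem in the stacked vector [x_2; w]; the fixed entries are simply written back into x at the end.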
 
  • #5
Hmm, I don't understand what you mean here. Suppose we forget about the constants and just solve the equation. Assuming there are no constants, how do you solve it? Technically it should give me two normal equations.
 
  • #6
If you forget about constants then there is nothing stopping me from choosing x=w=0 and the minimum of the function is 0.

Why do you say we should get two normal equations?

Also, what in particular didn't make sense in the previous post?
 
  • #7
Why choose zero for w and x? We have to minimize the equation over w and x, and as far as I know the way to do that is to take the derivative with respect to each unknown vector, set it equal to zero, and then solve. This should give two normal equations: first take the derivative with respect to x and set it to zero, then take the derivative with respect to w and set it to zero. That's what I think.

I was actually looking for an answer for the generalized case. That is why I am saying that there could be arbitrary values for G and M, and the values of w and x are to be found, so you cannot just assume that they are zero.
 
  • #8
What I was trying to say was that the function to be minimized is the 2-norm of a vector, so its smallest possible value is zero. Now, if we do not have the constant-component constraints, then we can choose x = w = 0 and achieve this minimum. So (0, 0) is our solution.

Now, if we do have the constant constraints, then the solution posted above seems to be completely general. The only thing I have done is take the variable components of x (call them x_2) together with w and make them a single vector of a higher dimension. The problem then reduces to a standard least squares problem in this higher-dimensional space.
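For reference, a short derivation of the two normal equations mentioned above, obtained by setting the gradients of the unconstrained objective to zero; it also shows why x = w = 0 solves the unconstrained problem:

[tex]\nabla_x \, \| Gx - Mw \|_2^2 = 2\, G^{\mathsf T}(Gx - Mw) = 0 \quad\Longrightarrow\quad G^{\mathsf T} G \, x = G^{\mathsf T} M \, w[/tex]

[tex]\nabla_w \, \| Gx - Mw \|_2^2 = -2\, M^{\mathsf T}(Gx - Mw) = 0 \quad\Longrightarrow\quad M^{\mathsf T} M \, w = M^{\mathsf T} G \, x[/tex]

Note that x = 0, w = 0 satisfies both equations simultaneously, which is exactly the trivial minimizer pointed out earlier; this is why the constant constraints are needed to make the problem meaningful.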
 
  • #9
Basically I am saying that the standard least squares problem

[tex]\min_x \| Ax - b \|[/tex]

can also be written as

[tex]\min_{x_1,\,x_2} \| A_1 x_1 + A_2 x_2 - b \|[/tex]

where x_1 and x_2 are obtained by partitioning the vector x, and A_1 and A_2 are the corresponding partitions of the columns of A.
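A quick numerical check of this column-partition identity, using arbitrary random dimensions chosen only for illustration:

[code]
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
x = rng.standard_normal(5)

# Partition x into its first 3 and last 2 components, and A into the
# corresponding column blocks.
A1, A2 = A[:, :3], A[:, 3:]
x1, x2 = x[:3], x[3:]

# A @ x equals A1 @ x1 + A2 @ x2, so the two least squares objectives agree
# for any right-hand side b.
assert np.allclose(A @ x, A1 @ x1 + A2 @ x2)
[/code]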
 

FAQ: Optimizing Linear Least Squares with Multiple Unknowns: A Partitioned Approach

What is linear least squares?

Linear least squares is a mathematical method for finding the best fit line for a set of data points. It minimizes the sum of the squares of the vertical distances between the data points and the line, hence the name "least squares".

What is the purpose of solving linear least squares?

The purpose of solving linear least squares is to find a mathematical model that best represents the relationship between two or more variables in a given set of data. It is commonly used in statistics, engineering, and other fields to make predictions and analyze data.

What are the assumptions made when using linear least squares?

The main assumptions made when using linear least squares are: 1) the relationship between the variables is linear, 2) the errors in the data are normally distributed, and 3) the errors have equal variance. Violating these assumptions can lead to inaccurate results.

How is linear least squares different from other regression methods?

Linear least squares is a specific type of regression method that is used when the relationship between the variables is linear. Other regression methods, such as polynomial regression or logistic regression, are used for data with non-linear relationships.

What is the process for solving linear least squares?

The process for solving linear least squares involves writing down the sum of the squares of the vertical distances between the data points and the line, finding the slope and intercept that minimize this sum, and then using these values to form the best fit line for the data.
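A minimal illustration of that process in Python/NumPy, fitting a slope and intercept to a few made-up data points (the data values are purely for illustration):

[code]
import numpy as np

# Made-up data points (x_i, y_i).
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix with a column of ones (intercept) and a column of xs (slope);
# least squares minimizes ||A @ [intercept, slope] - ys||_2^2.
A = np.column_stack([np.ones_like(xs), xs])
(intercept, slope), *_ = np.linalg.lstsq(A, ys, rcond=None)

print(f"best fit line: y = {slope:.3f} x + {intercept:.3f}")
[/code]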
