Best fit curve using Q-R Factorization?

In summary, the conversation discusses using QR factorization to find the best-fit curve for a given problem. The process involves minimizing ||r||^2 = ||b - Ax||^2, where A is an mxn matrix with m > n. This reduces to solving the equation R_n x = (Q^T b)_n for x, which is the same least-squares fit obtained by other orthogonal-decomposition methods. There is some confusion about using back substitution in MATLAB, but the code provided is correct and only needs R padded with zeros so the dimensions match for multiplication.
  • #1
Jamin2112
Homework Statement



[Image: screen-capture-2-15.png]


Homework Equations



[Image: screen-capture-4-8.png]


The Attempt at a Solution



So ... It's part (a) that is confusing me. I already factored it into Q and R. But what does the QR factorization have to do with best-fit curves? (To be fair, I'm working on homework that's due next Friday, so this might be something we go over later in the week.)
 
  • #2


You will want to find an x that minimizes [tex]||r||^2=||b-Ax||^2[/tex] where A is an mxn matrix, and m>n. But A=QR, so r=b-QRx, which is equivalent to Q^T r = Q^T b - Rx. Observe then that:
[tex]Q^T r = \left[ {\begin{array}{c}
(Q^Tb)_n - R_n x \\
(Q^T b)_{m-n} \\
\end{array} } \right] = \left[ {\begin{array}{c}
u \\
v \\
\end{array} } \right][/tex]
where
[tex]R=\left[ {\begin{array}{c}
R_n \\
0 \\
\end{array} } \right][/tex]
i.e. R is padded with zeros so that the dimensions match for multiplication.
Also observe that
[tex]||r||^2 = r^T r = r^T QQ^T r = u^T u + v^T v,[/tex] since Q is orthogonal. But v is independent of x, so ||r||^2 is minimal when u=0, that is, when
[tex]R_n x = (Q^T b)_n[/tex].
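The derivation above can be checked numerically. A minimal sketch in Python/NumPy (standing in for MATLAB here, since the thread's actual code is only in the screenshots), with made-up random data:

```python
import numpy as np

# Small overdetermined system: m = 5 equations, n = 2 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
b = rng.standard_normal(5)

# Full QR: Q is 5x5 orthogonal, R is 5x2 (R_n on top, zero rows below).
Q, R = np.linalg.qr(A, mode="complete")
Rn = R[:2, :]              # the square upper-triangular block R_n
c = Q.T @ b                # Q^T b
u_part, v = c[:2], c[2:]   # (Q^T b)_n and (Q^T b)_{m-n}

# Minimizer: solve R_n x = (Q^T b)_n.
x = np.linalg.solve(Rn, u_part)

# At the minimizer u = 0, so ||r||^2 = ||v||^2.
r = b - A @ x
print(np.allclose(r @ r, v @ v))   # True
```

The final check confirms that the residual norm at the solution equals ||v||, exactly as the block decomposition predicts.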
 
Last edited:
  • #3


Wingeer said:
You will want to find an x that minimizes [tex]||r||^2=||b-Ax||^2[/tex] where A is an mxn matrix, and m>n. But A=QR, so r=b-QRx, which is equivalent to Q^T r = Q^T b - Rx. Observe then that:
[tex]Q^T r = \left[ {\begin{array}{cc}
Q^Tb_n - R_n x \\
Q^T b_{m-n} \\
\end{array} } \right] = \left[ {\begin{array}{cc}
u \\
v \\
\end{array} } \right][/tex]
where
[tex]R=\left[ {\begin{array}{cc}
R_n \\
0 \\
\end{array} } \right][/tex]
i.e. R is padded with zeros so that it will multiply.
Also observe that
||r||^2 = r^T r = r^T QQ^T r = u^T u + v^T v. But since v is independent of x we find the minimal value of ||r||^2 when u=0 or when
[tex]R_n x = Q^T b_n[/tex].

Still a little confused here. I'm minimizing ||b - QRx||, of course. I know that an orthonormal matrix's inverse is equal to its transpose, and I know that the inverse of an upper triangular matrix is easy to calculate ... but how does this all come together?
 
  • #4


It all comes together in solving the equation [tex]R_n x = (Q^T b)_n[/tex] for x, as I derived. Note the mistake in the last equation of my previous post, which I have corrected in this post. The solution to that system will be the best fit; in your case x holds alpha, beta, gamma, R_n is the square upper-triangular part of R (without the zero rows), and b is the vector of y-values.
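Concretely, with three parameters alpha, beta, gamma the fit might look like the following Python/NumPy sketch. The data points and the exact model are in the screenshots, so the (t, y) values and the quadratic model y ≈ alpha + beta·t + gamma·t² below are assumptions for illustration:

```python
import numpy as np

# Hypothetical data; the real (t_i, y_i) pairs are in the problem screenshot.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1])

# Assumed model y ~ alpha + beta*t + gamma*t^2 (linear in the parameters),
# so each row of A is [1, t_i, t_i^2].
A = np.column_stack([np.ones_like(t), t, t**2])

# Full QR, then solve R_n x = (Q^T b)_n by a triangular solve.
Q, R = np.linalg.qr(A, mode="complete")
n = A.shape[1]
coeffs = np.linalg.solve(R[:n, :], (Q.T @ y)[:n])
alpha, beta, gamma = coeffs
```

The same three lines carry over to MATLAB almost verbatim; only the model matrix A changes with the problem.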
 
  • #5


Hear me out, brah.

So this one website gave a simple explanation:

[Image: screen-capture-4-9.png]





Simple enough. I decided to use it on the following problem.


[Image: screen-capture-2-17.png]





My code looked like the following.

[Image: screen-capture-3-22.png]






And of course it isn't working because I can't use the backslash command with a non-square matrix.
 
  • #6


(bump)
 
Last edited by a moderator:
  • #7


Hi Jamin2112!

Nice pics! :smile:

The method to use QR decomposition to find a least squares fit is described here:
http://en.wikipedia.org/wiki/Numeri...east_squares#Orthogonal_decomposition_methods

Note that this article is supposedly about linear least squares, but this method applies in your case as well.

[edit] Oh, I see this is exactly what Wingeer described. :blushing:
I presume your problem is to get it to work in matlab? [/edit]

[edit2] Please google "back substitution in matlab" or something like that. [/edit2]
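For reference, back substitution is simple enough to write by hand. A sketch in Python/NumPy (the same loop translates directly to MATLAB, with one-based indexing):

```python
import numpy as np

def back_substitute(R, c):
    """Solve R x = c for an upper-triangular, nonsingular R."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                      # last row first
        x[i] = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x
```

Each pass solves one row of the triangular system, using the components of x already computed below it.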
 
Last edited:
  • #8


Your code is fine. The only thing you have to do is pad R with zeros so that the dimensions match for the multiplication.
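An alternative to padding is the reduced ("economy-size") factorization, which in MATLAB is `[Q1, R1] = qr(A, 0)`. The shapes then multiply directly; a sketch in Python/NumPy with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

# Reduced QR: Q1 is 6x3 with orthonormal columns and R1 is 3x3 upper
# triangular, so Q1^T b already has length 3 and R needs no zero-padding.
Q1, R1 = np.linalg.qr(A)        # default mode="reduced"
x = np.linalg.solve(R1, Q1.T @ b)
```

This yields the same least-squares solution as the full factorization, just without the unused block of zeros.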
 

FAQ: Best fit curve using Q-R Factorization?

What is Q-R Factorization?

Q-R Factorization is a mathematical method used to decompose a matrix into an orthogonal matrix (Q) and an upper triangular matrix (R). It is commonly used to solve systems of linear equations and to find the best fit curve for a set of data points.

How does Q-R Factorization help in finding the best fit curve?

QR factorization helps by decomposing the design matrix built from the data points into an orthogonal matrix Q and an upper triangular matrix R. This reduces the least-squares problem to a triangular system that can be solved accurately by back substitution, giving the best-fit coefficients.

What is the advantage of using Q-R Factorization over other methods for finding the best fit curve?

One advantage of using QR factorization is numerical stability: it avoids forming the normal equations A^T A, whose condition number is the square of that of A. This makes it less prone to rounding errors and able to handle larger, more ill-conditioned data sets with greater accuracy.
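The stability claim can be seen in a small experiment. A Python/NumPy sketch (a contrived ill-conditioned example, not from the thread): fitting a degree-10 polynomial on equispaced points, where the QR route recovers the known coefficients far more accurately than the normal equations:

```python
import numpy as np

# Monomial basis on many equispaced points gives an ill-conditioned matrix.
t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 11)
x_true = np.ones(11)
b = A @ x_true                      # exact right-hand side by construction

# QR route: errors grow roughly like cond(A).
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal-equations route: errors grow roughly like cond(A)^2.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print(np.linalg.norm(x_qr - x_true), np.linalg.norm(x_ne - x_true))
```

The printed QR error is orders of magnitude smaller, reflecting the squared condition number of A^T A.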

Can Q-R Factorization be used for non-linear best fit curves?

QR factorization applies whenever the model is linear in its parameters, even if the fitted curve is non-linear in the variable (e.g. a polynomial fit). Models that are non-linear in the parameters require iterative methods such as Gauss-Newton or gradient descent.

Are there any limitations to using Q-R Factorization for finding the best fit curve?

One limitation of using QR factorization is that it only applies to models that are linear in their parameters. It also works best when the design matrix is well-conditioned; clustered or poorly scaled data points can make its columns nearly linearly dependent. Additionally, a least-squares fit computed this way remains sensitive to outliers in the data.
