LU decomposition can be performed with or without pivoting, but without pivoting the procedure can break down when a zero appears in the top-left position of the current submatrix during elimination. In that case, row or column reordering is necessary to proceed. The algorithm for LU decomposition with partial pivoting swaps rows of L, U, P, and A after the pivot row has been identified, in order to maintain the correct structure. The discussion raises questions about the correctness of this swapping, in particular whether the lower and upper triangular structure of L and U is preserved. Overall, understanding when pivoting is needed is crucial for a successful LU decomposition.
#1
mathmari
Hey!
We consider the matrix $$A=\begin{pmatrix}1 & -2 & 5 & 0 & 5\\ 1 & 0 & 2 & 0 & 3\\ 1 & 2 & 5 & 4 & 6 \\ -2 & 2 & -4 & 1 & -6 \\ 3 & 4 & 9 & 5 & 11\end{pmatrix}$$ I want to find the LU decomposition.
How do we know if we have to do the decomposition with pivoting or without? :unsure:
#2
Wiki explains that a decomposition without pivoting ($A=LU$) can fail even for a non-singular matrix $A$, and that this is a procedural problem.
It happens when, during the procedure, the current submatrix has a $0$ in its top-left entry.
Decomposition is still possible, but we need to reorder rows or columns to make it happen.
In other words, I believe we simply have to try the LU decomposition procedure without pivoting to find out if it is possible.
And if we find that decomposition without pivoting is not possible, we can reorder the rows and continue the algorithm.
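To illustrate, here is a minimal Python/NumPy sketch (my own illustration, not from Wiki or the thread) of elimination without pivoting: it breaks down on the non-singular matrix $\begin{pmatrix}0 & 1\\ 1 & 1\end{pmatrix}$ because of the $0$ in the top-left position, but succeeds once the rows are reordered.
Code:
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU without pivoting; stops when a zero pivot appears."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        if A[k, k] == 0:
            raise ZeroDivisionError(f"zero pivot at step {k}: reordering needed")
        U[k, k:] = A[k, k:]                  # row k of U
        L[k+1:, k] = A[k+1:, k] / U[k, k]    # multipliers (column k of L)
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])   # eliminate below row k
    return L, U

B = np.array([[0., 1.],
              [1., 1.]])          # non-singular, but the top-left entry is 0
try:
    lu_no_pivot(B)
except ZeroDivisionError as e:
    print(e)                      # fails at the very first step
L, U = lu_no_pivot(B[[1, 0]])     # swap the two rows first, then it works
print(L @ U)                      # reproduces the row-swapped matrix
Partial pivoting automates exactly this reordering by always moving the entry of largest absolute value in the current column into the pivot position.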
#3
mathmari
I found the following code for the LU-decomposition with partial pivoting:
Code:
Algorithm LUP-Decomposition(A)
    Input:  matrix A
    Output: matrices L, U, P with PA = LU
    n <- A.rows
    L <- new n x n matrix
    U <- new n x n matrix
    P <- new n x n matrix
    # initialization, as before
    # for L, U, and P = I
    for i <- 0 to n do
        for j <- 0 to n do
            if i > j then
                U[i][j] <- 0
                P[i][j] <- 0
            else if i == j then
                L[i][j] <- 1
                P[i][j] <- 1
            else
                L[i][j] <- 0
                P[i][j] <- 0
    for k <- 0 to n do                 # for each equation
        p <- 0
        for i <- k to n do             # find pivot row
            if abs(A[i][k]) > p then
                p <- abs(A[i][k])
                pivot_row <- i
        if p == 0 then
            error("singular matrix")
        for j <- 0 to n do             # swap rows
            swap L[k][j] with L[pivot_row][j]
            swap U[k][j] with U[pivot_row][j]
            swap P[k][j] with P[pivot_row][j]
            swap A[k][j] with A[pivot_row][j]
        U[k][k] <- A[k][k]
        for i <- k + 1 to n do
            L[i][k] <- A[i][k] / U[k][k]   # L column
            U[k][i] <- A[k][i]             # U row
        for i <- k + 1 to n do             # Gauss
            for j <- k + 1 to n do         # elimination
                A[i][j] <- A[i][j] - L[i][k] * U[k][j]
    return L, U, P
Are the lines:
swap L[k][j] with L[pivot_row][j]
swap U[k][j] with U[pivot_row][j]
correct? Why do we also have to swap these matrices? After swapping them, they are no longer lower and upper triangular, respectively, are they? :unsure:

I tried to apply that algorithm to a matrix, for example $A=\begin{pmatrix}1 & 2 & 4 \\ 3 & 8 & 14 \\ 2 & 6 & 13\end{pmatrix}$ :
At the initializations we get $L=\begin{pmatrix}1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}$, $U=\begin{pmatrix}u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33}\end{pmatrix}$ and $P=I_{3\times 3}$.
The element of maximum absolute value in the first column is $3$, in the second row.
Now we swap the first and the second rows in all the matrices $L,U,P,A$ and we get:
$$L=\begin{pmatrix} \ell_{21} & 1 & 0 \\ 1 & 0 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}, \ U=\begin{pmatrix} 0 & u_{22} & u_{23} \\ u_{11} & u_{12} & u_{13} \\ 0 & 0 & u_{33}\end{pmatrix}, \ P=\begin{pmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}, \ A=\begin{pmatrix} 3 & 8 & 14 \\ 1 & 2 & 4 \\ 2 & 6 & 13\end{pmatrix}$$
Then we get $U[1,1]=3$.
We also get $L[2,1]=\frac{a_{21}'}{3}=\frac{1}{3}$, $U[1,2]=a_{12}'=8$ and $L[3,1]=\frac{a_{31}'}{3}=\frac{2}{3}$, $U[1,3]=a_{13}'=14$.

Is the first loop correct? :unsure:
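For comparison, a common way to organize the swap is to exchange only the already-computed part of $L$ (columns $0,\dots,k-1$), so that the unit diagonal stays in place and $L$ remains lower triangular throughout. Below is a minimal NumPy sketch of that variant (my own code, not the quoted pseudocode), together with a check that $PA=LU$ for the example matrix above.
Code:
import numpy as np

def lup(A):
    """LU decomposition with partial pivoting: returns L, U, P with P @ A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L, U, P = np.eye(n), np.zeros((n, n)), np.eye(n)
    for k in range(n):
        pivot_row = k + np.argmax(np.abs(A[k:, k]))   # row with largest |entry| in column k
        if A[pivot_row, k] == 0:
            raise ValueError("singular matrix")
        # swap rows k and pivot_row of A and P, but only columns 0..k-1 of L
        A[[k, pivot_row]] = A[[pivot_row, k]]
        P[[k, pivot_row]] = P[[pivot_row, k]]
        L[[k, pivot_row], :k] = L[[pivot_row, k], :k]
        U[k, k:] = A[k, k:]                            # row k of U
        L[k+1:, k] = A[k+1:, k] / U[k, k]              # column k of L (multipliers)
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])  # Gaussian elimination step
    return L, U, P

A = np.array([[1., 2., 4.],
              [3., 8., 14.],
              [2., 6., 13.]])
L, U, P = lup(A)
print(np.allclose(P @ A, L @ U))   # True
print(L)                           # unit lower triangular
print(U)                           # upper triangular
In this sketch, $U$ never needs to be swapped at all: its rows from $k$ onward are still zero at that point, so the swap would have no effect.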