Proving ##(cof ~A)^t ~A = (det A)I##

In summary: we can prove that all non-diagonal elements of ##(cof~A)^t~A## are equal to zero by considering a new matrix B in which a column of A is repeated; since ##det~B = 0## and its cofactor expansion coincides term by term with the non-diagonal sum, the result carries back to A.
  • #1
Hall
Homework Statement
##cof ~A## means the cofactor matrix of A, and ##(cof~ A)^t## means the transpose of the cofactor matrix of A (you may call it the adjoint of A; well, I too used to, but no longer). ##det ~A## is the determinant of A, and I is the identity matrix of order compatible with the LHS.
Relevant Equations
The idea I would use is to show that all diagonal elements of ##(cof~A)^t~A## are equal to ##det ~A## and that all the remaining elements are zero.
i-th column of ##cof~A## =
$$
\begin{bmatrix}
(-1)^{i+1} det~A_{1i} \\
(-1)^{i+2} det ~A_{2i}\\
\vdots \\
(-1)^{i+n} det ~A_{ni}\\
\end{bmatrix}$$

Therefore, the i-th row of ##(cof~A)^t## = ##\big[ (-1)^{i+1} det~A_{1i}, (-1)^{i+2} det ~A_{2i}, \cdots, (-1)^{i+n} det ~A_{ni} \big]##

The (i, i)-th element of ##(cof~A)^t ~ A## is
$$
\big[ (-1)^{i+1} det~A_{1i}, (-1)^{i+2} det ~A_{2i}, \cdots, (-1)^{i+n} det ~A_{ni}\big] \times
\begin{bmatrix}
a_{1i}\\
a_{2i}\\
\vdots \\
a_{ni}\\
\end{bmatrix}
= \sum_{k=1}^{n} (-1)^{i+k} a_{ki} det~A_{ki}$$
Well, the RHS is simply ##det ~A## expanded along the i-th column. Therefore, all diagonal elements of ##(cof~A)^t ~A## are equal to ##det~A##.

Now, I would try to prove that all non-diagonal elements are zero. Consider the (i, j)-th element of ##(cof~A)^t~A##, with ##i \neq j##:
$$
\big[ (-1)^{i+1} det~A_{1i}, (-1)^{i+2} det ~A_{2i}, \cdots, (-1)^{i+n} det ~A_{ni}\big] \times
\begin{bmatrix}
a_{1j}\\
a_{2j}\\
\vdots \\
a_{nj}\\
\end{bmatrix}
= \sum_{k=1}^{n} (-1)^{i+k} a_{kj} det ~A_{ki}$$

But I'm unable to prove that RHS is equal to zero. Will you help me?
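Not a proof, but as a quick numeric sanity check of the target identity, here is a short numpy sketch (the helper name `cofactor_matrix` is mine, and `np.linalg.det` stands in for the minor determinants):

```python
# A numeric sanity check of (cof A)^t A = (det A) I, using numpy.
# Minors are computed with np.delete; this is an illustrative sketch,
# not an efficient way to work with determinants.
import numpy as np

def cofactor_matrix(A):
    """Return cof A, whose (i, j) entry is (-1)^(i+j) det A_ij."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lhs = cofactor_matrix(A).T @ A
rhs = np.linalg.det(A) * np.eye(4)
print(np.allclose(lhs, rhs))  # True
```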

 
  • #2
RHS coincides with the definition of det A for i = j. How about trying n = 2 in order to confirm your approach? Writing RHS = ##R_{ij}##, I see
[tex]R_{11}=R_{22}=a_{11}a_{22}-a_{21}a_{12}[/tex]
[tex]R_{12}=R_{21}=0[/tex]
Then you can go to n = 3 to find a general rule of cancellation.
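A small numeric check of this suggestion, assuming the definition ##R_{ij} = \sum_k (-1)^{i+k} a_{kj} det~A_{ki}## from above (written here with 0-based indices, which leaves the parity of ##i+k## unchanged):

```python
# Checking R_ij = sum_k (-1)^(i+k) a_kj det(A_ki) for n = 2 and n = 3:
# the diagonal entries should equal det A, the off-diagonal ones zero.
import numpy as np

def R(A, i, j):
    n = A.shape[0]
    total = 0.0
    for k in range(n):
        # det A_ki: delete row k and column i
        minor = np.delete(np.delete(A, k, axis=0), i, axis=1)
        total += (-1) ** (i + k) * A[k, j] * np.linalg.det(minor)
    return total

for n in (2, 3):
    A = np.random.default_rng(1).standard_normal((n, n))
    d = np.linalg.det(A)
    for i in range(n):
        for j in range(n):
            assert np.isclose(R(A, i, j), d if i == j else 0.0)
print("checked n = 2 and n = 3")
```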
 
  • #3
anuttarasammyak said:
[tex]R_{11}=R_{22}=a_{11}a_{22}-a_{21}a_{12}[/tex]
[tex]R_{12}=R_{21}=0[/tex]
Then you can go to n=3 to find a general rule of cancellation.
I showed ##R_{i,j}=0## when ##i\neq j## for ##2\times 2## and ##3\times 3## matrices (for ##3\times 3##, we shall have three cases for ##i \neq j##). But proving it in general seems unattainable at the moment.
 
  • #4
For i ##\neq## j, you may interpret the RHS as the determinant of a matrix which has two identical columns. Thus it is zero.
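A one-line numeric check of the fact being used here (numpy, with an arbitrary pair of columns made identical):

```python
# A matrix with two identical columns has zero determinant.
import numpy as np

B = np.random.default_rng(2).standard_normal((4, 4))
B[:, 2] = B[:, 0]  # make columns 0 and 2 identical
print(np.isclose(np.linalg.det(B), 0.0))  # True
```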
 
  • #5
anuttarasammyak said:
For i ##\neq## j, you may interpret the RHS as the determinant of a matrix which has two identical columns. Thus it is zero.
Yes, I found this proof by Tom Apostol:

For any matrix ##A##
$$
\begin{bmatrix}
a_{11} & a_{12}& \cdots &a_{1k} &\cdots& a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2k} &\cdots & a_{2n}\\
\vdots&\vdots&\vdots &\vdots & \vdots &\vdots\\
a_{n1}& a_{n2} & \cdots &a_{nk} &\cdots & a_{nn}\\
\end{bmatrix}
$$
Consider a new matrix B, obtained from A by replacing its k-th column with the j-th column of A (##j \neq k##), so that ##B=##
$$\begin{bmatrix}
a_{11} & a_{12}& \cdots& a_{1j} &\cdots &a_{1j} &\cdots& a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2j} &\cdots & a_{2n}\\
\vdots&\vdots& &\vdots & & \vdots & &\vdots\\
a_{n1}& a_{n2} &\cdots & a_{nj}& \cdots &a_{nj} &\cdots & a_{nn}\\
\end{bmatrix}
$$

That is, the k-th column of B is equal to the j-th column of A, and all other columns are the same as in A. Since B has two identical columns (the j-th and the k-th), ##det ~B=0##. (The choice of k is arbitrary; the same argument works for any pair of distinct columns.)

Now expand ##det~B## along its k-th column. The entries of that column are ##b_{ik} = a_{ij}##, and the minors satisfy ##B_{ik} = A_{ik}##, because deleting the k-th column removes the only column in which B differs from A. Therefore
$$
det~B = \sum_{i=1}^{n} (-1)^{i +k} a_{ij} \,det ~A_{ik}$$
and hence
$$
\sum_{i=1}^{n} (-1)^{i +k} a_{ij} \,det ~A_{ik} = det~B = 0$$

Thus, any expression of the form ##\sum_{i=1}^{n} (-1)^{i +k} a_{ij} det ~A_{ik}## (where ##j \neq k##) is equal to zero. But this seems to me like a tyranny of Mathematics: we have proved something to be zero by taking it into a completely new system. Changing the context of something must change its meaning; the expression is zero in the context of matrix B, not in A. (Though, since ##b_{ik} = a_{ij}## and ##B_{ik} = A_{ik}##, every term in the sum is written entirely in terms of A's entries, so the detour through B only reinterprets the very same expression.)
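A numeric check of this step, taking B to be A with its k-th column replaced by A's j-th column, so that the off-diagonal sum is exactly the cofactor expansion of ##det~B## along column k:

```python
# With B equal to A except that B's k-th column is A's j-th column,
# the sum  sum_i (-1)^(i+k) a_ij det A_ik  is the cofactor expansion
# of det B along column k; B has two equal columns, so it is zero.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
j, k = 1, 3                      # any pair with j != k

B = A.copy()
B[:, k] = A[:, j]                # columns j and k of B are now equal

s = sum((-1) ** (i + k) * A[i, j]
        * np.linalg.det(np.delete(np.delete(A, i, axis=0), k, axis=1))
        for i in range(4))

print(np.isclose(s, np.linalg.det(B)), np.isclose(s, 0.0))  # True True
```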
 

