Proof Linear Algebra Theorem: |C| = |A|*|B|

  • #1
daniel_i_l

Homework Statement


Prove the following theorem:
Given any n×n matrix A and any m×m matrix B, the (m+n)×(m+n) matrix C:
[A|X]
[0|B]

where the top-left block of C is A, the top-right block (X) is any n×m matrix, the bottom-left block is an m×n zero matrix, and the bottom-right block is B.
Prove:
|C| = |A|*|B| (|C| is the determinant of C...)

I found a proof but the book did something different and i want to check if my way is correct.

Homework Equations


1) Any square matrix that can't be reduced to the identity matrix has a determinant of 0 (because it can instead be reduced to a matrix with a zero row or zero column).
2) Multiplying a row or column by a nonzero scalar multiplies the determinant by that scalar.
3) Swapping two rows changes the sign of the determinant.
4) Adding a multiple of one row to another doesn't change the determinant.


The Attempt at a Solution



Here's my proof:
Let's say that both A and B can be reduced to the identity matrix. If A can be reduced, then the reduction can be done by operating only on the columns, using the basic operations above. Each operation changes the determinant of the current matrix by a known factor, so tracking those factors through the reduction determines |A|; call this value a (|A| = a) - this works because the determinant of I is 1.
The same argument can be made for B, this time operating on the rows, giving |B| = b.
Since we reduced A by operating on the columns and B by operating on the rows, we can apply all of those operations to C without the operations on one block affecting the other. Applying them to C changes the determinant by the same combined factor, and since the resulting matrix is a triangular matrix with a diagonal of only ones, the determinant of C is a*b. And so:
|C| = a*b = |A| * |B|.

If either A or B can't be reduced then one of them has a 0 determinant so |A|*|B| = 0
and by an argument similar to the one above, C can also be reduced to a matrix with either a zero row or a zero column, so |C| = 0.
Q.E.D.
Is this a correct proof?
Thanks.
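As a quick sanity check of the claimed identity, here is both sides computed on one small example (a minimal sketch; the cofactor-expansion `det` helper and the specific matrices are illustrative, not part of the original proof):

```python
def det(M):
    # determinant by cofactor expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# A is 2x2 (n = 2), B is 2x2 (m = 2), X is an arbitrary 2x2 block
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
X = [[9, 1], [2, 3]]

# C = [A|X; 0|B], assembled row by row
C = [A[0] + X[0],
     A[1] + X[1],
     [0, 0] + B[0],
     [0, 0] + B[1]]

assert det(C) == det(A) * det(B)  # |C| = |A|*|B|, here 4 = (-2)*(-2)
```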
 
  • #2
daniel_i_l said:
If either A or B can't be reduced then one of them has a 0 determinant so |A|*|B| = 0

Really? And what if A = I ? :smile:

Try using induction; I found that proof more elegant, but then again, de gustibus.
 
  • #3
Yeah, the proof looks correct. You could clean it up a little though. There is an invertible matrix U such that AU is either identity (if A is reducible) or has a zero column (if A is not). There is also an invertible V such that VB is either identity or has a zero row. The matrix


[U|0]
[0|I]

which we'll call U', clearly has determinant |U|, and

[I|0]
[0|V]

which we'll call V', has determinant |V|. Note that V'CU' = [AU|X; 0|VB]. If AU has a zero column, then V'CU' does too. If VB has a zero row, then V'CU' does too. So if either of A or B has determinant 0, V'CU' has a zero row or column, so |V'CU'| = 0. Since V' and U' are invertible, this means |C| = 0, as desired. If on the other hand both A and B have non-zero determinant, then it's clear that V'CU' is just

[I|X]
[0|I]

which has determinant 1. So

|V'CU'| = 1
|V'||C||U'| = 1
|C| = (|V'||U'|)^(-1)
|C| = (|V||U|)^(-1)

Since AU = I = VB, U and V are just the inverses of A and B, and thus have reciprocal determinants: |U| = |A|^(-1), |V| = |B|^(-1). This gives the desired result.

Your argument was a really good one, I was just thinking it's better to clean it up instead of saying things like

"the process of the reduction will change the D of A (|A|) by some factor that we'll call 'a'."

which are unclear, and in fact wrong in this case (the determinant of A never changes, but row reducing A creates a new matrix with a new determinant).
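The construction in this reply can also be checked concretely on small blocks (a sketch with 2×2 matrices; only U, V, U', V', and C mirror the post, while the helper functions and specific entries are illustrative):

```python
from fractions import Fraction

def matmul(P, Q):
    # plain matrix product, list-of-rows representation
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def inv2(M):
    # 2x2 inverse via the adjugate formula (exact, using Fractions)
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ Fraction(M[1][1], 1) / d, -Fraction(M[0][1], 1) / d],
            [-Fraction(M[1][0], 1) / d,  Fraction(M[0][0], 1) / d]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
X = [[9, 1], [2, 3]]
U, V = inv2(A), inv2(B)   # AU = I and VB = I

def block(tl, tr, bl, br):
    # assemble the 4x4 matrix [tl|tr; bl|br] from 2x2 blocks
    return [tl[0] + tr[0], tl[1] + tr[1], bl[0] + br[0], bl[1] + br[1]]

Z = [[0, 0], [0, 0]]
I = [[1, 0], [0, 1]]
C  = block(A, X, Z, B)
Up = block(U, Z, Z, I)    # U' = [U|0; 0|I]
Vp = block(I, Z, Z, V)    # V' = [I|0; 0|V]
M  = matmul(Vp, matmul(C, Up))

# V'CU' = [I|X; 0|I]: identity diagonal blocks, zero bottom-left block
assert M == block(I, X, Z, I)
```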
 

FAQ: Proof Linear Algebra Theorem: |C| = |A|*|B|

What is the proof for the linear algebra theorem |C| = |A|*|B|?

The proof of this block-matrix theorem uses elementary row and column operations (or, alternatively, induction with cofactor expansion). Reducing A by column operations and B by row operations transforms C into a triangular matrix with ones on the diagonal, and tracking how each operation scales the determinant yields |C| = |A|*|B|. This is a different fact from the multiplicativity property det(AB) = det(A)*det(B), which concerns products of square matrices of the same size, although the two are often proved with similar tools.
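The multiplicativity property det(AB) = det(A)*det(B) mentioned here can be checked on a small example (a minimal sketch; the 2×2 helpers and matrices are illustrative):

```python
def det2(M):
    # 2x2 determinant: ad - bc
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert det2(matmul2(A, B)) == det2(A) * det2(B)  # 4 = (-2) * (-2)
```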

How does the linear algebra theorem |C| = |A|*|B| apply to matrices with different dimensions?

This theorem applies whenever A and B are square matrices, of sizes n×n and m×m respectively; n and m need not be equal. The block X can be any n×m matrix, and C is then (m+n)×(m+n). The determinant is only defined for square matrices, which is why A, B, and C must all be square, but no other restriction on the dimensions is needed.

Can the linear algebra theorem |C| = |A|*|B| be extended to more than two matrices?

Yes, this theorem extends to any number of diagonal blocks. For a block upper-triangular matrix C with square diagonal blocks A1, A2, ..., An (and zeros below the block diagonal), we have |C| = |A1|*|A2|*...*|An|. This can be proven by mathematical induction on the number of blocks, starting with the two-block case.
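For instance, a small sketch with three diagonal blocks (the matrices and the cofactor-expansion `det` helper are chosen arbitrarily for illustration):

```python
def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# block upper-triangular C with diagonal blocks A1 (1x1), A2 (2x2), A3 (1x1)
C = [[2, 5, 7, 1],
     [0, 1, 2, 4],
     [0, 3, 4, 6],
     [0, 0, 0, 3]]
A1, A2, A3 = [[2]], [[1, 2], [3, 4]], [[3]]

assert det(C) == det(A1) * det(A2) * det(A3)  # -12 = 2 * (-2) * 3
```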

What are the practical applications of the linear algebra theorem |C| = |A|*|B|?

This theorem has many practical applications in various fields such as physics, engineering, and economics. It is commonly used in solving systems of linear equations, calculating the volume of a parallelepiped, and finding the inverse of a matrix. It is also used in the study of quantum mechanics, where determinants are used to calculate the probability amplitudes of quantum states.

Are there any exceptions to the linear algebra theorem |C| = |A|*|B|?

The main requirements are that the diagonal blocks A and B be square and that the block below the diagonal be zero; under those conditions there are no exceptions. The theorem holds over any field, including the complex numbers, and it also holds when |A| or |B| is zero, in which case both sides of the equation are simply 0.
