Can I Use Linear Algebra to Prove the Equation det(AB) = det(A) det(B)?

  • Thread starter Castilla
In summary, the equation det(AB) = det(A) det(B) says that the determinant of a product of two matrices equals the product of their determinants.
  • #1
Castilla
I am trying to advance in my theoretical study of change of variables for double integrals, but it seems I need to use this equation: det(AB) = det(A) det(B). I would like to know which elements of linear algebra I need to know to follow a proof of that statement.
Thanks for your answer.
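
Before any proof, here is a quick numerical sanity check of the identity (a minimal NumPy sketch on arbitrary random matrices; an illustration, not a proof):

```python
import numpy as np

# Check det(AB) = det(A) det(B) numerically on random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(lhs, rhs)                 # the two values agree up to floating-point rounding
assert np.isclose(lhs, rhs)
```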
 
  • #2
I never liked these types of proofs. This is the only thing I can say, maybe someone can add to it or show you a different way: write the product out entrywise,

(AB)_ij = Σ_k a_ik b_kj.

The right side can be simplified / written differently and then take the determinant. I think this is the way I've seen it before, although it's really tedious. I hope someone knows an easier way :rolleyes:
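
For the 2 x 2 case that tedious expansion can at least be checked symbolically (a minimal SymPy sketch; the symbol names are arbitrary):

```python
import sympy as sp

# Entries of A and B as free symbols.
a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')
b11, b12, b21, b22 = sp.symbols('b11 b12 b21 b22')
A = sp.Matrix([[a11, a12], [a21, a22]])
B = sp.Matrix([[b11, b12], [b21, b22]])

# Expanding det(AB) - det(A) det(B) gives identically 0.
print(sp.expand((A * B).det() - A.det() * B.det()))  # prints: 0
```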

Alex
 
  • #3
It depends... Sometimes the determinant is defined recursively; of course, you don't need much linear algebra for that. On the other hand, such a "definition" isn't very useful to work with, although it's easy to understand.
Intrinsically though, a determinant can be defined using permutations, and it involves being multilinear, alternating and having the property that det(In) = 1. If you've seen it this way, the proof isn't too long.
 
  • #4
Thanks to both.

TD, could I ask you for a sketch of the proof?
 
  • #5
Ok, since it uses some of the previous definitions I will make a short introduction.

Firstly, we define a map d(A) (I think it's called this in English) which is multilinear and alternating. We can prove it satisfies the following properties:
- d(A) changes sign if you swap two columns.
- d(A) doesn't change if you add a linear combination of the other columns to a column.
- d(A) = 0 if one of the columns of A is 0.
- If rank(A) < n (assuming we're starting with an n x n matrix), then d(A) = 0.

After that, we define "det" as the unique map which is as above (alternating and multilinear) and in addition satisfies det(In) = 1, so that for every alternating multilinear d and all matrices A: d(A) = d(In) det(A).

Now that we've done all of that, proving our theorem isn't that hard anymore.
We take A and B and want to show that det(AB) = det(A) det(B). Start by fixing A and consider the map d(B) = det(AB), or, written in columns: d(B_1, ..., B_n) = det(AB_1, ..., AB_n).

It is now easy to see that this d is multilinear and alternating again, so we get (using our lemma) that d(B) = d(In) det(B), but seeing how we defined d, we also have d(In) = det(A·In) = det(A). Putting that together yields:

det(AB) = d(B) = det(A) det(B).

Note:
- A function of a matrix is multilinear if it's linear in each column (the other columns being held fixed).
- A function of a matrix is alternating if it's 0 when 2 columns (or rows) are equal.
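
A small numerical illustration of the key step, that d(B) = det(AB) is again alternating and multilinear and hence a multiple of det(B) (a minimal NumPy sketch; the matrices, seed and column choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def d(M):
    """The map d(B) = det(AB) from the sketch above (A held fixed)."""
    return np.linalg.det(A @ M)

# Alternating: swapping two columns of B flips the sign of d(B).
assert np.isclose(d(B[:, [1, 0, 2]]), -d(B))

# Multilinear (in particular homogeneous): scaling one column scales d(B).
B2 = B.copy()
B2[:, 0] *= 2.0
assert np.isclose(d(B2), 2.0 * d(B))

# The lemma then forces d(B) = d(I) det(B) = det(A) det(B).
assert np.isclose(d(B), np.linalg.det(A) * np.linalg.det(B))
```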
 
  • #6
Here's another proof which uses the effect of elementary row operations on the determinant:
- Swapping 2 rows switches the sign of the determinant
- Adding a scalar multiple of a row to another doesn't change the determinant
- If a single row is multiplied by a scalar r, then the determinant of the resulting matrix is r times the determinant of the original matrix.

So first, note that det(AB) = det(A) det(B) if A is a diagonal matrix, since AB is then the matrix B with its ith row multiplied by a_ii. So using the scalar multiplication property for each row we see that for diagonal A:
det(AB) = (a_11)(a_22)...(a_nn) det(B) = det(A) det(B),
since the determinant of a diagonal matrix is the product of its diagonal elements.
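
A quick check of this diagonal case (a minimal NumPy sketch; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(3)        # the diagonal entries a_11, ..., a_nn
A = np.diag(a)
B = rng.standard_normal((3, 3))

# AB is B with row i multiplied by a_ii ...
assert np.allclose(A @ B, a[:, None] * B)
# ... so det(AB) = (a_11 a_22 ... a_nn) det(B) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B), np.prod(a) * np.linalg.det(B))
```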

If A is singular, then AB is also singular, so det(AB)=0=det(A)det(B).

For the nonsingular case we can row reduce A to diagonal form by Gauss-Jordan elimination (we avoid row-scaling). Every row-operation can be represented by an elementary matrix, the product of which we call E. Then EA=D, where D is the reduced diagonal matrix of A. So E(AB)=(EA)B=DB.
Let r be the number of row swaps. Since each swap flips the sign of the determinant and the row additions change nothing, det(DB) = det(E(AB)) = (-1)^r det(AB) and det(D) = (-1)^r det(A). Now we have, using the diagonal case for det(DB):
det(AB) = (-1)^r det(DB) = (-1)^r det(D) det(B) = det(A) det(B).
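
The whole argument can be traced numerically. Below is a minimal sketch with a hypothetical helper `diagonalize_no_scaling` (not from the thread) that reduces a nonsingular A to diagonal form using only row swaps and row additions, counting the swaps r:

```python
import numpy as np

def diagonalize_no_scaling(A):
    """Reduce nonsingular A to diagonal form using only row swaps and
    row additions (no scaling). Returns (D, number_of_swaps)."""
    D = A.astype(float).copy()
    n = D.shape[0]
    swaps = 0
    for j in range(n):
        # Swap in the row with the largest pivot candidate (partial pivoting).
        p = j + np.argmax(np.abs(D[j:, j]))
        if p != j:
            D[[j, p]] = D[[p, j]]
            swaps += 1
        # Gauss-Jordan step: clear column j above and below the pivot.
        for i in range(n):
            if i != j:
                D[i] -= (D[i, j] / D[j, j]) * D[j]
    return D, swaps

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
D, r = diagonalize_no_scaling(A)

# det(D) = (-1)^r det(A), so det(A) = (-1)^r * (product of diagonal entries of D).
assert np.isclose(np.linalg.det(A), (-1) ** r * np.prod(np.diag(D)))
# det(AB) = (-1)^r det(DB) = (-1)^r det(D) det(B) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B),
                  (-1) ** r * np.prod(np.diag(D)) * np.linalg.det(B))
```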
 
  • #7
Galileo said:
You can show every matrix can be reduced to a diagonal matrix with these operations (Gaussian elimination).
Not every matrix can be diagonalized. Over C, though, it is possible to turn every matrix into an upper triangular matrix (e.g. with Gaussian elimination). Is that what you meant?
 
  • #8
TD and Galileo:

It won't be easy to understand your posts but it will be a good test for me.

Thanks again.
Castilla.
 
  • #9
TD said:
Not every matrix can be diagonalized. Over C, though, it is possible to turn every matrix into an upper triangular matrix (e.g. with Gaussian elimination). Is that what you meant?

Yeah, my mistake. I treated the nonsingular case separately in the proof so I could diagonalize.
 
  • #10
TD said:
Not every matrix can be diagonalized. Over C, though, it is possible to turn every matrix into an upper triangular matrix (e.g. with Gaussian elimination). Is that what you meant?


Be careful not to confuse (or cause to be confused) the notion of Gaussian elimination, which puts something into upper triangular *non-conjugate* form and has nothing to do with the base field being C or anything else, with the notion of a conjugate upper triangular matrix (Jordan normal form).
 
  • #11
Incidentally, the proof that det is multiplicative depends on your definition of determinant. Of course they are all equivalent, but with either of my two definitions of det it is obvious that det is multiplicative, and it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.

It is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative.
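
To illustrate the volume picture in 2D (a minimal NumPy sketch; the particular matrices are arbitrary): the unit square spanned by e1, e2 is mapped to the parallelogram spanned by the columns of A, whose signed area is det(A), and composing two maps multiplies the scale factors:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
u, v = A[:, 0], A[:, 1]                 # images of e1 and e2

# Signed area of the parallelogram spanned by u and v (2D cross product).
area = u[0] * v[1] - u[1] * v[0]
assert np.isclose(area, np.linalg.det(A))

# Applying B first and then A scales areas by det(B), then by det(A).
B = np.array([[1.0, 4.0],
              [2.0, 1.0]])
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```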
 
  • #12
matt grime said:
Be careful not to confuse (or cause to be confused) the notion of Gaussian elimination, which puts something into upper triangular *non-conjugate* form and has nothing to do with the base field being C or anything else, with the notion of a conjugate upper triangular matrix (Jordan normal form).
Right, thanks for pointing that out.
matt grime said:
Incidentally, the proof that det is multiplicative depends on your definition of determinant. Of course they are all equivalent, but with either of my two definitions of det it is obvious that det is multiplicative, and it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.
It is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative.
May I ask what those two definitions are?
In my linear algebra course (as I mentioned earlier), we first defined a 'determinant map' which had to be multilinear, alternating and satisfy det(In) = 1. Then we showed that this map exists, is unique, and is given by the permutation formula:

det(A) = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} ··· a_{nσ(n)}
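
That permutation formula is easy to compute directly for small n (a minimal Python sketch implementing it and checking multiplicativity numerically; the helper names are arbitrary):

```python
import itertools
import math
import numpy as np

def sign(perm):
    """Sign of a permutation, via its inversion count."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """det(A) = sum over permutations sigma of sgn(sigma) * prod_i a_{i, sigma(i)}."""
    n = A.shape[0]
    return sum(sign(p) * math.prod(A[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert np.isclose(det_leibniz(A), np.linalg.det(A))
assert np.isclose(det_leibniz(A @ B), det_leibniz(A) * det_leibniz(B))
```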
 
  • #13
I told you: det is the scale factor of volume change.

Formally, look at the induced action on the nth exterior power of the vector space.
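
Spelled out a little (a minimal sketch of that standard argument): Λ^n(R^n) is one-dimensional, Λ^n A acts on it as multiplication by det(A), and functoriality, Λ^n(AB) = Λ^n(A) Λ^n(B), makes multiplicativity immediate:

```latex
\det(AB)\,(e_1 \wedge \cdots \wedge e_n)
  = (AB)e_1 \wedge \cdots \wedge (AB)e_n
  = \det(A)\,(Be_1 \wedge \cdots \wedge Be_n)
  = \det(A)\det(B)\,(e_1 \wedge \cdots \wedge e_n).
```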
 
  • #14
TD said:
Ok, since it uses some of the previous definitions I will make a short introduction. [see post #5 above for the full sketch]

I might try harder later to follow this, but it seems like a rather advanced proof for something which should be basic.
 
