Can I Use Linear Algebra to Prove the Equation det(AB) = det(A) det(B)?

  • Thread starter Castilla
  • Start date
  • #1
Castilla
I am trying to advance in my theoretical study of change of variables for double integrals, but it seems I need to use this equation:
[tex] \det(AB) = \det(A) \det(B)[/tex]. I would like to know which elements of linear algebra I need to know to follow a proof of that statement.
Thanks for your answer.
 
  • #2
I never liked these types of proofs. This is the only thing I can say, maybe someone can add to it or show you a different way:

[tex]AB=\left[Ab_1\cdots Ab_n\right][/tex]

The right side can be simplified / written differently, and then you take the determinant. I think this is the way I've seen it before, although it's really tedious. I hope someone knows an easier way :rolleyes:
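As an illustrative aside (my own addition, not from the post, with arbitrary example matrices): the column decomposition above says the jth column of AB is A times the jth column of B, which is easy to check numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

AB = A @ B
# Rebuild AB column by column as [Ab_1 ... Ab_n]
cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
print(np.allclose(AB, cols))  # True
```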

Alex
 
  • #3
It depends... Sometimes the determinant is defined recursively - of course, you don't need much linear algebra for that. On the other hand, such a "definition" isn't very useful to work with, although it's easy to understand.
Intrinsically though, a determinant can be defined using permutations: it is multilinear, alternating and has the property that det(In) = 1. If you've seen it this way, the proof isn't too long.
 
  • #4
Thanks to both.

TD, could I request you a sketch of the proof ?
 
  • #5
Ok, since it uses some of the previous definitions I will make a short introduction.

Firstly, we define a map d(A) (I think it's called this in English) which is multilinear and alternating. We can prove it satisfies the following properties:
- d(A) changes sign if you swap two columns.
- d(A) doesn't change if you add a linear combination of the other columns to a column.
- d(A) = 0 if one of the columns of A is 0.
- If rank(A) < n (assuming we're starting with an n x n matrix), then d(A) is 0.

After that, we define "det" as a map [itex]\det :M_{nn} \left( K \right) \to K[/itex] which is as above (alternating and multilinear) and satisfies [itex]\det \left( {I_n } \right) = 1[/itex]. We can show that this det is unique.
Then you can prove a small lemma: given any such initial map d, it can always be written as [itex]d\left( {I_n } \right)\det[/itex], so that for all matrices A: [itex]d\left( A \right) = \det \left( A \right)d\left( {I_n } \right)[/itex].

Now we've done all of that, proving our theorem isn't that hard anymore.
We take A and B and want det(AB) = det(A)det(B). Start by fixing A and consider the map [itex]d_A :M_{nn} \left( K \right) \to K:d_A \left( B \right) = \det \left( {AB} \right)[/itex], or, written in columns: [itex]
d_A \left( {\begin{array}{*{20}c}
{B_1 } & {B_2 } & \cdots & {B_n } \\
\end{array}} \right) = \det \left( {\begin{array}{*{20}c}
{AB_1 } & {AB_2 } & \cdots & {AB_n } \\
\end{array}} \right)[/itex]

It is now easy to see that this d_A is multilinear and alternating again, so we get (using our lemma) that [itex]d_A \left( B \right) = \det \left( B \right)d_A \left( {I_n } \right)[/itex], but seeing how we defined d_A, we also have [itex]d_A \left( {I_n } \right) = \det \left( A \right)[/itex]. Putting that together yields: [itex]\det \left( {AB} \right) = d_A \left( B \right) = \det \left( A \right)\det \left( B \right)[/itex]

Note:
- A function of a matrix is multilinear if it's linear in each column separately.
- A function of a matrix is alternating if it's 0 whenever 2 columns (or rows) are equal.
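A quick numerical sketch of this argument (my own addition, using arbitrary random matrices): evaluating the map d_A(B) = det(AB) at the identity gives det(A), and the lemma's factorization d_A(B) = det(B) d_A(I_n) then reproduces det(A)det(B):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# The map d_A(M) := det(AM), which is multilinear and alternating in
# the columns of M, just like det itself.
d_A = lambda M: np.linalg.det(A @ M)

ok1 = np.isclose(d_A(np.eye(4)), np.linalg.det(A))           # d_A(I) = det(A)
ok2 = np.isclose(d_A(B), np.linalg.det(A) * np.linalg.det(B))  # d_A(B) = det(A)det(B)
print(ok1, ok2)  # True True
```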
 
  • #6
Here's another proof which uses the effect of elementary row operations on the determinant:
- Swapping 2 rows switches the sign of the determinant
- Adding a scalar multiple of a row to another doesn't change the determinant
- If a single row is multiplied by a scalar r, then the determinant of the resulting matrix is r times the determinant of the original matrix.

So first, note that det(AB) = det(A)det(B) if A is a diagonal matrix, since AB is then the matrix B with its ith row multiplied by a_ii. Using the scalar multiplication property for each row, we see that for diagonal A:
det(AB) = (a_11)(a_22)...(a_nn) det(B) = det(A)det(B),
since the determinant of a diagonal matrix is the product of its diagonal elements.

If A is singular, then AB is also singular, so det(AB)=0=det(A)det(B).

For the nonsingular case we can row reduce A to diagonal form by Gauss-Jordan elimination (we avoid row-scaling). Every row-operation can be represented by an elementary matrix, the product of which we call E. Then EA=D, where D is the reduced diagonal matrix of A. So E(AB)=(EA)B=DB.
Let r be the number of row swaps. Now we have:
[tex]\det(AB)=(-1)^r \det(DB)=(-1)^r \det(D)\det(B)=\det(A)\det(B)[/tex]
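A minimal sketch of this row-reduction argument in Python (a hypothetical helper of my own, not from the thread; it reduces to upper triangular rather than fully diagonal form, which suffices since a triangular determinant is the product of the diagonal):

```python
import numpy as np

def det_by_elimination(M):
    """Determinant via elementary row operations: a swap flips the sign,
    adding a multiple of one row to another leaves it unchanged, and the
    result is the signed product of the pivots."""
    M = M.astype(float).copy()
    n = M.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))    # partial pivoting
        if np.isclose(M[p, k], 0.0):
            return 0.0                          # singular case: det = 0
        if p != k:
            M[[k, p]] = M[[p, k]]
            sign = -sign                        # row swap flips the sign
        # Eliminate below the pivot (doesn't change the determinant)
        M[k+1:] -= np.outer(M[k+1:, k] / M[k, k], M[k])
    return sign * np.prod(np.diag(M))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
print(np.isclose(det_by_elimination(A @ B),
                 det_by_elimination(A) * det_by_elimination(B)))  # True
```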
 
  • #7
Galileo said:
You can show every matrix can be reduced to a diagonal matrix with these operations (Gaussian elimination).
Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangle matrix (e.g. with Gaussian elimination). Is that what you meant?
 
  • #8
TD and Galileo:

It won't be easy to understand your posts but it will be a good test for me.

Thanks again.
Castilla.
 
  • #9
TD said:
Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangle matrix (e.g. with Gaussian elimination). Is that what you meant?

Yeah, my mistake. I treated the nonsingular case seperately in the proof so I could diagonalize.
 
  • #10
TD said:
Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangle matrix (e.g. with Gaussian elimination). Is that what you meant?


Be careful not to confuse (or cause to be confused) the notion of Gaussian elimination, which puts something into upper triangular *non-conjugate* form and has nothing to do with the base field being C or anything else, with the notion of a conjugate upper triangular matrix (Jordan normal form).
 
  • #11
Incidentally, the proof that det is multiplicative depends on your definition of determinant. Of course they are all equivalent, but with either of my two definitions of det it is obvious that det is multiplicative; it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.

it is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative
 
  • #12
matt grime said:
be careful not to confuse (or cause to be confuesd) the notion of gaussian elimnation to put something into upper triangular *non-conjugate* form whcihc has nothing to do with the base field being C or anything else, and the notion of conjugate upper triangular matrix (jordan normal form)
Right, thanks for pointing that out.
matt grime said:
incidentally, the proof that det is multiplicative depends on your definition of determinant. of course they are all equivalent but with either of my two definitions of det it is obvious that det is mutliplicative, and it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.
it is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative
May I ask what those two definitions are?
In my linear algebra course (as I mentioned earlier), we first defined a 'determinant map' [itex]\det :M_{nn} \left( K \right) \to K[/itex] which had to be multilinear, alternating and satisfy det(In) = 1. Then we showed that this existed, was unique and given by:
[tex]\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}[/tex]
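For concreteness, the permutation-sum (Leibniz) formula above can be implemented directly for small n (a toy implementation of my own, not from the thread) and compared against a library determinant:

```python
import numpy as np
from itertools import permutations

def leibniz_det(A):
    """Leibniz expansion: sum over all permutations sigma in S_n of
    sgn(sigma) * prod_i a_{sigma(i), i}. Only practical for small n."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sgn(sigma) via the parity of the inversion count
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inv % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= A[perm[i], i]   # a_{sigma(i), i}
        total += sign * prod
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.isclose(leibniz_det(A), np.linalg.det(A)))  # True
```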
 
  • #13
i told you: det is the scale factor of volume change.

formally, look at the induced action on the n'th exterior power of the vector space.
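In two dimensions, the "scale factor of volume" reading can be checked directly (an illustrative example of my own, not part of the post): the unit square spanned by e1, e2 maps to the parallelogram spanned by the columns of A, whose area is |det(A)|:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# Images of the unit square's edge vectors under A
v1, v2 = A[:, 0], A[:, 1]

# Parallelogram area via the 2D cross product |x1*y2 - y1*x2|
area = abs(v1[0] * v2[1] - v1[1] * v2[0])
print(np.isclose(area, abs(np.linalg.det(A))))  # True
```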
 
  • #14
TD said:
Ok, since it uses some of the previous definitions I will make a short introduction. [...]

I may try harder later to follow this, but it seems like a rather advanced proof for something which should be basic.
 

FAQ: Can I Use Linear Algebra to Prove the Equation det(AB) = det(A) det(B)?

1. What does the equation Det(AB) = Det(A) Det(B) mean?

The equation Det(AB) = Det(A) Det(B) is known as the product rule for determinants. It means that the determinant of the product of two matrices, A and B, is equal to the product of the determinants of A and B.

2. How is the equation Det(AB) = Det(A) Det(B) useful in mathematics?

The equation Det(AB) = Det(A) Det(B) is useful in many areas of mathematics, such as linear algebra, differential equations, and geometry. It allows us to simplify calculations involving determinants and matrices, and it also helps us to prove theorems and solve problems in these fields.

3. Can the equation Det(AB) = Det(A) Det(B) be applied to non-square matrices?

No, the equation Det(AB) = Det(A) Det(B) can only be applied to square matrices. This is because the determinant of a matrix is only defined for square matrices, and the product of two non-square matrices may not even be defined.

4. How does the equation Det(AB) = Det(A) Det(B) relate to the properties of determinants?

The equation Det(AB) = Det(A) Det(B) reflects the defining properties of the determinant: it is multilinear, meaning it is linear in each row (or column) separately, and alternating, meaning it vanishes when two rows are equal. Note that the determinant is not additive in the matrix as a whole: in general Det(A + B) ≠ Det(A) + Det(B). Multiplying a single row by a constant multiplies the determinant by that constant.

5. Can the equation Det(AB) = Det(A) Det(B) be used to find the determinant of a matrix raised to a power?

Yes, the equation Det(AB) = Det(A) Det(B) can be used to find the determinant of a matrix raised to a power. For example, if we have a square matrix A and want the determinant of A^n, applying the equation repeatedly gives Det(A^n) = (Det(A))^n. This lets us calculate the determinant without performing repeated matrix multiplications.
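A quick numerical check of this identity (illustrative, with an arbitrary matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # det(A) = 1*4 - 2*3 = -2
n = 5

lhs = np.linalg.det(np.linalg.matrix_power(A, n))  # det(A^n)
rhs = np.linalg.det(A) ** n                        # (det(A))^n
print(np.isclose(lhs, rhs))  # True
```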
