How does the identity Ln(detA)=Tr(lnJ) hold true?

  • Thread starter: LAHLH
  • Tags: Identity
In summary, the identity log(detA)=Tr(log(A)) can be derived from the identity det(expA)=exp(Tr(A)) by substituting A = log(B) and taking the log of both sides. This manipulation is justified by the definitions of the matrix exponential and logarithm. Additionally, diagonalization, together with the determinant and trace of a diagonal matrix, can be used to prove the identity directly. The original poster notes that the matrices may need to satisfy certain constraints, such as being unitary. Provided the matrix of eigenvectors is invertible, the proof also extends to matrices with negative or complex eigenvalues, up to multiples of 2πi from the branch of the logarithm.
  • #1
LAHLH
Hi,

I've come across the identity det(expA)=exp(Tr(A)) many times now, but recently came across log(detA)=Tr(log(A)). Can anyone explain to me why this is true, or whether it can be derived from the more familiar first identity?

I'm not sure if there are any particular constraints the matrix must satisfy for the identity to hold. It was a field theory book in which I saw it, so I guess the author could be assuming a couple of things about the matrices (maybe that they are unitary, for example).

cheers
 
  • #2
LAHLH said:
Hi,

I've come across the identity det(expA)=exp(Tr(A)) many times now, but recently came across log(detA)=Tr(log(A)). Can anyone explain to me why this is true, or whether it can be derived from the more familiar first identity?

I know very little about matrix exponentials, but if you make the substitution A = log(B), and substitute into your identity and take the log of both sides, you get the result you desire:
[tex]
\det B = \det(\exp(\log(B))) = \det(\exp(A)) =\exp(\text{Tr}(A)) = \exp(\text{Tr}(\log(B)))
[/tex]
[tex]
\log(\det B) = \log(\exp(\text{Tr}(\log B))) = \text{Tr}(\log B)
[/tex]

I don't know under what conditions these manipulations are justified, though...
 
  • #3
Hi LAHLH! :smile:

It so happens that all of spamiam's steps are justified in this case.
This can be verified by checking them against the definitions of the matrix exponential (see http://en.wikipedia.org/wiki/Matrix_exponential).

Note that the key to the usual proof is the observation that if λ is an eigenvalue of A, then e^λ is an eigenvalue of e^A, and log(λ) is an eigenvalue of log(A).

Furthermore, det(A) is the product of the eigenvalues, and Tr(A) is the sum of the eigenvalues.
Since log and exp convert sums and products into each other, the respective formulas follow.
 
  • #4
If A is a square matrix, then we can diagonalize it:
[tex]
\mathbf{A} \cdot \mathbf{U} = \mathbf{U} \cdot \mathbf{\Lambda}
[/tex]
where [itex]\mathbf{\Lambda}[/itex] is a diagonal matrix with the eigenvalues of [itex]\mathbf{A}[/itex] along its diagonal and the columns of the matrix [itex]\mathbf{U}[/itex] are the corresponding eigenvectors.

Then, we can write:
[tex]
\mathbf{A} = \mathbf{U} \cdot \mathbf{\Lambda} \cdot \mathbf{U}^{-1}
[/tex]
provided that [itex]\mathbf{U}^{-1}[/itex] exists (which means that the eigenvectors of A form a complete basis). Let us assume this to be the case.

Then, any function of the matrix A, [itex]f(\mathbf{A})[/itex], is defined as:
[tex]
f(\mathbf{A}) = \mathbf{U} \cdot f(\mathbf{\Lambda}) \cdot \mathbf{U}^{-1}
[/tex]
where [itex]f(\mathbf{\Lambda})[/itex] is the diagonal matrix [itex]f( \mathbf{\Lambda}) = \mathrm{diag} \left( \lbrace f(\lambda_{\alpha}) \rbrace \right)[/itex].

If you find the determinant:
[tex]
\det{ \left[ f(\mathbf{A}) \right]} = \det{ \left[ \mathbf{U} \cdot f( \mathbf{\Lambda} ) \cdot \mathbf{U}^{-1} \right] } = \det{\left[ \mathbf{U} \right]} \, \det{\left[ f(\mathbf{\Lambda}) \right]} \, \frac{1}{\det{\left[ \mathbf{U} \right]}}
[/tex]
[tex]
\det{ \left[ f(\mathbf{A} ) \right] } = \det{ \left[ f( \mathbf{\Lambda} ) \right] } = \prod_{\alpha}{f(\lambda_{\alpha})}
[/tex]
The last step follows from the fact that the determinant of a diagonal matrix is the product of the diagonal elements.

A similar rule holds for the trace:
[tex]
\mathrm{Tr} \left[ f(\mathbf{A}) \right] = \mathrm{Tr} \left[ \mathbf{U} \cdot f(\mathbf{\Lambda}) \cdot \mathbf{U}^{-1} \right] = \mathrm{Tr} \left[ \mathbf{U}^{-1} \cdot \mathbf{U} \cdot f(\mathbf{\Lambda}) \right] = \mathrm{Tr} \left[ f(\mathbf{\Lambda}) \right] = \sum_{\alpha}{f(\lambda_{\alpha})}
[/tex]
where we used the "cyclic property of the trace" and its definition as a sum of the diagonal elements in the last step.

Now, if you use some properties of the exponential function, it is not hard to prove:
[tex]
\det{ \left[ \exp{ \left( \mathbf{A} \right) } \right] } = \exp{ \left( \mathrm{Tr} \left[ \mathbf{A} \right] \right)}
[/tex]

Now, assuming [itex]\mathbf{A}[/itex] has only positive eigenvalues, you can define [itex]\ln{\mathbf{A}}[/itex]. If you write down the above identity for that matrix and take the logarithm of both sides, you will get your required identity.
 
  • #5
Dickfore said:
If A is a square matrix, then we can diagonalize it:
[tex]
\mathbf{A} \cdot \mathbf{U} = \mathbf{U} \cdot \mathbf{\Lambda}
[/tex]
where [itex]\mathbf{\Lambda}[/itex] is a diagonal matrix with the eigenvalues of [itex]\mathbf{A}[/itex] along its diagonal and the columns of the matrix [itex]\mathbf{U}[/itex] are the corresponding eigenvectors.

Then, we can write:
[tex]
\mathbf{A} = \mathbf{U} \cdot \mathbf{\Lambda} \cdot \mathbf{U}^{-1}
[/tex]
provided that [itex]\mathbf{U}^{-1}[/itex] exists (which means that the eigenvectors of A form a complete basis). Let us assume this to be the case.

Uhh :rolleyes: that is quite an assumption.


Dickfore said:
Then, any function of the matrix A, [itex]f(\mathbf{A})[/itex], is defined as:
[tex]
f(\mathbf{A}) = \mathbf{U} \cdot f(\mathbf{\Lambda}) \cdot \mathbf{U}^{-1}
[/tex]
where [itex]f(\mathbf{\Lambda})[/itex] is the diagonal matrix [itex]f( \mathbf{\Lambda}) = \mathrm{diag} \left( \lbrace f(\lambda_{\alpha}) \rbrace \right)[/itex].

If you find the determinant:
[tex]
\det{ \left[ f(\mathbf{A}) \right]} = \det{ \left[ \mathbf{U} \cdot f( \mathbf{\Lambda} ) \cdot \mathbf{U}^{-1} \right] } = \det{\left[ \mathbf{U} \right]} \, \det{\left[ f(\mathbf{\Lambda}) \right]} \, \frac{1}{\det{\left[ \mathbf{U} \right]}}
[/tex]
[tex]
\det{ \left[ f(\mathbf{A} ) \right] } = \det{ \left[ f( \mathbf{\Lambda} ) \right] } = \prod_{\alpha}{f(\lambda_{\alpha})}
[/tex]
The last step follows from the fact that the determinant of a diagonal matrix is the product of the diagonal elements.

A similar rule holds for the trace:
[tex]
\mathrm{Tr} \left[ f(\mathbf{A}) \right] = \mathrm{Tr} \left[ \mathbf{U} \cdot f(\mathbf{\Lambda}) \cdot \mathbf{U}^{-1} \right] = \mathrm{Tr} \left[ \mathbf{U}^{-1} \cdot \mathbf{U} \cdot f(\mathbf{\Lambda}) \right] = \mathrm{Tr} \left[ f(\mathbf{\Lambda}) \right] = \sum_{\alpha}{f(\lambda_{\alpha})}
[/tex]
where we used the "cyclic property of the trace" and its definition as a sum of the diagonal elements in the last step.

How did you get [itex]\mathrm{Tr} \left[ \mathbf{U} \cdot f(\mathbf{\Lambda}) \cdot \mathbf{U}^{-1} \right] = \mathrm{Tr} \left[ \mathbf{U}^{-1} \cdot \mathbf{U} \cdot f(\mathbf{\Lambda}) \right][/itex]?


Dickfore said:
Now, if you use some properties of the exponential function, it is not hard to prove:
[tex]
\det{ \left[ \exp{ \left( \mathbf{A} \right) } \right] } = \exp{ \left( \mathrm{Tr} \left[ \mathbf{A} \right] \right)}
[/tex]

Now, assuming [itex]\mathbf{A}[/itex] has only positive eigenvalues, you can define [itex]\ln{\mathbf{A}}[/itex]. If you write down the above identity for that matrix and take the logarithm of both sides, you will get your required identity.

Shouldn't negative eigenvalues also work in combination with the complex logarithm?
 
  • #6
I like Serena said:
Uhh :rolleyes: that is quite an assumption.
Yes, and I stated when this is the case.

I like Serena said:
How did you get [itex]\mathrm{Tr} \left[ \mathbf{U} \cdot f(\mathbf{\Lambda}) \cdot \mathbf{U}^{-1} \right] = \mathrm{Tr} \left[ \mathbf{U}^{-1} \cdot \mathbf{U} \cdot f(\mathbf{\Lambda}) \right][/itex]?

By the "cyclic property of the trace":
[tex]
\mathrm{Tr} \left[ \mathbf{A} \cdot \mathbf{B} \cdot \mathbf{C} \right] = \mathrm{Tr} \left[ \mathbf{B} \cdot \mathbf{C} \cdot \mathbf{A} \right] = \mathrm{Tr} \left[ \mathbf{C} \cdot \mathbf{A} \cdot \mathbf{B} \right]
[/tex]
I like Serena said:
Shouldn't negative eigenvalues also work in combination with the complex logarithm?
Actually, if we use the analytic continuation of the logarithm, it can hold for any complex eigenvalues. However, for the principal branch of the logarithm:
[tex]
\mathrm{Log}{(z_1 z_2)} \neq \mathrm{Log}{(z_1)} + \mathrm{Log}{(z_2)}
[/tex]
For example:
[tex]
z_1 = \frac{-1 + i \sqrt{3}}{2} = e^{i \frac{2 \pi}{3}} \Rightarrow \mathrm{Log}{(z_1)} = i \frac{2 \pi}{3}
[/tex]
and
[tex]
z_2 = i = e^{i \frac{\pi}{2}} \Rightarrow \mathrm{Log}{(z_2)} = i \frac{\pi}{2}
[/tex]
[tex]
z_1 z_2 = \frac{-\sqrt{3} - i}{2} = e^{i \frac{7 \pi}{6}} \Rightarrow \mathrm{Log}{(z_1 z_2)} = -i \frac{5 \pi}{6}
[/tex]
This is not equal to:
[tex]
\mathrm{Log}{(z_1)} + \mathrm{Log}{(z_2)} = i \frac{7 \pi}{6}
[/tex]

So, in general:
[tex]
\mathrm{Log}{\left( \prod_{\alpha}{\lambda_\alpha} \right)} \neq \sum_{\alpha}{\mathrm{Log}{( \lambda _\alpha )}}
[/tex]
 
  • #7
Dickfore said:
Yes, and I stated when this is the case.

Couldn't it be generalized by specifying that [itex]\mathbf \Lambda[/itex] is a Jordan normal form?
Then [itex]\mathbf U[/itex] would hold the generalized eigenvectors.



Dickfore said:
By the "cyclic property of the trace":
[tex]
\mathrm{Tr} \left[ \mathbf{A} \cdot \mathbf{B} \cdot \mathbf{C} \right] = \mathrm{Tr} \left[ \mathbf{B} \cdot \mathbf{C} \cdot \mathbf{A} \right] = \mathrm{Tr} \left[ \mathbf{C} \cdot \mathbf{A} \cdot \mathbf{B} \right]
[/tex]

Ah, okay! :)


Dickfore said:
Actually, if we use the analytic continuation of the logarithm, it can hold for any complex eigenvalues. However, for the principal branch of the logarithm:
[tex]
\mathrm{Log}{(z_1 z_2)} \neq \mathrm{Log}{(z_1)} + \mathrm{Log}{(z_2)}
[/tex]

Yes, but don't the steps in the proof hold if we calculate with all branches?

So
[tex]
\mathrm{Log}(e^{i \frac \pi 3})=\{i(\frac \pi 3 + 2k\pi) : k \in \mathbb Z\}
[/tex]
 
  • #8
I like Serena said:
Couldn't it be generalized by specifying that [itex]\mathbf \Lambda[/itex] is a Jordan normal form?
Then [itex]\mathbf U[/itex] would hold the generalized eigenvectors.
Maybe; I am not that familiar with Jordan normal forms. The procedure depends sensitively on the existence of [itex]\mathbf{U}^{-1}[/itex]. If you look at the above proof of the equality [itex]\det{\left[ \exp{\left( \mathbf{A} \right)} \right]} = \exp{\left( \mathrm{Tr}\left[ \mathbf{A} \right]\right)}[/itex], you will see that it depended twice on the existence of the inverse of the matrix of eigenvector columns.
I like Serena said:
Yes, but don't the steps in the proof hold if we calculate with all branches?

So
[tex]
\mathrm{Log}(e^{i \frac \pi 3})=\{i(\frac \pi 3 + 2k\pi) : k \in \mathbb Z\}
[/tex]
If you use the multiple-valued function [itex]\log[/itex], then:
[tex]
\exp{\left( \log{(z)} \right)} = z
[/tex]
BUT
[tex]
\log{\left( \exp{(z)} \right)} = z + 2 k \pi i \neq z
[/tex]

If you start from the identity [itex]\det{\left[ \exp{\left( \mathbf{A} \right)} \right]} = \exp{\left( \mathrm{Tr}\left[ \mathbf{A} \right]\right)}[/itex] in order to prove [itex]\log{ \left( \det{\left[ \mathbf{A} \right]} \right)} = \mathrm{Tr} \left[ \log{(\mathbf{A})} \right][/itex], in the last step you take the logarithm of both sides. So, you must be careful to show that this equality holds as a set equality. In either case, [itex]\det{\left[\mathbf{A}\right]} \neq 0[/itex] must hold; that is, 0 must not be an eigenvalue of the matrix [itex]\mathbf{A}[/itex].
 
  • #9
Dickfore said:
If you use the multiple-valued function [itex]\log[/itex], then:
[tex]
\exp{\left( \log{(z)} \right)} = z
[/tex]
BUT
[tex]
\log{\left( \exp{(z)} \right)} = z + 2 k \pi i \neq z
[/tex]

If you start from the identity [itex]\det{\left[ \exp{\left( \mathbf{A} \right)} \right]} = \exp{\left( \mathrm{Tr}\left[ \mathbf{A} \right]\right)}[/itex] in order to prove [itex]\log{ \left( \det{\left[ \mathbf{A} \right]} \right)} = \mathrm{Tr} \left[ \log{(\mathbf{A})} \right][/itex], in the last step you take the logarithm of both sides. So, you must be careful to show that this equality holds as a set equality. In either case, [itex]\det{\left[\mathbf{A}\right]} \neq 0[/itex] must hold; that is, 0 must not be an eigenvalue of the matrix [itex]\mathbf{A}[/itex].

Ah, I see.
I guess the complex logarithm is a bit more troublesome than I thought!

But then I think the equation log(det A) = Tr(log A) is not quite proper.
Shouldn't it be: log(det A) = Tr(log A) mod 2πi?
The reference to log(det A) implies that all eigenvalues must be ≠ 0.
 
  • #10
I like Serena said:
But then I think the equation log(det A) = Tr(log A) is not quite proper.
Shouldn't it be: log(det A) = Tr(log A) mod 2πi?
The reference to log(det A) implies that all eigenvalues must be ≠ 0.

Yes, and this is what I call a "set equality". Both sides of that equation contain an infinite set of values. You need to show that those sets are equal.

You are right that the determinant being nonzero should be implied. I just added a remark at the end of my previous post.
 

FAQ: How does the identity Ln(detA)=Tr(lnJ) hold true?

What is the significance of the "Ln(detA)=Tr(lnJ)" identity?

The "Ln(detA)=Tr(lnJ) identity is an important result in linear algebra and matrix theory. It shows a relationship between the natural logarithm of the determinant of a matrix (detA) and the trace of the natural logarithm of the matrix (lnJ). This identity is useful in solving problems involving determinants and eigenvalues of matrices.

How is the "Ln(detA)=Tr(lnJ)" identity derived?

The "Ln(detA)=Tr(lnJ) identity can be derived using the properties of logarithms and properties of determinants and traces of matrices. It is a result of the fact that the logarithm of a product is equal to the sum of the logarithms of the individual factors. The proof for this identity can be found in many linear algebra textbooks.

In what situations is the "Ln(detA)=Tr(lnJ)" identity applicable?

The "Ln(detA)=Tr(lnJ) identity is applicable in any situation where both the determinant and trace of a matrix are involved. It is commonly used in solving problems related to eigenvalues, diagonalization, and other matrix operations.

Can the "Ln(detA)=Tr(lnJ) identity be extended to higher dimensions?

Yes, the "Ln(detA)=Tr(lnJ) identity can be extended to higher dimensions. In fact, it holds true for any square matrix, regardless of its size. This makes it a powerful tool in linear algebra and matrix theory.

Are there any other identities related to the "Ln(detA)=Tr(lnJ)" identity?

Yes, there are other related identities, such as ln(det(A^n)) = n·ln(det A) and det(e^A) = e^(Tr A). These identities can be derived using similar methods and are useful in various applications of linear algebra and matrix theory.
