Proving Simultaneous Diagonalizability of n×n Matrices A and B with AB = BA

  • Thread starter Bachelier
  • #1
Bachelier
Two n×n matrices A and B are called simultaneously diagonalizable if there exists an invertible matrix P such that both [itex]P^{-1}AP[/itex] and [itex]P^{-1}BP[/itex] are diagonal.

Prove that if A and B are diagonalizable and AB = BA, then A and B are simultaneously diagonalizable.
 
  • #2
Suppose A and B commute, and let v be an eigenvector of A with eigenvalue [itex]\lambda[/itex]. Then we have [itex]A(Bv)=(AB)v=(BA)v=B(Av)=\lambda Bv[/itex].

So [itex]Bv[/itex] lies in the [itex]\lambda[/itex]-eigenspace of A.

Choose a candidate basis [itex]\{b_1,b_2,\ldots,b_n\}[/itex] consisting of eigenvectors of A, ordered so that eigenvectors with the same eigenvalue are adjacent (i.e., if [itex]\lambda_1[/itex] has multiplicity 2, then [itex]b_1[/itex] and [itex]b_2[/itex] are eigenvectors corresponding to [itex]\lambda_1[/itex]).

Now this isn't necessarily a basis of eigenvectors of B. But because B maps each eigenspace of A into itself, in this basis B is a block diagonal matrix (each block is m×m, where m is the multiplicity of the corresponding eigenvalue of A). But B is diagonalizable, and the restriction of a diagonalizable operator to an invariant subspace is again diagonalizable, so each block can be diagonalized. Since A acts as a scalar on each of its eigenspaces, doing so does not disturb A, and we obtain n independent vectors that are eigenvectors of both A and B, so we win.
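This recipe can be sanity-checked numerically; here is a minimal sketch in numpy (the matrices and the 2-dimensional eigenspace are hypothetical choices for illustration):

```python
import numpy as np

# Hypothetical example: A is diagonal with eigenvalue 2 of multiplicity 2,
# and B commutes with A without being diagonal in this basis.
A = np.diag([2.0, 2.0, 5.0])
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)

# In the eigenbasis of A, B is block diagonal: a 2x2 block on the
# eigenvalue-2 eigenspace and a 1x1 block on the eigenvalue-5 eigenspace.
# Diagonalize the 2x2 block (it is symmetric here, so eigh applies).
_, Q = np.linalg.eigh(B[:2, :2])

# Extend Q to a full change of basis; A acts as the scalar 2 on that
# eigenspace, so this does not disturb A's diagonal form.
P = np.eye(3)
P[:2, :2] = Q

for M in (A, B):
    D = P.T @ M @ P        # P is orthogonal, so P.T is its inverse
    assert np.allclose(D, np.diag(np.diag(D)))
```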
 
  • #3
Or, from commutativity, writing [itex]A = P \Lambda_A P^{-1}[/itex]:

[tex]
AB = BA \quad\Longrightarrow\quad P \Lambda_A P^{-1}B = BP \Lambda_A P^{-1}
[/tex]

Since P is invertible, apply a similarity transformation to both sides (pre-multiply by [itex]P^{-1}[/itex] and post-multiply by [itex]P[/itex]):

[tex]
\Lambda_A P^{-1}BP = P^{-1}BP \Lambda_A
[/tex]

Since, [itex]P^{-1}BP [/itex] commutes with arbitrary diagonal matrix, itself is a diagonal matrix. Thus, P diagonalizes simultaneously A and B.
 
  • #4
Thanks.
 
  • #5
trambolin said:
Since, [itex]P^{-1}BP [/itex] commutes with arbitrary diagonal matrix, itself is a diagonal matrix. Thus, P diagonalizes simultaneously A and B.
Why does it commute with an arbitrary diagonal matrix? It commutes with a specific diagonal matrix, namely [itex]\Lambda_A[/itex], the diagonal matrix whose diagonal values are the eigenvalues of A.

I think from this you can only conclude that [itex]P^{-1}BP[/itex] is diagonal if all eigenvalues of A are distinct... A simple counterexample to your proof would be A = P equal to the identity!
 
  • #6
If A is already a diagonal matrix with arbitrary real numbers as entries ("arbitrary" meaning "any", which in turn means "choose any diagonalizable A and diagonalize it"; note also that the claim is only sufficient, not necessary), can you give me a nondiagonal matrix B that commutes with A, setting aside the trivial case where A is the identity? Because if you have one, I really need it.

If P = A and P is not diagonal, then [itex]P^{-1}AP[/itex] is not diagonal, so it does not satisfy the assumption in the original claim. If P is diagonal, then A is diagonal (which in your case leads to the trivial B = B), so back to my question.
 
  • #7
If I read your proof (post #3) with A equal to the identity (so necessarily P=A), then it says
trambolin said:
Or from the commutativity

[tex]
AB = BA = B = B
[/tex]

Since I is invertible, by a similarity transformation on both sides, (pre multiply with [itex]I^{-1} [/itex] and post multiply with [itex]I [/itex])

[tex]
B = B
[/tex]

Since, [itex]B[/itex] commutes with arbitrary diagonal matrix, itself is a diagonal matrix. Thus, I diagonalizes simultaneously A and B.
which is of course not correct. In your last post you seem to be fixing this by considering different cases (A,B both diagonal, one of them not, or both not), but I am not quite following. Could you elaborate?
 
  • #8
Sure. Let's limit the discussion to the commutativity part for now, and use the notation [itex]\mathbb{D}[/itex] for the set of all diagonal matrices and [itex]\mathbb{D}_{=} \subset \mathbb{D}[/itex] for the set of all diagonal matrices with identical entries, such as the identity. What I am trying to say is the following.

Claim: If a matrix B commutes with every diagonal matrix [itex]A\in\mathbb{D}[/itex], then B is diagonal.

My pseudo-proof goes like this. Suppose A is a diagonal 2×2 matrix with distinct diagonal entries. Then,
[tex]
AB = \begin{pmatrix} \lambda_1B_{11} &\lambda_1B_{12}\\ \lambda_2B_{21} &\lambda_2B_{22}\end{pmatrix} \neq \begin{pmatrix} \lambda_1B_{11} &\lambda_2B_{12}\\ \lambda_1B_{21} &\lambda_2B_{22}\end{pmatrix} = BA
[/tex]
whenever [itex]\lambda_1 \neq \lambda_2[/itex] and at least one of [itex]B_{12}, B_{21}[/itex] is nonzero.


Now, your examples use elements of [itex]\mathbb{D}_{=}[/itex]. But my claim is about [itex]A\in\mathbb{D}[/itex], a bigger set to test against, because we can start with any diagonalizable matrix A, which might have completely distinct eigenvalues. Plugging in an element of the bigger set with distinct entries puts additional constraints on the off-diagonal entries of B, forcing B to be diagonal, as in the small example above.
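The entrywise constraint above is easy to check numerically; a small sketch in numpy (the particular values are hypothetical):

```python
import numpy as np

# Diagonal A with distinct entries; B has nonzero off-diagonal entries.
A = np.diag([1.0, 2.0])
B = np.array([[3.0, 4.0],
              [5.0, 6.0]])

# Entrywise, (AB)_ij = lambda_i * B_ij while (BA)_ij = lambda_j * B_ij,
# so AB = BA would force (lambda_i - lambda_j) * B_ij = 0 for all i, j.
assert not np.allclose(A @ B, B @ A)

# Zeroing the off-diagonal entries of B restores commutativity.
B_diag = np.diag(np.diag(B))
assert np.allclose(A @ B_diag, B_diag @ A)
```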
 
  • #9
Now for the second part, where A is restricted to [itex]A\in\mathbb{D}_{=}[/itex]. Here you can argue as follows: diagonalize B with a matrix Q. Showing that Q also diagonalizes A is trivial, since [itex]Q^{-1}AQ = Q^{-1}QA = A[/itex]; because [itex]A\in\mathbb{D}_{=}[/itex] is a scalar multiple of the identity, it commutes with every matrix.

A mixture of these arguments handles matrices that have some eigenvalues of multiplicity greater than one along with distinct ones. A slightly more tedious proof, using block diagonal matrix arguments, leads to the same conclusion.
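Putting the pieces together, the block diagonal argument suggests an algorithm: group A's eigenvectors by eigenvalue, then diagonalize B's restriction to each eigenspace. Below is a sketch for commuting symmetric matrices (the example pair, the rotation R, and the tolerance are hypothetical choices for illustration):

```python
import numpy as np

def simultaneous_eigenbasis(A, B, tol=1e-8):
    # Common orthonormal eigenbasis for commuting symmetric A and B:
    # group A's eigenvectors by eigenvalue, then diagonalize B's
    # restriction to each eigenspace of A.
    assert np.allclose(A @ B, B @ A)
    evals, V = np.linalg.eigh(A)          # columns of V: eigenbasis of A
    P = np.zeros_like(V)
    i = 0
    while i < len(evals):
        j = i
        while j < len(evals) and abs(evals[j] - evals[i]) < tol:
            j += 1                        # columns [i, j) span one eigenspace
        block = V[:, i:j]                 # orthonormal basis of that eigenspace
        _, Q = np.linalg.eigh(block.T @ B @ block)
        P[:, i:j] = block @ Q             # eigenvectors of both A and B
        i = j
    return P

# Hypothetical commuting symmetric pair, hidden behind a rotation R.
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
A = R @ np.diag([2.0, 2.0, 5.0]) @ R.T
B = R @ np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 3.0]]) @ R.T

P = simultaneous_eigenbasis(A, B)
for M in (A, B):
    D = P.T @ M @ P                       # P orthogonal, so P.T = P^{-1}
    assert np.allclose(D, np.diag(np.diag(D)))
```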
 

