Zero-Trace Symmetric Matrix is Orthogonally Similar to a Zero-Diagonal Matrix

In summary, the problem asks whether, given a real symmetric $n\times n$ matrix $A$ with $\text{trace}(A)=0$, there exists an orthogonal matrix $Q$ such that $QAQ^{-1}$ has only zeroes on the diagonal. A proposed counterexample turns out not to be one, and no additional hypotheses (such as positive semi-definiteness) are needed: the statement is true. The proof first finds an orthogonal plane rotation that places a zero in the $(1,1)$-position on the diagonal, using the intermediate value theorem to choose the rotation angle, and then proceeds inductively on the trailing $(n-1)\times(n-1)$ block.
  • #1
caffeinemachine
Hello MHB.

During my Mechanics of Solids course in my Mechanical Engineering curriculum I came across a certain fact about $3\times 3$ matrices.

It said that any symmetric $3\times 3$ matrix $A$ (with real entries) whose trace is zero is orthogonally similar to a matrix $B$ which has only zeroes on the diagonal.

In other words, given a symmetric matrix $A$ with $\text{trace}(A)=0$, there exists an orthogonal matrix $Q$ such that $QAQ^{-1}$ has only zeroes on the diagonal.

I think the above fact should be true not only for $3\times 3$ matrices but for matrices of any size.

So what I am trying to prove is the following:

Problem: Given a symmetric $n\times n$ matrix $A$ with real entries such that $\text{trace}(A)=0$, there exists an orthogonal matrix $Q$ such that $QAQ^{-1}$ has only zeroes on the diagonal.

I have tried to attack the problem using the spectral theorem.
Since $A$ is symmetric, we know that there exists an orthogonal matrix $S$ such that $D=SAS^{-1}$ is a diagonal matrix.
We need to show that $D$ is orthogonally similar to a matrix with only zeroes on the diagonal.
Thus we have to find an orthogonal matrix $Q$ such that $QDQ^{-1}$ has only zeroes on the diagonal.
This is equivalent to showing that $\sum_{k=1}^n q^2_{ik}d_k=0$ for all $i\in \{1,\ldots,n \}$, where $q_{ij}$ is the $(i,j)$-th entry of $Q$ and $d_k$ is the $k$-th diagonal entry of $D$.
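Writing $Q^{-1}=Q^T$ (since $Q$ is orthogonal), this comes from the one-line computation
$$\left(QDQ^{-1}\right)_{ii}=\left(QDQ^T\right)_{ii}=\sum_{k=1}^n q_{ik}\,d_k\,q_{ik}=\sum_{k=1}^n q^2_{ik}d_k.$$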
We also know that $d_1+\ldots+d_n=0$.
Here I am stuck.
From the above it can be seen that the proposition is true for $n=2$. For $n=3$ I have taken the fact from the book on faith, but I cannot easily prove it. Can anybody help?
 
  • #2
I like Serena
caffeinemachine said:
Problem: Given a symmetric $n\times n$ matrix $A$ with real entries such that $\text{trace}(A)=0$, there exists an orthogonal matrix $Q$ such that $QAQ^{-1}$ has only zeroes on the diagonal.
...
Can anybody help?

It's not true.
Counterexample:
$$A = \begin{bmatrix}
-1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & +1
\end{bmatrix}$$

Additionally, you need, for instance, that the matrix is positive semi-definite.
 
  • #3
caffeinemachine
I like Serena said:
It's not true.
Counterexample:
$$A = \begin{bmatrix}
-1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & +1
\end{bmatrix}$$

Additionally, you need, for instance, that the matrix is positive semi-definite.

Thank you ILS for participating.

Put $$Q=
\begin{bmatrix}
1/\sqrt{2} & 0 & 1/\sqrt{2}\\
0& 1& 0\\
-1/\sqrt{2}& 0 &1/\sqrt{2}
\end{bmatrix}$$

Then $$QAQ^{-1}=\begin{bmatrix}
0 & 0 & 1\\
0 & 0 & 0\\
1 & 0 & 0
\end{bmatrix}$$

So this doesn't serve as a counterexample. Maybe there is another one.
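A quick numerical check (a minimal numpy sketch of the computation above):

Code:
import numpy as np

# The proposed counterexample A and the rotation Q from above.
A = np.diag([-1.0, 0.0, 1.0])
s = 1 / np.sqrt(2)
Q = np.array([[  s, 0.0,   s],
              [0.0, 1.0, 0.0],
              [ -s, 0.0,   s]])

B = Q @ A @ Q.T  # Q is orthogonal, so Q^{-1} = Q^T
print(np.round(B, 12))                # [[0 0 1], [0 0 0], [1 0 0]]
assert np.allclose(np.diag(B), 0.0)   # the diagonal is indeed all zeroes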

Anyway, can you provide a proof (or hints) for the statement under the additional hypothesis of positive semi-definiteness? Does this result have a name?
 
  • #4
Opalg
Let $A = (a_{ij})$ be an $n\times n$ symmetric matrix with trace zero. The first step is to find an orthogonal transformation that converts $A$ to a matrix with a zero in the $(1,1)$-position on the diagonal. If $a_{11}=0$ there is nothing to prove. Otherwise, choose $j>1$ such that $a_{jj}$ has the opposite sign to $a_{11}$. Such a $j$ must exist because the diagonal entries sum to zero.

Now let $P_\theta$ be the orthogonal matrix given by rotating the $1$ and $j$ coordinates through an angle $\theta$ and leaving all the other coordinates alone. Specifically, the $2\times2$ submatrix of $P_\theta$ consisting of rows and columns $1$ and $j$ looks like $\begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$, $P_\theta$ has $1$s in all the other diagonal places, and zeros everywhere else.

You can check that the $(1,1)$-element of $P_\theta AP_\theta^{-1}$ is $a_{11}\cos^2\theta - 2a_{1j}\cos\theta\sin\theta + a_{jj}\sin^2\theta$. When $\theta=0$ this is $a_{11}$. When $\theta=\pi/2$ it is $a_{jj}$, which has the opposite sign to $a_{11}$. By the intermediate value theorem there must be some value of $\theta$ for which this element is $0$. For that value of $\theta$, $$ P_\theta AP_\theta^{-1} = \begin{bmatrix} 0& v \\ w & B \end{bmatrix},$$ where $v$ is a row vector, $w$ is a column vector (each with $n-1$ elements) and $B$ is a symmetric $(n-1)\times(n-1)$ matrix with trace $0$ (because $P_\theta AP_\theta^{-1}$ has the same trace as $A$).
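To make this step concrete, here is a minimal numerical sketch (numpy/scipy; the helper names, the $0$-based indexing, and the use of scipy's brentq root-finder to realise the intermediate value theorem are illustrative choices, not part of the argument):

Code:
import numpy as np
from scipy.optimize import brentq

def rotation(n, j, theta):
    """The orthogonal P_theta rotating coordinates 0 and j by theta."""
    P = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    P[0, 0], P[0, j] = c, -s
    P[j, 0], P[j, j] = s, c
    return P

def zero_first_diagonal_entry(A):
    """Orthogonal P with (P A P^T)[0, 0] == 0, for symmetric A of trace 0."""
    n = A.shape[0]
    if np.isclose(A[0, 0], 0.0):
        return np.eye(n)
    # Some a_jj has the opposite sign to a_00, since the diagonal sums to 0.
    j = next(k for k in range(1, n) if A[k, k] * A[0, 0] < 0)
    f = lambda t: (A[0, 0] * np.cos(t)**2
                   - 2 * A[0, j] * np.cos(t) * np.sin(t)
                   + A[j, j] * np.sin(t)**2)
    # f(0) = a_00 and f(pi/2) = a_jj have opposite signs, so brentq finds
    # the root whose existence the intermediate value theorem guarantees.
    theta = brentq(f, 0.0, np.pi / 2)
    return rotation(n, j, theta)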

Now proceed inductively. By the same process as above, you can successively find orthogonal transformations that convert $A$ to a matrix with increasingly many zeros down the diagonal. At the end, you will find that the final two diagonal elements $a_{(n-1)(n-1)}$ and $a_{nn}$ are negatives of each other and you can find an orthogonal transformation converting both of them to $0$.
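And the induction itself, as a sketch reusing the hypothetical helper above (each sub-rotation is embedded in the identity, so it leaves the zeroes already placed on the diagonal untouched):

Code:
def zero_diagonal(A):
    """Orthogonal Q such that Q A Q^T has (numerically) zero diagonal."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for i in range(n - 1):  # the last entry is then 0 automatically,
        P = np.eye(n)       # since every step preserves the zero trace
        P[i:, i:] = zero_first_diagonal_entry(A[i:, i:])
        A = P @ A @ P.T
        Q = P @ Q
    assert np.allclose(np.diag(A), 0.0, atol=1e-8)
    return Q

Applied to the matrix $A$ of post #2, for instance, the first step finds $\theta=\pi/4$ and the loop returns a rotation of the kind caffeinemachine wrote down.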
 
  • #5
I like Serena
caffeinemachine said:
So this doesn't serve as a counterexample. May be there is another one.

Good point.
I was mixing it up with diagonalizability, which is not the point here.

Anyway, Opalg has already given a proof.

caffeinemachine said:
Anyway, can you provide a proof (or hints) for the statement under the additional hypothesis of positive semi-definiteness? Does this result have a name?

The additional condition of positive semi-definiteness means that all eigenvalues are non-negative.
Since the trace is the sum of the eigenvalues, a trace of zero then forces all eigenvalues to be $0$; for a symmetric matrix that means $A$ is the zero matrix, for which the claim is trivial.
Still, this turns out to be irrelevant for your problem.
 
  • #6
caffeinemachine
Opalg said:
Let $A = (a_{ij})$ be an $n\times n$ symmetric matrix with trace zero. ...
Now proceed inductively. ...
Thank you!
 

FAQ: Zero-Trace Symmetric Matrix is Orthogonally Similar to a Zero-Diagonal Matrix

What is a zero-trace symmetric matrix?

A zero-trace symmetric matrix is a square matrix that equals its own transpose ($a_{ij}=a_{ji}$, so the entries above and below the main diagonal mirror each other) and whose entries on the main diagonal (from top left to bottom right) sum to zero.

What does it mean for a matrix to be orthogonally similar?

A matrix $A$ is orthogonally similar to a matrix $B$ if there exists an orthogonal matrix $P$ such that $P^TAP = B$. Essentially, this means that the two matrices are related by a rotation and/or reflection of the coordinate system.
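A short illustration (numpy; the particular rotation angle and matrix are arbitrary examples):

Code:
import numpy as np

theta = 0.3
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a rotation, so orthogonal
A = np.array([[1.0, 2.0],
              [2.0, -1.0]])

assert np.allclose(P.T @ P, np.eye(2))  # orthogonality: P^T P = I
B = P.T @ A @ P                         # B is orthogonally similar to A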

What is a zero-diagonal matrix?

A zero-diagonal matrix is a square matrix where all the elements on the main diagonal are equal to zero. The elements above and below the main diagonal can be any value.

What is the significance of a zero-trace symmetric matrix being orthogonally similar to a zero-diagonal matrix?

The significance is that the diagonal of a trace-free symmetric matrix can be made entirely zero by a mere change of orthonormal basis, which can simplify calculations. In mechanics, for instance, it means a trace-free (deviatoric) symmetric stress tensor can be viewed in a rotated frame in which all normal stress components vanish, leaving pure shear; zero-diagonal symmetric matrices also arise in graph theory as adjacency matrices of simple graphs.

How can I determine if a given zero-trace symmetric matrix is orthogonally similar to a zero-diagonal matrix?

No test is needed beyond the hypotheses themselves: as proved in the thread above, every real symmetric matrix with trace zero is orthogonally similar to a zero-diagonal matrix. Concretely, the spectral theorem first gives an orthogonal $S$ with $D=SAS^{-1}$ diagonal and $\text{trace}(D)=0$, and a sequence of plane rotations, each chosen via the intermediate value theorem, then produces a matrix whose diagonal entries are all zero.
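In code, checking the hypotheses is all there is to it (a minimal numpy sketch; the function name is illustrative):

Code:
import numpy as np

def is_orthogonally_zero_diagonalisable(A, atol=1e-10):
    """True iff A is (numerically) real symmetric with zero trace, which
    by the argument in this thread is exactly the required condition."""
    A = np.asarray(A, dtype=float)
    return bool(np.allclose(A, A.T, atol=atol) and abs(np.trace(A)) < atol)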
