Proving Invertible Matrix Theorem: A^TDA=D

  • Thread starter dEdt
  • Tags: Theorem
In summary: for a generic real invertible matrix A there is no nonzero diagonal matrix D with [itex]A^T D A = D[/itex]; taking determinants shows that [itex]\det A = \pm 1[/itex] is necessary when [itex]\det D \neq 0[/itex], but even then counterexamples exist. The matrices that do satisfy [itex]A^T Q A = Q[/itex] for a fixed nondegenerate [itex]Q[/itex] form the orthogonal group of [itex]Q[/itex]: the rotation group for the Euclidean metric and the Lorentz group for the Minkowski metric. The Galilean transformations preserve no single nondegenerate form, but they do preserve a pair of degenerate forms: absolute time, and the spatial distance between simultaneous events.
  • #1
dEdt
Let A be any real invertible matrix. There exists a non-zero diagonal matrix D such that [itex]A^T D A=D[/itex].

I'm pretty sure this is true (maybe with some conditions on A), but I need some help proving it.
 
  • #2
dEdt said:
Let A be any real invertible matrix. There exists a non-zero diagonal matrix D such that [itex]A^T D A=D[/itex].

I'm pretty sure this is true (maybe with some conditions on A), but I need some help proving it.

This does not seem to be true as stated. The matrix

[tex]A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}[/tex]

is a counter-example.

What is true is that, given a nondegenerate (no zero eigenvalue) matrix [itex]Q[/itex] (not necessarily diagonal), there is a matrix [itex]A[/itex] such that [itex]A^T Q A = Q[/itex]. The set of matrices [itex]A[/itex] should actually form a group, which is the orthogonal group of [itex]Q[/itex] viewed as a bilinear form. This gives us some insight as to why the original proposition is false. For [itex]Q = I[/itex], the matrices [itex]A[/itex] actually satisfy [itex]A^{-1} =A^T[/itex], so this is a fairly restrictive condition. So it is not surprising that a generic invertible matrix [itex]A[/itex] should fail to preserve a bilinear form.
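To see this concretely, here is a minimal numerical sketch, not from the thread and assuming numpy, that treats [itex]A^T D A = D[/itex] as a linear system in the diagonal entries of D and computes its solution space. For the matrix above it prints 0, i.e. the only diagonal solution is D = 0.

[code]
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

# Entry (i, j) of A^T D A is sum_k A[k, i] * d[k] * A[k, j],
# so A^T D A = D is a homogeneous linear system M d = 0 in d = diag(D).
rows = []
for i in range(n):
    for j in range(n):
        coeff = A[:, i] * A[:, j]  # coefficients of d in entry (i, j)
        if i == j:
            coeff[i] -= 1.0        # diagonal entries must equal d_i
        rows.append(coeff)
M = np.array(rows)

# The diagonal solutions D form the null space of M.
s = np.linalg.svd(M, compute_uv=False)
print("dimension of solution space:", int(np.sum(s < 1e-12)))  # 0: only D = 0
[/code]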
 
  • #3
Thanks for your answer.

After thinking about it a bit more, it's clear that only a restricted class of real invertible matrices could satisfy that condition, simply because
[tex]A^T D A = D \;\Rightarrow\; \det(A)\det(D)\det(A) = \det(D) \;\Rightarrow\; \det A = \pm 1.[/tex]

So, let A be any real invertible matrix with [itex]\det A = \pm 1[/itex]. Then there exists a non-zero diagonal matrix D such that [itex]A^T D A = D[/itex]. Is this a theorem?
 
  • #4
dEdt said:
Thanks for your answer.

After thinking about it a bit more, it's clear that only a restricted class of real invertible matrices could satisfy that condition, simply because
[tex]A^T D A = D \;\Rightarrow\; \det(A)\det(D)\det(A) = \det(D) \;\Rightarrow\; \det A = \pm 1.[/tex]

I didn't point this out because you only required D to be non-zero, not ##\det(D)\neq 0##. You'll note that ##\det(D)\neq 0## is implied by the nondegeneracy property specified in the theorem I mentioned.

So, let A be any real invertible matrix with [itex]\det A = \pm 1[/itex]. Then there exists a non-zero diagonal matrix D such that [itex]A^T D A = D[/itex]. Is this a theorem?

[tex]A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}[/tex]

is a counterexample with ##\det A = -1##.
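As a hedged check, the same null-space computation sketched above (again assuming numpy) confirms that the rescaling by [itex]1/\sqrt{2}[/itex] fixes the determinant constraint but still admits no nonzero diagonal D:

[code]
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]]) / np.sqrt(2.0)
print(np.linalg.det(A))  # -1.0 up to rounding

# Same homogeneous system M d = 0 as above, with d = diag(D).
M = np.array([A[:, i] * A[:, j] - (i == j) * np.eye(2)[i]
              for i in range(2) for j in range(2)])
s = np.linalg.svd(M, compute_uv=False)
print("nonzero diagonal solutions:", int(np.sum(s < 1e-12)))  # 0
[/code]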
 
  • #6
fzero said:
[tex]A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}[/tex]

is a counterexample with ##\det A = -1##.

Of course, how stupid of me!

micromass said:

It was interesting, thank you for the link. Unfortunately it didn't help me find what I was looking for.

I'll give you guys a bit more info about what I'm actually trying to do.

For simplicity, consider a two-dimensional vector space. One class of coordinate transformations consists of the Euclidean rotations
[tex]R(\theta)=\left(
\begin{array}{cc}
\cos \theta & \sin \theta\\
-\sin \theta & \cos \theta
\end{array}
\right).[/tex]
These transformations preserve [itex]x^2+y^2[/itex], and hence [itex]R^T D R=D[/itex] for
[tex]D=\left(
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right).[/tex]
Two other types of transformations are the Galilean and the Lorentz transformations. There are also transformations that rescale the coordinates by some factor. All of these transformations satisfy the condition above for a suitable D.

I'm pretty confident that these four classes of transformations are the only permissible linear coordinate transformations, and I want to prove it by showing that any linear coordinate transformation [itex]A[/itex] must satisfy [itex]A^T D A=D[/itex] for some diagonal matrix [itex]D[/itex].
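A hedged numerical sanity check of the rotation case (assuming numpy; not part of the original post):

[code]
import numpy as np

def R(theta):
    # Euclidean rotation from the post above
    return np.array([[ np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

D = np.eye(2)
theta = 0.73  # arbitrary angle
print(np.max(np.abs(R(theta).T @ D @ R(theta) - D)))  # ~1e-16: x^2 + y^2 preserved
[/code]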
 
  • #7
dEdt said:
I'm pretty confident that these four classes of transformations are the only permissible linear coordinate transformations, and I want to prove it by showing that any linear coordinate transformation [itex]A[/itex] must satisfy [itex]A^T D A=D[/itex] for some diagonal matrix [itex]D[/itex].

Linear coordinate transformations modulo translations form a group known as the general linear group, which is the group of invertible matrices (over the reals). In some cases, a linear transformation does preserve a bilinear form, for instance, for the Euclidean metric [itex]\delta = \mathrm{diag}(1, \ldots 1)[/itex] we get the rotation group. For the Minkowski metric, [itex]\eta = \mathrm{diag}(-1,1, \ldots 1)[/itex], we find the Lorentz transformations. Both of these groups are orthogonal groups, with the Lorentz group an example of an indefinite orthogonal group.
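A hedged numerical illustration of the Minkowski case in 1+1 dimensions (assuming numpy; the rapidity parametrization of the boost is standard but not stated in the thread):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0])  # 1+1 dimensional Minkowski metric

def boost(phi):
    # Lorentz boost with rapidity phi
    return np.array([[np.cosh(phi), np.sinh(phi)],
                     [np.sinh(phi), np.cosh(phi)]])

B = boost(1.3)
print(np.max(np.abs(B.T @ eta @ B - eta)))  # ~1e-16: B^T eta B = eta
[/code]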

In general, there won't be a preserved bilinear form. For instance, the Galilean transformations don't preserve one. Rescalings of coordinates also do not preserve a quadratic form. Under a uniform rescaling of all coordinates, the Euclidean and Minkowski metrics are rescaled by the square of the parameter.
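And a quick check of the rescaling statement, under the same assumptions:

[code]
import numpy as np

eta = np.diag([-1.0, 1.0])
s = 2.0
S = s * np.eye(2)  # uniform rescaling of all coordinates
print(np.allclose(S.T @ eta @ S, s**2 * eta))  # True: the metric picks up s^2
[/code]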

Therefore, the tractable question is really, given a bilinear form or matrix [itex]Q[/itex], to describe the properties of the matrices [itex]A[/itex] that leave it invariant. This is a classical problem and it reduces to the properties of [itex]Q[/itex]: the symmetry properties, the signature, etc. Over the reals, we find the orthogonal and symplectic groups, see here.
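For the symplectic case just mentioned, a hedged sketch (assuming numpy; it uses the standard fact, not spelled out in the post, that for 2x2 matrices [itex]M^T J M = \det(M)\, J[/itex], so Sp(2, R) coincides with SL(2, R)):

[code]
import numpy as np

J = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])  # the standard symplectic form

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2))
M /= np.sqrt(abs(np.linalg.det(M)))  # rescale so det M = +1 or -1
if np.linalg.det(M) < 0:
    M[:, [0, 1]] = M[:, [1, 0]]      # swapping columns flips det to +1
print(np.max(np.abs(M.T @ J @ M - J)))  # ~1e-16: M preserves J
[/code]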
 
  • #8
fzero said:
In general, there won't be a preserved bilinear form. For instance, the Galilean transformations don't preserve one. Rescalings of coordinates also do not preserve a quadratic form. Under a uniform rescaling of all coordinates, the Euclidean and Minkowski metrics are rescaled by the square of the parameter.

The Galilean transformations may not preserve a nondegenerate bilinear form, but they preserve [itex]t^2[/itex], and they preserve [itex]x^2+y^2+z^2[/itex] between simultaneous events, so there is still a pair of degenerate diagonal matrices D satisfying conditions of that form.
 
  • #9
dEdt said:
The Galilean transformations may not preserve a nondegenerate bilinear form, but they preserve [itex]t^2[/itex], and they preserve [itex]x^2+y^2+z^2[/itex] between simultaneous events, so there is still a pair of degenerate diagonal matrices D satisfying conditions of that form.

Yes, I was overlooking the degenerate case. So it's a pair of degenerate bilinear forms that are preserved independently. You could derive the form of the Galilean transformations in a fashion analogous to the previous cases.
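To make the pair of preserved structures concrete, here is a hedged 1+1 dimensional sketch (assuming numpy; the convention that the boost acts on columns (t, x) is chosen here, not fixed by the thread):

[code]
import numpy as np

v = 0.4                      # arbitrary boost velocity
G = np.array([[1.0, 0.0],
              [-v,  1.0]])   # Galilean boost: t' = t, x' = x - v t

D_time = np.diag([1.0, 0.0]) # the degenerate form whose value is t^2
print(np.allclose(G.T @ D_time @ G, D_time))  # True: t^2 is preserved

# Spatial distance is preserved only between simultaneous events:
dx = np.array([0.0, 3.0])    # separation with dt = 0
print(G @ dx)                # [0. 3.]: the spatial separation is unchanged
[/code]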
 

FAQ: Proving Invertible Matrix Theorem: A^TDA=D

What is the Invertible Matrix Theorem?

The Invertible Matrix Theorem is a list of conditions on a square matrix A that are all equivalent to A being invertible. Among them: the determinant of A is non-zero, the columns of A are linearly independent, and the equation Ax = b has a unique solution for every b.

What is the importance of the Invertible Matrix Theorem?

The Invertible Matrix Theorem is important because it allows us to determine if a matrix is invertible or not, which is crucial in solving systems of linear equations and performing other important operations in linear algebra.

What is the transpose of a matrix?

The transpose of a matrix is a new matrix that is formed by interchanging the rows and columns of the original matrix. This means that the first row of the original matrix becomes the first column of the transposed matrix, the second row becomes the second column, and so on. It is denoted by adding a superscript 'T' to the original matrix, such as A^T.

What does the equation A^TDA=D mean in the Invertible Matrix Theorem?

Despite the thread title, this equation is not actually part of the Invertible Matrix Theorem. It says that the matrix A preserves the bilinear form defined by the diagonal matrix D. For a fixed nondegenerate D, the invertible matrices A satisfying [itex]A^T D A = D[/itex] form the orthogonal group of D; as the thread shows, a generic invertible matrix satisfies it only for D = 0.

How can the Invertible Matrix Theorem be proven?

Taking determinants of [itex]A^T D A = D[/itex] gives [itex]\det(A)^2 \det(D) = \det(D)[/itex], so if [itex]\det(D) \neq 0[/itex] then [itex]\det A = \pm 1[/itex]; in particular, any matrix preserving a nondegenerate diagonal form is invertible. The converse fails: as the counterexamples in the thread show, an invertible matrix, even one with [itex]\det A = \pm 1[/itex], need not preserve any nonzero diagonal form.
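A hedged numerical spot-check of the determinant identity used above (assuming numpy):

[code]
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
D = np.diag(rng.normal(size=3))

lhs = np.linalg.det(A.T @ D @ A)
rhs = np.linalg.det(A) ** 2 * np.linalg.det(D)
print(np.isclose(lhs, rhs))  # True
[/code]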
