Prove two non-degenerate inner product spaces (Rn and R(p, n-p)) are isomorphic

In summary: the conversation discusses proving that any non-degenerate inner product on \mathbb{R}^{n} is, as a non-degenerate inner product space, isomorphic to \mathbb{R}^{p, n-p} for some 0 \le p \le n. The non-degenerate inner product on \mathbb{R}^{p, n-p} is defined as the sum of the products of the first n-p coordinates minus the sum of the products of the remaining p coordinates. Two non-degenerate inner products, represented by invertible symmetric matrices A and A', are isomorphic if there exists an invertible matrix P such that A' = P^{T}AP. The conversation works through several attempts at a proof, such as computing a general form for P^{T}A'P, and settles on an argument via orthogonal diagonalization.
  • #1
Ryker

Homework Statement


Prove that any non-degenerate inner product on [tex]\mathbb{R}^{n}[/tex] is, as a non-degenerate inner product space, isomorphic to [tex]\mathbb{R}^{p, n-p}[/tex] for some [tex]0 \le p \le n[/tex].


Homework Equations


The non-degenerate inner product on [tex]\mathbb{R}^{p, n-p}[/tex] is defined as
[tex]\sum\limits_{i=1}^{n-p} x_{i}y_{i} - \sum\limits_{j=n-p+1}^n x_{j}y_{j}[/tex]

Two non-degenerate inner product spaces are isomorphic as such [tex]\Leftrightarrow[/tex] there exists an invertible matrix P, such that [tex]A' = P^{T}AP[/tex]
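For a concrete instance of this criterion (an illustrative example, not part of the assignment): on [tex]\mathbb{R}^{2}[/tex], take

[tex]A = \left[ \begin{array}{cc} 2 & 0 \\ 0 & -3 \end{array} \right], \quad A' = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right], \quad P = \left[ \begin{array}{cc} 1/\sqrt{2} & 0 \\ 0 & 1/\sqrt{3} \end{array} \right];[/tex]

then [tex]P^{T}AP = A'[/tex], so the inner product with matrix A is isomorphic to that of [tex]\mathbb{R}^{1,1}[/tex].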


The Attempt at a Solution


Ugh, I'm completely lost with this one. I've tried writing out the matrix that represents the non-degenerate inner product in [tex]\mathbb{R}^{p, n-p}[/tex], but I can't get anywhere. Any suggestions?
 
  • #2
What are the matrices A and A'? I can help you if they are what I think they are.
 
  • #3
A and A' are matrices representing the nondegenerate inner product, that is, (in this case real) invertible symmetric matrices, such that

[tex](\vec u, \vec v) = \vec u^{T}A\vec v.[/tex]

From what I gather, A in [tex]\mathbb{R}^{n}[/tex] can (and must) be any real invertible symmetric matrix (i.e., it can represent any non-degenerate inner product), whereas A' in [tex]\mathbb{R}^{p, n-p}[/tex] is, if I got this right, the following matrix:

[tex]A' = \left[ \begin{array}{cccccc} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & & -1 & & \\ & & & & \ddots & \\ & & & & & -1 \end{array} \right],[/tex]

where [tex]a_{ii} = 1[/tex] for [tex]i \le n-p[/tex], [tex]a_{ii} = -1[/tex] for [tex]i > n-p[/tex], and all off-diagonal entries are zero.

So I know both A and any of the A' can be orthogonally diagonalized, but I don't know if that gets me anywhere. So far I haven't been able to show that any matrix representing a non-degenerate inner product can be orthogonally diagonalized so that the resulting diagonal matrix is one of the A'. I've also tried to find a general solution to [tex]P^{T}A'P[/tex], but then I get a matrix full of sums, albeit a symmetric and invertible one. Now I guess if I could show that you can get any invertible symmetric matrix this way, then I would prove what I want to, but, first, I can't see from what I get whether that is the case, and second, we "know" that [tex]\mathbb{R}^{3, n-3}[/tex] and [tex]\mathbb{R}^{4, n-4}[/tex] aren't isomorphic as non-degenerate inner product spaces, which automatically excludes the matrices that look like an A' of a different signature. And it would be weird for that general product to span all symmetric invertible matrices except for those select few.

Any help would be greatly appreciated, as I've now spent around 4-5 hours on this and seem to be locked into trying out the same damn solutions, which either don't work or have something that I keep missing.
 
  • #4
I don't know for sure that this works, but here's an idea. We know that A can be diagonalized. Suppose that the set of eigenvalues of A is {x, -x} for some positive real number x. Then the P we need is just some orthogonal matrix (i.e., a proper or improper rotation, [tex]\det P = \pm 1[/tex]) multiplied by a constant.
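Spelling that idea out (my own rendering of the special case): if [tex]A = Q\,\mathrm{diag}(x, \dots, x, -x, \dots, -x)\,Q^{T}[/tex] with Q orthogonal, then [tex]P = \frac{1}{\sqrt{x}}Q[/tex] gives [tex]P^{T}AP = \frac{1}{x}Q^{T}AQ = A'[/tex].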
 
  • #5
Yeah, I thought about that already, but I don't think we can assume that the set of eigenvalues of A is {x, -x}. Maybe I'm missing something, though, so could you perhaps explain why you think this can be done?
 
  • #6
You're right - we can't assume that the eigenvalues of A are {x, -x}. I thought it might be possible to prove this, but I realize now that it's not true in general.

We do know, however, that 0 is not an eigenvalue of A (by non-degeneracy). So the problem is reduced to the following:

(**) Given a real diagonal matrix B with no zeros on the diagonal, find an invertible matrix L s.t. [tex]L^{T}AL = B[/tex].

Suppose we can do this. There's some rotation U s.t. [tex]UA'U^{T}[/tex] is diagonal. Set [tex]B = UA'U^{T}[/tex]. Then set [tex]P = LU[/tex], where L is the matrix we can find by assumption.
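Spelling out why this P would work: [tex]A' = U^{T}BU = U^{T}L^{T}ALU = (LU)^{T}A(LU) = P^{T}AP[/tex].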

I think (**) can be done.
 
  • #7
I tried to edit the LaTeX, but it doesn't seem to have changed. I should have written, 'There's some rotation U s.t. [tex]UA'U^{T}[/tex] is diagonal. Set [tex]B = UA'U^{T}[/tex].'
 
  • #8
summerwind said:
You're right - we can't assume that the eigenvalues of A are {x, -x}. I thought it might be possible to prove this, but I realize now that it's not true in general.

We do know, however, that 0 is not an eigenvalue of A (by non-degeneracy). So the problem is reduced to the following:

(**) Given a real diagonal matrix B with no zeros on the diagonal, find an invertible matrix L s.t. [tex]L^{T}AL = B[/tex].

Suppose we can do this. There's some rotation U s.t. [tex]UA'U^{T}[/tex] is diagonal. Set [tex]B = UA'U^{T}[/tex]. Then set [tex]P = LU[/tex], where L is the matrix we can find by assumption.

I think (**) can be done.
Hey, thanks for the response, I just came home, so I'll look at what you suggested more closely in an hour or so, and follow up with comments then. However, what I was wondering off the bat is how does non-degeneracy imply 0 is not an eigenvalue? Eigenvalues only appear in the diagonal matrix, but that diagonal matrix need not be invertible. I mean, I have to prove that yes, it will be, but I'm not sure we can make that assumption.

edit: Ah, OK, never mind, it's because if A and L are both invertible, then D must also be invertible, since its determinant is the product of the determinants of A, L and [tex]L^{-1}[/tex] (= [tex]L^{T}[/tex] for orthogonal L), i.e. it is equal to the determinant of A, hence non-zero, right?
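Spelled out, the determinant computation for [tex]D = L^{T}AL[/tex] is
[tex]\det D = \det(L^{T})\det(A)\det(L) = (\det L)^{2}\det A \neq 0,[/tex]
and for orthogonal L we have [tex](\det L)^{2} = 1[/tex], so in fact [tex]\det D = \det A[/tex].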

Nevertheless, how can you introduce B to be both

[tex]L^{T}AL = B[/tex] and [tex]B = UA'U^{T}[/tex]?

I mean, we know they can both be diagonalized, but not that the diagonal matrix we get is the same in both cases. Or do we?
 
  • #9
I think you have the right idea for showing that 0 is not an eigenvalue of A, though you're not using the same names for the matrices as I used. The idea is that an invertible matrix remains invertible when you rotate it to a different basis because of the property of determinants that you mentioned. In particular, an invertible symmetric matrix remains invertible when you rotate it to diagonal form. Thus, in diagonal form, it cannot have a zero on the diagonal.

Let me rephrase what I was trying for with the matrix [tex]B[/tex]. First, I said to assume that the following statement is true:

(**) For any real diagonal matrix [tex]B[/tex] that does not have 0 as an eigenvalue, there exists an invertible matrix [tex]L[/tex] s.t. [tex]L^{T}AL = B[/tex].

(**) is a statement about any real diagonal matrix with no zeros on the diagonal. So it must hold for [tex]U^{T}A'U[/tex], which is a particular example of such a matrix. Therefore, assuming (**) is true, we know that there exists some invertible matrix [tex]L[/tex] s.t. [tex]L^{T}AL = U^{T}A'U[/tex]. All I have done is insert [tex]B = U^{T}A'U[/tex] into (**).

I should say that I haven't proven (**) and I don't know for sure that it's true. This is just an idea I had.
 
  • #10
Hey, sorry for not getting back to you for so long, but once again, thanks for the help. In the end, I didn't go with what you suggested, because I don't think you are able to do that: a congruence [tex]L^{T}AL[/tex] preserves the signs of the eigenvalues, so (**) fails whenever B has a different signature than A. However, that tip on the eigenvalues being different from zero was really helpful, and I've used it together with the fact that we can rearrange the diagonal entries of the diagonal matrix any way we want. So that diagonal matrix D is just [tex]A'X[/tex] for some A' and some diagonal matrix X, which we know exists because both D and all the A' have only real, non-zero entries on the diagonal. Then via the square root of this X, which has real entries once D is rearranged so that the positive eigenvalues come first and the negative ones last, you can just multiply both sides of the diagonalization [tex]A = P^{-1}A'XP[/tex] to get the desired P. I can write this up in a bit more detail as well, if you'd like to hear it, but this is a short outline of what I've done. We haven't gotten our assignments back yet, so I'm not sure whether it was correct, but my gut tells me it is :smile:
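Here's a minimal NumPy sketch of that outline, for concreteness (my own illustration, not the write-up that was handed in; the variable names are mine):

[code]
import numpy as np

# Outline from above: orthogonally diagonalize a symmetric invertible A,
# sort the eigenvalues so the positive ones come first, factor the
# diagonal matrix as D = sqrt(X) A' sqrt(X), and absorb sqrt(X) into P.

rng = np.random.default_rng(0)
n = 5

M = rng.standard_normal((n, n))
A = M + M.T  # a random symmetric matrix; invertible with probability 1

# Orthogonal diagonalization A = Q D Q^T, with eigenvalues sorted in
# descending order so D lines up with A' = diag(1, ..., 1, -1, ..., -1).
eigvals, Q = np.linalg.eigh(A)
order = np.argsort(-eigvals)
eigvals, Q = eigvals[order], Q[:, order]

# D = A' X, where A' holds the signs and X the (positive) magnitudes;
# since diagonal matrices commute, D = sqrt(X) A' sqrt(X).
A_prime = np.diag(np.sign(eigvals))
sqrt_X = np.diag(np.sqrt(np.abs(eigvals)))

# Then A = Q sqrt(X) A' sqrt(X) Q^T = P^T A' P with P = sqrt(X) Q^T;
# replacing P by its inverse gives the A' = P^T A P form of the criterion.
P = sqrt_X @ Q.T
print(np.allclose(P.T @ A_prime @ P, A))  # expect: True
[/code]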
 

FAQ: Prove two non-degenerate inner product spaces (Rn and R(p, n-p)) are isomorphic

What is an inner product space?

An inner product space is a vector space equipped with an additional structure that allows for the definition of an inner product between vectors. This structure includes properties such as bilinearity, symmetry, and positive definiteness; in the non-degenerate setting of this thread, positive definiteness is relaxed to non-degeneracy, so the form may also take negative values.

What does it mean for two inner product spaces to be isomorphic?

When two inner product spaces are isomorphic, it means that there exists a bijective linear transformation between them that preserves the inner product structure. In other words, the two spaces have the same underlying vector space structure and their inner products behave in the same way.

How do you prove that two inner product spaces are isomorphic?

To prove that two inner product spaces are isomorphic, you must show that there exists a linear transformation between them that preserves the inner product structure. This can be done by showing that the transformation is bijective, linear, and preserves the inner product, which means that the inner product of two vectors in one space is equal to the inner product of their images in the other space.
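As a concrete sketch of that check in matrix terms (an added illustration, assuming the congruence [tex]A' = P^{T}AP[/tex] used in this thread; the names are mine): the map [tex]T = P^{-1}[/tex] is such a transformation, since [tex](Tu)^{T}A'(Tv) = u^{T}(PT)^{T}A(PT)v = u^{T}Av[/tex].

[code]
import numpy as np

# Illustration: if A' = P^T A P, then T = P^{-1} is a bijective linear
# map that preserves inner products: <Tu, Tv>_{A'} = <u, v>_A.

rng = np.random.default_rng(1)
n = 4

A = np.diag([2.0, 1.0, -1.0, -3.0])  # a non-degenerate symmetric form on R^4
P = rng.standard_normal((n, n))      # invertible with probability 1
A_prime = P.T @ A @ P                # the congruent form

T = np.linalg.inv(P)                 # the candidate isomorphism
u = rng.standard_normal(n)
v = rng.standard_normal(n)

lhs = (T @ u) @ A_prime @ (T @ v)    # <Tu, Tv> computed with A'
rhs = u @ A @ v                      # <u, v> computed with A
print(np.isclose(lhs, rhs))          # expect: True
[/code]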

What is the significance of proving that two inner product spaces are isomorphic?

Proving that two inner product spaces are isomorphic allows us to establish a one-to-one correspondence between the two spaces, which means that their properties and structures are essentially the same. This can be useful in solving problems or proving theorems in one space by using techniques and results from the other space.

Can two non-degenerate inner product spaces be isomorphic if they have different dimensions?

No. An isomorphism of inner product spaces is in particular a bijective linear map, so isomorphic spaces must have the same dimension. Equal dimension alone is not sufficient, however: the signatures must also match. For example, [tex]\mathbb{R}^{3, n-3}[/tex] and [tex]\mathbb{R}^{4, n-4}[/tex] have the same dimension but are not isomorphic as non-degenerate inner product spaces.
