What is the relationship between a matrix A and its eigenvalue g when A⁻¹ = A?

  • Thread starter agro
  • Tags: Eigenvalue
In summary, the conversation discusses finding the eigenvalues of a matrix A such that A⁻¹ = A. It is proven that any eigenvalue of such a matrix must be 1 or -1, but it is not guaranteed that both values will be eigenvalues. A low-tech example is given to explain that the set of eigenvalues is a subset of {-1, 1}, but not always equal to it.
  • #1
agro
Suppose there is a matrix A such that A⁻¹ = A. What can we say about an eigenvalue g of A?

1) Ax = gx
2) A⁻¹Ax = A⁻¹gx
3) Ix = gA⁻¹x
4) x = gAx
5) x = g·gx
6) 1x = g²x

Therefore

7) g² = 1
8) g = 1 or g = -1

But suppose A = I (the identity matrix). For I, the only eigenvalue is g = 1 (g = -1 is not an eigenvalue of I). So, something must be wrong in the steps above. Can anyone point out what and where?

Thanks a lot...
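(For readers who want to experiment: here is a minimal NumPy sketch of the situation. The Householder reflection below is an illustrative choice of a self-inverse matrix, not part of the original question.)

Code:
import numpy as np

# A Householder reflection H = I - 2*v*v^T/(v^T*v) is a standard
# example of a self-inverse matrix: H @ H = I.
v = np.array([[1.0], [2.0]])
H = np.eye(2) - 2 * (v @ v.T) / (v.T @ v)

assert np.allclose(H @ H, np.eye(2))   # H is its own inverse

g = np.linalg.eigvals(H)
print(g)       # each eigenvalue is 1 or -1 (here: one of each)
print(g**2)    # [1. 1.], confirming g**2 = 1 for every eigenvalue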
 
  • #2
There's nothing wrong. Every eigenvalue of I is certainly equal to 1 or -1.
 
  • #3
How can I have an eigenvalue of -1?

I is a triangular matrix, and the eigenvalues of a triangular matrix are its entries on the main diagonal (in the case of I, the entries are all 1).

If we use the characteristic equation, then

det(gI - I) = 0
det(gI - 1I) = 0
det((g - 1)I) = 0

and since det((g - 1)I) = (g - 1)ⁿ when I is n×n, certainly g - 1 = 0, which means g = 1.

If g = -1 for I exists, show me the case where

Ix = (-1)x

(btw, that equation implies x = -x, which can only be true if x = 0, and 0 isn't a valid eigenvector)

and show me how to derive g = -1

Thank you.
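For what it's worth, a direct numerical computation agrees that 1 is the only eigenvalue of I (a minimal NumPy sketch, using the 3×3 identity for concreteness):

Code:
import numpy as np

I = np.eye(3)
print(np.linalg.eigvals(I))   # [1. 1. 1.]: the eigenvalue 1, with multiplicity 3

# -1 fails the test: I - (-1)*I = 2I is nonsingular (det = 8),
# so Ix = -x has no nonzero solution x.
print(np.linalg.det(I - (-1) * np.eye(3)))   # 8.0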
 
  • #4
There are many matrices with A⁻¹ = A that are not the identity.

For example

A=
Code:
-1  0
 0 -1
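
A quick numerical confirmation of this example (a minimal NumPy sketch):

Code:
import numpy as np

A = np.array([[-1.0, 0.0],
              [0.0, -1.0]])
print(np.allclose(A @ A, np.eye(2)))   # True: A is its own inverse
print(np.linalg.eigvals(A))            # [-1. -1.]: -1 is the only eigenvalue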
 
  • #5
Think carefully about what you proved:

If λ is an eigenvalue of I, then λ = 1 or λ = -1.

You did not prove that 1 and -1 are both eigenvalues of I. Your first post doesn't even prove that I has an eigenvalue! (though we know it does via other methods)
 
  • #6
To my understanding, the proof in the first post showed that for any self-inverse matrix A and any nonzero vector x, the solution set for g in the equation:

Ax = gx

is {1, -1}

which is equivalent to {1} OR {-1}

But for I (a self-inverse matrix), I can show (in my 2nd post) that the solution set for g in the equation:

Ix = gx

is {1}

Which doesn't agree with the first result.

------

For an analogy, let's consider the equation

x² - a² = 0 ... (1)

If we solve for x, we get

(x + a) (x - a) = 0

Which means the solution set is {a, -a}.

That means for any value of a:
- if we replace x in equation (1) with a we will get an equality.
- If we replace x in equation (1) with -a we will get an equality.

------

For the eigenvalue case, if we set A = I, then (referring to the equation that defines eigenvalue):
- if we replace g in the equation with 1 we will get an equality for any nonzero vector x.
- If we replace g in the equation with -1 we will get an equality only if x is 0.

Which means -1 isn't in the solution set, which contradicts the proof that showed the solution set is {1, -1}.

If I'm wrong, where is my thinking incorrect? Did I choose a correct analogy?

Thanks a lot...
 
  • #7
Hey, maybe I have the answer...

When we solve the equation:

Ax = gx

for g, there's no restriction in that equation that forbids x from being 0. When we solve for g, we did exactly what solving an equation does: we found a solution set such that the equation is true whenever the corresponding variable is replaced with an element of that set.

Applied to our eigenvalue problem, let's start with the equation:

Ix = gx

now try to replace g with -1 (an element of the solution set in the first post), we get:

x = -x

If we try to solve this for x, we do get x = 0, but the equality:

0 = -0

is valid! And that's exactly what our solution set is good for!

If we don't want x = 0, then we need an extra step of checking, because the solution set doesn't guarantee that: it only guarantees that Ax equals gx for any self-inverse matrix A and for a vector x, which can be 0.

What do you think about it, Hurkyl?

Btw, I still don't agree with what you said: "There's nothing wrong. Every eigenvalue of I is certainly equal to 1 or -1"

Can you prove/show that I has an eigenvalue -1?


Thanks a lot
 
  • #8
Originally posted by agro
To my understanding, the proof in the first post showed that for any self-inverse matrix A and any nonzero vector x, the solution set for g in the equation:

Ax = gx

is {1, -1}

which is equivalent to {1} OR {-1}

No, the proof showed that if A² = I, then any eigenvalue must be either 1 or -1 or, said in a different way, the set of eigenvalues must be a subset of {1, -1}. It doesn't show that any such matrix must have both as eigenvalues.

For example,
[1 0]
[0 1] has only 1 as an eigenvalue.

[-1 0]
[0 -1] has only -1 as an eigenvalue

[1 0]
[0 -1] has both.


Of course all of those satisfy A² = I.
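
A numerical check of all three cases (a minimal NumPy sketch of the matrices above):

Code:
import numpy as np

cases = {
    "identity":   np.array([[1.0, 0.0], [0.0, 1.0]]),
    "negated":    np.array([[-1.0, 0.0], [0.0, -1.0]]),
    "reflection": np.array([[1.0, 0.0], [0.0, -1.0]]),
}

for name, A in cases.items():
    assert np.allclose(A @ A, np.eye(2))   # each satisfies A @ A = I
    print(name, np.linalg.eigvals(A))
# identity:   [1. 1.]    (only 1)
# negated:    [-1. -1.]  (only -1)
# reflection: [1. -1.]   (both)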
 
  • #9
That's why. The proof only proved that {-1, 1} is the solution set for the eigenvalue definition equation, but it doesn't guarantee that x is nonzero. That means that the actual solution (for nonzero x) is a subset of {-1, 1}, but not always equal to {-1, 1}.
 
  • #10
How about a low-tech example; solve:

[tex]x - 3 = \sqrt{x - 1}[/tex]

Start by squaring, and then:

[tex]x^2 - 6x + 9 = x - 1[/tex]
[tex]x^2 - 7x + 10 = 0[/tex]
[tex](x - 5)(x - 2) = 0[/tex]

So the solution set is {2, 5}, right?

But do they both work?

No; all we proved is that if x is a solution then x is in {2, 5}... our steps are not reversible; in this case we can't work backwards from x = 2 to the original equation.

The solution set in this case is {5}, not {2, 5}.
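
That final checking step can be done mechanically (a minimal Python sketch):

Code:
import math

# Squaring produced the candidate set {2, 5}; test each against
# the original equation x - 3 = sqrt(x - 1).
for x in (2, 5):
    print(x, x - 3 == math.sqrt(x - 1))
# 2 False  (2 - 3 = -1 but sqrt(1) = 1; squaring introduced this root)
# 5 True   (5 - 3 = 2 and sqrt(4) = 2)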
 
  • #11
Hurkyl, that's because your first equation implies the second equation, but not the reverse, since

Code:
x = a    =>    x² = a²  (but not the reverse)

On the other hand, my first equation Ax = gx with A² = I is equivalent to the solution set {-1, 1} for g. However, the solution set doesn't guarantee that x is nonzero (though it does guarantee that the first equation is valid, and that's the definition of a solution set).

I think you did an apples to oranges comparison...
 
  • #12
Yah, this wasn't precisely analogous...

The point I was trying to make is that the proof proves that x is 5 or x is 2, but it doesn't prove that both are valid answers.

The same is true with yours; you proved g is 1 or g is -1, but you haven't proved both are valid answers.
 
  • #13
The problem basically reduces to the fact that you know g (in your notation) to satisfy

g² = 1

Your derivation of that property is correct, but that does not imply that all such numbers (i.e., all numbers that satisfy this property) are eigenvalues of A.

Put another way, think of one real number. Your number has the property x² = Y. Say you tell me what Y is. Now I know that your number is either √Y or -√Y. I cannot conclude from this that you cheated and were actually thinking of two numbers. Or three, since I could have found a cubic equation for your number.

Or, back to your example, imagine you found a way to prove that the eigenvalues of A must be integers. You cannot expect all integers to be eigenvalues of A. Or, you may find a way to show that all eigenvalues of A are positive. Again, not all positive numbers will do.

However, there are few numbers (in this case, only the number 1) that have all the properties you can possibly find for the eigenvalues of A. Knowing the eigenvalue to be

1. either 1 or -1, and
2. positive

would be enough.

In general, when you introduce powers while determining properties of numbers, you need to be aware that some solutions of your final equation may not satisfy your original problem.

In this case, you do introduce a square when you multiply by A⁻¹.
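
The "extra step of checking" can be made concrete: a candidate g really is an eigenvalue of A exactly when A - gI is singular, i.e. when det(A - gI) = 0. Here is a minimal NumPy sketch of that check (the helper name actual_eigenvalues and the test matrices are illustrative choices):

Code:
import numpy as np

def actual_eigenvalues(A, candidates, tol=1e-12):
    # Keep only the candidates g for which A - g*I is singular,
    # i.e. for which a nonzero x with A @ x = g * x really exists.
    n = A.shape[0]
    return [g for g in candidates
            if abs(np.linalg.det(A - g * np.eye(n))) < tol]

candidates = [1, -1]   # the necessary condition g**2 = 1 allows only these
print(actual_eigenvalues(np.eye(2), candidates))             # [1]
print(actual_eigenvalues(-np.eye(2), candidates))            # [-1]
print(actual_eigenvalues(np.diag([1.0, -1.0]), candidates))  # [1, -1]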
 

FAQ: What is the relationship between a matrix A and its eigenvalue g when A⁻¹ = A?

1. What is a nonexistent eigenvalue?

"Nonexistent eigenvalue" is not standard terminology; in this thread it refers to a candidate value (such as -1 for the identity matrix) that satisfies a necessary condition derived from the eigenvalue equation but turns out not to be an eigenvalue of the matrix. Eigenvalues are defined only for square matrices, and every square matrix has at least one eigenvalue over the complex numbers; a real matrix may, however, have no real eigenvalues (a 90° rotation is the classic example).

2. How can I tell if a matrix has nonexistent eigenvalues?

Compute the characteristic polynomial det(gI - A); its roots are exactly the eigenvalues of A. A candidate value g is an actual eigenvalue precisely when A - gI is singular, i.e. when det(A - gI) = 0. (Note that det(A) = 0 does not mean A has no eigenvalues; it means that 0 is an eigenvalue.)
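
For instance, the roots of the characteristic polynomial and a direct eigenvalue computation agree (a minimal NumPy sketch; the matrix diag(1, -1) is an illustrative choice):

Code:
import numpy as np

A = np.diag([1.0, -1.0])
coeffs = np.poly(A)            # characteristic polynomial det(gI - A): g**2 - 1
print(np.roots(coeffs))        # the roots 1 and -1, in some order
print(np.linalg.eigvals(A))    # the same values, computed directly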

3. Can a matrix have both real and nonexistent eigenvalues?

A matrix can have some candidate values that are actual eigenvalues and others that are not, as the thread above shows: for the identity matrix, 1 passes the check while -1 fails it. Separately, a real matrix can have complex (non-real) eigenvalues; those eigenvalues do have eigenvectors, but the eigenvectors have complex entries.

4. Are nonexistent eigenvalues important in practical applications?

The distinction between candidate values and actual eigenvalues matters in practice whenever a derivation can introduce extraneous solutions, for example by squaring an equation or multiplying both sides by a matrix. In such cases every candidate should be checked against the original equation before it is accepted.

5. How can I handle nonexistent eigenvalues in my calculations?

Verify each candidate g directly: g is an eigenvalue of A exactly when Ax = gx has a nonzero solution x, equivalently when det(A - gI) = 0. Discard any candidate that fails this check; the values that remain are the true eigenvalues.
