Eigenvalues & Eigenvectors of the Map $t: M_2 \to M_2$

In summary, the thread works through finding the eigenvalues and eigenvectors of the linear map $t: M_2 \to M_2$ by writing $2\times 2$ matrices as length-4 vectors and representing $t$ as a $4\times 4$ matrix.
  • #1
ognik
Q: Find the eigenvalues and eigenvectors of this map $t: M_2 \to M_2$:

$\begin{bmatrix}a&b\\c&d\end{bmatrix} \mapsto \begin{bmatrix}2c&a+c\\b-2c&d\end{bmatrix}$

I don't know where to start; I suspect it's because I'm just not recognising what this represents. Could someone tell me whether it is similar to (something like) $Ax=\lambda x$ with a change of basis?
 
  • #2
Consider $2\times2$ matrices as vectors of length 4. Then write the matrix of the map.
 
  • #3
Hi Evgeny, it was finding the mapping that I was struggling with and sorry - thinking of the matrices as vectors didn't spark anything new in my head.

What I've been thinking is:

Call the 1st matrix V, the 2nd W.
Then I think the mapping is W=AV, so I need to find A. Then I think I find the eigenvalues/vectors of A?

Anyway, $A=WV^{-1}$ (or should it be $ A=V^{-1} W $ ?) ... and from the special case of inverting a 2x2, $ V^{-1}= \begin{bmatrix}d&-b\\-c&a\end{bmatrix}/(ad-bc)$

But when I multiply it out (either way round), I get a 2x2 with messy terms in a,b,c,d that don't simplify. So before I try and figure out eigenvalues of that beast, am I doing it right and is there a trick that could bypass the messy A?
 
  • #4
Evgeny.Makarov said:
Consider $2\times2$ matrices as vectors of length 4. Then write the matrix of the map.
Note that for $A \in M_2(\Bbb R)$, say:

$A = \begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}$

we have:

$A = a_{11}\begin{bmatrix}1&0\\0&0\end{bmatrix} + a_{21}\begin{bmatrix}0&0\\1&0\end{bmatrix} + a_{12}\begin{bmatrix}0&1\\0&0\end{bmatrix} + a_{22}\begin{bmatrix}0&0\\0&1\end{bmatrix}$

$ = a_{11}E_{11} + a_{21}E_{21} + a_{12}E_{12} + a_{22}E_{22}$

Convince yourself that $B = \{E_{11},E_{21},E_{12},E_{22}\}$ is a basis for $M_2(\Bbb R)$, and that the bijection:

$e_1 \leftrightarrow E_{11}$
$e_2 \leftrightarrow E_{21}$
$e_3 \leftrightarrow E_{12}$
$e_4 \leftrightarrow E_{22}$

gives a linear isomorphism of $\Bbb R^4$ with $M_2(\Bbb R)$.

This process of writing the columns of a matrix "end to end" is called the VECTORIZATION of a matrix.
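
A minimal sketch of this column-stacking in Python (using numpy; the helper names `vec` and `unvec` are just for illustration, not anything standard):

```python
import numpy as np

def vec(M):
    # stack the columns of a 2x2 matrix end to end: [[a, b], [c, d]] -> (a, c, b, d)
    return M.flatten(order="F")  # column-major ("Fortran") order

def unvec(v):
    # inverse map: rebuild the 2x2 matrix from the length-4 vector
    return v.reshape((2, 2), order="F")

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(vec(M))         # [1. 3. 2. 4.], i.e. (a, c, b, d)
print(unvec(vec(M)))  # recovers the original matrix, so the correspondence is a bijection
```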
 
  • #5
For me this raised more Qs than it answered :-) Sometimes I feel a little Sisyphean ...

Wiki shows vectorisation a bit differently, so they vectorise $ \begin{bmatrix}a&b\\c&d\end{bmatrix}$ to $ \begin{bmatrix}a\\c\\b\\d\end{bmatrix}$; if I expand that I think I get the $a_{12} E_{21}$ and $a_{21} E_{12}$ swapped, compared with yours?

I've just learned what a bijection is, 1-to-1 correspondence? linear isomorphism seems to retain the structure, but allow the elements to be substituted? I agree B would be a basis for $ \Re^2 $; Do you mean that the 4 x 1 vector of E's (or e's) would be a basis of $ \Re^4 $?

But that leaves me with two 4 x 1 vectors, and vectors aren't invertible, how would I find the mapping?

Also - apart from producing messy matrices, is the method I suggested earlier otherwise OK?
 
  • #6
ognik said:
For me this raised more Qs than it answered :-) Sometimes I feel a little Sisyphean ...

Wiki shows vectorisation a bit differently, so they vectorise $ \begin{bmatrix}a&b\\c&d\end{bmatrix}$ to $ \begin{bmatrix}a\\c\\b\\d\end{bmatrix}$; if I expand that I think I get the $a_{12} E_{21}$ and $a_{21} E_{12}$ swapped, compared with yours?

I've just learned what a bijection is, 1-to-1 correspondence? linear isomorphism seems to retain the structure, but allow the elements to be substituted? I agree B would be a basis for $ \Re^2 $; Do you mean that the 4 x 1 vector of E's (or e's) would be a basis of $ \Re^4 $?

But that leaves me with two 4 x 1 vectors, and vectors aren't invertible, how would I find the mapping?

Also - apart from producing messy matrices, is the method I suggested earlier otherwise OK?

No, the Wiki is the same as mine; their $c$ is my $a_{21}$.

$B$ is most definitely NOT a basis for $\Bbb R^2$, as $\Bbb R^2$ has dimension $2$, and any basis has but two elements, while $B$ clearly has $4$. It should be "obvious" that $B$ spans $M_2(\Bbb R)$ but proving linear independence is going to be up to you.

While it is true that the mapping $X \mapsto AX$ is a linear map for suitably-sized matrices $A,X$, it is by no means automatic that such a mapping involves an invertible $A$, nor is it true that any linear map:

$M_2(\Bbb R) \to M_2(\Bbb R)$ is of that form.

You're not "solving for a matrix" like you might "solve for a number" in a polynomial, you're trying to find eigenvalues (characteristic values) of a linear transformation. What Evgeny and I have both been trying to tell you is you need to come up with a 4x4 matrix.

Which 4x4 matrix? Not going to say...but you might try investigating the results of $t$ when:

$\begin{bmatrix}a&b\\c&d\end{bmatrix} = E_{ij}$ for each $i$ and $j$.
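
If it helps to experiment, here is a minimal Python/numpy sketch of that investigation (the helper names `t` and `vec` are only illustrative, and the basis order $E_{11}, E_{21}, E_{12}, E_{22}$ from post #4 is assumed); the $j$-th column of the matrix of $t$ is the vectorisation of $t$ applied to the $j$-th basis matrix:

```python
import numpy as np

def t(M):
    # the map from the problem: [[a, b], [c, d]] -> [[2c, a+c], [b-2c, d]]
    (a, b), (c, d) = M
    return np.array([[2*c, a + c],
                     [b - 2*c, d]])

def vec(M):
    # column-stacking order (a, c, b, d), matching the basis E11, E21, E12, E22
    return M.flatten(order="F")

E = [np.array([[1, 0], [0, 0]]),   # E11
     np.array([[0, 0], [1, 0]]),   # E21
     np.array([[0, 1], [0, 0]]),   # E12
     np.array([[0, 0], [0, 1]])]   # E22

# the j-th column of the 4x4 matrix representing t is vec(t(E_j))
T = np.column_stack([vec(t(Eij)) for Eij in E])
print(T)
```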
 
  • #7
Sorry, yes - I see now that their $c$ is your $a_{21}$. And yes, I meant B is a basis for $M_2$, not $\Bbb R^2$. (Because the 4 2x2 matrices are linearly independent?)

You've given me lots of hints, while I am pondering I'd like to put some small points to bed:

So my approach was invalid because I can't show V is invertible? But would it be OK if V was invertible (non-singular)?

Finally, is $M_2$ something special? I haven't seen it before and was thinking of it as a sub-space of $R^2$?
 
  • #8
Uh...even if $t$ WAS of the form $X \mapsto AX$ for some invertible matrix $A$, finding $A^{-1}$ would not give you the eigenvalues.

Just as $n$-tuples are the "numerical" form of vectors, matrices are the numerical form of linear transformations.

The square matrices ($n \times n$, for any given $n$) are, of course, special, and correspond to linear ENDOMORPHISMS (this is a fancy way of saying they are linear, and the domain and co-domain are the same vector space).

It turns out that this, too, is a vector space, and if $\dim_F(V) = n$, then $\dim_F(\text{End}(V)) = n^2$.

So, for example, with $\Bbb R^2$ (of dimension $2$), we have $\dim_{\Bbb R}(M_2(\Bbb R)) = 4$ (which is two squared).

Having dimension $4$, it cannot be a subspace of $\Bbb R^2$, which only has dimension $2$ (subspaces always have dimension at most that of their "parent space").

A word on what eigenvectors ARE: an eigenvector is a vector on which a matrix acts as a scalar; it just stretches the vector by a certain value, called its eigenvalue. The simplest (non-trivial) example is a SHEAR matrix:

$A = \begin{bmatrix}1&a\\0&1\end{bmatrix}$ for $a \neq 0$, which takes $(x,y)$ to $(x + ay,y)$ turning squares into parallelograms.

This preserves "horizontal" vectors (non-zero multiples of $e_1$), which is to say these have an eigenvalue of $1$. These are the ONLY eigenvectors, since:

$\det(xI - A) = \begin{vmatrix}x-1&-a\\0&x-1\end{vmatrix} = (x - 1)^2 - (-a)(0) = (x-1)^2$, which has $1$ as its only root, and:

$(I - A)(x,y) = 0$ means:

$\begin{bmatrix}0&-a\\0&0\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$

so that $-ay = 0$ (and if $a \neq 0$ then $y = 0$), and $x$ can be any non-zero number.

These matrices are "defective" in a certain important way: they cannot be made diagonal via an eigenbasis (the space of eigenvectors + the zero vector is only the $x$-axis).
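
A quick numerical check of the shear example (Python/numpy; the shear amount $a = 3$ is arbitrary, any non-zero value behaves the same way):

```python
import numpy as np

a = 3.0
A = np.array([[1.0, a],
              [0.0, 1.0]])   # the shear matrix above

vals, vecs = np.linalg.eig(A)
print(vals)   # [1. 1.] -- 1 is the only eigenvalue (a double root)
print(vecs)   # both columns are (numerically) multiples of e1 = (1, 0):
              # the eigenspace is just the x-axis, so A is defective
```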
 
  • #9
Deveno said:
Uh...even if $t$ WAS of the form $X \mapsto AX$ for some invertible matrix $A$, finding $A^{-1}$ would not give you the eigenvalues.

What I meant was to find $A=WV^{-1}$, and then find the eigenvalues of A by solving $\det(\lambda I - A)=0$, then finding the eigenvectors by solving $ (A-\lambda_i I_n ) \vec{x} =0 $; this is the only way I have learned (so far) to find eigenvalues/vectors. Problems like your shear matrix example I can do without fuss. I do understand that this is not the method to use here - but could I use it to find a similar mapping if we had numeric coefficients and an invertible V?

Thanks for the info about $M_2$; please never hesitate to assume I don't know things.

It seems to me that if I can vectorise the left matrix, I can also vectorise the right.
I can vectorise the right as $ \begin{bmatrix}0&0&2&0\\0&1&-2&1 \\1&0&1&0 \\0&0&0&1 \end{bmatrix} \begin{bmatrix}a\\c\\b\\d\end{bmatrix}= \begin{bmatrix}2c\\b-2c\\a+c\\d\end{bmatrix}$, so is $ \begin{bmatrix}0&0&2&0\\0&1&-2&1 \\1&0&1&0 \\0&0&0&1 \end{bmatrix}$ the matrix I am looking for?

I ask because the characteristic poly I then get is $ (-\lambda)(1-\lambda)^3 -2(1-\lambda)^2 =0 $ (so 1 is an eigenvalue), which can then be simplified to $ \lambda ^2 -\lambda -2 =0 $, giving me two more eigenvalues, 2 and -1.

The solution has 1, -2, -1 so I know it's close but something is not right ...

(Also, please confirm the equivalent matrix for the left is the 4 x 4 identity matrix?)
 
  • #10
Your 4x4 matrix should be:

$\begin{bmatrix}0&2&0&0\\0&-2&1&0\\1&1&0&0\\0&0&0&1\end{bmatrix}$

You seem to be getting tripped up by the fact that your input vector is $(a,c,b,d)$ not $(a,b,c,d)$.

It *is* possible to find the determinant of a matrix $A$ by row-reduction:

If $A = PB = P_1P_2\cdots P_kB$, then

$\det(A) = \det(P_1)\det(P_2)\cdots\det(P_k)\det(B)$

The determinants of the elementary row-operation matrices $P_j$ are easy to calculate, so if one "keeps careful track" of these, and $\det(B)$ is easy to calculate (perhaps $B$ is nearly diagonal), one might proceed that way.
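
For what it's worth, this is roughly how numerical libraries do it: numpy's determinant routine works from an LU (row-reduction with pivoting) factorisation. A quick check on the matrix above, which should agree with the product of the eigenvalues found below, $1 \cdot 1 \cdot (-1) \cdot (-2) = 2$:

```python
import numpy as np

A = np.array([[0., 2., 0., 0.],
              [0., -2., 1., 0.],
              [1., 1., 0., 0.],
              [0., 0., 0., 1.]])

# the determinant is computed from an LU (row-reduction) factorisation under the hood
print(np.linalg.det(A))   # 2.0, up to floating-point rounding
```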

In this case, I would prefer "expansion by minors" about the entries of the last column (or row), which gives:

$\begin{vmatrix}x&-2&0&0\\0&x+2&-1&0\\-1&-1&x&0\\0&0&0&x-1\end{vmatrix} = \begin{vmatrix}x&-2&0\\0&x+2&-1\\-1&-1&x\end{vmatrix}\cdot(x-1)$.

The 3x3 determinant can be calculated using the Rule of Sarrus:

$\begin{vmatrix}x&-2&0\\0&x+2&-1\\-1&-1&x\end{vmatrix} = x^2(x+2) - 2 - x = (x^2 - 1)(x + 2) = (x - 1)(x + 1) (x + 2)$

So $\det(xI - A) = (x-1)^2(x+1)(x+2)$, giving eigenvalues of $1,-1$ and $-2$.

Now when you solve $(\lambda I - A)(x_1,x_2,x_3,x_4) = 0$, remember to put your solutions back in matrix form:

$\begin{bmatrix}x_1&x_3\\x_2&x_4\end{bmatrix}$.
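
As a sanity check, the characteristic polynomial and eigenvalues can also be confirmed symbolically; a short sketch with sympy (using the 4x4 matrix exactly as given above):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 2, 0, 0],
               [0, -2, 1, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 1]])

# characteristic polynomial det(xI - A), factored
print(sp.factor((x * sp.eye(4) - A).det()))   # (x - 1)**2*(x + 1)*(x + 2)

# eigenvalues with their algebraic multiplicities
print(A.eigenvals())                          # {1: 2, -1: 1, -2: 1} (order may vary)
```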
 
  • #11
Yes, even though I wrote [a, c, b, d] I must have, out of habit, mentally reverted to a, b...and thanks, I would probably have done that again putting back into matrix form. Thanks also for reminding me of Sarrus, I've not used it before (don't know why) - and it was useful.

I prefer to use $ (A-\lambda I) $ instead of $ (\lambda I - A) $; it will always work out the same mathematically (I did that for my eigenvalues and vectors) - just wondering if there is any subtle reason to do it one way or the other?

For $\lambda=1$, I got the eigenspace spanned by col[2, 1, 3, 0] - the solution shows 2 vectors, $ \begin{bmatrix}2&3\\1&0\end{bmatrix}$ - which I got, and $ \begin{bmatrix}0&0\\0&1\end{bmatrix}$ - which I didn't.

I see this is a variation of my 'zero vector question' where we have 1 independent variable (so it can assume any value?). As I see it, anytime one of the variables $x_i$ is independent, I must then have a corresponding 'independent' eigenvector with every entry zero except $x_i = 1$? Why please, and can you give me a real-world example of such a situation?

And if there are multiple independent variables, then that many corresponding 'independent' eigenvectors?
 
  • #12
Note that the characteristic polynomial $(x - 1)^2(x+1)(x+2)$ has $1$ as a double root, which makes it possible that the eigenspace might have dimension 2 (and at MOST 2).

Solving $(I - A)v = 0$ leads to:

$2c = a$
$b - 2c = c$
$a+c = b$
$d = d$.

Note that $d = d$ for ANY choice of $d$, so if we set $d = t$, and $c = s$, we get the solution:

$(2s,s,3s,t) = s(2,1,3,0) + t(0,0,0,1)$.

Since $\{(2,1,3,0),(0,0,0,1)\}$ is a LI set of $\Bbb R^4$, these two vectors are a basis for the eigenspace belonging to $1$.

It is easy to see that $\begin{bmatrix}0&0\\0&1\end{bmatrix}$ is indeed an eigenmatrix for $t$, since:

$t\left(\begin{bmatrix}0&0\\0&1\end{bmatrix}\right) = \begin{bmatrix}2\cdot 0&0+0\\0-2\cdot 0&1\end{bmatrix} = \begin{bmatrix}0&0\\0&1\end{bmatrix}$

I prefer to use $\det(xI - A)$ instead of $\det(A - xI)$ since it makes the polynomial monic. But:

$f(x) = 0$ and $-f(x) = 0$ have the same solutions, and $(A - \lambda I)v = 0$ has the same solutions $v$ as:

$(\lambda I - A)v = 0$, since if $Bv = 0$ for a matrix $B$, then $-Bv = -0 = 0$. It doesn't really make any difference.

I'm not sure what you're asking with your question about "independent variables" (I think you might perhaps mean what are typically called "free variables" or "parameters"). If that is so, your conjecture is incorrect, even though in the example above the solutions to:

$(I - A)v = 0$ have "two free variables" ($s$ and $t$):

$v = (2s,s,3s,t)$

only one of the two basis eigenvectors, $(0,0,0,1)$, has every entry zero except a single $1$; the other, $(2,1,3,0)$, does not.
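
A short sympy sketch of this computation (again using the 4x4 matrix from post #10 and the column-stacking order $(a,c,b,d)$); it reproduces the two eigenmatrices for $\lambda = 1$, up to scalar multiples:

```python
import sympy as sp

A = sp.Matrix([[0, 2, 0, 0],
               [0, -2, 1, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 1]])

# eigenspace for lambda = 1: the null space of (I - A)
for v in (sp.eye(4) - A).nullspace():
    # entries are stacked as (a, c, b, d), so rebuild the matrix [[a, b], [c, d]]
    sp.pprint(sp.Matrix([[v[0], v[2]],
                         [v[1], v[3]]]))
```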
 
  • #13
Thanks, that is a great explanation; also, your writing d=d somehow energised the light bulb.

I think my terminology may need correcting: d=d is what I think of as an independent variable, because it has no relationship with the other variables (like a or $x_1$) - so I should say 'free' variables instead...

My question: So let's say we solve some (5x5) $(I-A)v=0$ and get, for example, 2c=a, b=a+2c, d=d, e=e.
Then I would set d=r, e=s, c=t and get (2t, 4t, t, r, s) = t(2, 4, 1, 0, 0) + r(0, 0, 0, 1, 0) + s(0, 0, 0, 0, 1)

I called vectors that result from d=d, e=e etc. 'independent' eigenvectors - is there a formal term for them?
Extending that, if we get the identity matrix, then all the variables are free? (They would also be orthogonal)

So, in my example, 2 free variables = 2 'independent' eigenvectors, plus 1 or more 'dependent' eigenvectors. So it seems clear that we will always have n 'independent' eigenvectors for n free variables? (maybe this is also a bijection ? :-))

From that I was wondering if you knew of an example that might help my intuition of what these free variables and their corresponding eigenvectors represent? At the moment it is all a little too abstract for my liking. Thanks again.
 

FAQ: Eigenvalues & Eigenvectors of the Map $t: M_2 \to M_2$

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are properties of a square matrix: an eigenvector is a non-zero vector $v$ satisfying $Av = \lambda v$ for some scalar $\lambda$, and that scalar is the corresponding eigenvalue. In other words, eigenvectors are the special vectors whose direction is unchanged (they are only scaled) when the matrix is applied, and the eigenvalue is the scaling factor.

How do I find the eigenvalues and eigenvectors of a matrix?

To find the eigenvalues, solve the characteristic equation of the matrix, obtained by setting the determinant of the matrix minus a scalar multiple of the identity matrix equal to zero: $\det(A - \lambda I) = 0$. The resulting values are the eigenvalues of the matrix. To find the corresponding eigenvectors, solve the system of equations $(A-\lambda I)x=0$ for each eigenvalue, where $A$ is the original matrix, $\lambda$ is the eigenvalue, and $x$ is the eigenvector.
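
For a concrete (hypothetical) example in Python with numpy:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# the eigenvalues are the roots of det(A - lambda*I) = 0
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # the eigenvalues 2 and 3 (order may vary)
# each column v of `eigenvectors` satisfies A @ v = lambda * v, up to rounding
```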

Why are eigenvalues and eigenvectors important?

Eigenvalues and eigenvectors have many applications in mathematics, physics, and engineering. They are particularly useful in understanding the behavior of systems, such as in Markov chains and differential equations. They also play a crucial role in diagonalizing matrices and in computing matrix powers and inverses.

Can a matrix have complex eigenvalues and eigenvectors?

Yes, a matrix can have complex eigenvalues and eigenvectors. In fact, if a matrix has real entries but a non-real eigenvalue, then the corresponding eigenvectors must also be non-real. Complex eigenvalues and eigenvectors can still provide valuable insights into the behavior of a matrix and its associated system.

What is the relationship between eigenvalues and eigenvectors?

Each eigenvalue corresponds to an eigenspace of eigenvectors: for a simple eigenvalue the eigenvector is unique up to a scalar multiple, but a repeated eigenvalue may have several linearly independent eigenvectors (as the eigenvalue $1$ does in the problem above). The eigenvalue is the factor by which the matrix scales its eigenvectors. Additionally, the sum of the eigenvalues (counted with multiplicity) equals the trace of the matrix, and their product equals the determinant of the matrix.
