Are Eigenvalues and Eigenvectors Correctly Understood in Matrix Operations?

In summary, the conversation discusses eigenvalues and norms of matrices. The concept of eigenvalues is introduced via Ax = \lambda x, where A is a matrix, x is an eigenvector, and \lambda is a scalar. It is clarified that a matrix must be square to have eigenvalues, that a singular matrix simply has 0 as an eigenvalue, and that the eigenvectors do not necessarily form a basis unless the matrix is diagonalizable. The conversation then considers the magnitude of vectors expressed in an eigenvector basis and the "gain in magnitude" under multiplication by A. Since the 2-norm can be difficult to compute, the \infty-norm is examined as an alternative.
  • #1
saltine

Homework Statement



I am studying about eigenvalues and norms. I was wondering whether the way I understand them is correct.

Homework Equations




The Attempt at a Solution



The eigenvalues of a matrix are the scalars that satisfy [tex]Ax = \lambda x[/tex], where A is a matrix, x is an eigenvector, and [tex]\lambda[/tex] is a scalar. The significance here is that, from the perspective of the eigenvector, multiplication by the matrix A is the same as scalar multiplication by lambda. When A is square, non-singular, and of rank n, A has n eigenvalues and n eigenvectors. The n eigenvectors form a basis.
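The defining relation [tex]Ax = \lambda x[/tex] is easy to check numerically. A minimal NumPy sketch, using a made-up symmetric matrix (chosen symmetric so that its eigenvectors really do form a basis):

```python
import numpy as np

# Hypothetical symmetric 2x2 matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, V = np.linalg.eig(A)  # eigenvalues in w, eigenvectors as columns of V
for lam, x in zip(w, V.T):
    # For each pair, multiplying by A is the same as scaling by lambda.
    assert np.allclose(A @ x, lam * x)
print(sorted(w))  # eigenvalues 1.0 and 3.0
```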

Suppose [tex]v_i[/tex] is an eigenvector with associated eigenvalue [tex]\lambda_i[/tex]. Suppose an arbitrary vector y can be represented as:

[tex]y = a v_1 + bv_2[/tex]

Then

[tex]Ay = a\lambda_1v_1 + b\lambda_2v_2[/tex]

The magnitude (2-norm) of vector y in terms of the eigenvector basis is [tex]\sqrt{a^2+b^2}[/tex]. The magnitude of Ay is [tex]\sqrt{(a\lambda_1)^2 + (b\lambda_2)^2}[/tex]. The gain in magnitude is the ratio between the two, which could be a mess to compute. If we check the gain in magnitude over all possible y, we get the 2-norm of matrix A.
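This sup-over-all-y picture can be probed numerically. A sketch using a hypothetical symmetric matrix (for symmetric matrices the 2-norm equals the largest |eigenvalue|, which is not true in general):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # hypothetical symmetric example

rng = np.random.default_rng(1)
best = 0.0
for _ in range(20000):
    y = rng.standard_normal(2)
    best = max(best, np.linalg.norm(A @ y) / np.linalg.norm(y))

# The supremum of ||Ay|| / ||y|| is the induced 2-norm (largest singular value).
print(best, np.linalg.norm(A, 2))
```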

Since the 2-norm could be messy to compute, say we look at the [tex]\infty[/tex]-norm instead. With the infinity norm, the magnitude of y is the maximum of a and b, and the magnitude of Ay is the maximum of [tex]a\lambda_1[/tex] and [tex]b\lambda_2[/tex]. The gain in magnitude is again the ratio. For each y, the gain would be one of four possibilities: [tex]\lambda_1[/tex], [tex]\lambda_2[/tex], [tex]\lambda_1a/b[/tex], [tex]\lambda_2b/a[/tex]. In the last two cases, the fraction a/b or b/a must be less than 1, because if a > b in the first case, then the norm of y would be a, so its gain would have been [tex]\lambda_1[/tex] instead. Therefore, when all possible vectors y are considered, the gain of matrix A must be the maximum of [tex]\lambda_1[/tex] and [tex]\lambda_2[/tex].
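In the eigenbasis coordinates, where A acts as a diagonal matrix, this argument can be tested numerically. A sketch with made-up eigenvalues (note that absolute values are needed once eigenvalues can be negative):

```python
import numpy as np

lam = np.array([3.0, -2.0])  # hypothetical eigenvalues
A = np.diag(lam)             # A as it acts in the eigenbasis

rng = np.random.default_rng(0)
gains = []
for _ in range(10000):
    y = rng.standard_normal(2)
    gains.append(np.linalg.norm(A @ y, np.inf) / np.linalg.norm(y, np.inf))

# The induced inf-norm of a diagonal matrix is max |lambda_i|.
print(max(gains), np.linalg.norm(A, np.inf))
```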


Topic 2: the eigenvalues of a sum of matrices:

For a matrix A, there is a Jordan normal form, an upper triangular matrix with the eigenvalues of A on its diagonal. A and its Jordan form J are related by an invertible matrix P in this fashion: [tex]AP = PJ[/tex]. Since det(AP) = det(PJ) and det(AP) = det(A)det(P), it follows that det(A) = det(J) = [tex]\prod \lambda_i[/tex].
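The identity det(A) = [tex]\prod \lambda_i[/tex] is easy to confirm numerically. A sketch with a made-up matrix (the eigenvalues may come out complex, but their product is real):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])  # hypothetical example

w = np.linalg.eigvals(A)
# The product of the eigenvalues equals the determinant.
print(np.prod(w).real, np.linalg.det(A))
```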

Suppose we have matrices A and B. A has Jordan form J such that AP = PJ. B has Jordan form K such that BQ = QK. Then the determinant of A+B is:

[tex]\det(A + B) = \det( PJP^{-1} + QKQ^{-1} )[/tex]

If P happens to equal Q, then:

[tex]\det(A + B) = \det( P( J+K )P^{-1} ) = \prod (\lambda_j + \lambda_k)[/tex]
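When A and B really are simultaneously similar, this product formula can be verified numerically. A sketch with hypothetical matrices built so that P = Q by construction (diagonal "Jordan forms" for simplicity):

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # shared similarity transform (hypothetical)
J = np.diag([2.0, 3.0])      # eigenvalues of A
K = np.diag([5.0, -1.0])     # eigenvalues of B
Pinv = np.linalg.inv(P)
A = P @ J @ Pinv
B = P @ K @ Pinv

# det(A + B) = det(P (J + K) P^-1) = product of (lambda_j + lambda_k)
print(np.linalg.det(A + B), np.prod(np.diag(J + K)))
```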

One situation where P can equal Q is when A is the identity matrix. So the matrix M := I+B would have eigenvalues [tex]\lambda_m = 1+\lambda_b[/tex].
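The I + B case can be checked directly. A sketch with a hypothetical B (here the shift also follows directly from Bx = \lambda x implying (I+B)x = (1+\lambda)x, not only from the determinant argument):

```python
import numpy as np

B = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # hypothetical matrix with eigenvalues -2 and -1

wB = np.sort(np.linalg.eigvals(B))
wM = np.sort(np.linalg.eigvals(np.eye(2) + B))
print(wB)  # [-2. -1.]
print(wM)  # [-1.  0.]  -- each eigenvalue shifted by 1
```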

Is this explanation correct?

- Thanks
 
  • #2


saltine said:

Homework Statement



I am studying about eigenvalues and norms. I was wondering whether the way I understand them is correct.

Homework Equations




The Attempt at a Solution



The eigenvalues of a matrix are the scalars that satisfy [tex]Ax = \lambda x[/tex], where A is a matrix, x is an eigenvector, and [tex]\lambda[/tex] is a scalar. The significance here is that, from the perspective of the eigenvector, multiplication by the matrix A is the same as scalar multiplication by lambda. When A is square, non-singular, and of rank n, A has n eigenvalues and n eigenvectors. The n eigenvectors form a basis.
No. A matrix must of course be square in order for [tex]Ax= \lambda x[/tex] to make sense (more generally, for Av to be equal to a multiple of v, A must be a linear transformation from a given vector space to itself), but it is not necessary that the matrix be non-singular. If a matrix is singular, that just means it has at least one eigenvalue equal to 0. Finally, a square, non-singular matrix (and if it is non-singular it is necessarily of rank n) does not in general have n independent eigenvectors, so they do not necessarily form a basis. That is true if and only if the matrix is "diagonalizable", i.e. has n linearly independent eigenvectors; for matrices over the real numbers, being symmetric is sufficient (by the spectral theorem), though not necessary.
For example, the matrix
[tex]\left[\begin{array}{cc}1 & 1 \\ 0 & 1\end{array}\right][/tex]
which is not symmetric, has the single eigenvalue 1 and all eigenvectors are multiples of (1, 0).
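This defective example is easy to probe numerically. A sketch (np.linalg.eig still returns two eigenvector columns, but they are numerically parallel):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # the example above: not symmetric, not diagonalizable

w, V = np.linalg.eig(A)
print(w)                         # [1. 1.] -- a repeated eigenvalue
print(np.linalg.matrix_rank(V))  # 1: the eigenvectors do not span R^2
```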

Suppose [tex]v_i[/tex] is an eigenvector with associated eigenvalue [tex]\lambda_i[/tex]. Suppose an arbitrary vector y can be represented as:

[tex]y = a v_1 + bv_2[/tex]

Then

[tex]Ay = a\lambda_1v_1 + b\lambda_2v_2[/tex]

The magnitude (2-norm) of vector y in terms of the eigenvector basis is [tex]\sqrt{a^2+b^2}[/tex]. The magnitude of Ay is [tex]\sqrt{(a\lambda_1)^2 + (b\lambda_2)^2}[/tex]. The gain in magnitude is the ratio between the two, which could be a mess to compute. If we check the gain in magnitude over all possible y, we get the 2-norm of matrix A.
By "gain in magnitude" I take it you mean
[tex]\frac{||Ay||}{||y||}[/tex]
and you "get the 2 norm of matrix A" as the supremum of that. It is sufficient to look at vectors with norm 1.
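Restricting to unit vectors gives a simple numeric sketch. The matrix below is hypothetical and deliberately non-symmetric: both eigenvalues are 1, yet its 2-norm is 1 + √2, underlining that eigenvalues alone do not determine the norm:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])  # eigenvalues both 1, but ||A||_2 = 1 + sqrt(2)

theta = np.linspace(0.0, np.pi, 100001)
U = np.vstack([np.cos(theta), np.sin(theta)])  # unit vectors on the circle
gains = np.linalg.norm(A @ U, axis=0)          # ||Ay|| with ||y|| = 1
print(gains.max(), np.linalg.norm(A, 2))
```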

Since the 2-norm could be messy to compute, say we look at the [tex]\infty[/tex]-norm instead. With the infinity norm, the magnitude of y is the maximum of a and b, and the magnitude of Ay is the maximum of [tex]a\lambda_1[/tex] and [tex]b\lambda_2[/tex]. The gain in magnitude is again the ratio. For each y, the gain would be one of four possibilities: [tex]\lambda_1[/tex], [tex]\lambda_2[/tex], [tex]\lambda_1a/b[/tex], [tex]\lambda_2b/a[/tex]. In the last two cases, the fraction a/b or b/a must be less than 1, because if a > b in the first case, then the norm of y would be a, so its gain would have been [tex]\lambda_1[/tex] instead. Therefore, when all possible vectors y are considered, the gain of matrix A must be the maximum of [tex]\lambda_1[/tex] and [tex]\lambda_2[/tex].


Topic 2: the eigenvalues of sum of matrices:

For a matrix A, there is a Jordan normal form, an upper triangular matrix with the eigenvalues of A on its diagonal. A and its Jordan form J are related by an invertible matrix P in this fashion: [tex]AP = PJ[/tex]. Since det(AP) = det(PJ) and det(AP) = det(A)det(P), it follows that det(A) = det(J) = [tex]\prod \lambda_i[/tex].
It's a bit more than that. A Jordan form has non-zero entries only on the main diagonal and the diagonal just above it. Each "Jordan block" has a single number (an eigenvalue of the matrix) repeated on its main diagonal and 1s on the diagonal just above; the superdiagonal entries between different blocks are 0.

By the way, if it were true, as you said above, that the eigenvectors of every matrix formed a basis, we wouldn't need the Jordan Form at all! Taking P as the matrix with eigenvectors of A as columns, we would have AP= PD where D is the diagonal matrix with the eigenvalues of A on the main diagonal.
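For a diagonalizable matrix, the AP = PD relation can be confirmed directly. A sketch with a made-up symmetric (hence diagonalizable) matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])  # hypothetical symmetric matrix

w, P = np.linalg.eig(A)     # columns of P are eigenvectors of A
D = np.diag(w)              # D holds the eigenvalues on the diagonal
print(np.allclose(A @ P, P @ D))  # True: AP = PD
```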

Suppose we have matrices A and B. A has Jordan form J such that AP = PJ. B has Jordan form K such that BQ = QK. Then the determinant of A+B is:

[tex]\det(A + B) = \det( PJP^{-1} + QKQ^{-1} )[/tex]

If P happens to equal Q, then:

[tex]\det(A + B) = \det( P( J+K )P^{-1} ) = \prod (\lambda_j + \lambda_k)[/tex]

One situation where P can equal Q is when A is the identity matrix. So the matrix M := I+B would have eigenvalues [tex]\lambda_m = 1+\lambda_b[/tex].

Is this explanation correct?

- Thanks
From what you say, it would only follow that the product of the eigenvalues of M is the product of (1 + eigenvalues of B), not that the individual eigenvalues are 1 plus those of B.
 

FAQ: Are Eigenvalues and Eigenvectors Correctly Understood in Matrix Operations?

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are two important concepts in linear algebra. An eigenvalue of a matrix A is a scalar λ for which there is a nonzero vector x with Ax = λx; that vector x is a corresponding eigenvector. In other words, eigenvectors are the directions that are left unchanged (only scaled by the eigenvalue) when the matrix is applied.

How do eigenvalues and eigenvectors help with matrix operations?

Eigenvalues and eigenvectors are useful for many reasons. They can simplify matrix operations, as they allow us to transform a diagonalizable matrix into a diagonal one. This makes it easier to perform calculations and determine important properties of the matrix, such as its determinant and inverse. Additionally, eigenvalues and eigenvectors can be used to find solutions to systems of linear equations.
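For example, powers of a diagonalizable matrix reduce to powers of its scalar eigenvalues. A sketch with a hypothetical matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # hypothetical diagonalizable matrix

w, P = np.linalg.eig(A)
# A^10 = P diag(w^10) P^-1: only the scalar eigenvalues get raised to a power.
A10 = P @ np.diag(w ** 10) @ np.linalg.inv(P)
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True
```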

How are eigenvalues and eigenvectors calculated?

Calculating eigenvalues and eigenvectors involves finding the solutions to a characteristic polynomial, which is a polynomial equation that is derived from the given matrix. This can be done using various methods, such as the QR algorithm or the power method. There are also many mathematical software programs that can calculate eigenvalues and eigenvectors quickly and accurately.
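A minimal sketch of the power method mentioned above (the function name and parameters are our own; it approximates the dominant eigenvalue of a hypothetical symmetric matrix):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Approximate the dominant eigenpair by repeated multiplication and renormalization."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x  # Rayleigh quotient estimates the eigenvalue

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, x = power_method(A)
print(lam)  # approaches (5 + sqrt(5)) / 2, the largest eigenvalue
```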

What are the applications of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors have many applications in mathematics, physics, and engineering. They are used in computer graphics and image processing, as well as in machine learning and data analysis. In physics, eigenvalues and eigenvectors are used to study quantum mechanics and analyze the behavior of quantum systems. In engineering, they are used to model and analyze complex systems, such as electrical circuits and mechanical structures.

Are there any limitations to using eigenvalues and eigenvectors?

While eigenvalues and eigenvectors are powerful tools in linear algebra, there are some limitations to their use. For example, not every matrix has a full basis of eigenvectors (defective matrices do not), and a real matrix may have complex eigenvalues and eigenvectors. Additionally, eigenvalues and eigenvectors may not always provide the most accurate or meaningful solutions to certain problems, and alternative methods may be needed.
