Poirot:
From Wikipedia I read that every linear map T:V->V, where V is finite-dimensional and dim(V) > 1, has an eigenvector. What is the proof?
This result is only true if V is a vector space over an algebraically closed field, such as the complex numbers. For example, the map $T:\mathbb{R}^2 \to \mathbb{R}^2$ with matrix $\begin{bmatrix}0&1 \\-1&0\end{bmatrix}$ represents rotation through a right angle, and it is fairly obvious that there are no nonzero vectors in $\mathbb{R}^2$ whose direction is left unchanged by this map. However, if you allow complex scalars then $(1,i)$ is an eigenvector, with eigenvalue $i$, because $T(1,i) = (i,-1) = i(1,i)$.
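To make the rotation example above concrete, here is a small sketch in Python (my own illustration, not from the thread) that checks the claimed identity $T(1,i) = i(1,i)$ directly with complex arithmetic:

```python
# Rotation through a right angle: the matrix [[0, 1], [-1, 0]]
# sends (x, y) to (y, -x).
def T(v):
    x, y = v
    return (y, -x)

# Over the reals no nonzero vector keeps its direction, but over C
# the vector (1, i) is an eigenvector with eigenvalue i:
v = (1, 1j)
lam = 1j
assert T(v) == (lam * v[0], lam * v[1])  # T(1, i) = (i, -1) = i * (1, i)
```

The check works because $i\cdot(1,i) = (i, i^2) = (i,-1)$, which is exactly what the rotation does to $(1,i)$.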
An eigenvector of a linear map is a nonzero vector that the map sends to a scalar multiple of itself. In other words, the map only rescales the eigenvector rather than changing its direction; the scale factor is the eigenvalue.
Eigenvectors are important because they give a basis in which a linear map is easy to understand: when the map is diagonalizable, a basis of eigenvectors reduces it to independent scalings along each basis direction. Proving that at least one eigenvector exists is the first step toward such decompositions.
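A quick sketch of the definition (my own example, not from the thread): a diagonal map rescales the axes, so the axis vectors are eigenvectors, while a generic vector gets its direction changed.

```python
# A map that stretches x by 3 and y by 2: matrix [[3, 0], [0, 2]].
def S(v):
    x, y = v
    return (3 * x, 2 * y)

# (1, 0) is an eigenvector with eigenvalue 3: direction kept, length scaled.
assert S((1, 0)) == (3, 0)          # = 3 * (1, 0)
# (1, 1) is NOT an eigenvector: (3, 2) is not a scalar multiple of (1, 1).
assert S((1, 1)) == (3, 2)
```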
The standard proof uses the characteristic polynomial $\det(T - \lambda I)$. By the fundamental theorem of algebra, this polynomial has a root $\lambda$ over the complex numbers; then $T - \lambda I$ is singular, so there is a nonzero vector $v$ with $Tv = \lambda v$, i.e. an eigenvector. (The Cayley-Hamilton theorem, which says that $T$ satisfies its own characteristic polynomial, is a related but different result.)
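The argument above can be traced in code for the $2\times 2$ case (a sketch of my own, not an answer from the thread): solve the characteristic polynomial over $\mathbb{C}$, then read off a vector in the kernel of $A - \lambda I$.

```python
import cmath

def eigen_2x2(a, b, c, d):
    """One complex eigenvalue/eigenvector pair of [[a, b], [c, d]]."""
    # Characteristic polynomial: lam^2 - (a+d)*lam + (a*d - b*c).
    # The fundamental theorem of algebra guarantees a complex root.
    tr, det = a + d, a * d - b * c
    lam = (tr + cmath.sqrt(tr * tr - 4 * det)) / 2
    # A nonzero vector in ker(A - lam*I) is an eigenvector.
    if abs(b) > 1e-12:
        v = (b, lam - a)
    elif abs(c) > 1e-12:
        v = (lam - d, c)
    else:  # diagonal matrix: an axis vector already works
        v = (1, 0) if abs(a - lam) < 1e-12 else (0, 1)
    return lam, v
```

For the rotation matrix from earlier, `eigen_2x2(0, 1, -1, 0)` returns the eigenvalue $i$ with eigenvector $(1, i)$, matching the hand computation.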
Note that the proof only works for finite-dimensional spaces. In an infinite-dimensional space a linear map need not have any eigenvector at all: for example, the right-shift operator $(x_1, x_2, \dots) \mapsto (0, x_1, x_2, \dots)$ on the space of complex sequences has no eigenvalues.
Eigenvalues and eigenvectors also have many applications in physics, engineering, and computer science: they are used to solve systems of linear differential equations, analyse the stability of dynamical systems, and perform dimensionality reduction (e.g. PCA) in machine learning.