Transforming a Linear Transformation Matrix to an Orthonormal Basis

In summary, the conversation involves finding an orthonormal basis of eigenvectors for a linear transformation and the corresponding matrix. The correct eigenvectors for the matrix are (1, i) and (1, -i), which are then divided by their magnitudes to make the basis orthonormal. The transformation matrix from this new basis to the standard basis is found and used to calculate the matrix of the linear transformation with respect to the orthonormal basis. It is noted that this matrix is always diagonal, with the eigenvalues on its diagonal, when the change-of-basis matrix consists of eigenvectors.
  • #1
Sudharaka
Hi everyone, :)

Here's a question with my answer. It's pretty simple but I just want to check whether everything is perfect. Thanks in advance. :)

Question:

Let \(f:\,\mathbb{C}^2\rightarrow\mathbb{C}^2\) be a linear transformation, \(B=\{(1,0),\, (0,1)\}\) the standard basis of \(\mathbb{C}^2\) and \(A_{f,\,B}=\begin{pmatrix}3&-i\\i&3\end{pmatrix}\). Find an orthonormal basis \(C\) of eigenvectors for \(f\) and \(A_{f,\,C}\).

Answer:

The eigenvectors of \(A_{f,\,B}\) in terms of the standard basis are \(v_1=(1,\, 1)\mbox{ and }v_2=(1,\,-1)\). To make this basis \(\{v_1,\,v_2\}\) orthonormal we shall divide each of the eigenvectors by its magnitude. Hence,

\[C=\left\{\left( \frac{1}{\sqrt{2}}, \, \frac{1}{\sqrt{2}} \right), \, \left(\frac{1}{\sqrt{2}} ,\, -\frac{1}{\sqrt{2}} \right) \right\}\]

Now the transformation matrix from basis \(C\) to \(B\) would be \(\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{pmatrix}\). It is easily seen that this matrix is its own inverse. Hence the transformation matrix from basis \(B\) to \(C\) would be the same as above. Therefore,

\[A_{f,\,C}=\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{pmatrix}\begin{pmatrix}3&-i\\i&3\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{pmatrix}=\begin{pmatrix}3&i\\-i&3\end{pmatrix}\]

Even if this is correct, I have the feeling that there should be an easier method. After all the answer is just multiplying the anti-diagonal entries of \(A_{f,\, B}\) with \(-1\). :)
 
  • #2
Re: Finding an Orthonormal Basis

Hmm, it seems to me that $A_{f,B} v_1 \ne \lambda v_1$...
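
This is easy to check numerically; a minimal NumPy sketch (not from the original posts): applying \(A_{f,\,B}\) to \((1,\,1)\) does not give a scalar multiple of it, while \((1,\,i)\) is scaled by 4.

```python
import numpy as np

A = np.array([[3, -1j], [1j, 3]])

v1 = np.array([1, 1])
Av1 = A @ v1            # (3 - i, 3 + i): not a scalar multiple of (1, 1)

w = np.array([1, 1j])
Aw = A @ w              # (4, 4i) = 4 * (1, i), so (1, i) IS an eigenvector
assert np.allclose(Aw, 4 * w)
```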
 
  • #3
Re: Finding an Orthonormal Basis

I like Serena said:
Hmm, it seems to me that $A_{f,B} v_1 \ne \lambda v_1$...

Sorry, as you see the eigenvectors aren't correct. Here's the correct answer,

The eigenvectors of \(A_{f,\,B}\) in terms of the standard basis are \(v_1=(1,\, i)\mbox{ and }v_2=(1,\,-i)\). It's clear that \(v_1\) and \(v_2\) are orthogonal under the complex dot product. To make this basis \(\{v_1,\,v_2\}\) orthonormal we shall divide each of the eigenvectors by its magnitude. Hence,

\[C=\left\{\left( \frac{1}{\sqrt{2}}, \, \frac{i}{\sqrt{2}} \right), \, \left(\frac{1}{\sqrt{2}} ,\, -\frac{i}{\sqrt{2}} \right) \right\}\]

Now the transformation matrix from basis \(C\) to \(B\) would be, \(\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{i}{\sqrt{2}}&-\frac{i}{\sqrt{2}}\end{pmatrix}\). Therefore,

\[A_{f,\,C}=\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{i}{\sqrt{2}}&-\frac{i}{\sqrt{2}}\end{pmatrix}\begin{pmatrix}3&-i\\i&3\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{i}{\sqrt{2}}&-\frac{i}{\sqrt{2}}\end{pmatrix}^{-1}=\begin{pmatrix}3&1\\1&3\end{pmatrix}\]
 
  • #4
Hmm, the matrix of A with respect to an orthonormal basis of eigenvectors should be diagonal...
 
  • #5
I like Serena said:
Hmm, the matrix of A with respect to an orthonormal basis of eigenvectors should be diagonal...

Thanks much for the reply, but I can't find the mistake. Is my whole approach wrong? :)
 
  • #6
Sudharaka said:
Thanks much for the reply, but I can't find the mistake. Is my whole approach wrong? :)

Neh, your approach is completely right. ;)
Just a small mistake with big consequences.

Suppose S is the transformation matrix from C to B.
Then
$$A_{f,B} = S\ A_{f,C}\ S^{-1}$$

It follows that:
$$A_{f,C} = S^{-1}\ A_{f,B}\ S$$

But I'm afraid that is not what you have...
 
  • #7
I like Serena said:
Neh, your approach is completely right. ;)
Just a small mistake with big consequences.

Suppose S is the transformation matrix from C to B.
Then
$$A_{f,B} = S\ A_{f,C}\ S^{-1}$$

It follows that:
$$A_{f,C} = S^{-1}\ A_{f,B}\ S$$

But I'm afraid that is not what you have...

Ah... another careless mistake trying to do everything in one step... :p

\[A_{f,\,C}=T^{-1}_{C,\,B}A_{f,\,B}T_{C,\,B}\]

\[A_{f,\,C}=\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{i}{\sqrt{2}}&-\frac{i}{\sqrt{2}}\end{pmatrix}^{-1}\begin{pmatrix}3&-i\\i&3\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{i}{\sqrt{2}}&-\frac{i}{\sqrt{2}}\end{pmatrix}=\begin{pmatrix}4&0\\0&2\end{pmatrix}\]

So I pretty much get it now. Is it always the case that the transformation matrix with respect to an orthonormal basis is a diagonal matrix with the eigenvalues on its diagonal? :)
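
As a sanity check, the similarity transform can be computed numerically; a NumPy sketch, assuming the change-of-basis matrix \(T\) from the posts above:

```python
import numpy as np

A = np.array([[3, -1j], [1j, 3]])

# Columns of T are the normalized eigenvectors (1, i)/sqrt(2) and (1, -i)/sqrt(2).
T = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)

# A_{f,C} = T^{-1} A_{f,B} T
A_C = np.linalg.inv(T) @ A @ T
print(np.round(A_C.real, 10))   # the eigenvalues 4 and 2 on the diagonal
```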
 
  • #8
Good!

Sudharaka said:
Is it always the case that the transformation matrix with respect to an orthonormal basis is a diagonal matrix with the eigenvalues on its diagonal?

Yep.
That is assuming that you have a transformation matrix consisting of eigenvectors.

Edit: you may want to polish that statement a little bit though, since it's not the transformation matrix that has the eigenvalues on its diagonal.
 
  • #9
I like Serena said:
Good!
Yep.
That is assuming that you have a transformation matrix consisting of eigenvectors.

Edit: you may want to polish that statement a little bit though, since it's not the transformation matrix that has the eigenvalues on its diagonal.

It's the matrix of the linear transformation with respect to the orthonormal basis, isn't it? And isn't that called the transformation matrix? Although I believe the term "transformation matrix" could be a little ambiguous, since it could also mean the basis transformation matrix \(T\). Am I correct?
 
  • #10
Sudharaka said:
It's the matrix of the linear transformation with respect to the orthonormal basis, isn't it? And isn't that called the transformation matrix? Although I believe the term "transformation matrix" could be a little ambiguous, since it could also mean the basis transformation matrix \(T\). Am I correct?

Correct.
Note that the matrix consisting of the eigenvectors does not have to be orthonormal.
Any set of independent eigenvectors will do.

If the matrix contains orthonormal eigenvectors that merely means that its inverse is the same as its conjugate transpose. Such a matrix is called unitary.

There is also another theorem called the spectral theorem.
If we have a matrix that is equal to its conjugate transpose (called hermitian), then it has an orthonormal basis of eigenvectors and all its eigenvalues are real.
This is applicable to your current problem.
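
Both facts are cheap to verify numerically; a NumPy sketch, assuming the matrices from this thread:

```python
import numpy as np

A = np.array([[3, -1j], [1j, 3]])
T = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)

# A is Hermitian: it equals its conjugate transpose.
assert np.allclose(A, A.conj().T)

# T, whose columns are orthonormal eigenvectors, is unitary:
# its inverse equals its conjugate transpose.
assert np.allclose(np.linalg.inv(T), T.conj().T)

# Spectral theorem: a Hermitian matrix has real eigenvalues.
eigvals = np.linalg.eigvalsh(A)   # returns real eigenvalues, in ascending order
print(eigvals)                    # [2. 4.]
```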
 
  • #11
I like Serena said:
Correct.
Note that the matrix consisting of the eigenvectors does not have to be orthonormal.
Any set of independent eigenvectors will do.

If the matrix contains orthonormal eigenvectors that merely means that its inverse is the same as its conjugate transpose. Such a matrix is called unitary.

There is also another theorem called the spectral theorem.
If we have a matrix that is equal to its conjugate transpose (called hermitian), then it has an orthonormal basis of eigenvectors and all its eigenvalues are real.
This is applicable to your current problem.

Thanks so much for all your help. I truly appreciate every bit and piece of it. :)

Yes, indeed. So the matrix of the linear transformation, \(A_{f,\,B}\), is Hermitian and hence has an orthonormal basis of eigenvectors. I will look into the Spectral Theorem later. It's not covered in our class, so it's not something that we need to know immediately. :)
 
  • #12
Suppose $A$ is an $n \times n$ matrix with an eigenbasis $v_1,\dots,v_n$. Let $D$ be the diagonal matrix $D = \text{diag}(\lambda_1,\dots,\lambda_n)$ where $\lambda_i$ is the eigenvalue corresponding to $v_i$.

If $P$ is the matrix whose columns are $v_1,\dots,v_n$, it is trivial to see that:

$AP = PD$, which tells us that $P^{-1}AP = D$ (that is, $P$ diagonalizes $A$).

In this particular case, we have a $2\times 2$ matrix with 2 distinct eigenvalues, so we're going to get an eigenbasis.

You should become comfortable with the fact that an expression like $B^{-1}AB$ (a SIMILARITY transform) essentially represents changing a matrix $A$ (which represents some linear transformation relative to some basis) to its matrix in another basis (and $B$ is called a "change-of-basis matrix"). The goal of employing such a technique is usually to get an easier form of $A$ to calculate with. In real-life situations, this often means we can recover matrix information by just measuring scalars (because linear transformations just scale eigenvectors).

If (as unfortunately happens) we don't have an eigenbasis, we can still (via the Jordan form) use a similarity transform to bring $A$ to a form $D + N$, where $D$ is diagonal, and $N$ is nilpotent. In a sense, $N$ measures "how far from diagonal" our "standardized" form for $A$ is, and $P$ is then composed of eigenvectors and generalized eigenvectors.
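
The eigenbasis construction above can be sketched with NumPy's `eig`, which returns the eigenvalues together with a matrix $P$ whose columns are the corresponding eigenvectors:

```python
import numpy as np

# The matrix from the thread; any diagonalizable matrix works the same way.
A = np.array([[3, -1j], [1j, 3]])

# np.linalg.eig returns the eigenvalues and a matrix P whose COLUMNS
# are the corresponding (normalized) eigenvectors.
lam, P = np.linalg.eig(A)
D = np.diag(lam)

# A P = P D, hence P^{-1} A P = D: P diagonalizes A.
assert np.allclose(A @ P, P @ D)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```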
 

FAQ: Transforming a Linear Transformation Matrix to an Orthonormal Basis

What is an orthonormal basis?

An orthonormal basis is a set of vectors in a vector space that are orthogonal (perpendicular) to each other and have a length of 1. This means that they are not only linearly independent, but also have a magnitude of 1, making them useful for many mathematical and scientific applications.

Why is finding an orthonormal basis important?

Finding an orthonormal basis is important because it allows us to simplify and solve complex mathematical problems, particularly those involving vector spaces. It also helps us to understand and visualize geometric concepts in a more intuitive way.

How do you find an orthonormal basis?

To find an orthonormal basis, we first need to find a set of linearly independent vectors in a vector space. Then, we use a process called Gram-Schmidt orthogonalization to transform these vectors into an orthogonal set. Finally, we normalize the vectors by dividing each one by its magnitude to create an orthonormal basis.
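
The procedure described above can be sketched in a few lines; a minimal (modified) Gram-Schmidt implementation, assuming the complex inner product provided by `np.vdot`:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors
    (modified Gram-Schmidt with the complex inner product)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=complex)
        for u in basis:
            w = w - np.vdot(u, w) * u   # remove the component along u
        basis.append(w / np.linalg.norm(w))
    return basis

# Example: turn {(1, 1), (1, 0)} into an orthonormal basis of C^2.
ortho = gram_schmidt([[1, 1], [1, 0]])
```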

What are the applications of orthonormal bases?

Orthonormal bases have many applications in mathematics, physics, and engineering. They are commonly used in linear algebra, signal processing, data compression, and quantum mechanics, among others. They are also essential for solving systems of equations and performing transformations in coordinate systems.

Can an orthonormal basis be used in any vector space?

Yes, an orthonormal basis can be used in any vector space, as long as the space has a defined inner product. This means that the space must have a way to measure the angle between two vectors and the length of a vector, both of which are necessary for the concepts of orthogonality and normalization.
