Finding the Inverse of a Matrix Mapping on a Linear Subspace

In summary: G maps states in R^4 into the subspace W, so a full inverse G^-1 does not exist. The images Gv1 and Gv2 can be expressed as linear combinations of v1 and v2, but the original poster is initially unable to "extract" the restriction of G to W from this.
  • #1
Kreizhn

Homework Statement


Let's say I'm given two vectors
[tex] v_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}, v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix} \in \mathbb R^4[/tex].
Let W be the subspace spanned by these vectors, and define [itex] G = v_1 v_1^T + v_2 v_2^T [/itex], a matrix mapping [itex]\mathbb R^4 \to \mathbb R^4 [/itex]. Find [itex] G' [/itex] such that [itex] \left. G' = G^{-1} \right|_{W} [/itex].

Homework Equations


The Attempt at a Solution



Since [itex] v_1, v_2 [/itex] are linearly independent, the dimension of W is 2. Furthermore, since G is built from these vectors, we are guaranteed that an inverse exists on W. Let W' be the image of W under G. That is, since [itex] v_1, v_2 [/itex] generate W, [itex] G v_1, Gv_2 [/itex] should generate W'.

I've computed that
[tex] G = \begin{pmatrix}1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, Gv_1 = \begin{pmatrix} 2\\ 3\\ 1 \\0 \end{pmatrix}, Gv_2 = \begin{pmatrix} 1 \\ 3 \\ 2 \\ 0 \end{pmatrix} [/tex].

Now maybe it's because it's been so long since I did any linear algebra, but I can't for the life of me figure out how to "extract" the restriction of G to W. Given this information, it should then be simple to construct the inverse and hence make G'.
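As a sanity check, the matrices above can be reproduced in a few lines of NumPy (the variable names below are just for illustration):

[code]
import numpy as np

# The two spanning vectors of W
v1 = np.array([1.0, 1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0, 0.0])

# G = v1 v1^T + v2 v2^T
G = np.outer(v1, v1) + np.outer(v2, v2)

print(G)       # the 4x4 matrix shown above
print(G @ v1)  # [2. 3. 1. 0.]
print(G @ v2)  # [1. 3. 2. 0.]
[/code]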
 
  • #2
Just express Gv1 and Gv2 as linear combinations of v1 and v2. At some point you should figure out that you really are doing this the long way around: Gv1 = v1(v1^T v1) + v2(v2^T v1). I see dot products in there.
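Spelling out those dot products for the given vectors,
[tex] v_1^T v_1 = 2, \qquad v_2^T v_2 = 2, \qquad v_1^T v_2 = v_2^T v_1 = 1, [/tex]
so that
[tex] G v_1 = (v_1^T v_1)\, v_1 + (v_2^T v_1)\, v_2 = 2 v_1 + v_2, \qquad G v_2 = (v_1^T v_2)\, v_1 + (v_2^T v_2)\, v_2 = v_1 + 2 v_2. [/tex]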
 
  • #3
So what I think you're saying is that I have

[tex] v_1' = G v_1 = 2 v_1 + v_2 [/tex]
[tex] v_2' = G v_2 = v_1 + 2v_2 [/tex]

So I should associate [itex] v_1' = (2,1), v_2' = (1,2) [/itex]. In this case then, if G is determined by its action on the basis states [itex] v_1, v_2 [/itex] then G acting on W can be written as

[tex] G_W = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} [/tex]

Which in turn has inverse

[tex] G_W^{-1} = \frac13 \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} [/tex]
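This 2×2 representation and its inverse can also be checked numerically; here is a minimal, self-contained NumPy sketch (the least-squares solve simply recovers the coordinates of [itex]Gv_1, Gv_2[/itex] in the basis [itex]\{v_1, v_2\}[/itex]):

[code]
import numpy as np

v1 = np.array([1.0, 1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0, 0.0])
G = np.outer(v1, v1) + np.outer(v2, v2)

B = np.column_stack([v1, v2])                  # 4x2 basis matrix for W
GW = np.linalg.lstsq(B, G @ B, rcond=None)[0]  # matrix of G restricted to W, in the basis {v1, v2}

print(GW)                 # [[2. 1.] [1. 2.]]
print(np.linalg.inv(GW))  # (1/3) * [[2. -1.] [-1. 2.]]
[/code]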

But then how do I make this four-dimensional again?
 
  • #4
You can make it four-dimensional any way you want. You know how (G_W)^(-1) acts on v1 and v2. You'll have to pick v3 and v4 so that {v1,v2,v3,v4} is a basis for R^4. Then you can define (G_W)^(-1)(v3) and (G_W)^(-1)(v4) to be anything you want. You know you can't extend it to be G^(-1) on R^4, since G isn't invertible, right?
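Concretely, G fails to be invertible on all of [itex]\mathbb R^4[/itex] because its rank is only 2: for instance
[tex] G e_4 = 0 \qquad \text{and} \qquad G(e_1 - e_2 + e_3) = 0, [/tex]
so a two-dimensional subspace is sent to zero and only the restriction to W can be inverted.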
 
  • #5
Is there no loss of the information or structure defined by the original G if I choose v3 and v4 in that way?
 
  • #6
Okay, let me see if I understand this then:

[tex] G_W^{-1} v_1 = \frac13 \begin{pmatrix} 2 \\ -1 \end{pmatrix} = \frac13 \left( 2 v_1 - v_2 \right) [/tex]
[tex] G_W^{-1} v_2 = \frac13 \begin{pmatrix} -1 \\ 2 \end{pmatrix} = \frac13 \left( -v_1 + 2 v_2 \right) [/tex]

Thus we use our original definitions of
[tex] v_1 = \begin{pmatrix} 1 \\ 1\\ 0 \\ 0 \end{pmatrix}, v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix} [/tex]
to find that [itex] G' v_1 [/itex] and [itex] G' v_2 [/itex], written as columns in the standard coordinates of [itex] \mathbb R^4 [/itex], should be

[tex] \frac13 \begin{pmatrix} 2 & - 1 \\ 1 & 1 \\ -1 & 2 \\ 0 & 0 \end{pmatrix} [/tex]

But now you say that [itex] \{ v_1, v_2, v_3, v_4 \} [/itex] must form a basis for [itex] \mathbb R^4 [/itex], but that I can choose [itex] G_W^{-1} v_3 , G_W^{-1} v_4 [/itex] to be anything I want. This seems contradictory, since I could certainly choose these such that [itex] v_3, v_4 [/itex] did not help form a basis. So should I choose them such that they form a basis? Or choose them arbitrarily?
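For what it's worth, here is one concrete way to assemble such a 4×4 matrix as a NumPy sketch. The choice of [itex]e_3, e_4[/itex] to complete the basis, and of zero for their images, is only one of the many allowed extensions:

[code]
import numpy as np

v1 = np.array([1.0, 1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0, 0.0])
e4 = np.array([0.0, 0.0, 0.0, 1.0])
G  = np.outer(v1, v1) + np.outer(v2, v2)

# {v1, v2, e3, e4} is a basis of R^4 (the matrix A below has determinant 1).
A = np.column_stack([v1, v2, e3, e4])

# Prescribed images: G'v1 = (2 v1 - v2)/3, G'v2 = (-v1 + 2 v2)/3,
# plus one arbitrary choice for the complement: G'e3 = G'e4 = 0.
images = np.column_stack([(2*v1 - v2)/3, (-v1 + 2*v2)/3, np.zeros(4), np.zeros(4)])

Gp = images @ np.linalg.inv(A)  # from G'A = images, so G' = images A^{-1}

# On W the composition G'G acts as the identity:
print(np.allclose(Gp @ G @ v1, v1), np.allclose(Gp @ G @ v2, v2))  # True True
[/code]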
 
  • #7
I'm not really sure what you are trying to do. [itex]\left. G' = G^{-1} \right|_{W}[/itex] doesn't make much sense because [itex]G^{-1}[/itex] doesn't exist. G maps [itex]\mathbb R^4 \to W[/itex]. [itex]\left( \left. G \right|_{W} \right)^{-1}[/itex] exists, but it doesn't have any unique extension to [itex]\mathbb R^4[/itex].
 
  • #8
Sorry, I wrote that backwards. It should be

[tex] G_W^{-1} = \left. G' \right|_{W} [/tex]
 
  • #9
Kreizhn said:
Sorry, I wrote that backwards. It should be

[tex] G_W^{-1} = \left. G' \right|_{W} [/tex]

I still don't see how that would be unique if you want G' to be 4x4.
 
  • #10
Perhaps some perspective will help. I have a collection of density matrices [itex] \{ \rho_i \} [/itex], and my goal is to construct a positive operator-valued measure from these states on [itex] (\mathbb C^3)^{\otimes 3} \cong \mathbb C^{27} [/itex]. This is done by defining [itex] M = \sum_i p_i \rho_i [/itex], where the [itex] p_i [/itex] are the associated density-matrix probabilities. The positive operator-valued measure is then defined as

[tex] E_i = p_i M^{-\frac12} \rho_i M^{-\frac12} [/tex]

where [itex] \left( M^{-\frac12} \right)^2 M [/itex] is the projection operator onto the image of M.
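The "invert only on the image" idea from the earlier posts is exactly how such an [itex]M^{-\frac12}[/itex] is computed in practice: diagonalize M and invert the square roots of only the nonzero eigenvalues. Here is a rough NumPy sketch; the states [itex]\rho_i[/itex] and weights [itex]p_i[/itex] below are placeholders for illustration, not data from the actual problem:

[code]
import numpy as np

# Placeholder ensemble: two pure states on C^3 with equal weights.
psi1 = np.array([1.0, 0.0, 0.0])
psi2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
rhos = [np.outer(psi1, psi1.conj()), np.outer(psi2, psi2.conj())]
ps = [0.5, 0.5]

M = sum(p * rho for p, rho in zip(ps, rhos))

# Pseudo-inverse square root: invert sqrt(eigenvalue) only on the image of M.
w, V = np.linalg.eigh(M)
inv_sqrt = np.where(w > 1e-12, 1.0 / np.sqrt(np.clip(w, 1e-12, None)), 0.0)
M_inv_sqrt = V @ np.diag(inv_sqrt) @ V.conj().T

# POVM elements E_i = p_i M^{-1/2} rho_i M^{-1/2}
E = [p * M_inv_sqrt @ rho @ M_inv_sqrt for p, rho in zip(ps, rhos)]

# The E_i sum to the projection onto the image of M.
P = V[:, w > 1e-12] @ V[:, w > 1e-12].conj().T
print(np.allclose(sum(E), P))  # True
[/code]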
 
  • #11
Sorry, I really don't know that formalism.
 

FAQ: Finding the Inverse of a Matrix Mapping on a Linear Subspace

What is a matrix on a linear subspace?

A matrix on a linear subspace is a concrete representation of a linear transformation defined on that subspace. Once bases are chosen for the subspace and the codomain, the transformation becomes a rectangular array of numbers that can be used to compute with vectors in the subspace.

How are matrices and linear subspaces related?

Matrices and linear subspaces are closely related because matrices can be used to represent linear transformations between vector spaces. The j-th column of the matrix holds the coordinates, with respect to the chosen basis of the codomain, of the image of the j-th basis vector of the subspace.
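For example, in the thread above the restriction of G to W, written in the basis [itex]\{v_1, v_2\}[/itex], has columns given by the coordinates of [itex]Gv_1[/itex] and [itex]Gv_2[/itex]:
[tex] G v_1 = 2 v_1 + v_2 \;\Rightarrow\; \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \qquad G v_2 = v_1 + 2 v_2 \;\Rightarrow\; \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad G_W = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}. [/tex]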

What is the dimension of a matrix on a linear subspace?

The dimensions of such a matrix are determined by the dimensions of the spaces involved. For example, if the subspace has dimension n and the codomain has dimension m, then the matrix has m rows and n columns.

What is the importance of matrices on linear subspaces?

Matrices on linear subspaces are important because they allow for the efficient representation and calculation of linear transformations. They are also useful in solving systems of linear equations and studying the properties of vector spaces and linear transformations.

How are matrices on linear subspaces used in practical applications?

Matrices on linear subspaces are used in a variety of practical applications, such as image and signal processing, data compression, and machine learning. They are also essential in fields such as physics, engineering, and economics for modeling and analyzing real-world systems.
