Linear transformation given a nullspace and a solution space.

In summary: there is no general rule, but it is usually easier to work with matrices that are invertible from the start.
  • #1
Gramsci

Homework Statement

Find, if possible, a linear transformation R^4 --> R^3 whose nullspace is spanned by (1,2,3,4) and (0,1,2,3) and whose range is the set of solutions to x_1 + x_2 + x_3 = 0.

Homework Equations


-


The Attempt at a Solution


So I thought I should start by trying to work out what kind of matrix we have from the information about the nullspace. I interpret the nullspace as the vectors t(1,2,3,4) + s(0,1,2,3), i.e.
x_1 = t, x_2 = 2t+s, x_3 = 3t+2s, x_4 = 4t+3s. Then I'm completely stuck. Any help would be nice!
 
  • #2
Let's simplify the problem for the time being. Suppose that your nullspace is spanned by (1,0,0,0) and (0,1,0,0). Can you construct a matrix for this problem? (Hint: how do the entries of a matrix relate to the images of the basis vectors under the linear transformation?)
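
To see the hint concretely: the j-th column of a matrix is exactly the image of the j-th standard basis vector. A minimal numpy sketch (the matrix here is arbitrary, just for illustration):

import numpy as np

# The j-th column of a matrix is the image of the j-th standard basis vector.
A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])

e3 = np.array([0, 0, 1, 0])   # third standard basis vector of R^4
print(A @ e3)                 # [ 3  7 11] -- the third column of A
print(A[:, 2])                # the same column, read off directly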
 
  • #3
owlpride: I do know how to change bases, but I'm not that good at it. What do you mean by the image?
 
  • #4
The image of a function is just its range. Sorry for the confusion.
 
  • #5
Forget about changing bases for a minute. We will do this in the end.

For the time being, we only want a linear map that maps (1,0,0,0) and (0,1,0,0) to (0,0,0). Where could you map the remaining two basis vectors (0,0,1,0) and (0,0,0,1) to in order to generate the correct range?
 
  • #6
(0,0,0) ?
 
  • #7
If you sent them to (0,0,0), then all of R^4 would be in the kernel of your map. But you want the kernel to be a two-dimensional subspace (we temporarily picked (1,0,0,0) and (0,1,0,0) as generators for the kernel instead of (1,2,3,4) and (0,1,2,3)).

Can you name some vectors that lie in the plane x_1 + x_2 + x_3 = 0? Because that's the plane that the range is supposed to cover, right?
 
  • #8
Yeah, you're right.
I guess:
(1,-1,0) would be a vector there, as would (1,0,-1), right?
 
  • #9
Good choices, because they are linearly independent!

What does the matrix look like for a linear map that does the following?
(1,0,0,0) -> (0,0,0)
(0,1,0,0) -> (0,0,0)
(0,0,1,0) -> (1,-1,0)
(0,0,0,1) -> (1,0,-1)
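
For reference, this map can be written down and checked in numpy; a minimal sketch, with the four image vectors as the columns:

import numpy as np

# Columns = images of the four standard basis vectors listed above.
A = np.array([[0, 0,  1,  1],
              [0, 0, -1,  0],
              [0, 0,  0, -1]])

for e, y in [((1, 0, 0, 0), (0, 0, 0)),
             ((0, 1, 0, 0), (0, 0, 0)),
             ((0, 0, 1, 0), (1, -1, 0)),
             ((0, 0, 0, 1), (1, 0, -1))]:
    assert np.array_equal(A @ np.array(e), np.array(y))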
 
  • #10
Isn't it simply the matrix with those image vectors as its columns?
 
  • #11
It is! That's why it is easier to solve the problem with the standard basis first. And the best part is that we can use this matrix to compute the matrix you are really interested in!

Suppose you found a second matrix that sends (1,2,3,4) to (1,0,0,0) and (0,1,2,3) to (0,1,0,0). Then the product of this matrix with the one we already have gives you the matrix you were initially looking for! In other words, you can construct the map you are looking for as the composition of two maps:

(1,2,3,4) -> (1,0,0,0) -> (0,0,0)
(0,1,2,3) -> (0,1,0,0) -> (0,0,0)
___v3___ -> (0,0,1,0) -> (1,-1,0)
___v4___ -> (0,0,0,1) -> (1,0,-1)

There are two things you still need to do: find appropriate vectors v3 and v4 (what is important about them?), and find a matrix for the first map (what does its inverse look like?).
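
Anticipating the choice made below (v3 = (0,0,1,0), v4 = (0,0,0,1)), the first map is inv(B), where B has the columns (1,2,3,4), (0,1,2,3), v3, v4. A minimal numpy check that inv(B) really sends the kernel generators to (1,0,0,0) and (0,1,0,0):

import numpy as np

# Columns: the two kernel generators, then v3 and v4.
B = np.array([[1, 0, 0, 0],
              [2, 1, 0, 0],
              [3, 2, 1, 0],
              [4, 3, 0, 1]])

Binv = np.linalg.inv(B)
print(Binv @ np.array([1, 2, 3, 4]))   # [1. 0. 0. 0.]
print(Binv @ np.array([0, 1, 2, 3]))   # [0. 1. 0. 0.]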
 
  • #12
owlpride: Hmm, regarding the vectors v3 and v4, should they simply be (0,0,1,0) and (0,0,0,1)? I think not, but I can't come up with any better criterion... That they're in the plane?
 
  • #13
(0,0,1,0) and (0,0,0,1) would work. They only have to form a basis for R^4 together with (1,2,3,4) and (0,1,2,3). If you used linearly dependent vectors, you would run into trouble constructing the map; linear maps always send dependent vectors to dependent vectors, and the four standard basis vectors are linearly independent.

If your problem was more restrictive, it could determine exactly what v3 and v4 should be. But the current statement only specifies the range, not which vectors in the domain should be sent to which vectors in the range. So you get to pick!
 
  • #14
OK, so I have to find the product of the inverse of the matrix with columns (1,2,3,4), (0,1,2,3), (0,0,1,0), (0,0,0,1) and the matrix we found in the beginning? :)
 
  • #15
Yes! :) Just pay attention to the order in which you multiply the matrices.
 
  • #16
Call the matrix we found last B, and the one we found first A.
Then:
inv(B)*A would give it, no?
 
  • #17
The other way round :)

A* inv(B) * (1,2,3,4) = A* (1,0,0,0) = (0,0,0).
 
  • #18
Ah, how come we take it the other way around, though? And why the (1,2,3,4)? :)
 
  • #19
(1,2,3,4) was just an example vector. I wanted to show you why you had to multiply the matrices the other way round. The matrix you are looking for (call it M) should satisfy M*(1,2,3,4) = (0,0,0), among other things.

If M = inv(B)*A, then M*(1,2,3,4) = inv(B)*A*(1,2,3,4). But A does not know what to do with the vector (1,2,3,4) - well, it does, but it does not do anything "nice" with it. We constructed A so that it sent the standard basis vectors to vectors that we controlled.

inv(B), on the other hand, knows exactly what to do with (1,2,3,4). It was constructed so that inv(B)*(1,2,3,4) = (1,0,0,0). When we compose the matrices, we get

A* inv(B) * (1,2,3,4) = A* (1,0,0,0) = (0,0,0).

That's exactly what we want M to do!
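
The same computation in numpy, reusing A and B from above; a minimal sketch:

import numpy as np

A = np.array([[0, 0,  1,  1],
              [0, 0, -1,  0],
              [0, 0,  0, -1]])
B = np.array([[1, 0, 0, 0],
              [2, 1, 0, 0],
              [3, 2, 1, 0],
              [4, 3, 0, 1]])

M = A @ np.linalg.inv(B)            # inv(B) acts first, then A
print(M @ np.array([1, 2, 3, 4]))   # [0. 0. 0.] -- in the kernel, as wanted
print(M @ np.array([0, 1, 2, 3]))   # [0. 0. 0.]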
 
  • #20
Is there any general rule regarding which way round to multiply composed matrices like this? :) Thanks for all your help!
 
  • #21
There are rules but I keep confusing them. The safest bet seems to be to think about what each matrix does individually and test the outcome of the different orders of compositions, like I did in the previous post.

In this example, you could consider inv(B) a change of basis, and those usually come first. Another good thing to look out for is dimensions. inv(B) sends R^4 -> R^4, and A sends R^4 -> R^3. Hence inv(B) has to act first (i.e. inv(B) is the right-most matrix in the product) so that you get maps R^4 -> R^4 -> R^3. Applying A first would leave inv(B) with a vector in R^3, which makes no sense...
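
The dimension argument is easy to see numerically: with A of shape 3x4 and inv(B) of shape 4x4, only one order of multiplication is even defined. A minimal numpy sketch:

import numpy as np

A = np.array([[0, 0,  1,  1],
              [0, 0, -1,  0],
              [0, 0,  0, -1]])                  # 3x4
Binv = np.linalg.inv(np.array([[1, 0, 0, 0],
                               [2, 1, 0, 0],
                               [3, 2, 1, 0],
                               [4, 3, 0, 1]]))  # 4x4

print((A @ Binv).shape)   # (3, 4): defined, inv(B) acts first
try:
    Binv @ A              # (4,4) @ (3,4): inner dimensions 4 and 3 don't match
except ValueError as err:
    print("wrong order:", err)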
 
  • #22
When I do as we said, I get the matrix:
3 -1 2
-5 2 -3
1 -1 0
1 0 1
But my matrix should be the transpose of that. Anyone willing to help?
 
  • #23
A is (written row by row):
(0, 0, 1, 1)
(0, 0, -1, 0)
(0, 0, 0, -1)

B is:
(1, 0, 0, 0)
(2, 1, 0, 0)
(3, 2, 1, 0)
(4, 3, 0, 1)

A*inv(B) gives me the correct matrix.
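
Putting everything together, a minimal numpy sketch of the full verification (the printed M is one valid answer; other choices of v3 and v4 give other valid answers):

import numpy as np

A = np.array([[0, 0,  1,  1],
              [0, 0, -1,  0],
              [0, 0,  0, -1]])
B = np.array([[1, 0, 0, 0],
              [2, 1, 0, 0],
              [3, 2, 1, 0],
              [4, 3, 0, 1]])

M = A @ np.linalg.inv(B)
print(M)
# [[ 3. -5.  1.  1.]
#  [-1.  2. -1.  0.]
#  [-2.  3.  0. -1.]]

# Kernel check: both generators map to zero.
assert np.allclose(M @ np.array([1, 2, 3, 4]), 0)
assert np.allclose(M @ np.array([0, 1, 2, 3]), 0)

# Range check: every column (x_1, x_2, x_3) satisfies x_1 + x_2 + x_3 = 0,
# and columns 3 and 4 are the independent vectors (1,-1,0) and (1,0,-1),
# so the range is the whole plane.
assert np.allclose(M.sum(axis=0), 0)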
 

FAQ: Linear transformation given a nullspace and a solution space.

What is a linear transformation given a nullspace and a solution space?

A linear transformation is a function between vector spaces that preserves vector addition and scalar multiplication. Its nullspace (or kernel) is the set of all vectors that are mapped to the zero vector. In this problem the "solution space" refers to the range: the set of all outputs of the transformation, prescribed here to be the solution set of x_1 + x_2 + x_3 = 0. Constructing a linear transformation given a nullspace and a solution space therefore means finding a map with exactly that kernel and exactly that range.

How do you find the nullspace and solution space of a linear transformation?

To find the nullspace of a linear transformation, write it as a matrix A and solve the homogeneous system Ax = 0; the solution set is the nullspace. The range is the column space of A: row reduce to find the pivot columns, which form a basis of the range.
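
In practice both computations can be done symbolically; a minimal sympy sketch, run on the matrix constructed in the thread above:

from sympy import Matrix

M = Matrix([[ 3, -5,  1,  1],
            [-1,  2, -1,  0],
            [-2,  3,  0, -1]])

# Basis of the kernel; it spans the same plane as (1,2,3,4) and (0,1,2,3).
print(M.nullspace())
# Basis of the range; both basis vectors satisfy x_1 + x_2 + x_3 = 0.
print(M.columnspace())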

Can a linear transformation have multiple nullspaces or solution spaces?

No. A given linear transformation has exactly one nullspace and one range. What can happen, as this thread illustrates, is that many different linear transformations share the same nullspace and range; that is why the problem above has more than one correct answer (the vectors v3 and v4 were free choices).

How does the dimension of the nullspace and solution space relate to the dimension of the vector spaces involved?

The dimension of the nullspace equals the number of free variables in the homogeneous system, and the dimension of the range equals the rank, the number of pivot columns. Their sum equals the number of columns of the matrix, which is the dimension of the domain, the space the transformation maps from (this is the rank-nullity theorem). In the problem above: 2 + 2 = 4 = dim R^4.
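
The relation is easy to verify on the thread's matrix; a minimal sympy sketch:

from sympy import Matrix

M = Matrix([[ 3, -5,  1,  1],
            [-1,  2, -1,  0],
            [-2,  3,  0, -1]])

nullity = len(M.nullspace())           # 2: two free variables
rank = M.rank()                        # 2: the range is a plane in R^3
assert nullity + rank == M.cols == 4   # 4 = dim of the domain R^4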

How can linear transformations with different nullspaces and solution spaces be compared?

Two linear transformations can be compared by comparing the dimensions and bases of their nullspaces and ranges, and by checking how they act on a common basis of the domain: maps that agree on a basis agree everywhere.
