Prove infinitely many left inverses

  • Thread starter vintwc
In summary, the conversation discusses the set L(V) of linear maps on a vector space V and how it can be shown to be a ring. It also looks at the direct sum V=U+W and how it relates to the dimension of V, and touches on the role of linear bijections (isomorphisms) in this context. It is shown that there are infinitely many elements x of L(V) satisfying xf=1_R, but no element y in L(V) such that fy=1_R.
  • #1
vintwc

Homework Statement



Let V be a vector space over K. Let L(V) be the set of all linear maps V->V. Prove that L(V) is a ring under the operations:
f+g:x -> f(x)+g(x) and fg:x -> f(g(x))

Now, let V=U+W be the direct sum of two vector spaces over K such that the dimensions of both U and W are countable. Then V has countable dimension. Choosing a linear bijection between V and U gives us a map f:V->U, which we regard as an element of L(V). Prove that there are infinitely many [tex]x \in R = L(V)[/tex] such that xf=1_R. Prove that there is no [tex]y \in R[/tex] such that fy=1_R.
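
A notational remark (nothing beyond the definitions above): with the composition convention fg: x -> f(g(x)), the condition xf=1_R unpacks to

[tex](xf)(v) = x(f(v)) = v \quad \text{for all } v \in V,[/tex]

so the maps x asked for are exactly the left inverses of f, and a y with fy=1_R would be a right inverse of f.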

Homework Equations


The direct sum of two vector spaces U and W is the set U+W of pairs of vectors (u,w) with u in U and w in W, with operations:
a(u,w)+b(u',w')=(au+bu',aw+bw')
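
As a concrete illustration of this rule (a made-up numerical case, not part of the problem): with K = R and U = W = R,

[tex]2(1,3) + 5(2,-1) = (2\cdot 1 + 5\cdot 2,\ 2\cdot 3 + 5\cdot(-1)) = (12,\,1).[/tex]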

The Attempt at a Solution


For the first bit, I managed to show that L(V) is indeed a ring. For the second part, I'm not sure how to approach the problem. Should I define a bijective function f such that xf=1_R? Also, does "linear bijection" essentially mean an isomorphism?
 
  • #2
OK, so a linear bijection is an isomorphism. I define f(v_1,v_2)=(u(v_1,v_2),0), but I'm still not sure how to proceed from there.
 
  • #3
Pick bases {u_i} for U and {w_i} for W. A basis {v_i} for V is then the union of the two, and the two are disjoint; that's really what direct sum means in this case, so you don't have to worry about the ordered-pair definition. Let's also pick the bases (and indexing) so that f(v_i)=u_i is your bijection. Do you see it now? In the case xf=1_R, f maps everything 1-1 onto U. To undo that, you just have to make sure everything in U goes back to the corresponding vector in V. What you define x to be on elements of W doesn't matter (hence the infinitely many choices). Can you see why fy can't be 1_R?
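
To spell out the hint (only a sketch, using the bases just described and assuming W is nonzero, as the problem intends): since f(v_i)=u_i, define x on the basis of V by

[tex]x(u_i) = v_i, \qquad x(w_j) = z_j,[/tex]

where v_i is the basis vector that f sends to u_i and the z_j are completely arbitrary vectors of V (for instance all zero), and extend linearly. Then [tex](xf)(v_i) = x(f(v_i)) = x(u_i) = v_i[/tex] for every basis vector of V, so xf=1_R no matter which z_j were chosen; taking z_1 = 0, v_1, v_2, v_3, ... already gives infinitely many different x. In the other direction, every value of fy lies in the image of f, which is U, so fy=1_R would force [tex]w_1 = (fy)(w_1) \in U[/tex], contradicting [tex]U \cap W = 0[/tex]. Hence no y satisfies fy=1_R.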
 

FAQ: Prove infinitely many left inverses

What does it mean to have infinitely many left inverses?

Having infinitely many left inverses means that, for a given element or function f, there are infinitely many elements g such that composing g with f (g applied after f) gives the identity. In other words, there are infinitely many different ways to "undo" f from the left.

How is this concept relevant in mathematics?

The concept of infinitely many left inverses comes up in various areas of mathematics, such as abstract algebra and linear algebra. It distinguishes one-sided invertibility from genuine (two-sided) invertibility in structures such as rings of linear maps, where the two can differ, as the problem above shows.

Can you provide an example of a function with infinitely many left inverses?

Yes. Take the linear map f: R -> R^2 defined by f(x) = (x, 0). It is one-to-one but not onto, and for every real number c the map g_c(x, y) = x + cy is a left inverse of f (see the check below). Different values of c give different left inverses, so there are infinitely many. By contrast, a bijective function such as f(x) = x^3 on the real numbers has exactly one inverse, namely f^{-1}(x) = \sqrt[3]{x}.
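
A quick check, with f(x) = (x, 0) and g_c(x, y) = x + cy as above (c any real number):

[tex]g_c(f(x)) = g_c(x, 0) = x + c\cdot 0 = x \quad\text{for every real } x,[/tex]

while g_c(0, 1) = c, so different values of c really do give different left inverses.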

Are there any conditions for a function to have infinitely many left inverses?

Yes. A function has a left inverse at all only if it is one-to-one (injective), so that is a necessary condition. To have infinitely many left inverses, the function must in addition fail to be onto: a left inverse is only pinned down on the image of the function, and each different way of extending it off the image gives another left inverse, which typically yields infinitely many. A function that is not one-to-one has no left inverse whatsoever.
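
A short argument for why one-to-one is necessary (a standard fact, not specific to this thread): if g is a left inverse of f and f(a) = f(b), then

[tex]a = g(f(a)) = g(f(b)) = b,[/tex]

so f cannot send two different inputs to the same output.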

How is the concept of infinitely many left inverses related to the concept of right inverses?

The two notions are dual: g is a left inverse of f exactly when f is a right inverse of g. They cannot, however, coexist in abundance: if an element had both a left inverse and a right inverse, the two would have to be equal (see the computation below), so the element would be invertible with a unique two-sided inverse. That is precisely why, in the problem above, f has infinitely many x with xf = 1_R but no y with fy = 1_R.
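
The underlying one-line computation, valid in any ring (or monoid) with identity 1: if xf = 1 and fy = 1, then

[tex]x = x\cdot 1 = x(fy) = (xf)y = 1\cdot y = y.[/tex]

So a left inverse and a right inverse, when both exist, coincide, and the inverse is then unique.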
