Universal Mapping Property of a Direct Sum - Knapp Pages 60-61

  • #1
Math Amateur
I am reading Chapter 2: Vector Spaces over \(\displaystyle \mathbb{Q}, \mathbb{R} \text{ and } \mathbb{C}\) of Anthony W. Knapp's book, Basic Algebra.

I need some help with some issues regarding the Universal Mapping Property of direct sums of vector spaces as dealt with by Knapp on pages 60-61. I am not quite sure what Knapp is getting at in introducing the idea of the Universal Mapping Property (UMP) ... ...

Writing on the UMP for direct sums of vector spaces (pages 60-61), Knapp writes:

https://www.physicsforums.com/attachments/2926
View attachment 2927

In the above text on the UMP, Knapp defines \(\displaystyle U, V\) as vector spaces over \(\displaystyle \mathbb{F}\), where \(\displaystyle V = V_1 \oplus V_2\), and lets \(\displaystyle L_1\) and \(\displaystyle L_2\) be linear maps as follows:

\(\displaystyle L_1 : \ U \to V_1\) and \(\displaystyle L_2 : \ U \to V_2\)

He then says that we can define a map \(\displaystyle L : \ U \to V\) as follows:

\(\displaystyle L(u) = (i_1L_1 + i_2L_2) (u) = (L_1(u), L_2(u))\)

He then says that "we can recover \(\displaystyle L_1\) and \(\displaystyle L_2\) from \(\displaystyle L\) as \(\displaystyle L_1 = p_1L\) and \(\displaystyle L_2 = p_2L\)".

My question is: how exactly does this "recovery" work, and then (more importantly) what has this got to do with any universal mapping property of direct sums?

I suspect that maybe (?) the "recovery" of \(\displaystyle L_1\) works something like this ...

\(\displaystyle L(u) = v = v_1 + v_2\) where \(\displaystyle v_1 \in V_1\) and \(\displaystyle v_2 \in V_2\)

Then it would follow that ...

\(\displaystyle L_1(u) = p_1L(u) = p_1(v) = p_1(v_1 + v_2) = p_1(v_1, v_2) = v_1\)

But ... firstly ... is this what is meant by "recovering" \(\displaystyle L_1\) from \(\displaystyle L\)? ... it doesn't seem so ... so what is meant by it?

Secondly, in the above, how would one justify writing \(\displaystyle p_1(v_1 + v_2) = p_1(v_1, v_2)\)?

Mind you, I am somewhat confused and would appreciate help generally on the topic of the UMP for direct sums of vector spaces ...

Peter
 
  • #2
The formal statement of the Universal Mapping Property is this:

If $U$ is ANY other vector space (over the same field, of course) with ANY OTHER pair of linear maps:

$L_1:U \to V_1$
$L_2:U \to V_2$

then there is a UNIQUE linear map: $L: U \to V_1 \oplus V_2$ such that:

$p_1\circ L = L_1$
$p_2\circ L = L_2$.

This map is often written as $L_1 + L_2$, or $L_1 \oplus L_2$ or even as $L_1 \times L_2$.

********

To see that $V_1 \oplus V_2$ actually possesses this property, suppose that $U,L_1,L_2$ are given.

Define $L(u) = (L_1(u),L_2(u))$. It is straightforward to verify that this $L$ is linear, and:

$(p_1 \circ L)(u) = p_1(L_1(u),L_2(u)) = L_1(u)$, for all $u \in U$. A similar statement holds for $p_2$.

Moreover, if $L'$ is any other linear map $U \to V_1 \oplus V_2$ satisfying the UMP, we have:

$p_1(L(u) - L'(u)) = p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u) = 0$,

since $L'$ satisfying the UMP means $p_1 \circ L' = L_1$ (just as $p_1 \circ L = L_1$).

Hence $L(u) - L'(u) \in \text{ker }p_1$, so $L(u) - L'(u) \in \{0\}\oplus V_2$.

Similarly, $L(u) - L'(u) \in V_1 \oplus \{0\}$.

Thus $L(u) - L'(u) \in \{0\} \oplus \{0\} = 0_{V_1 \oplus V_2}$, for ALL $u \in U$, that is to say:

$L - L'$ is the 0-map. Hence $L = L'$ (we are leveraging the fact that linear transformations themselves have a vector space structure on identical domains and co-domains).

So we found such a map exists, and is unique.
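To see the existence half concretely, here is a minimal Python sketch, assuming $U = V_1 = V_2 = \Bbb R$ and two hypothetical linear maps $L_1(u) = 2u$ and $L_2(u) = -3u$ (any pair of linear maps would do):

```python
# Sanity check of the UMP for V1 (+) V2 with U = V1 = V2 = R.
# L1, L2 are hypothetical example maps; p1, p2 are the coordinate projections.

def L1(u): return 2.0 * u      # example linear map U -> V1
def L2(u): return -3.0 * u     # example linear map U -> V2

def L(u):                      # the induced map U -> V1 (+) V2
    return (L1(u), L2(u))

def p1(pair): return pair[0]   # projection onto V1
def p2(pair): return pair[1]   # projection onto V2

# p1 o L = L1 and p2 o L = L2 on a few sample vectors:
for u in [0.0, 1.0, -2.5, 7.0]:
    assert p1(L(u)) == L1(u)
    assert p2(L(u)) == L2(u)

# L is linear: L(a*u + u') = a*L(u) + L(u') componentwise.
a, u, up = 3.0, 1.5, -0.5
lhs = L(a * u + up)
rhs = (a * L(u)[0] + L(up)[0], a * L(u)[1] + L(up)[1])
assert lhs == rhs
print("UMP equations verified")
```

The assertions are exactly the two equations $p_1\circ L = L_1$ and $p_2\circ L = L_2$, checked pointwise.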

********

There is a "dual" to the UMP, which basically "reverses the directions of the mapping arrows":

For any other vector space $U$, and any pair of maps:

$L_1:V_1 \to U$
$L_2:V_2 \to U$

there is a unique map $L:V_1\oplus V_2 \to U$ with:

$L \circ i_1 = L_1$
$L \circ i_2 = L_2$

This map is given explicitly by:

$L(v_1,v_2) = L_1(v_1) + L_2(v_2)$.
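The dual map can be checked the same way; a minimal Python sketch, again with $V_1 = V_2 = U = \Bbb R$ and hypothetical example maps $L_1, L_2$:

```python
# The dual (coproduct) map: given L1: V1 -> U and L2: V2 -> U,
# L(v1, v2) = L1(v1) + L2(v2) satisfies L o i1 = L1 and L o i2 = L2.
# Here V1 = V2 = U = R, and L1, L2 are hypothetical example maps.

def L1(v1): return 5.0 * v1
def L2(v2): return 0.5 * v2

def i1(v1): return (v1, 0.0)   # injection V1 -> V1 (+) V2
def i2(v2): return (0.0, v2)   # injection V2 -> V1 (+) V2

def L(pair):
    v1, v2 = pair
    return L1(v1) + L2(v2)

# L o i1 = L1 and L o i2 = L2, because L1(0) = L2(0) = 0 by linearity:
for v in [0.0, 1.0, -4.0]:
    assert L(i1(v)) == L1(v)
    assert L(i2(v)) == L2(v)
print("dual UMP equations verified")
```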

********

Now, to be fair, one doesn't "need" this UMP to discuss the direct sum of two vector spaces; an "element-wise" definition is sufficient for many "practical" applications. But there is something subtle going on here: we've shifted our focus away from "vectors", and are focusing instead on "linear maps". In other words, we're not concerned with "calculation", but with "behavior". This is a more abstract point of view, and generalizes better to other structures with "different axioms". In particular, this characterization in terms of a UMP is CATEGORICAL, and in fact holds for any category in which the (binary) product and co-product coincide.

********

It often helps to see how this works "in practice". This should be a familiar example:

The real numbers $\Bbb R$ form a field, which is to say, a vector space of dimension 1 over themselves. We can visualize this field as a "number line". Suppose we have two "separate" number lines, and we wish to create a vector space from the pair. We want the individual lines to keep their "vector-space-ness" as subspaces of this new space unimpaired, so we want an injective linear map from each one into our new space.

We also want them to be "independent", so that changes in one line do not affect values in the other. We accomplish this by "pairing", and stipulate that when we add pairs, it's only "each to each":

$(x,y) + (x',y') = (x+x',y+y')$.

The injections we have in mind are THESE ones:

$x \mapsto (x,0)$
$y \mapsto (0,y)$.

It should be clear, then, that what we now call $\Bbb R \oplus \Bbb R$ is simply $\Bbb R^2$, the real plane.

Our two "number lines" have been embedded in the plane as the $x$- and $y$-axes.

Note that merely taking the UNION of said lines doesn't work: if both "coordinates" are non-zero, then $(x,y)$ isn't on either axis.

Now we can "reverse" this point of view, and START with the plane, in which case we recover our original lines by PROJECTING on to one axis, or the other. This, in effect, "zeros out" one of the coordinates. At this point, the 0-coordinate is just excess baggage, and we think of the relevant (non-zero coordinate) axis as "a line unto itself".

The key linkage between the "pair" view, and the "sum" view is THIS identity:

$(x,y) = (x,0) + (0,y)$ <---think about this, for a bit.
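The pairing, the injections, and the projections above can all be checked numerically; a minimal Python sketch (the sample point $(3,-2)$ is arbitrary):

```python
# R (+) R as pairs: addition is "each to each", and every point of the
# plane splits along the two embedded axes via (x,y) = (x,0) + (0,y).

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def inj_x(x): return (x, 0.0)   # embed the first number line as the x-axis
def inj_y(y): return (0.0, y)   # embed the second number line as the y-axis

def proj_x(p): return p[0]      # project back onto the x-axis
def proj_y(p): return p[1]

p = (3.0, -2.0)

# The key identity: (x, y) = (x, 0) + (0, y)
assert add(inj_x(p[0]), inj_y(p[1])) == p

# Projecting recovers each line: proj_x o inj_x = id, proj_y o inj_x = 0
assert proj_x(inj_x(3.0)) == 3.0
assert proj_y(inj_x(3.0)) == 0.0

# A point off both axes shows the mere UNION of the axes is not enough:
assert p not in {inj_x(p[0]), inj_y(p[1])}
print("pair/sum identity verified")
```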

This only defines the "abelian group structure", can you see a natural way to define "scalar multiplication"?

********

For an idea of how this concept plays out in groups (where it is similar), take a look at:

http://mathhelpboards.com/math-notes-49/universal-property-direct-product-groups-11546.html

You may also want to look at this:

http://mathhelpboards.com/potw-graduate-students-45/problem-week-113-july-28th-2014-a-11541.html

Which uses this property of the direct sum of vector spaces in a fairly sophisticated way.
 
  • #3

Thank you for this post, Deveno ... it is a very important post for me since I have struggled to get a real sense of, and indeed, full understanding of the UMP for direct products ...

So ... I am now working through your post very carefully ... thanks again,

Peter
 
  • #4

Hi Deveno,

I just had a quick look at the Problem of the Week you mentioned. In Opalg's answer he writes:

" ... ... The key fact here is that if $Y$ is a $K$-vector subspace of $X$ then there exists a $K$-vector subspace $Z$ of $X$ such that $X$ is canonically isomorphic to the direct sum $Y \oplus Z$ ... ... etc "

Can you please explain what is meant by the term "canonically isomorphic"?

Peter
 
  • #5
Hi Deveno,

Thanks for the extensive help ... but ... just a clarification:

You write:

" ... ... Moreover, if $L'$ is any other linear map $U \to V_1 \oplus V_2$ satisfying the UMP, we have:

$p_1(L(u) - L'(u)) = p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u) = 0$. ... ... ... "

I cannot follow exactly why \(\displaystyle p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u)\) ...

Could you please explain why this follows?

Peter

***EDIT***

I have done some more reflecting on $p_1(L(u) - L'(u)) = p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u) = 0$ ... ... and think I see why this is the case ...

Here is my thinking ... ...

We have ...

\(\displaystyle L(u) = ( L_1(u), L_2(u) ) = (v_1, v_2) \)

where \(\displaystyle u \in U, v_1 \in V_1\) and \(\displaystyle v_2 \in V_2\)

and, also, we have ...

\(\displaystyle L'(u) = ( L_1(u), L_2(u) ) = (v_1, v_2) \), and that is, to be clear, the same point \(\displaystyle (v_1, v_2) \) as for \(\displaystyle L(u)\), since \(\displaystyle p_1 L' = L_1\) and \(\displaystyle p_2 L' = L_2\) force the first coordinate of \(\displaystyle L'(u)\) to be \(\displaystyle L_1(u)\) and the second to be \(\displaystyle L_2(u)\) ...

so that \(\displaystyle L(u) = L'(u) = (v_1, v_2) \) since the functional values of \(\displaystyle L, L'\) are determined by the same co-ordinate functions \(\displaystyle L_1, L_2\)

Therefore, \(\displaystyle p_1(L(u)) = p_1(L'(u)) = v_1\)

and so \(\displaystyle p_1(L(u)) - p_1(L'(u)) = 0\) as you say.

Can you confirm that my thinking is correct?

Peter
 
Last edited:
  • #6
Yes, if $p_1(L(u)) = L_1(u)$ then the "first coordinate" of $L(u)$ is $L_1(u)$.

Similarly the "second coordinate" of $L(u)$ is $L_2(u)$.

But that completely specifies $L$!

This is one of those things that "seems hard" until you realize what is being said:

If we know the first coordinate function and the second coordinate function of a function whose image has two coordinates, we know the function. Because all we are doing with the direct sum is "pairing coordinates".

So basically, we're making a bigger (algebraic thing) by putting two (algebraic things) "side by side" (in parallel, so to speak).

The two "$p$" functions just peel off one side or the other. We do this "externally" by "chopping off", and "internally" by "zeroing out". The two ways of doing this have a one-to-one correspondence (we have some isomorphism, which isn't very hard to find).

The condition $V_1 \cap V_2 = \{0\}$ ensures we can "cut cleanly".

Let's look at a space where $V = V_1 + V_2$, but where $V$ is not $V_1 \oplus V_2$.

Consider $V = \Bbb R^3$, where $V_1 = \{(x,y,0): x,y \in \Bbb R\}$ and $V_2 = \{(x,0,z): x,z \in \Bbb R\}$, that is, the $xy$-plane, and the $xz$-plane.

We see that $V = V_1 + V_2$, since for any $(a,b,c) \in V$, we have:

$(a,b,c) = (2a,b,0) + (-a,0,c)$ and $(2a,b,0) \in V_1$, and $(-a,0,c) \in V_2$.

Now $V_1 \cap V_2 = \{(x,0,0): x \in \Bbb R\}$, the $x$-axis. Here we CAN'T "chop cleanly", because if we try to separate $V_1$ from $V_1 + V_2$, we "cripple" $V_2$, since we're taking out part of it when we remove $V_1$.

We could still form the quotient, $V/V_1$, but the resulting space is a line (whose "points" are parallel planes), and we can't form a (linear) bijection from a line to a plane, so it's definitely NOT isomorphic to $V_2$.

This is sort of like the difference between dividing 4*3 by 4 and dividing 2*6 by 4. In the first case, dividing by 4 leaves 3 untouched. In the second case, neither factor escapes unscathed.
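The failure of uniqueness can also be checked directly; a small Python sketch using the $xy$- and $xz$-planes above (the second decomposition is an assumed alternative, not from the post):

```python
# When V1 (the xy-plane) and V2 (the xz-plane) overlap in the x-axis,
# V = V1 + V2 holds but the decomposition of a vector is NOT unique.

def in_V1(v): return v[2] == 0.0   # xy-plane: third coordinate zero
def in_V2(v): return v[1] == 0.0   # xz-plane: second coordinate zero

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

v = (1.0, 2.0, 3.0)   # an arbitrary vector of R^3

# Deveno's decomposition: (a,b,c) = (2a,b,0) + (-a,0,c)
d1 = ((2.0, 2.0, 0.0), (-1.0, 0.0, 3.0))
# Another, equally valid decomposition: (a,b,c) = (a,b,0) + (0,0,c)
d2 = ((1.0, 2.0, 0.0), (0.0, 0.0, 3.0))

for w1, w2 in (d1, d2):
    assert in_V1(w1) and in_V2(w2)
    assert add(w1, w2) == v

assert d1 != d2   # two different splittings of v: the sum is not direct
print("non-unique decomposition verified")
```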

So the utility of the direct sum in vector spaces is this: If we do something to "the parent space" (which may be large, and complicated), we can do the same thing to the "baby spaces" (the factors in the direct sum) and then just "add the results".

In fact, this is just what we do with a BASIS. If we have a basis $B = \{b_1,\dots,b_n\}$ for a vector space $V$, we have:

$V = \langle b_1\rangle \oplus \cdots \oplus \langle b_n\rangle$ which allows us to represent an element of $V$ as:

$v = (\alpha_1,\dots,\alpha_n)$ IN THAT BASIS.

Each of the subspaces $\langle b_j\rangle$ is very simple: it's just a field! (one has to be a "tiny" bit careful, here, the field multiplication in $\langle b_j\rangle$ may differ from another multiplication we have defined in $V$).

This means we can discover everything about $V$ (as a vector space, it may have additional structure) just by looking at a set of one-dimensional linearly independent subspaces that sum to $V$. This makes everything "easy".

To make things "even easier", we often choose the element of $\langle b_j \rangle$ that corresponds to the IDENTITY of the field. Such animals are called UNIT vectors. This often has the effect of "making the basis invisible", for example, in the basis:

$\{(1,0,0),(0,1,0),(0,0,1)\}$

the vector $(x,y,z) \in \Bbb R^3$ has representation: $(x,y,z)$.

Or, in the basis $\{1,x,...,x^n\}$ the vector $c_0 + c_1x + \cdots + c_nx^n \in P_n$ has representation:

$(c_0,c_1,\dots,c_n)$.
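A minimal Python sketch of this coordinate representation for $P_2$ (the helper names `poly_eval` and `basis_component` are illustrative, not standard):

```python
# Coordinates of c0 + c1 x + ... + cn x^n in the basis {1, x, ..., x^n}
# are just the coefficient tuple (c0, c1, ..., cn): P_n decomposes as a
# direct sum of the one-dimensional spans <1>, <x>, ..., <x^n>.

def poly_eval(coeffs, x):
    """Evaluate the polynomial with coefficient tuple coeffs at x."""
    return sum(c * x**k for k, c in enumerate(coeffs))

def basis_component(coeffs, j):
    """The piece of the polynomial lying in the span of x^j."""
    return tuple(c if k == j else 0.0 for k, c in enumerate(coeffs))

p = (1.0, -2.0, 0.5)   # represents 1 - 2x + 0.5 x^2 in P_2

# The polynomial is the sum of its one-dimensional pieces:
parts = [basis_component(p, j) for j in range(len(p))]
x = 3.0
assert poly_eval(p, x) == sum(poly_eval(q, x) for q in parts)
print("basis decomposition verified")
```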

The fact that vector spaces admit a direct sum decomposition like this makes them very nice to work with. We can, if we choose, forget about "typical vectors" and focus just on "basis vectors". One sees this a LOT in multivariate calculus, where one reduces a problem involving several variables to several problems each involving ONE variable, where we understand the situation much more clearly.
 

FAQ: Universal Mapping Property of a Direct Sum - Knapp Pages 60-61

1. What is the Universal Mapping Property of a Direct Sum?

The Universal Mapping Property of a Direct Sum is a property that characterizes the direct sum of two vector spaces. It states that for any vector space U and any pair of linear maps L1 : U → V1 and L2 : U → V2, there exists a unique linear map L : U → V1⊕V2 such that p1∘L = L1 and p2∘L = L2, where p1 and p2 are the projection maps of the direct sum.

2. Why is the Universal Mapping Property important?

The Universal Mapping Property is important because it allows us to define the direct sum of vector spaces in a precise and consistent manner. It also provides a way to construct the direct sum without having to explicitly define the elements of the direct sum. This makes it easier to work with and generalize the concept of the direct sum to other mathematical structures.

3. How is the Universal Mapping Property used in linear algebra?

In linear algebra, the Universal Mapping Property is used to define and characterize the direct sum of vector spaces. It is also used to prove the existence and uniqueness of the induced map from a vector space into a direct sum. Additionally, it is used in the construction of direct sums and in the proof of various theorems related to direct sums.

4. Can the Universal Mapping Property be extended to more than two vector spaces?

Yes, the Universal Mapping Property can be extended to any finite number of vector spaces. For a direct sum V1⊕V2⊕V3, it states that for any vector space U and any linear maps L1, L2, L3 from U into V1, V2, V3 respectively, there exists a unique linear map L from U to V1⊕V2⊕V3 whose composition with each projection pi is Li. This can also be extended to an infinite family of vector spaces, known as an infinite direct sum.

5. What are some applications of the Universal Mapping Property?

The Universal Mapping Property has various applications in mathematics, especially in areas such as linear algebra, group theory, and category theory. It is also used in physics and engineering, particularly in the study of vector spaces and their properties. Additionally, the Universal Mapping Property has applications in computer science, specifically in data compression and coding theory.
