Direct Products and External Direct Sums

  • #1
Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I am trying to gain a full understanding of direct products and external direct sums of modules and need some help in this matter ...

B&K define the external sum of an arbitrary finite set of modules as follows:
https://www.physicsforums.com/attachments/3357

Now the above definition of an external direct sum seems to me to be identical to the definition of a direct product of a finite set of right \(\displaystyle R\)-modules ...

... So ... in the finite case it is not just that the external direct sum and the direct product are isomorphic or equal in some particular way ... they are defined the same way ... no difference at all, even in the definition!

Am I understanding things correctly?

Then in B&K Section 2.1.11 "Infinite Direct Sums" we read:

https://www.physicsforums.com/attachments/3358

Well, of course, the direct product and external direct sum in the infinite case are defined differently ... so, of course, they are different ... indeed it is claimed (and is intuitively plausible) that in the infinite case, the external direct sum is a submodule of the direct product ...

OK ... but the differences in the infinite case follow from the different definitions ... BUT ... WHY are B&K doing this? ... what is their motivation, and what are the benefits of having these two cases as defined above ...

Can someone please help in this matter ...

Peter
 
  • #2
Here is the difference between "direct products" and "direct sums" (we will deal with the two-factor case first).

Suppose we have two (right) $R$-modules $M$ and $N$. They may have nothing to do with each other; for example, $M$ might be the abelian group $\Bbb Z_4$ and $N$ might be the additive group of $\Bbb R[x]$.

From these, we create a "bigger" $R$-module: $M \times N$. In the example I gave above, an element of $M \times N$ might look like:

$([3],x^2 + 2)$

Another element might be: $([2],x^3 - 1)$. We can add these:

$([3],x^2 + 2) + ([2],x^3 - 1) = ([1],x^3 + x^2 + 1)$

and "scale" them by any integer:

$([3],x^2 + 2)\cdot (-17) = ([1],-17x^2 - 34)$

This is the "external direct product"; in effect, we treat $M$ and $N$ as if they live "in different worlds". This product comes with two "factor maps":

$M \times N \to M$ ("only look at $M$")
$M \times N \to N$ ("only look at $N$").
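
To make the componentwise arithmetic concrete, here is a minimal Python sketch (my own illustration, not from B&K; names like `add_poly` and `pi_M` are made up), modelling elements of $\Bbb Z_4 \times \Bbb R[x]$ as pairs, with a polynomial stored as a list of coefficients:

```python
def add_poly(p, q):
    """Add two coefficient lists [c0, c1, ...], padding the shorter with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def add(u, v):
    """Componentwise addition in Z_4 x R[x]."""
    return ((u[0] + v[0]) % 4, add_poly(u[1], v[1]))

def scale(u, k):
    """Scale by an integer k, acting on each component."""
    return ((u[0] * k) % 4, [k * c for c in u[1]])

def pi_M(u):   # "only look at M"
    return u[0]

def pi_N(u):   # "only look at N"
    return u[1]

u = (3, [2, 0, 1])       # ([3], x^2 + 2)
v = (2, [-1, 0, 0, 1])   # ([2], x^3 - 1)
print(add(u, v))         # (1, [1, 0, 1, 1]),   i.e. ([1], x^3 + x^2 + 1)
print(scale(u, -17))     # (1, [-34, 0, -17]),  i.e. ([1], -17x^2 - 34)
```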

By contrast, note that we might have a (right) $R$-module $K$ with SUBMODULES $M'$ and $N'$ such that:

$K = M' + N'$ ($M'$ and $N'$ GENERATE $K$)
$M' \cap N' = \{0\}$

so that $K$ splits "cleanly" into an "$M'$-component" and an "$N'$-component". This is an "internal direct sum decomposition"; we START with the larger thing, and resolve it into two smaller things.

If it happened to be the case that $K = M \times N$, say as in our example above, then:

$M' = \{([k],0): k \in \Bbb Z\}$
$N' = \{([0],f(x)): f(x) \in \Bbb R[x]\}$.

Note $M$ and $M'$ (and similarly with the $N$'s) are not "too much different". Note also, that although this is ONE way to decompose $K$, it may not be the ONLY way; an "external" construction is going to be "essentially unique", but an internal decomposition may NOT be.

For example, consider the $\Bbb Z$-module $\Bbb Z_2 \times \Bbb Z_2$.

We can write this as an INTERNAL direct sum $\Bbb Z_2 \times \Bbb Z_2 = A \oplus B$ by taking:

$A = \Bbb Z_2 \times \{0\}$
$B = \{0\} \times \Bbb Z_2$

but we can ALSO write this as the internal direct sum:

$\Bbb Z_2 \times \Bbb Z_2 = A \oplus C$, where:

$C = \{(0,0),(1,1)\}$. For clarity's sake, let's write:

$(1,0) = a_1$
$(1,1) = c_1$.

Then the unique $A\oplus C$ representations of the elements of $\Bbb Z_2 \times \Bbb Z_2$ are:

$(0,0) = 0a_1 + 0c_1$
$(1,0) = 1a_1 + 0c_1$
$(0,1) = 1a_1 + 1c_1$
$(1,1) = 0a_1 + 1c_1$

Note $B$ is not the same submodule as $C$.
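
Here is a quick computational check of the two decompositions (a sketch of my own, not from the thread; the helper `sums` is hypothetical):

```python
Z2xZ2 = [(i, j) for i in (0, 1) for j in (0, 1)]

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

A = [(0, 0), (1, 0)]   # Z_2 x {0}
B = [(0, 0), (0, 1)]   # {0} x Z_2
C = [(0, 0), (1, 1)]   # the "diagonal" submodule

def sums(S, T):
    """All elements s + t with s in S, t in T."""
    return {add(s, t) for s in S for t in T}

# Both pairs generate everything and intersect trivially:
assert sums(A, B) == set(Z2xZ2) and set(A) & set(B) == {(0, 0)}
assert sums(A, C) == set(Z2xZ2) and set(A) & set(C) == {(0, 0)}

# Unique representation: each element is s + t in exactly one way.
for m in Z2xZ2:
    reps = [(s, t) for s in A for t in C if add(s, t) == m]
    assert len(reps) == 1
```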

**********************

If we start with the "smaller things", and make a "conglomerate", the properties the "conglomerate" has should be defined in terms of mappings of the "conglomerate" to the "pieces".

On the other hand, if we start with the "larger thing", and somehow "chop it up", these pieces should have well-defined "places" within the "larger thing".

This may seem "backwards", but it turns out to be "the right way to do it" for generalizing to an infinite number of "factors".

To see why the distinction becomes necessary, let's look at a "simple" infinite case.

Suppose we have countably infinitely many copies of $\Bbb Z$. We can make this into a "super-module" containing each copy (in its own "coordinate-space"), by regarding the elements of our "super-module" as infinite sequences of integers:

$m = (k_1,k_2,k_3,\dots)$

If you think about it, $m$ is just a function $\Bbb Z^+ \to \Bbb Z$:

$n \mapsto k_n$.

We can add these functions "point-wise":

$(m+m')(n) = m(n)+m'(n)$, or, perhaps more typically:

$(k_1,k_2,k_3,\dots) + (k_1',k_2',k_3',\dots) = (k_1+k_1',k_2+k_2',k_3+k_3',\dots)$

It is clear that if we "just look at the $i$-th coordinate" (for any $i \in \Bbb Z^+$), all we see is "something that behaves like the integers".
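
As a sketch (my own, purely illustrative), sequences-as-functions and their pointwise sum look like this in Python:

```python
def m(n):        # the sequence (1, 2, 3, ...)
    return n

def m_prime(n):  # the sequence (1, 4, 9, ...)
    return n * n

def add(f, g):
    """Pointwise sum of two sequences-as-functions."""
    return lambda n: f(n) + g(n)

s = add(m, m_prime)
print([s(n) for n in range(1, 6)])  # [2, 6, 12, 20, 30]
```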

So far, so good; this all seems straightforward. But suppose that we were given the "super-module" to start with, as:

$M = \{f: \Bbb Z^+ \to \Bbb Z\}$, a set of functions.

Suppose we want to find submodules $A_1,A_2,A_3,\dots$ with:

1) $\displaystyle M = \sum_{i = 1}^{\infty} A_i$

2) $A_i \cap (A_1 + \cdots + A_{i-1} + A_{i+1} + \cdots) = \{0\}$, for each $i$.

In other words, we want to try to find a(n internal) direct sum decomposition of $M$. At first glance, this does not appear too hard.

Here's the trouble: we would like there to be a UNIQUE representation:

$m = a_1 + a_2 + a_3 + \cdots$, with each $a_i \in A_i$.

We're going to have trouble with this: we can't "evaluate" the above sum, because it never "ends". How are we going to show equality?

We CAN do this, but only if we consider a smaller portion of $M$, namely, functions with finite support (that is: functions such that $f(n) = 0$ for all but finitely many $n$).
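
A minimal sketch of finitely supported functions (my own representation, storing only the non-zero values as a dict):

```python
def add(f, g):
    """Pointwise sum; drop indices whose values cancel to 0."""
    h = dict(f)
    for n, v in g.items():
        h[n] = h.get(n, 0) + v
        if h[n] == 0:
            del h[n]
    return h

def support(f):
    return set(f.keys())

f = {1: 3, 5: -2}         # 3 in slot 1, -2 in slot 5, 0 elsewhere
g = {5: 2, 7: 1}
print(add(f, g))           # {1: 3, 7: 1} -- the slot-5 entries cancel
print(support(add(f, g)))  # finite, as membership in the direct sum requires
```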

In other words, we can "add infinite things" (to each other), but we can't "add infinitely". If you recall from calculus, "infinite series" are not, in and of themselves, MEANINGFUL; what is meaningful is the LIMIT of the PARTIAL SUMS, and each of those partial sums is FINITE.

Now in a module, in general, we don't have any way to check "convergence" (these problems pop up in infinite-dimensional vector spaces, as well), so even if we have "infinitely many coordinates", to define SUMS we must have a finite number of SUMMANDS. So our usual approach to defining the direct sum FAILS when we have infinitely many modules to "sum up".

***********************

You can also think of it this way: the direct sum is generated by the summands, in such a way that they are "independent". It's MINIMAL, in some sense.

The direct product, on the other hand, is "everything and the kitchen sink, too".

***********************

Another way to better understand a direct product is in terms of the "canonical projection maps". These possess a universal mapping property which defines the direct product (up to isomorphism).

The relevant property is this:

A direct product of modules $M_i$ (where $i \in I$, an arbitrary indexing set) is a module $M$ together with an $I$-indexed family of module homomorphisms:

$\pi_i: M \to M_i$ such that, for ANY module $N$ and any $I$-indexed family of homomorphisms:

$f_i: N \to M_i$

there is a UNIQUE mapping:

$\phi:N \to M$ such that $f_i = \pi_i \circ \phi$ for all $i \in I$.

This unique mapping is often denoted $\prod f_i$.

If one thinks of $\pi_i$ as "picking out the $i$-th coordinate", then what $\phi$ does is "put the value of $f_i$ in the $i$-th coordinate place".
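
In code, the product's universal property amounts to "bundling" the maps $f_i$; here is an illustrative Python sketch (names such as `make_phi` are my own):

```python
def make_phi(maps):
    """maps: dict {i: f_i}, each f_i a function N -> M_i.
    Returns phi: N -> product, with phi(n) holding f_i(n) in slot i."""
    return lambda n: {i: f(n) for i, f in maps.items()}

def pi(i):
    """The i-th canonical projection."""
    return lambda m: m[i]

maps = {"double": lambda n: 2 * n, "square": lambda n: n * n}
phi = make_phi(maps)
print(phi(3))                                      # {'double': 6, 'square': 9}
assert pi("double")(phi(3)) == maps["double"](3)   # pi_i . phi == f_i
```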

With a direct SUM, we "reverse the arrow-direction", a direct sum is a module $M$ and an $I$-indexed family of module homomorphisms:

$\iota_i: M_i \to M$

such that if $N$ is any other module, with an $I$-indexed family of homomorphisms:

$g_i: M_i \to N$

there is a unique homomorphism $\psi:M \to N$ such that $g_i = \psi \circ \iota_i$, for all $i \in I$.

The mapping $\psi$ is often written $\oplus g_i$.
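
Dually, here is a sketch (again with made-up names) of how $\psi = \oplus g_i$ acts on a finitely supported element; the key point is that the sum on the right is FINITE:

```python
def make_psi(g):
    """g: dict {i: g_i}, each g_i a map M_i -> N.
    Elements of the direct sum are finitely supported dicts {i: m_i}."""
    return lambda m: sum(g[i](mi) for i, mi in m.items())

g = {0: lambda a: a, 1: lambda a: 10 * a, 2: lambda a: 100 * a}
psi = make_psi(g)
print(psi({0: 3, 2: 5}))   # 3 + 500 = 503 -- only finitely many summands
```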

Surprisingly enough, the direct product of an infinite indexed family, together with the usual inclusions, is NOT a direct sum. To see this, consider the following example:

Suppose we have: $\displaystyle M = \prod_{n \in \Bbb N} F$, where $F$ is a field, and $N = F[x]$. So $M$ is the $F$-module of all infinite sequences in $F$.

Define $f_n: F \to N$ by $f_n(a) = ax^n$.

The usual inclusions are:

$\iota_n: F \to M$ by $\iota_n(1) = (0,\dots,0,1,0,\dots)$ where $1$ is in the $n$-th place, and everything else is 0. Let's call this $e_n$.

By the UMP of a direct sum we must have: $\psi(e_n) = \psi(\iota_n(1)) = f_n(1) = x^n$, for every $n$.

Now consider $m = (1,1,1,\dots) \in M$; informally, "$\displaystyle m = \sum_{n= 0}^{\infty} e_n$", but note this is NOT a finite sum of the $e_n$. The values $\psi(e_n)$ determine $\psi$ only on FINITE sums of the $e_n$, that is, on the finitely supported sequences; a module homomorphism respects only finite sums, so nothing forces $\psi(m)$ to be "$\sum_n x^n$" (and indeed no polynomial has non-zero terms of ALL degrees, so that formula cannot define it). In fact $\psi$ can be extended from the finitely supported sequences to all of $M$ in many different ways, each satisfying $\psi \circ \iota_n = f_n$, so the UNIQUENESS demanded by the universal property fails: $M$, with these inclusions, is not a direct sum.
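
A small sketch of the sticking point (my own illustration): the forced values $\psi(e_n) = x^n$ pin $\psi$ down only on finitely supported sequences.

```python
def psi_finite(m):
    """m: a finitely supported sequence {n: a_n}; returns sum of a_n x^n,
    with the polynomial stored as a {degree: coefficient} dict."""
    return {n: a for n, a in m.items() if a != 0}

print(psi_finite({0: 1, 3: 2}))   # {0: 1, 3: 2}, i.e. 1 + 2x^3

# The all-ones sequence (1, 1, 1, ...) has infinite support, so it is
# not a finite sum of the e_n, and psi's value on it is NOT forced.
```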
 
  • #3
(cont'd from prev. post):

It might be useful to see how this plays out in SETS, where we don't have the module axioms to worry about:

The direct product of two sets is just their cartesian product: given two sets $A,B$ it is clear that the mappings:

$\pi_1: A\times B \to A$ given by $\pi_1((a,b)) = a$
$\pi_2: A\times B \to B$ given by $\pi_2((a,b)) = b$

are such that given any two functions:

$f: X \to A$
$g: X \to B$

we can define a unique function $\phi: X \to A\times B$, namely:

$\phi(x) = (f(x),g(x))$

This function is often written $f \times g$, and we clearly have:

$\pi_1 \circ \phi = f$
$\pi_2 \circ \phi = g$
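
As a quick sketch (illustrative names only), the two-factor set version in Python:

```python
def pi1(p):
    return p[0]

def pi2(p):
    return p[1]

def pair(f, g):
    """The unique phi = f x g with pi1 . phi = f and pi2 . phi = g."""
    return lambda x: (f(x), g(x))

f = lambda x: x.upper()   # X -> A
g = lambda x: len(x)      # X -> B
phi = pair(f, g)
print(phi("abc"))         # ('ABC', 3)
assert pi1(phi("abc")) == f("abc") and pi2(phi("abc")) == g("abc")
```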

What gets interesting is trying to find a set (we'll call it $A+B$, but for now, we don't know what that "means") with two functions:

$\iota_1:A \to A+B$
$\iota_2:B \to A+B$

such that if

$f:A \to X$
$g:B \to X$ are any pair of functions to $X$, we have a unique function $\psi:A+B \to X$ with:

$f = \psi\circ \iota_1$ and $g = \psi\circ \iota_2$.

Our first thought might be to pick the set generated by $A,B$ (the smallest set containing both) which is $A\cup B$, together with the mappings:

$\iota_1(a) = a$ for all $a \in A$
$\iota_2(b) = b$ for all $b \in B$.

Unfortunately, that doesn't work. Here's a "toy example" to show why:

Let $A = \{a,c\}$, and $B = \{b,c\}$ with $X = \{x,y\}$.

We can make the two functions:

$f:A \to X$ with $f(a) = x$ and $f(c) = y$,
$g:B \to X$ with $g(b) = y$ and $g(c) = x$.

We need to find a function $\psi:A \cup B \to X$ with:

$\psi \circ \iota_1 = f$
$\psi \circ \iota_2 = g$.

Now the element $a$ of $A\cup B$ poses no problem: we want $\psi(a) = \psi(\iota_1(a)) = f(a) = x$.

Similarly, the element $b$ poses no problem, we must have: $\psi(b) = \psi(\iota_2(b)) = g(b) = y$.

However, with $c$, we have a problem: on one hand, we want:

$\psi(c) = \psi(\iota_1(c)) = f(c) = y$, and on the other, we want:

$\psi(c) = \psi(\iota_2(c)) = g(c) = x$.

The problem stems from the fact that $c \in A \cap B$, and so we don't know "which $\iota$" to use.

There is, however, an easy fix to this: instead of using $A \cup B$, we "tag" each set $A,B$ like so, we take:

$(A \times \{1\}) \cup (B \times \{2\})$.

So $A+B = \{(a,1),(c,1),(b,2),(c,2)\}$.

Then we can set:

$\psi((a,1)) = x$
$\psi((c,1)) = y$
$\psi((b,2)) = y$
$\psi((c,2)) = x$, and everything works out as desired.

This construction is called the disjoint union, and turns out to be the desired set $A+B$ for any sets $A,B$.
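
Here is the toy example carried out with tags, as a Python sketch (my own code, mirroring the construction above):

```python
A, B, X = {"a", "c"}, {"b", "c"}, {"x", "y"}
f = {"a": "x", "c": "y"}   # f: A -> X
g = {"b": "y", "c": "x"}   # g: B -> X

# Tag each set before taking the union, so the two copies of c stay apart.
A_plus_B = {(a, 1) for a in A} | {(b, 2) for b in B}

def psi(p):
    elem, tag = p
    return f[elem] if tag == 1 else g[elem]

print(sorted(A_plus_B))               # [('a', 1), ('b', 2), ('c', 1), ('c', 2)]
print(psi(("c", 1)), psi(("c", 2)))   # y x -- no conflict anymore
```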

Note that for finite sets:

$|A \times B| = |A|\cdot|B|$
$|A + B| = |A| + |B|$

so that these are the "set-theoretic" versions of multiplication and addition of natural numbers.
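
A quick check of the counting identities, overlap and all (a sketch of my own):

```python
A, B = {"a", "c"}, {"b", "c"}
product = {(a, b) for a in A for b in B}
disjoint_union = {(a, 1) for a in A} | {(b, 2) for b in B}
assert len(product) == len(A) * len(B)         # 4 = 2 * 2
assert len(disjoint_union) == len(A) + len(B)  # 4 = 2 + 2
# Note len(A | B) == 3, which is why the plain union is the wrong model.
```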

For most algebraic objects, the "product" (which uses the same underlying CONSTRUCTION as the Cartesian product, except that the projection maps are now homomorphisms) is a direct product.

The construction with the arrows reversed is called a "co-product", and these can get hairy, depending on the structure. For example, the co-product in groups is somewhat intractable, and is called the "free product". In "nice" structures (technically, in abelian categories) the product and co-product coincide for finitely-indexed families, and the common object is called a "biproduct".
 

FAQ: Direct Products and External Direct Sums

1. What is the difference between a direct product and an external direct sum?

For finitely many factors there is no difference: both constructions have the Cartesian product as underlying set, with the operations defined componentwise. The difference appears with infinitely many factors: the direct product allows arbitrary entries in every coordinate, while the external direct sum consists only of those elements with finitely many non-zero coordinates. The external direct sum is thus a submodule of the direct product, and a proper one when infinitely many factors are non-zero.

2. How is the direct product related to the direct sum?

The direct sum sits inside the direct product as the submodule of finitely supported elements, and the two coincide for finitely many factors. Categorically, they are dual: the direct product satisfies a universal property for maps INTO it (via the canonical projections), while the direct sum, as a co-product, satisfies the dual property for maps OUT of it (via the canonical inclusions).

3. What is the significance of the direct product and external direct sum in mathematics?

The direct product and external direct sum are important concepts in many areas of mathematics, including group theory, ring theory, and linear algebra. They provide a way to combine different structures and create new ones, allowing for a deeper understanding of these structures and their properties. The direct product and external direct sum also have applications in physics and engineering, particularly in the study of symmetry and vector spaces.

4. How do I compute the direct product and external direct sum of two structures?

To form the direct product of a family of structures (groups, rings, modules, vector spaces), take the Cartesian product of their underlying sets and define the operations componentwise; for two groups, for instance, this is the set of ordered pairs with componentwise multiplication. To form the external direct sum, take the subset of the direct product consisting of elements with all but finitely many coordinates zero; for finitely many factors this subset is the whole product.

5. Are there any real-world applications of the direct product and external direct sum?

Yes, there are many real-world applications of the direct product and external direct sum. In physics, direct products of groups describe the combined symmetry of independent systems, and direct sum decompositions split a space of states or signals into independent components. These concepts also appear in computer science, particularly in the design of algorithms and data structures that combine independent pieces of data.
