Paul E Bland's "Direct Product of Modules" Definition - Category-Oriented

In summary: Bland defines the direct product of a family of modules as a Cartesian product equipped with componentwise addition and scalar multiplication, but he introduces it via a universal mapping property (Proposition 2.1.1) that characterizes the product in terms of homomorphisms rather than elements. In this way, the construction is not something that is found, but something that is *made*, as one becomes more and more familiar with the underlying structure; rings and their modules are a good example of a category that arises naturally in the study of mathematics. The thread also reproduces the section from Chapter 0 where Bland explains the basics of his notation.
  • #1
Math Amateur
I am reading Paul E. Bland's book: Rings and Their Modules and am currently focused on Section 2.1 Direct Products and Direct Sums ... ...

I am trying to fully understand Bland's definition of a direct product, the motivation for the definition, and the implications of the definition ... but I am finding it a real struggle ... :confused:

Bland uses Proposition 2.1.1 to define the direct product \(\displaystyle \prod_\Delta M_\alpha\) ...

I will provide the text of Proposition 2.1.1 in this post ... and to give MHB readers the full context and notation, I will also provide the following (at the end of the post):

* Bland's brief introductory section on direct products

* Bland's actual definition based on Proposition 2.1.1

* a previous section from Chapter 0 where Bland explains the basics of his notation

The text of Proposition 2.1.1 reads as follows:
View attachment 4892

I find Proposition 2.1.1 (on which the definition of a direct product is based - see the actual definition below) a rather puzzling and obscurely abstract way of defining a direct product ... I even find it difficult to frame questions about it, but here are some questions coming out of a state of perplexity ... :confused:
Questions

(i) What is the point of defining the direct product \(\displaystyle \prod_\Delta M_\alpha\) this way?

(ii) Why is it important that a map \(\displaystyle f\) exist from every \(\displaystyle R\)-module \(\displaystyle N\)?

(iii) Is the "every \(\displaystyle R\)-module \(\displaystyle N\)" part of the definition the reason that this property is sometimes referred to as the Universal Mapping Property?

(iv) Proposition 2.1.1 uses the symbol \(\displaystyle \prod_\Delta M_\alpha\), suggesting a family of modules \(\displaystyle \{ M_\alpha \}\) together with a Cartesian product plus componentwise addition and scalar multiplication defined as

\(\displaystyle ( x_\alpha ) + ( y_\alpha ) = ( x_\alpha + y_\alpha )\)

and

\(\displaystyle ( x_\alpha ) a = ( x_\alpha a )\)

With reference to the above ... my question is as follows:

Do we regard the symbol \(\displaystyle \prod_\Delta M_\alpha\) as an object with no preliminary structure (which we could have simply called \(\displaystyle P\))? Or do we regard \(\displaystyle \prod_\Delta M_\alpha\) as the Cartesian product of a family of modules, made into an \(\displaystyle R\)-module by componentwise addition and scalar multiplication? In other words, does \(\displaystyle \prod_\Delta M_\alpha\) go into the proposition with the structure I have just described, or does the structure come out of Proposition 2.1.1?

===========================================================

Bland's actual definition of a direct product of a family of modules (based on Proposition 2.1.1) reads as follows:
View attachment 4893

Bland's introduction to direct products reads as follows:

View attachment 4894

Bland's explanation of his notation in Chapter 0 reads as follows:

View attachment 4895

Any help with the above issues will be very much appreciated ... ...

Peter
 
  • #2
Algebra is a story. It's a story about structure. The story has its roots in trying to solve numerical problems like: I had 5 oranges yesterday, and I ate 2 this morning, how many are left?

Well, one could go to the pantry and count the remaining oranges, but if one has command of simple arithmetic, deductive reasoning eventually leads to the conclusion there are 3 oranges remaining. It eventually becomes clear that the "hero" of this story is not the oranges themselves, but the arithmetical system.

So it is with more involved structures, all of which had their humble origins in solving various practical mysteries. Eventually, this became a game humans played divorced from the applications of its methods to actual dilemmas, studying *just* the consequences of rules which had a great deal of generality (they are abstract).

Often, the rules of this game begin:

Let $S$ be a set, together with some operations...

It starts to get interesting when we consider "structure-preserving functions", for example, if we have a binary operation:

$\ast: S \times S \to S$, we are often interested in mappings $f: S \to T$, where $T$ has a similar kind of operation $\ast '$, and $f(s_1\ast s_2) = f(s_1) \ast' f(s_2)$. The existence of one (or more) such $f$ often indicates a hitherto obscured *relationship* between $S$ and $T$, for example, $T$ may contain a "copy" of something like $S$ within it, or $T$ may be a kind of "condensed" version of $S$, with certain information "weeded out".

It turns out that the mappings $f$ are "where the action is", in terms of understanding the internal structure of $S$ and $T$. For example, when studying prime factorization of integers, the chief weapon used in assaults on various problems is *divisibility*. The relation $a\sim b$ if and only if $a|b$ is an order relation (on $\Bbb Z$), and such questions can be illuminated by studying "order-preserving functions".

Category theory seeks to strip away the "particular-ness" of structural rules, and expose the general "meta-patterns" that underlie our abstract structural systems. One way of accomplishing this, is by shifting the focus away from algebraic objects, and instead looking at algebraic mappings.

These mappings have different names in different "arenas". For example, in topology the "structure-preserving maps" are the continuous functions, in studying partially-ordered sets, the mappings are order-preserving maps, in linear algebra, they are linear transformations, in groups, they are homomorphisms.

The "meta-constructions" of product (which in many categories *is* the direct product) and co-product (which in certain "nice" categories is the direct sum) are possible for "many arenas at once" and all one has to do is change the type of objects the mappings go between, and the kind of mappings we take this to be. This is a natural extension of the process we began when we replaced counting with arithmetic, and then arithmetic with the study of abelian groups or groups in general, or more involved structures such as rings, $R$-modules, polynomials, $k$-algebras, and the like.

The key idea is that in the case of $R$-modules, we have replaced the "specific" construction of the direct product as $M_1 \times M_2$ with a certain definition of the module operations by a "mapping property" involving certain $R$-linear maps (the "projection" mappings), which does not *require* us to look at ELEMENTS of our modules anymore, but just at the module homomorphisms. It's an "added layer of sophistication" which has the advantage of being PORTABLE (we can make the same construction in other structures, just by "changing the names"). I hope this gives a partial answer to point (i).

The answer to (ii) is, as you surmise, that any family of maps $f_{\alpha}: N \to M_{\alpha}$, for any $N$ "factors through the projections":

$f_{\alpha} = \pi_{\alpha} \circ f$

It is a worthwhile exercise to convince yourself that the usual direct product $M_1 \times M_2$ together with the usual projection maps $\pi_1(m_1,m_2) = m_1$ and $\pi_2(m_1,m_2) = m_2$ does indeed have this property: given $f_1: N \to M_1$ and $f_2:N \to M_2$, we take $f(n)$ to be $(f_1(n),f_2(n))$ (is it even possible that any other definition for $f$ will work? That is, is $f$, so defined, unique?).
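As a quick computational sanity check (a sketch of my own, with $f_1$ and $f_2$ arbitrarily chosen $\Bbb Z$-linear maps, i.e. maps of abelian groups; none of these particular choices are from Bland), one can verify the factorization on sample elements:

```python
# Sketch: N = M1 = M2 = Z, viewed as Z-modules (abelian groups).
# f1, f2 are arbitrarily chosen Z-linear maps (my own examples);
# f(n) = (f1(n), f2(n)) is the induced map N -> M1 x M2.

def f1(n): return 3 * n          # a Z-linear map N -> M1
def f2(n): return -n             # a Z-linear map N -> M2

def pi1(p): return p[0]          # projection M1 x M2 -> M1
def pi2(p): return p[1]          # projection M1 x M2 -> M2

def f(n): return (f1(n), f2(n))  # the induced map N -> M1 x M2

# check f_alpha = pi_alpha . f on sample elements of N
for n in range(-5, 6):
    assert pi1(f(n)) == f1(n)
    assert pi2(f(n)) == f2(n)

print("factorization verified")  # prints "factorization verified"
```

Uniqueness falls out the same way: any $g$ with $\pi_1 \circ g = f_1$ and $\pi_2 \circ g = f_2$ must send $n$ to a pair whose coordinates are $f_1(n)$ and $f_2(n)$, so $g = f$.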

Given any two $R$-modules $M_1, M_2$, we might decide to use $N = M_1$ with $f_1(n) = f_1(m_1) = m_1$, which is certainly an $R$-module homomorphism (the identity map, in fact). And we can always define $f_2(n) = f_2(m_1) = 0_{M_2}$ (the trivial homomorphism, or $0$-map). What happens if we do a similar construction with $N = M_2$? Can you use this to define a short exact sequence:

$0 \to M_1 \to M_1 \times M_2 \to M_2 \to 0$?
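The exactness of such a sequence can also be checked by brute force on a small example (a sketch of my own, taking $M_1 = M_2 = \Bbb Z/3$ represented on the underlying set $\{0, 1, 2\}$, with the inclusion and projection maps named `i` and `p` for illustration):

```python
# Sketch: check 0 -> M1 -i-> M1 x M2 -p-> M2 -> 0 is exact for
# M1 = M2 = Z/3 (underlying set {0, 1, 2}), with i(m1) = (m1, 0)
# and p((m1, m2)) = m2.

M1 = M2 = range(3)

def i(m1): return (m1, 0)      # inclusion into the first factor
def p(pair): return pair[1]    # projection onto the second factor

product = [(a, b) for a in M1 for b in M2]

image_i = {i(m) for m in M1}
kernel_p = {x for x in product if p(x) == 0}

assert len(image_i) == len(set(M1))        # i is injective
assert {p(x) for x in product} == set(M2)  # p is surjective
assert image_i == kernel_p                 # exactness at M1 x M2
print("exact")
```

The middle assertion, image of the inclusion equals kernel of the projection, is the defining condition of exactness at $M_1 \times M_2$.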

There are a few reasons why we might prefer this "more general" construction-

1. The simplest reason is we might "re-arrange" the indices, but this shouldn't change the structure. For example, when labeling the axes in the Euclidean plane, we might decide the $y$-coordinate goes first, but this isn't really anything more than a convention (like writing left-to-right), and the algebra stays "pretty much the same". The isomorphism involved here should be obvious.

2. It is possible to have an "internal direct product" (also called a direct product decomposition), and an "external direct product". These "look" different (in one we have "ordinary sums" $m_1 + m_2$, in the other pairs $(m_1,m_2)$ ), but they act so much the same it really doesn't do much for the theory to carefully distinguish between them.

3. Isomorphic structures, in general, don't give us "any different information", it's just packaged differently. If you work in $\Bbb R \times \Bbb R$, and I work in $\Bbb R^2$, there's going to be a 1-1 correspondence between everything you do, and everything I do. That "your" $x$ and $y$ live in "different universes, and never talk to each other" whereas mine live "under the same roof" turns out not to matter that much.

4. In proofs, it's better to rely on *properties* of things, rather than drawn-out explicit computations. It makes for easier reading.

In any case, the caveat that this mapping property holds for an arbitrary module $N$ and a family of maps $N \to M_{\alpha}$ allows us to characterize the direct product *solely in terms of mappings*, and by judiciously choosing what $N$ may be, allows us to derive the usual familiar properties we're used to. Direct products aren't alone in this respect: kernels, quotients and free objects can also be characterized by universal mapping properties (the "canonical mappings" these UMPs depend on are, respectively, the kernel injection ("inclusion"), the canonical projection onto the quotient, and the injection of generators).

To answer (iv), the correct interpretation is to regard $\Pi_{\alpha} M_{\alpha}$ as just some $R$-module $P$, the reason being that the Cartesian product "explicit construction" is just one of many possible "direct products" (all of which are isomorphic). The abstract characterization here is only given "up to unique isomorphism". In general, it is non-constructive: Bland's description does not even assure us that a direct product as he defines it *exists*, whereas building one "from the ground up" answers *that* issue, but does not tell us immediately how it behaves with respect to homomorphisms (so, in many texts, a definition of direct product is then followed by several simple theorems about it).
 
  • #3
Deveno said:
Algebra is a story. It's a story about structure. ...
Hi Deveno,

Thanks so much for the extensive help/tutorial ...

I very much appreciate it ...

Now working through your post carefully ... reflecting on what you have said ...

Thanks again ...

Peter
 

FAQ: Paul E Bland's "Direct Product of Modules" Definition - Category-Oriented

What is a direct product of modules?

A direct product of modules is a construction in the category of modules, which are algebraic structures generalizing vector spaces. It is a way of combining a family of modules to create a new module.

What is the definition of a direct product of modules?

The definition of a direct product of modules is a module M that is constructed from a family of modules {M_i}, for i in some index set I, such that every element of M is a tuple (m_i) with m_i in M_i. The operations of addition and scalar multiplication are defined componentwise, meaning that (m_i) + (n_i) = (m_i + n_i) and r(m_i) = (rm_i) for all m_i, n_i in M_i and r in the ring R.
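For concreteness, the componentwise operations can be sketched in Python (a minimal illustration of this answer; representing a tuple (m_i) as a dict from indices to components is an assumption of the sketch, not standard notation):

```python
# Sketch: elements of a direct product represented as dicts mapping
# each index i to the component m_i.  Here every M_i is Z, so the
# ring of scalars is the integers.

def add(x, y):
    """Componentwise addition: (m_i) + (n_i) = (m_i + n_i)."""
    return {i: x[i] + y[i] for i in x}

def scale(r, x):
    """Componentwise scalar multiplication: r(m_i) = (r m_i)."""
    return {i: r * x[i] for i in x}

x = {"a": 1, "b": 2, "c": 3}
y = {"a": 4, "b": 5, "c": 6}

print(add(x, y))    # {'a': 5, 'b': 7, 'c': 9}
print(scale(2, x))  # {'a': 2, 'b': 4, 'c': 6}
```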

How is a direct product of modules different from a direct sum?

A direct product and a direct sum are two different ways of combining a family of modules. In a direct sum, every element has only finitely many nonzero components, whereas an element of a direct product may have infinitely many nonzero components. For a finite family of modules, the two constructions coincide.
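To illustrate the finite-support condition (a sketch of my own; storing direct-sum elements as finite dicts, with absent indices understood as 0, is an assumption of the example):

```python
# Sketch: index set = the natural numbers, each M_i = Z.
# A direct-PRODUCT element is any function i -> m_i and may be
# nonzero in every coordinate; a direct-SUM element has finite
# support, so it can be stored as a finite dict (absent index = 0).

product_elt = lambda i: i + 1  # nonzero in EVERY coordinate: in the product, not the sum
sum_elt = {0: 5, 7: -2}        # nonzero only at indices 0 and 7: in the sum

def add_sum(x, y):
    """Componentwise sum of two finite-support elements; the result
    again has finite support, so the direct sum is closed under +."""
    total = {i: x.get(i, 0) + y.get(i, 0) for i in sorted(set(x) | set(y))}
    return {i: v for i, v in total.items() if v != 0}

print(add_sum({0: 5, 7: -2}, {0: -5, 3: 1}))  # {3: 1, 7: -2}
print([product_elt(i) for i in range(5)])     # [1, 2, 3, 4, 5]
```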

What are some examples of direct products of modules?

Some examples include the direct product of vector spaces (for instance, R^n as a product of n copies of R); analogous product constructions exist for rings and for groups. In each case, the direct product combines the underlying sets and operations of the individual structures to create a new structure.

Why is the direct product of modules important in mathematics?

The direct product of modules is important in mathematics because it allows for the creation of new structures from existing ones. This concept is used in many areas of mathematics, including algebra, topology, and functional analysis. It also has applications in other fields such as physics and computer science.
