Localization in Commutative Ring Theory

In summary: for $R = \Bbb Z_6$ and $T = \{1,3\}$ one might naively expect $T^{-1}R$ to contain the six distinct "fractions" 0/1, 1/1, 2/1, 3/1, 4/1, 5/1. But this is not what happens! Because 3 is a zero divisor, the fractions collapse and $T^{-1}R$ actually contains only two elements, 0/1 and 1/1, so $T^{-1}R \cong \Bbb Z_2$.
  • #1
Math Amateur
I am reading Watson: Topics in Commutative Ring Theory.

In Ch. 3: Localization, Watson defines the quotient field of an integral domain as follows:

--------------------------------------------------------------------------------------------------

We begin by defining an equivalence relation on an integral domain D. Let D be an integral domain. We define an equivalence relation on the set S of "fractions" using elements of D,

S = {a/b | a, b \(\displaystyle \in \) D , \(\displaystyle b \ne 0 \)} ,

by a/b \(\displaystyle \cong \) c/d if and only if ad = bc

... ...

we turn the set of equivalence classes into a ring (actually a field) by defining addition and multiplication as follows:

[a/b] + [c/d] = [(ad + bc)/bd]

and

[a/b] . [c/d] = [ac/bd].

(***Note that the right hand sides of these expressions make sense because D is a domain and so \(\displaystyle bd \ne 0 \) ***)

In this way, Watson has explained his Definition 6.3 which reads as follows:

Definition 6.3 Let D be an integral domain. The above field F of equivalence classes of fractions from D, with addition and multiplication defined as above, is called the quotient field of D.
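To make the construction concrete, here is a minimal Python sketch (my own illustration, with helper names of my choosing) of Definition 6.3 for the simplest domain, D = \(\displaystyle \mathbb{Z} \): fractions are pairs (a, b) with b nonzero, and a/b \(\displaystyle \cong \) c/d exactly when ad = bc.

[CODE=Python]
# A "fraction" over D = Z is a pair (a, b) of integers with b != 0.

def equiv(f, g):
    """a/b ~ c/d  iff  a*d == b*c  (Watson's relation for an integral domain)."""
    (a, b), (c, d) = f, g
    return a * d == b * c

def add(f, g):
    (a, b), (c, d) = f, g
    return (a * d + b * c, b * d)   # b*d != 0 because Z has no zero divisors

def mul(f, g):
    (a, b), (c, d) = f, g
    return (a * c, b * d)

# Sanity checks on a few representatives:
assert equiv((1, 2), (3, 6))                            # 1/2 ~ 3/6
assert equiv(add((1, 2), (1, 3)), add((3, 6), (2, 6)))  # both sums represent 5/6
assert equiv(mul((1, 2), (2, 3)), (1, 3))               # (1/2)(2/3) ~ 1/3
[/CODE]

The equivalence classes here are just the usual rational numbers, so this recovers \(\displaystyle \mathbb{Q} \) as the quotient field of \(\displaystyle \mathbb{Z} \).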

Watson then generalises the above process from an integral domain to a ring R by restricting the denominators to regular elements (i.e. elements that are not zero divisors). This results in the formation of the total quotient ring, defined as follows:

Definition 6.5 Let R be a ring. The ring Q(R) of equivalence classes of fractions from R whose denominators are regular elements, with addition and multiplication defined as above, is called the total quotient ring of R.

Watson then generalises the process further in a process called "localization", where now denominators are allowed to be any element of a multiplicative system (or multiplicatively closed set), defined as follows:

Definition 6.6 Let R be a ring. A subset T of R is a multiplicative system if \(\displaystyle 1 \in T \) and if \(\displaystyle a, b \in T \) implies that \(\displaystyle ab \in T \) - that is, T is multiplicatively closed and contains 1.

Watson then writes:

If T is a multiplicative system of a ring R, then an equivalence relation can be defined on an appropriate set S of "fractions" using elements of R

S = {a/b | a,b \(\displaystyle \in \) R, and b \(\displaystyle \in \) T}

and

a/b \(\displaystyle \cong \) c/d if and only if \(\displaystyle t(ad - bc) = 0 \)

for some \(\displaystyle t \in T \) ... ...

Watson argues that this also results in a ring ... but my problem with this construction of a ring of fractions is that the original ring R is itself a multiplicative system, so one possibility is that T = R. But then how do we avoid the problem of zero divisors? That is, in the formulas for addition and multiplication we may end up with bd = 0 even though b and d are both nonzero.

Can someone please clarify this issue for me?

Peter
 
  • #2
The answer is simpler than you may suppose:

If $T$ is such a subset, then the resulting ring is denoted $T^{-1}R$ (usually S is used instead of T).

We have the following results:

$0 \in T \implies T^{-1}R$ is trivial.

the associated ring homomorphism:

$f: R \to T^{-1}R$ given by:$f(r) = (rt)/t$ (which is the equivalence class of (rt,t))

is injective if and only if $T$ does not contain any zero-divisors.

(if $1 \in T$, we can simplify this to $r \mapsto r/1$ ... note that in the definition given above, it doesn't matter "which" $t$ we use, since for any $t, t' \in T$ and any element $t_0 \in T$, we have:

$t_0(rt \cdot t' - rt' \cdot t) = t_0 \cdot 0 = 0$, which shows that: $rt/t \sim rt'/t'$).

If $0 \in T$, $f$ is really, really non-injective, because using $t = 0$, we get:

$a/b \sim c/d$ for ANY $a,b,c,d \in R$.

In other words, the "niceness" of $T$ provides some sort of measure as to how well we can "preserve" $R$ in the localization. If $T$ doesn't contain any zero-divisors, then $R$ embeds in the localization; otherwise we just get some quotient ring of $R$ instead (which might be the trivial quotient).

An interesting example:

Let $R = \Bbb Z$, and let $T$ be the set of nonzero elements of the ideal $(2)$, i.e. the nonzero even integers (we must leave $0$ out, or by the first result above $T^{-1}R$ would be trivial). Note that $1 \not \in T$, so we can't embed $\Bbb Z$ in this localization by sending $k \to k/1$, but we CAN embed it as the set:

$\{2k/2 : k \in \Bbb Z\}$

(Note that we have $2k/2 \sim (2km)/2m$ for any integer $m$).

One can think of this as the "subring of $\Bbb Q$ consisting of fractions that can be written with an even denominator" (one can use the nonzero elements of any ideal in place of (2), can you see this?).

Another interesting example:

Let $R = \Bbb Z_6$. We have the multiplicatively closed subset:

$T = \{1,3\}$

What does $T^{-1}R$ look like?

Well, naively, we have 12 possible "fractions":

0/1, 1/1, 2/1, 3/1, 4/1, 5/1, 0/3, 1/3, 2/3, 3/3, 4/3, 5/3. Since $T$ contains a zero-divisor (namely 3), we should expect that some of the first 6 (the image of $R$) are equal.

Now a/1 = 0/1 if there is some b in T with: b(a - 0) = ba = 0.

Thus:

0/1 = 2/1, since 3(2) = 0.

On the other hand, we see 1/1 is not equal to 0/1.

a/1 = 1/1 whenever 3(a - 1) = 0, so we see that:

1/1 = 3/1.

Finally, note that 4/1 = 0/1 (since 3(4-0) = 0), and:

5/1 = 1/1 (since 3(5-1) = 0). So the image of $R$ is:

{0/1,1/1}.

What about the other 6 fractions?

Clearly, 0/3 = 0/1. Just as clearly, 3/3 = 1/1.

Now 1/3 = 1/1, since 3(3-1) = 0.

2/3 = 0/1, since 3(2-0) = 0.

4/3 = 0/1, since 3(4-0) = 0, and finally,

5/3 = 1/1, since 3(5-3) = 0.

So we have just two elements in $T^{-1}R$, namely:

{0/1,1/1}.

What does the multiplication in this ring look like?

(0/1)*(0/1) = 0/1
(0/1)*(1/1) = 0/1
(1/1)*(1/1) = 1/1

What about addition (which is the hard part)?

(0/1) + (0/1) = (1*0 + 0*1)/(1*1) = (0+0)/1 = 0/1
(0/1) + (1/1) = (1*0 + 1*1)/(1*1) = (0+1)/1 = 1/1
(1/1) + (1/1) = (1*1 + 1*1)/(1*1) = (1+1)/1 = 2/1 = 0/1.

We could use "other forms" for these:

(3/1) + (3/3) = (3*3 + 1*3)/(1*3) = (3 + 3)/3 = 0/3 = 0/1, or:

(2/3) + (5/1) = (1*2 + 3*5)/(3*1) = (2 + 3)/3 = 5/3 = 1/1.

So in this case, it turns out that $T^{-1}R \cong \Bbb Z_2$

In general, we can take any ring $R$ and any prime ideal $P$, and consider:

$T = R - P$

This is called the localization of $R$ AT $P$.

A rather nifty example is afforded by inverting the powers of 2 in the integers (the multiplicative set generated by 2). In this case we get the dyadic fractions, or as we more commonly know them: "ruler numbers".
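If it helps to see the collapse in the $\Bbb Z_6$ example concretely, here is a small Python sketch (my own check, with helper names of my choosing) that enumerates the fractions $a/b$ with $a \in \Bbb Z_6$, $b \in T = \{1,3\}$ and groups them under the relation "$t(ad - bc) = 0$ for some $t \in T$"; it finds exactly two classes, matching the computation above.

[CODE=Python]
from itertools import product

n, T = 6, [1, 3]                      # R = Z_6, multiplicative system T = {1, 3}

def equiv(f, g):
    """(a,b) ~ (c,d)  iff  t(ad - bc) = 0 in Z_n for some t in T."""
    (a, b), (c, d) = f, g
    return any(t * (a * d - b * c) % n == 0 for t in T)

classes = []                          # group the 12 fractions into classes
for frac in product(range(n), T):
    for cls in classes:
        if equiv(frac, cls[0]):
            cls.append(frac)
            break
    else:
        classes.append([frac])

print(len(classes))                   # 2  ->  T^{-1}R has two elements
print([cls[0] for cls in classes])    # representatives, e.g. (0, 1) and (1, 1)
[/CODE]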
 
  • #3
Deveno said:
[quotes post #2 in full]

Thanks for the extensive help Deveno.

Just working carefully through your post now

Peter
 
  • #4
Deveno said:
[quotes post #2 in full]
Hi Deveno,

I have been reflecting on your post but am still struggling to understand the issue of zero divisors in the denominators of "fractions" in the localization process.

I will try to explain, this time not using Watson's notation, but using the notation of Dummit and Foote's Section 15.4, pages 706 - 730.

On page 706 of D&F we find the following:

"Let D be a multiplicatively closed subset of R containing 1 (i.e. \(\displaystyle 1 \in D \) and \(\displaystyle ab \in D \) if \(\displaystyle a, b \in D \))

We construct a new ring \(\displaystyle D^{-1} R \) which is the "smallest" ring in which the elements of D become units. (? question - how exactly do the elements of D become units ?) This generalises the construction of rings of fractions by allowing D to contain zero or zero divisors, and so in this case R need not embed as a subring of \(\displaystyle D^{-1} R \)." (? but how exactly does it allow D to contain zero or zero divisors ?)

Then subsequently, on page 707, D&F define an equivalence relation on R x D by the following:

\(\displaystyle (r,d) \sim (s,e) \) if and only if \(\displaystyle x(er - ds) = 0 \) for some \(\displaystyle x \in D \). ... ...

... ...

Let r/d denote the equivalence class of (r, d) under \(\displaystyle \sim \) and let \(\displaystyle D^{-1} R \) be the set of these equivalence classes. Define addition and multiplication in \(\displaystyle D^{-1} R \) by

a/b + c/d = (ad + bc)/bd

and

a/b x c/d = ac/bd

... ... "

Now my question/problem is this: How does the above construction ensure that bd is not zero? It seems to me that b and d could be zero divisors in R and hence give us a denominator of 0?

Your example concerning \(\displaystyle \mathbb{Z}_6 \) gives me an idea of how this may work, but I do not fully understand the situation.

Can you help clarify the above issue?

Peter
 
  • #5
The case where $D$ has no zero-divisors is a SPECIAL CASE. IF this is so, then the equation:

$x(er - ds) = 0 \implies er - ds = 0$

(or else $x \in D$ is a zero divisor).

So in this *special case*, we get that:

$r \mapsto r/1 = [(r,1)]$ is a monomorphism. To see this, suppose:

$r/1 = r'/1$ (that is: $(r,1) \sim (r',1)$).

By the definition of the equivalence, this means:

$x(r1 - r'1) = 0$, and since $D$ has no zero divisors, $r - r' = 0$,

that is: $r = r'$, so the mapping $r \mapsto r/1$ is injective.

On the other hand, suppose $D$ contains SOME zero divisor, say $z$.

Then there exists SOME nonzero $s \in R$ with $zs = 0$.

Clearly, for any $r \in R$, we have:

$(r+s)/1 = r/1$, since:

$z((r+s) - r) = zs = 0$ and it is evident that $r+s \neq r$, which shows that if $D$ contains ANY zero divisor at all, the mapping $r \mapsto r/1$ is NOT injective, because $r,r+s$ both map to the same element of $D^{-1}R$.

Now, it may be that $0 \in D$. If this is so, then the only "fraction with 0 denominator" that makes sense is $0/0 = 0 = 1$, and indeed, it turns out that $D^{-1}R$ is trivial in this case.

BUT...if $D$ contains zero divisors, but not 0, it doesn't contain "all" zero divisors.

For example, if $0 \not \in D$ but $a \in D$ is a zero divisor, with $ab = 0$, then we cannot have $b \in D$, since otherwise the closure of $D$ under multiplication would imply $0 = ab \in D$.

In practice, $D$ is usually chosen to contain no zero divisors at all (and in particular not $0$); a trivial quotient of $R$ isn't all that useful.

Examine closely what happens to 3 in my example of $\Bbb Z_6$. Is its image a unit in $D^{-1}R$?
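A quick numerical check (my own sketch, reusing the relation above for $R = \Bbb Z_6$ and $D = \{1,3\}$) confirms both points: the map $r \mapsto r/1$ is not injective, and yet the ring is not trivial; the last assertion shows that the image of 3 is indeed a unit.

[CODE=Python]
n, D = 6, [1, 3]          # R = Z_6, multiplicative system D = {1, 3}

def equiv(f, g):
    """(a,b) ~ (c,d) iff t(ad - bc) = 0 in Z_n for some t in D."""
    (a, b), (c, d) = f, g
    return any(t * (a * d - b * c) % n == 0 for t in D)

def mul(f, g):            # (a/b)(c/d) = ac/bd, computed in Z_n
    (a, b), (c, d) = f, g
    return (a * c % n, b * d % n)

assert equiv((0, 1), (2, 1)) and equiv((0, 1), (4, 1))  # r -> r/1 is NOT injective
assert not equiv((0, 1), (1, 1))                        # but the ring is not trivial
assert equiv(mul((3, 1), (1, 3)), (1, 1))               # 3/1 * 1/3 ~ 1/1: 3 becomes a unit
[/CODE]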
 
  • #6
Sorry for the double-post, but I thought I would try to explain in more basic terms what is going on:

We start with two objects, $R$ and $S$.

$R$ is a (commutative) ring, and $S$ is a subset of $R$ that is a semi-group under multiplication (or, if $S$ contains unity, a monoid).

Our goal is to make $R \times S$ into a ring.

Multiplication poses no problem, we can just set:

$(r,s)\ast(r',s') = (rr',ss')$ using the normal operation of $R$.

Addition is the problem: if we tried the componentwise definition $(r,s) + (r',s') = (r+r',s+s')$, we would have no guarantee that $s+s'$ is even in $S$. To see "how to get around this" let's look at how we add fractions in the rationals:

To add $\frac{a}{b} + \frac{c}{d}$, we "find a common denominator". Now the smallest denominator that will work is lcm(b,d), but if b and d are co-prime, this will be $bd$, and in any case, $bd$ will always work.

Now to "convert" $\frac{a}{b}$ to a fraction in terms of $bd$ in the denominator, we multiply by $\frac{d}{d}$, which we already know how to do (see above).

And this shows what is missing in our naive attempt to just make $R \times S$ into a ring: namely, we feel that "d of d parts" should be unity. In other words:

$\frac{d}{d} \sim 1$ for ANY $d$. So we need to make an equivalence relation on $R \times S$ that lets us "cancel things in $S$".

Often, the equivalence relation:

$(r,s) \sim (r',s') \iff rs' - r's = 0$ will work. Let's see what goes wrong if $S$ contains a zero divisor, $z$.

Clearly, we always have:

$(r,s) \sim (r,s)$, since $rs - rs = 0$.

And if $(r,s) \sim (r',s')$, it is easily seen that $(r',s') \sim (r,s)$.

Now, suppose:

$(r,s) \sim (r',s')$ and $(r',s') \sim (r'',s'')$.

For us to be able to derive $(r,s) \sim (r'',s'')$, we need to derive:

$rs'' - r''s = 0$ from

$rs' - r's = 0$ and $r's'' - r''s' = 0$.

Now we of course have:

$rs's'' = r'ss'' = sr's'' = sr''s' = r''s's$

That is: $s'(rs'' - r''s) = 0$.

If we could "cancel" the $s'$, we'd be there...but...if $s' = z$ our zero divisor, we have a problem. We can't necessarily conclude $rs'' - r''s = 0$.

Of course, if $S$ HAS no zero-divisors, this problem goes away. But $S$ might have zero divisors, so we need a way around this, somehow.

So, the idea is, just build the problem into our equivalence relation:

We say that:

$(r,s) \sim (r',s') \iff t(rs' - r's) = 0$ for some $t \in S$.

Clearly, reflexivity and symmetry hold just as before. Now to establish transitivity:

If, $(r,s) \sim (r',s')$ and $(r',s') \sim (r'',s'')$, we have:

$t(rs') = t(r's)$ and $t'(r's'') = t'(r''s')$, for some $t,t' \in S$.

Thus:

$(tt's')(rs'') = (trs')(t's'') = (tr's)(t's'') = (ts)(t'r's'') = (ts)(t'r''s') = (tt's')(r''s)$

so that:

$(tt's')(rs'' - r''s) = 0$. Note that $tt's' \in S$ (by closure).

and $(r,s) \sim (r'',s'')$, as desired.

Having established the "proper" kind of equivalence on $R \times S$, we can now define addition:

$r/s + r'/s' = (rs' + r's)/(ss')$.

There is one more caveat: since we've defined operations on equivalence classes using representatives of that class, we need to ensure these operations "respect" the equivalence. That is:

If $(r,s) \sim (r',s')$ and $(u,v) \sim (u',v')$, we need to show that:

$r/s \ast u/v = r'/s'\ast u'/v'$ and:

$r/s + u/v = r'/s' + u'/v'$

(in other words, that these operations are "well-defined" and only depend on the equivalence class, not the representatives).

This is, by and large, a (somewhat tedious) "mechanical process" of just working through the definitions, which I leave to the reader.
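For a single small example, the "mechanical process" can even be checked by brute force. Here is a Python sketch (my own, for $R = \Bbb Z_6$ and $S = \{1,3\}$ only; a sanity check, not a proof for general $R$) that verifies addition and multiplication respect the equivalence.

[CODE=Python]
from itertools import product

n, S = 6, [1, 3]                       # R = Z_6, S = {1, 3}

def equiv(f, g):
    (a, b), (c, d) = f, g
    return any(t * (a * d - b * c) % n == 0 for t in S)

def add(f, g):
    (r, s), (u, v) = f, g
    return ((r * v + u * s) % n, (s * v) % n)

def mul(f, g):
    (r, s), (u, v) = f, g
    return ((r * u) % n, (s * v) % n)

fracs = list(product(range(n), S))
for f, f2, g, g2 in product(fracs, repeat=4):
    if equiv(f, f2) and equiv(g, g2):
        assert equiv(add(f, g), add(f2, g2))   # + is well-defined
        assert equiv(mul(f, g), mul(f2, g2))   # * is well-defined
print("both operations respect the equivalence in this example")
[/CODE]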
 
  • #7
Deveno said:
[quotes post #6 in full]

Thanks for the informative post Deveno.

Your post explains why Watson, Dummit and Foote and others define the equivalence relation on R x S as

a/b \(\displaystyle \sim \) c/d if and only if t(ad - bc) = 0 for some t \(\displaystyle \in \) S

Nothing else I have read addressed this point explicitly ... !

Just one issue however ...

As I mentioned previously, D&F use D for the multiplicatively closed subset of R and so define a relation on RxD by

\(\displaystyle (r,d) \sim (s,e) \) if and only if \(\displaystyle x(er - ds) =0 \) for some \(\displaystyle x \in D \)

They thus follow your construction of the ring R x D (R x S in your notation)

But then they refer to the ring of fractions, not as the ring as established (i.e. R x D) but as \(\displaystyle D^{-1}R \)!

Can you clarify what is going on?

I presume that what is going on is just notational and also that \(\displaystyle RD^{-1} \) would do just as well as \(\displaystyle D^{-1}R \) for the set of "fractions".

I note in passing that when we use the notation \(\displaystyle D^{-1}R \) we are using the notation of a right ideal of R generated by the set \(\displaystyle D^{-1} \). (D&F page 251).

Peter
 
  • #8
Well, as I touched on before, the set $R \times D$ is "too big" to be the ring we want...in lay terms: "fractions aren't unique (representations)".

In general, when we add two fractions:

$r/d + r'/d'$

we want to "reduce to simplest terms" (which involves cancelling "common factors in $D$").

This is so we can think of a fraction $rd/d$ as just being the corresponding thing to $r$ as a fraction (but be careful, there may be some "condensation" going on, because $D$ might have "undesirable elements" (zero divisors, for example)). If $D$ is zero-divisor free (as is often, but not always, the case ... but certainly IS the case if $R$ is an integral domain, which it will be for many of the rings we are particularly interested in, such as polynomial rings over a field), what we accomplish is embedding $R$ in a larger ring in which all of the elements of $D$ have inverses:

$d/1\ast1/d = d/d$ and it is easy to see that:

$r/d' \ast d/d = r/d'$

(since $[(rd)d' - r(dd')] = 0$).

Of course one possible notation might be $R/D$, but this could be confused with the notation for quotient ring, and $D$ is not necessarily an ideal; for example, in the ring of integers, we could have $D = \{2^k: k \in \Bbb Z^+\}$, and:

$2 + 4 = 6 \not \in D$

So the notation $D^{-1}R$ is meant to suggest a PRODUCT set, consisting of "numerators" in $R$, and "denominators" in $D$ (hence the notation $D^{-1} = \{1/d: d \in D\}$).

Why $D^{-1}R$ has been accepted instead of $RD^{-1}$ is, frankly, a mystery to me, but if $R$ is commutative, it makes no difference. Perhaps there is a generalization to non-commutative rings in which this matters (and then this would make sense, as per the comment in D&F).

An elaboration on my earlier example:

Consider the ring: $\Bbb Z_{ab}$ where gcd(a,b) = 1. By the Chinese Remainder Theorem, we know that:

$k\text{ (mod ab)} \mapsto (k\text{ (mod a)},k\text{ (mod b)})$

is an isomorphism of:

$\Bbb Z_{ab} \cong \Bbb Z_a \times \Bbb Z_{b}$.

Consider the pre-image of the set $\{(1,1),(0,1)\}$. For example, with a = 3, b = 7, this set is {1,15}. What we are doing here is taking two identities, one in the "whole ring" and one in the sub-ring $\{0\} \times \Bbb Z_b$. It is clear that the element of $\Bbb Z_{ab}$ we get besides 1 will be a multiple of $a$, say $na$.

Now, first consider the elements $k/1$. When will:

$k/1 = k'/1$?

Well, this means we have either:

$1(k - k') = 0$ in which case $k = k'$, OR:

$na(k - k') = 0$.

In terms of (actual) INTEGERS, this means $ab$ divides $na(k - k')$, i.e. $b$ divides $n(k - k')$. Since $na \equiv 1\text{ (mod b)}$ we have $\gcd(na,b) = 1$, so $b$ must divide $k - k'$, that is: $k = k'\text{ (mod b)}$.

And, indeed if $k = k'\text{ (mod b)}$, we see that $k/1 = k'/1$.

So the image of $\Bbb Z_{ab}$ in the localization is (isomorphic to) $\Bbb Z_b$.

What about the "other fractions" $k/(na)$?

If $k/(na) = k'/(na)$ with $k \neq k'$ ($k = k'$ being the other, not very interesting, possibility), we get:

$(na)^2(k - k') = 0$.

But, we have chosen $na$ so that, (mod ab), $(na)^2 = na$, so we get exactly $b$ equivalence classes here, as well.

So, it is natural to ask:

$k/1 = k/(na)$?

and the answer is YES:

$(na)((na)k - k) = (na)^2k - (na)k = (na)k - (na)k = 0$.

That is, if $\phi$ is the isomorphism alluded to above:

$\{\phi^{-1}(1,1),\phi^{-1}(0,1)\}^{-1}\Bbb Z_{ab} \cong \Bbb Z_b$
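The general claim $\{\phi^{-1}(1,1),\phi^{-1}(0,1)\}^{-1}\Bbb Z_{ab} \cong \Bbb Z_b$ can be spot-checked for $a = 3$, $b = 7$ with a short Python sketch (my own; the variable names are mine):

[CODE=Python]
from itertools import product

a, b = 3, 7
n = a * b                                                     # R = Z_21
na = next(x for x in range(n) if x % a == 0 and x % b == 1)   # na = 15, the idempotent
D = [1, na]                                                   # multiplicative system {1, 15}

def equiv(f, g):
    (p, q), (r, s) = f, g
    return any(t * (p * s - q * r) % n == 0 for t in D)

classes = []
for frac in product(range(n), D):
    for cls in classes:
        if equiv(frac, cls[0]):
            cls.append(frac)
            break
    else:
        classes.append([frac])

print(len(classes))      # 7  ->  D^{-1}(Z_21) has b = 7 elements, as claimed
[/CODE]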
 
  • #9
Deveno said:
[quotes post #8 in full]
Thanks for the helpful post Deveno

Working through this now - will also be revising your other posts as well

Thanks for the extensive guidance regarding localization ...

Just to add ... Your examples have been extremely helpful

Peter
 
  • #10
Peter said:
[quotes post #9 in full, which itself quotes post #8]

Hi Deveno,

Still reflecting on your posts ...

Just another issue ... you may have covered this before ... I am not sure so ... forgive me if you have answered this question ...

Consider the case where T is a multiplicatively closed subset of a commutative ring with unity, R. Consider further that R has zero divisors and that some of these zero divisors are in T.

... a worry of mine has been the following ...

Consider a/b x c/d = ac/bd where a,b,c,d are in R and b,d are in T.

My concern has been with the situation where bd = 0 ... that is, b, d are zero divisors in T (I am assuming that a fraction with a zero denominator is meaningless and therefore not acceptable as a member of a ring of fractions ...)

My understanding of this situation is that

\(\displaystyle bd = 0 \Longrightarrow 0 \in T \) since b,d are in T and T is multiplicatively closed.

But as you have demonstrated \(\displaystyle 0 \in T \Longrightarrow T^{-1}R \) is trivial.

BUT ... if my analysis above is correct (I suspect something is wrong with it!) then we must choose T with no zero divisors to get a 'meaningful' ring \(\displaystyle T^{-1}R \).

Can you please clarify this situation? If correct it would be very limiting for the above construction ... but as I indicated above I am very unsure of my analysis ... particularly as your example of \(\displaystyle R = \mathbb{Z}_6 \) with \(\displaystyle T = \{ 1,3 \} \) had a zero divisor in T and yet \(\displaystyle T^{-1}R \) was not trivial.

Hoping you can help

Peter
 
  • #11
Peter said:
[quotes his question from post #10 in full]

Hi Deveno,

I have been re-working your posts and have, I believe, found the answer to the issue I have asked about in the above post.

In an earlier post you wrote:

"if D contains zero divisors, but not 0, it doesn't contain "all" zero divisors.

For example, if 0D but aD is a zero divisor, with ab=0, then we cannot have bD, since otherwise the closure of D under multiplication would imply 0=abD."

Thus, I think the issue I pose above would not occur as b and d could not both be in the multiplicatively closed set.

Is it as simple as that?

Peter
 
  • #12
Deveno said:
[quotes post #8 in full]

Hi Deveno,

I wish to closely follow your example regarding the ring \(\displaystyle \mathbb{Z}_{ab} \), but I am having trouble fully understanding the example. I am hoping that you can clarify the following issues/questions ... ...

Firstly, I am assuming that the localization concerned is that of \(\displaystyle R = \mathbb{Z}_{ab} \), where T, the multiplicatively closed subset of R, is the pre-image of {(1,1), (0,1)}. Can you confirm?

You write:

"What we are doing here is taking two identities, one in the "whole ring" and one in the sub-ring \(\displaystyle \{0\} \times \Bbb Z_b \)."

What exactly are the two identities?

Further, why is it the case that the element of $\Bbb Z_{ab}$ we get besides 1 will be a multiple of $a$, say $na$?

Can you please help?

Peter
 
  • #13
Peter said:
Hi Deveno,

I wish to closely follow your example regarding the ring \(\displaystyle \mathbb{Z}_{ab} \), but I am having trouble fully understanding the example. I am hoping that you can clarify the following issues/questions ... ...

Firstly, I am assuming the localization concerned is \(\displaystyle R = \mathbb{Z}_{ab} \) where T, the multiplicatively closed subset of R is the pre-image of {(1,1), (0,1)}. Can you confirm?

Yes, that is correct.

You write:

"What we are doing here is taking two identities, one in the "whole ring" and one in the sub-ring \(\displaystyle \{0\} \times \Bbb Z_b \)."

What exactly are the two identities?

One is the unity in the ring (the multiplicative identity), and one is the unity of the subring.

For example, we have $\Bbb Z_{21} \cong \Bbb Z_3 \times \Bbb Z_7$.

Explicitly, this isomorphism is given by:

0 -- (0,0) 7 -- (1,0) 14 -- (2,0)
1 -- (1,1) 8 -- (2,1) 15 -- (0,1)
2 -- (2,2) 9 -- (0,2) 16 -- (1,2)
3 -- (0,3) 10 -- (1,3) 17 -- (2,3)
4 -- (1,4) 11 -- (2,4) 18 -- (0,4)
5 -- (2,5) 12 -- (0,5) 19 -- (1,5)
6 -- (0,6) 13 -- (1,6) 20 -- (2,6)

so $\Bbb Z_3 \times \{0\}$ is isomorphic to $\{0,7,14\}$ and $\{0\} \times\Bbb Z_7$ is isomorphic to $\{0,3,6,9,12,15,18\}$. The multiplicative identity in the first sub-ring is 7, and the identity in the second sub-ring is 15 (neither sub-ring contains the identity 1 of the entire ring).

Further, why is it the case that the element of $\Bbb Z_{ab}$ we get besides 1 will be a multiple of $a$, say $na$?

Can you please help?

Peter

Suppose $x \in \Bbb Z_{ab}$ maps to $(0,1)$ in $\Bbb Z_a \times \Bbb Z_b$. This tells us:

$x = 0 \text{ (mod a)}$
$x = 1 \text{ (mod b)}$

and the former tells us $a|x$.
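A quick Python check (mine) of the two identities in the table above, for a = 3 and b = 7:

[CODE=Python]
a, b = 3, 7
n = a * b

# CRT isomorphism Z_21 -> Z_3 x Z_7,  k |-> (k mod 3, k mod 7)
phi = {k: (k % a, k % b) for k in range(n)}

e1 = next(k for k in range(n) if phi[k] == (1, 0))   # unity of the copy of Z_3
e2 = next(k for k in range(n) if phi[k] == (0, 1))   # unity of the copy of Z_7

print(e1, e2)                        # 7 15
print(e1 * e1 % n, e2 * e2 % n)      # 7 15  (both are idempotent)
print(e2 * 3 % n, e2 * 18 % n)       # 3 18  (15 acts as unity on {0,3,6,...,18})
[/CODE]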
 
  • #14
Re: Localization - clarifying the trivial case

Deveno said:
[quotes post #5 in full]

Hi Deveno,

In the above post you write:

"Now, it may be that 0D. If this is so, then the only "fraction with 0 denominator" that makes sense is 0/0=0=1, and indeed, it turns out that D1R is trivial in this case."

As I worked through and reflected on your meaning I became aware that I was not entirely sure of the meaning of the above statement. I would be most grateful if you could clarify some issues/questions for me ... issues/questions follow ...

How exactly (formally and rigorously) do we depict/describe the elements of \(\displaystyle D^{-1}R \) in this case?

How exactly (again rigorously) does one establish that the only "fraction with 0 denominator" that makes sense is 0/0=0=1?

What are the elements of \(\displaystyle D^{-1}R \) in this case - that is, is the only element 0/0 = 0 = 1? Note that, based on the fact that using t = 0 we get:

\(\displaystyle a/b \sim c/d \) for ANY \(\displaystyle a, b, c, d \in R \)

... so, then, it seems we can use any element to represent the one element of the ring \(\displaystyle D^{-1}R \)? Is this right?

Do we actually mean there is only one element of \(\displaystyle D^{-1}R \) and it behaves like 0? or 1? or anything? What exactly do we know about the behaviour of the element(s) of \(\displaystyle D^{-1}R \) in the case where \(\displaystyle 0 \in D \)?

What then, in summary, do we mean by the statement that \(\displaystyle D^{-1}R \) is trivial?

In example 2 in Dummit and Foote, page 708 (Section 15.4 Localization), D&F use the notation \(\displaystyle D^{-1}R = 0 \). Do they actually mean \(\displaystyle D^{-1}R = \{ 0 \} \), or perhaps \(\displaystyle D^{-1}R = \{ 0/0 \} \)?

Can you clarify what D&F mean?

Would appreciate your help on this matter.

Peter
 
  • #15
There is, up to isomorphism, only one trivial ring. This ring is denoted various ways, but usually as $\{0\}$. For any ring $R$, this is also the quotient ring $R/R$.

If $D$ contains 0, then all fractions are equivalent. This is not hard to see (we can use 0 as the "$t$" in the relation:

$a/b \sim c/d$ if $\exists t \in D: t(ad - bc) = 0$).

If all fractions are equivalent in $D^{-1}R$, then there is only one equivalence class, and it DOES NOT MATTER which "representative" you pick. Picking $0/1$ makes the most "intuitive sense". The behavior of a trivial ring is more "zero-like" than "unity-like", because for its single element, $r$, we have:

$r+r = r$
$r^2 = r$

The point being, we usually don't consider multiplicative systems that contain 0. They don't tell us anything about $R$ (because EVERY ring has an additive identity). Any one of your notations for a trivial $D^{-1}R$ could be considered correct, what we "name" the single element of a trivial ring isn't THAT important. We could, if we were so inclined, call it "Fred".
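Just to make the "one equivalence class" statement concrete, here is a tiny sketch (my own) with $R = \Bbb Z_6$ and $D = \{0, 1\}$, which is multiplicatively closed and contains 0: with $t = 0$ available, every pair of fractions is equivalent, so $D^{-1}R$ is the trivial ring.

[CODE=Python]
from itertools import product

n, D = 6, [0, 1]       # R = Z_6; D = {0, 1} is multiplicatively closed and contains 0

def equiv(f, g):
    (a, b), (c, d) = f, g
    return any(t * (a * d - b * c) % n == 0 for t in D)

fracs = list(product(range(n), D))
print(all(equiv(f, g) for f, g in product(fracs, repeat=2)))   # True -> one class
[/CODE]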
 
  • #16
Deveno said:
[quotes post #15 in full]

Thanks Deveno.

I am still not entirely sure of this, so forgive me pursuing this further ...

You point out that when D contains 0 then all fractions in \(\displaystyle D^{-1}R \) are equivalent, and so I presume behave the same.

So if we choose 0/1 as our representative then certainly we have for the single element that

\(\displaystyle r + r = r \)

and \(\displaystyle r^2 = r \)

since of course

0/1 + 0/1 = (0.1 + 1.0)/1.1 = 0/1 and

(0/1)*(0/1) = 0/1

But if we choose 1/1 as our element (presumably we can!??) then

\(\displaystyle 1/1 +1/1 = (1.1 + 1.1)/1.1 = 2/1 \ne 1/1 \)?

So presumably in \(\displaystyle D^{-1}R \) we have that all elements must behave like 0/1 - but why? And how would we rigorously prove this ... that is, show that \(\displaystyle r + r = r \) and \(\displaystyle r^2 = r \) without using the special properties of 0/1?

If we use the special properties of 0/1 in \(\displaystyle D^{-1}R \) it seems like we are privileging this element in some way.

Can you please clarify this?

Peter
 
  • #17
There is only ONE possible binary operation on a set with only one element. On such a set, addition and multiplication coincide.

You're over-thinking this. Trivial rings are...well, trivial. The "representatives" of an equivalence class don't represent very well, if our partition is coarse.

When we form a quotient ring, $R/I$, we only preserve "some" of the properties of $R$. The larger the ideal $I$ is, the more of the flavor of $R$ is lost.

For example, if we localize the integers "away from" 2 (this is just the ring $\Bbb Z[1/2]$, in which the denominators are powers of 2), then after the common even factors "cancel", every fraction either has an odd numerator or is an integer:

3/16 + 7/16 = 10/16 = 5/8.

The only time the numerator can remain even is when every factor of 2 in the denominator cancels, that is to say:

a/b = k/1,

so that a/b is an integer (if a = bk, it is clear b divides a).

Similarly, we can form the ring of "fractions with odd denominator", which is the integers localized at the prime (2). This ring is (quasi-)local; its unique maximal ideal consists of the fractions of the form:

(2k)/(2m+1).

This previous example is actually the important one: if $P$ is a prime ideal of $R$, then $D = R - P$ is a multiplicative system, and $P(D^{-1}R)$ is the unique maximal ideal of $D^{-1}R$, which means $D^{-1}R$ is a (quasi-)local ring (all the other maximal ideals of $R$ "cancel out" in $D^{-1}R$, by the very way we constructed it).
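A small sketch (mine, using Python's built-in Fraction type) models $\Bbb Z$ localized at the prime $(2)$, i.e. the fractions with odd denominator, and illustrates the last point: the non-units are exactly the fractions with even numerator, which form the unique maximal ideal.

[CODE=Python]
from fractions import Fraction

def in_Z_at_2(q: Fraction) -> bool:
    """q lies in Z localized at (2): odd denominator in lowest terms."""
    return q.denominator % 2 == 1

def is_unit(q: Fraction) -> bool:
    """Units of this ring: both numerator and denominator odd."""
    return in_Z_at_2(q) and q.numerator % 2 == 1

x, y = Fraction(3, 5), Fraction(7, 9)           # both lie in the localization
assert in_Z_at_2(x + y) and in_Z_at_2(x * y)    # closed under + and *
assert is_unit(x) and in_Z_at_2(1 / x)          # 3/5 is a unit; its inverse 5/3 is in the ring
assert not is_unit(Fraction(2, 3))              # 2/3 lies in the maximal ideal (2k)/(2m+1)
[/CODE]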
 

FAQ: Localization in Commutative Ring Theory

What is localization in commutative ring theory?

Localization in commutative ring theory is a process of creating a new ring from a given ring by formally inverting the elements of a multiplicatively closed subset. The elements of this subset become the allowed "denominators" of the fractions in the new ring. Localization allows us to study certain properties of a ring in a more controlled and specific environment.

How is localization related to the concept of prime ideals?

Localization is closely related to the concept of prime ideals in commutative ring theory. A prime ideal is a special type of ideal with the property that the quotient ring obtained by dividing by it is an integral domain. Equivalently, the complement of a prime ideal is a multiplicatively closed set, so we may localize at it; the resulting ring is a local ring whose unique maximal ideal is generated by the image of the prime ideal.

What is the significance of localization in algebraic geometry?

Localization plays a crucial role in algebraic geometry as it allows us to define rational functions on a variety. Using localization, we can generalize the notion of a function on a variety to a function on an open subset of the variety. This is achieved by localizing the coordinate ring of the variety at a prime ideal, which gives us a local ring that captures the behavior of the variety around that point. This allows us to study the properties of a variety at a specific point in a more precise manner.

How is localization used in ring homomorphisms?

A ring homomorphism from R into a ring in which every element of the multiplicative set D becomes a unit factors uniquely through the localization D^{-1}R; this is the universal property of localization. In practice it means such a homomorphism can be extended from R to D^{-1}R by sending each fraction r/d to the image of r times the inverse of the image of d. This can be useful in various applications such as ideal theory and algebraic number theory.

Are there any limitations to localization in commutative ring theory?

Yes, there are some limitations. In the setting discussed here the ring must be commutative with identity, and although any multiplicatively closed subset D can be inverted, the result may not be well behaved: if 0 is in D the localization is the trivial ring, and if D contains zero divisors the natural map from R to D^{-1}R is not injective, so R does not embed in its ring of fractions. It is important to keep these limitations in mind when using localization in any mathematical application.
