Why Is \(a\) a Unit in \(F[x]\) in Lemma 2 of Nicholson's Book?

In summary, the proof of Lemma 2 in Section 4.3 of Nicholson's book uses the fact that any nonzero element \(a\) of the field \(F\), viewed as a constant polynomial, is a unit in \(F[x]\); the discussion below explains why this holds.
  • #1
Math Amateur
I am reading W. Keith Nicholson's book: Introduction to Abstract Algebra (Third Edition) ...

I am focused on Section 4.3: Factor Rings of Polynomials over a Field.

I need some help with the proof of Lemma 2 on pages 223-224.

The relevant text from Nicholson's book is as follows:

https://www.physicsforums.com/attachments/4634

In the above text we read the following:

" ... ... To see that it is one-to-one let \(\displaystyle \theta (a) = \overline{0}.\) Then \(\displaystyle \overline{a} = \overline{0}\) so \(\displaystyle a + A = 0 + A\); that is, \(\displaystyle a \in A\). If \(\displaystyle a \ne 0\), then \(\displaystyle A = F[x]\), because \(\displaystyle a\) is a unit in \(\displaystyle F[x]\) ... ... "

I cannot see why a is a unit in F[x] ... can someone please explain why this is the case ...

... it may be quite simple ... but anyway, I hope someone can help ...

Peter
 
  • #2
Peter said:
I cannot see why a is a unit in F[x] ... can someone please explain why this is the case ...

Since $a \in F$ and $F\subset F[x]$, we have $a\in F[x]$. Further, $a \neq 0$ in the field $F$ implies $a$ is a unit in $F$. So $a$ must be a unit in $F[x]$ (explicitly, $ab = 1$ in $F$ implies $ab = 1$ in $F[x]$).
 
  • #3
The identity element in $F[x]$ is the constant polynomial $1$.

Of course, if $a \in F$ is a unit (i.e., non-zero), then that means that $\dfrac{1}{a} \in F$ (and thus in $F[x]$ as well, as a constant polynomial), and we have:

$a\cdot \dfrac{1}{a} = 1$ in $F[x]$.
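A minimal Python sketch (illustrative only, not part of the original post) makes this concrete: take $F = \Bbb Z/7\Bbb Z$, represent polynomials as coefficient lists, and check that the constant polynomial $a$ times the constant polynomial $\dfrac{1}{a}$ is the constant polynomial $1$.

```python
# Illustration: a nonzero constant of F is a unit in F[x], with F = Z/7Z.
# Polynomials are coefficient lists [c0, c1, ...], lowest degree first, entries mod 7.

P = 7  # 7 is prime, so Z/7Z is a field

def poly_mul(f, g):
    """Multiply two polynomials over Z/7Z."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    while len(out) > 1 and out[-1] == 0:  # strip trailing zero coefficients
        out.pop()
    return out

a = 3
a_inv = pow(a, P - 2, P)   # 1/a in Z/7Z by Fermat's little theorem; here 3 * 5 = 15 = 1 (mod 7)

print(poly_mul([a], [a_inv]))   # [1] -- the constant polynomial 1, so a is a unit in F[x]
```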
 
  • #4
Deveno said:
The identity element in $F[x]$ is the constant polynomial $1$.

Of course, if $a \in F$ is a unit (i.e., non-zero), then that means that $\dfrac{1}{a} \in F$ (and thus in $F[x]$ as well, as a constant polynomial), and we have:

$a\cdot \dfrac{1}{a} = 1$ in $F[x]$.
My thanks to Deveno and Euge for their help ...

Peter

- - - Updated - - -

Euge said:
Since $a \in F$ and $F\subset F[x]$, then $a\in F[x]$. Further, $a \neq 0$ in the field $F$ implies $a$ is a unit in $F$. So $a$ must be a unit in $F[x]$ (explicitly, $ab = 1$ in $F$ implies $ab = 1$ in $F[x]$).

Euge,

Maybe it is obvious somehow ... BUT ... formally and rigorously ...

... how would you show that if \(\displaystyle a\) is a unit in \(\displaystyle F\) then \(\displaystyle a\) is a unit in \(\displaystyle F[x]\) ...

... ... why, exactly is this the case ...

Can you help?

Peter

*** EDIT/NOTE ***

Maybe the argument depends on there being a homomorphism \(\displaystyle \phi\) between \(\displaystyle F\) and \(\displaystyle F[x]\) ... so that if we have

\(\displaystyle ab = 1\) in \(\displaystyle F\)

then

\(\displaystyle \phi (ab) = \phi(a) \phi(b) = \phi(1)\) in \(\displaystyle F[x]\)

Is that correct?

Peter
 
  • #5
Peter, by definition of a field (in fact, an axiom for a field), every nonzero element of a field is invertible, i.e. a unit.
 
  • #6
Euge said:
Peter, by definition of a field (in fact, an axiom for a field), every nonzero element of a field is invertible, i.e. a unit.
Hi Euge,

Yes, I understand that because \(\displaystyle a\) is a nonzero element of the field \(\displaystyle F\), \(\displaystyle a\) is a unit in \(\displaystyle F\) ... but I was puzzled as to why it is then necessarily also a unit in \(\displaystyle F[x]\) ... I suspect that it is a unit in \(\displaystyle F[x]\) because there is a homomorphism \(\displaystyle \phi: F \rightarrow F[x]\) ...

See my post above ...

Thanks again for your help ... I really appreciate your support ...

Peter
 
  • #7
I wrote in my last post

"explicitly, $ab = 1$ in $F$ implies $ab = 1$ in $F[x]$"

the simple reason being $F$ is a subset of $F[x]$.
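Spelled out with the inclusion map $i: F \to F[x]$ that Peter suggested in post #4 (each element of $F$ is sent to the corresponding constant polynomial): $i$ is a ring homomorphism with $i(1_F) = 1_{F[x]}$, so if $ab = 1$ in $F$ then

$i(a)\, i(b) = i(ab) = i(1_F) = 1_{F[x]},$

and hence $a$, viewed as a constant polynomial, is a unit in $F[x]$.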
 
  • #8
A field is "two (abelian) groups in one".

Explicitly, $(F,+)$ is an abelian group, and $(F - \{0\},\cdot)$ is an abelian group.

We further require that the map $L_a: F \to F$ ("left multiplication by $a$") given by $L_a(b) = a\cdot b$ is an $(F,+)$-homomorphism. This is the familiar distributive law:

$L_a(b+c) = L_a(b) + L_a(c)$, or: $a\cdot (b+c) = a\cdot b + a\cdot c$.

This is a "compatibility" requirement, ensuring our two operations play nice together.

A ring is similar: we still require an additive abelian group. But now, we do not require that $(R - \{0\},\cdot)$ be an abelian group, rather only that $(R,\cdot)$ be a simpler structure, known as a semigroup.

We still require that $L_a$ be additive (an additive group homomorphism). However, since semigroups are not, typically, commutative (abelian), we also have to make this requirement of $R_a: R \to R$, where $R_a(b) = b\cdot a$.

Life is easier for us, of course, if $(R,\cdot)$ is commutative, and easier still if $(R,\cdot)$ forms a "commutative semigroup with identity" (the proper term is "commutative monoid").

As in groups, the identity element of a monoid is unique. If the multiplicative semigroup of a ring $R$ forms a commutative monoid, we call $R$ a commutative ring with unity.

Any monoid $M$ (whether in a ring or not) has a group associated with it: its "group of units" (usually written $U(M)$). These are the "invertible" elements of $M$, that is, those elements $u$ of $M$ for which there exists $v \in M$ such that:

$uv = vu = 1_M$.

It is easy to see that the group of units actually forms a sub-monoid of $M$ (which is why we can make a group from them; we have closure: if $u,v \in U(M)$, then $uv$ has the inverse $v^{-1}u^{-1}$ and so is likewise in $U(M)$).
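A quick illustrative computation (a brute-force Python sketch, not part of the original post) shows a group of units inside a monoid that is *not* a field: for the multiplicative monoid $M = (\Bbb Z/12\Bbb Z, \cdot)$ we get $U(M) = \{1, 5, 7, 11\}$, and products of units are again units.

```python
# Brute-force the group of units U(M) for the multiplicative monoid M = (Z/12Z, *).
# u is a unit when some v in M satisfies u*v = 1 (mod 12); multiplication here is commutative.

n = 12
M = range(n)

units = [u for u in M if any((u * v) % n == 1 for v in M)]
print(units)  # [1, 5, 7, 11] -- exactly the residues coprime to 12

# Closure: the product of two units is again a unit (its inverse is v^{-1} u^{-1}).
assert all((u * v) % n in units for u in units for v in units)
```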

In rings, all the "action" is in the multiplicative semigroup. Since rings without unity (where we have *just* a semigroup) can behave somewhat perversely, we usually insist rings have a multiplicative monoid (but this is a point of some contention, even amongst mathematicians).

In rings, one of our "goals" is to "divide as much as we can". We can't *always* divide, like we can in fields, but at least the ability to establish $a|b$ (that is: $b = ac$) lets us "break down" things into hopefully "simpler things" (N.B., this doesn't always work very well). In polynomial rings, this process is called "factoring".

Now the multiplicative identity of a field, $1_F$, is *still* the multiplicative identity of the ring $F[x]$, that is:

$f(x)\cdot 1 = 1\cdot f(x) = f(x)$, for all $f(x) \in F[x]$.

If $a \neq 0 \in F$, by the very definition of a field, $a$ is a unit in $F$: we have $U(F) = F -\{0\}$.

Note that the map $i: F \to F[x]$ given by $i(a) = a$ is an injective ring-homomorphism. This means, essentially, that "constant polynomials" are a COPY of the field $F$ inside the polynomial ring $F[x]$.

Since $i(F)$ is thus *isomorphic* to $F$, $i$ maps units to units. This isomorphism $i$ is "so simple" it's practically transparent, like the emperor's new clothes. In other words, if I instruct you to add $2$ to $x + 4$, and you say, "wait, the NUMBER $2$, or the (constant) polynomial $2$?", I might just smack you.

A final example: in $\Bbb Q[x]$, we have $g(x) = 4$ is a unit. Why? Because the constant polynomial $f(x) = \dfrac{1}{4}$ is also in $\Bbb Q[x]$ and:

$f(x)g(x) = \dfrac{1}{4}\cdot 4 = 1$ (although our product is the "constant polynomial" $1$, remember we IDENTIFY $1$ and $i(1)$ because $i$ is an isomorphism).

Here is another way to look at it:

consider the evaluation map $\phi_0: F[x] \to F$ given by $\phi_0(f(x)) = f(0)$. This is a ring homomorphism, so:

$\phi_0(f(x)g(x)) = \phi_0(f(x))\phi_0(g(x))$, that is:

$(fg)(0) = f(0)g(0)$.

If $fg = 1$, then $f(0)g(0) = 1$, so the constant terms of $f$ and $g$ are units in $F$ (non-zero). We multiply the constant terms of $f$ and $g$ together to get the constant term of $fg$.

Note that $\phi_0 \circ i = \text{id}_F$, the identity map on $F$, and that $i \circ \phi_0$ is the identity map on $i(F)$.

Now, if $f(x)$ "has $x$'s in it", it is NOT invertible (not a unit of $F[x]$). Mostly this is because $\dfrac{1}{x} \not\in F[x]$. But the "rigorous" way to argue is:

If $\text{deg}(f) > 0$ and $fg = 1$, then, since $F$ has no zero divisors, $\text{deg}(fg) = \text{deg}(f) + \text{deg}(g) \geq \text{deg}(f) > 0$. But $\text{deg}(1) = 0$, a contradiction.

Since $0$ (the $0$-polynomial) is never a unit (in ANY ring), the only possible units in $F[x]$ are the 0-degree polynomials, that is, the elements of $F^{\ast}$. In short:

$U(F[x]) = U(F)$.
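Both of these last points can be checked mechanically; here is a minimal brute-force Python sketch (illustrative only, not part of the original post) over the field $\Bbb Z/3\Bbb Z$: evaluation at $0$ is multiplicative, and among all polynomials of degree $\leq 2$ the only ones with an inverse are the nonzero constants.

```python
from itertools import product

# Brute-force illustration over F = Z/3Z: among all polynomials of degree <= 2,
# the only units of F[x] that turn up are the nonzero constants, i.e. U(F[x]) = F*.
# (The degree argument above guarantees an inverse cannot hide at higher degree.)

P = 3  # the prime defining F = Z/3Z

def poly_mul(f, g):
    """Multiply polynomials over Z/3Z (coefficient lists, lowest degree first)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    while len(out) > 1 and out[-1] == 0:  # strip trailing zero coefficients
        out.pop()
    return out

def eval_at_zero(f):
    """phi_0(f) = f(0): the constant term."""
    return f[0]

# All polynomials of degree <= 2 over Z/3Z, in canonical form.
polys = []
for coeffs in product(range(P), repeat=3):
    f = list(coeffs)
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    if f not in polys:
        polys.append(f)

# 1) Evaluation at 0 is multiplicative: (fg)(0) = f(0) g(0) in F.
assert all(eval_at_zero(poly_mul(f, g)) == (eval_at_zero(f) * eval_at_zero(g)) % P
           for f in polys for g in polys)

# 2) Which f have an inverse g (of degree <= 2) with f*g = 1?
units = [f for f in polys if any(poly_mul(f, g) == [1] for g in polys)]
print(units)  # [[1], [2]] -- exactly the nonzero constants, so U(F[x]) = U(F)
```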
 
  • #9
Deveno said:
A field is "two (abelian) groups in one".

...

$U(F[x]) = U(F)$.

This is going off topic, but after reading this, it made me think it's long overdue for us to write an algebra guide together in the commentary section! (Rofl)
 
  • #10
Euge said:
This is going off topic, but after reading this, it made me think it's long overdue for us to write an algebra guide together in the commentary section! (Rofl)
That would be awesome!
 
  • #11
Deveno said:
A field is "two (abelian) groups in one".

...

$U(F[x]) = U(F)$.
Thanks so much Deveno ... I have worked through your post several times ... it is extremely instructive and helpful ...

I think I have said this before ... but I'll say it again ... if you ever write a textbook on abstract algebra ... I will be lining up to buy it!

Thanks again for such a helpful post ...

Clearly, you should be a teacher ...

Peter
 

