Polynomials Acting on Spaces - B&K Ex. 1.2.2 (iv): An Intro by Peter

In summary, the conversation discusses Example 1.2.2 (iv) from the book "An Introduction to Rings and Modules With K-Theory in View" by A.J. Berrick and M.E. Keating. The example involves evaluating a polynomial $f$ at a matrix $A$ to obtain $f(A)$, and the discussion examines the purpose of the indeterminate $T$ in the construction. The conversation also touches on the minimal polynomial and its role in determining how polynomials act on vectors.
  • #1
Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I need help in order to fully understand Example 1.2.2 (iv) [page 16] ... indeed, I am somewhat overwhelmed by this construction ... ...

Example 1.2.2 (iv) reads as follows:

View attachment 5089

My question is as follows:

Why do Berrick and Keating bother to use the indeterminate \(\displaystyle T\) in the above ... why not just use \(\displaystyle f(A)\) ...? What is the point of \(\displaystyle T\) in the above example ...?

By the way ... I am assuming that \(\displaystyle f_0, f_1, \ldots, f_r\) are just elements of \(\displaystyle \mathcal{K}\) ... ... is that correct?

Hope someone can help ...

Peter

*** EDIT ***

It may make sense if we think of the polynomial \(\displaystyle f \in \mathcal{K} [T]\) being evaluated at \(\displaystyle A\) ... BUT ... when we evaluate a polynomial in \(\displaystyle \mathcal{K} [T]\), don't we take values of \(\displaystyle T\) in \(\displaystyle \mathcal{K}\) ... ... but ... problem ... \(\displaystyle A\) is an \(\displaystyle n \times n\) matrix and hence (of course) \(\displaystyle A \notin \mathcal{K}\) ... ?

... anyway, hope someone can explain exactly how the construction in this example "works" ...

Peter
 
  • #2
Given a polynomial in $\mathcal{K}[T]$, say:

$f(T) = f_0 + f_1T + \cdots + f_rT^r$

$T$ is an *indeterminate*, and it is possible to have such $f$ of arbitrarily high degree.

However, in the expression:

$f(A) = If_0 + Af_1 + \cdots + A^rf_r$ (here, we write the $f_j$ on the right, since we are viewing $\mathcal{K}^n$ as a *right* $\mathcal{K}$-module)

it turns out (for a field $\mathcal{K}$) that the matrix $A$ is actually *algebraic* over $\mathcal{K}$, so that:

$\mathcal{K}[A]$ is a *quotient* of $\mathcal{K}[T]$.
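
For a quick concrete instance (my own example, not one from B&K): over $\mathcal{K} = \Bbb R$, take

$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$

Then $A^2 = -I$, so $A$ is a root of $m(T) = T^2 + 1$, and the evaluation map $T \mapsto A$ induces an isomorphism $\mathcal{K}[A] \cong \mathcal{K}[T]/(T^2 + 1)$: every polynomial in $A$ collapses to one of the form $aI + bA$.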

As you may recall, when one has a ring-homomorphism:

$\phi:R \to S$, and an $S$-module $M$, one can turn $M$ into an $R$-module like so:

$m\cdot r = m \cdot \phi(r)$ (the RHS is the right $S$-action).

The homomorphism here is:

$\phi: \mathcal{K}[T] \to \mathcal{K}[A]$,

and since $A \in \text{Hom}_{\mathcal{K}}(\mathcal{K}^n,\mathcal{K}^n)$, we have a natural action of $\mathcal{K}[A]$ on $\mathcal{K}^n$ defined by:

$x\cdot f(A) = (f(A))(x)$.

We then set $x\cdot f(T) = x\cdot \phi(f(T))$ (note $\phi(f(T))$ may have much lower degree than $f$, because if we have $m(A) = 0$, and:

$f(T) =q(T)m(T) + r(T)$ with $\text{deg }r < \text{deg }m$, it follows that:

$f(A) = r(A)$).
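
Here is a minimal computational sketch of that reduction (my own illustration, not B&K's: the matrix, polynomial, and vector are arbitrary choices, and numpy stands in for exact arithmetic over $\mathcal{K} = \Bbb R$):

```python
import numpy as np

# A matrix killed by m(T) = T^2 + 1, i.e. m(A) = 0:
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def poly_at_matrix(coeffs, M):
    """Evaluate f(M) = f_0*I + f_1*M + ... + f_r*M^r (coeffs listed low degree first)."""
    result = np.zeros_like(M)
    power = np.eye(M.shape[0])
    for c in coeffs:
        result += c * power
        power = power @ M
    return result

f = [2.0, 3.0, 1.0, 5.0]   # f(T) = 2 + 3T + T^2 + 5T^3, an arbitrary example
m = [1.0, 0.0, 1.0]        # m(T) = T^2 + 1, highest degree first for np.polydiv

# Divide f by m; np.polydiv expects highest-degree-first coefficient lists.
q, r = np.polydiv(f[::-1], m)
r = r[::-1]                # back to low-degree-first; here r(T) = 1 - 2T

x = np.array([1.0, 2.0])   # a vector in K^2

# The module action x . f(T) = f(A)x, computed directly and via the remainder:
assert np.allclose(poly_at_matrix(f, A) @ x, poly_at_matrix(r, A) @ x)
```

Since $\deg m = 2$ here, only $I$ and $A$ ever appear in the reduced form, which is exactly the finite-basis point made in the next paragraph.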

What happens, in actual practice, is that one determines the minimal polynomial of $A$ (if $A$ has $n$ distinct eigenvalues this will be the same as the characteristic polynomial $\det(IT - A)$). Knowing the degree of this allows us to choose a *basis* (over $\mathcal{K}$) for $\mathcal{K}[A]$, which means we only have to compute a finite number of powers of $A$ to know the action of *any* polynomial $f(T)$ upon a vector $x$.

Note that we get various $\mathcal{K}[T]$-modules this way, depending on *which* matrix $A$ we use. So this tells us more about $A$ than it does about the space $\mathcal{K}^n$ or the polynomial ring $\mathcal{K}[T]$ (although, depending on which field $\mathcal{K}$ we use, we will *also* get different modules, because the minimal polynomial of a matrix can depend on the field being used).

It's not really fair to say "we substitute $A$ for $T$". Polynomial expressions in matrices are a bit different from polynomial expressions in field elements (unless the matrices are $1 \times 1$).
 
  • #3
Deveno said: ...

Well ... that has given me a lot to think about ... there is obviously more to the above construction than I was aware of ... glad I asked the question! ... thanks so much Deveno ... really appreciate your help ...

I am now working through your post in detail ... reflecting on all you have written ...

[Sorry to be slow in replying ... had to leave the state of Tasmania and travel to regional Victoria, about 100 km outside of Melbourne ... but have arranged an Internet connection ... so should be on MHB when I can manage it ...]

Peter
 
  • #4
Deveno said: ...
Hi Deveno,

Just a quick question ...

You write:

"... ... ... ... However, in the expression:

$f(A) = If_0 + Af_1 + \cdots + A^rf_r$ (here, we write the $f_j$ on the right, since we are viewing $\mathcal{K}^n$ as a *right* $\mathcal{K}$-module) ... ... ... ...
... ... why exactly are we viewing $\mathcal{K}^n$ as a *right* $\mathcal{K}$-module ... aren't Berrick and Keating viewing \(\displaystyle \mathcal{K}^n\) as a \(\displaystyle \mathcal{K} [T]\)-module ... ?

Can you clarify ... ?

Peter
 
  • #5
I don't know "why", but Berrick and Keating write:

"...It is convenient to view $\mathcal{K}^n$ as a right $\mathcal{K}$-space...", that is, a right $\mathcal{K}$-module.

The only difference here is which side we write the scalar multiplication on, which for all practical purposes makes no difference, since the "scalar matrices":

$\alpha I$ for $\alpha \in \mathcal{K}$

commute with all other $n \times n$ matrices (in fact, they form the *center* of the ring of such matrices).
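
Spelled out, for any $n \times n$ matrix $B$ over $\mathcal{K}$ and any $\alpha \in \mathcal{K}$:

$(\alpha I)B = \alpha B = B(\alpha I)$,

so whether we write the scalar action on the left or the right of a vector, the resulting map on $\mathcal{K}^n$ is the same.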
 
  • #6
Deveno said: ...
Thanks Deveno ... yes, I see that ... so we can regard $\mathcal{K}^n$ as a right $\mathcal{K}$-module or we can regard $\mathcal{K}^n$ as a right \(\displaystyle \mathcal{K} [T] \)-module ... ... is that right?

Peter
 
  • #7
Deveno said: ...
Hi Deveno,

I need some further help ...

You write:

"... ... As you may recall, when one has a ring-homomorphism:

$\phi:R \to S$, and an $S$-module $M$, one can turn $M$ into an $R$-module like so:

$m\cdot r = m \cdot \phi(r)$ (the RHS is the right $S$-action). ... ..."

I am having trouble understanding what is happening here ... hope you can clarify for me ...

If $m\cdot r = m \cdot \phi(r)$ then presumably \(\displaystyle r = \phi (r)\) ... (I think ... ... )

BUT ... if that is true, then \(\displaystyle R\) must be embedded in \(\displaystyle S\) ... but this may not be the case for some S ...

Can you explain what is going on ...

Peter

*** EDIT ***

Maybe you are saying ... ... define the action of \(\displaystyle R\) on \(\displaystyle M\) by \(\displaystyle m \cdot \phi(r)\) ...

Is that right ... ?

Then we would have to prove that the action satisfies the relevant 'axioms' for an action ... but, presumably, this is straightforward ...
 
  • #8
Yes!

Here is a way to "keep it straight":

Clearly, $\Bbb Z$ is an abelian group on which we can define a (right) $\Bbb Z$-action via right-multiplication (turning it into a ring).

But we cannot, in general, make $\Bbb Z$ a right $\Bbb Z_n$-module this way; for example, with $n = 4$, setting $k \cdot [j]_4 = kj$ would give:

$k \cdot ([2]_4 + [2]_4) = k \cdot [0]_4 = 0$, but $k\cdot [2]_4 + k\cdot [2]_4 = 2(k\cdot [2]_4) = 4k \neq 0$ whenever $k \neq 0$.

HOWEVER, given a $\Bbb Z_n$-module, we can certainly turn it into a $\Bbb Z$-module by:

$m \cdot k = m \cdot [k]_n$.

In fact, this is precisely the way finite abelian groups are turned into $\Bbb Z$-modules.
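
Here is a tiny sketch of that construction in code (my own illustration, not from the thread): the ring $\Bbb Z_4$, acting on itself, pulled back along the reduction homomorphism $\phi : \Bbb Z \to \Bbb Z_4$.

```python
# A minimal sketch (illustration only): Z_4 as a Z_4-module becomes a
# Z-module by pulling back along phi: Z -> Z_4, phi(k) = k mod 4.
N = 4

def phi(k: int) -> int:
    """The ring homomorphism Z -> Z_N."""
    return k % N

def act(m: int, k: int) -> int:
    """Induced right Z-action on Z_N: m . k = m . phi(k), computed in Z_N."""
    return (m * phi(k)) % N

# The pulled-back action is additive in the scalar k ...
for m in range(N):
    for k1 in range(-8, 8):
        for k2 in range(-8, 8):
            assert act(m, k1 + k2) == (act(m, k1) + act(m, k2)) % N

# ... and k = N acts as zero, the relation that blocked a Z_N-action on Z itself.
assert all(act(m, N) == 0 for m in range(N))
print("Z_4 is a Z-module via phi: checks pass")
```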

So while the homomorphism goes FROM $R$ TO $S$, the induced module action goes from $\mathbf{Mod}-S$ (the category of right $S$-modules) to $\mathbf{Mod}-R$.
 

Related to Polynomials Acting on Spaces - B&K Ex. 1.2.2 (iv): An Intro by Peter

1. What is a polynomial?

A polynomial is a mathematical expression consisting of variables and coefficients, combined using the operations of addition, subtraction, multiplication, and non-negative integer exponents.

2. How do polynomials act on spaces?

In this context, a polynomial $f \in \mathcal{K}[T]$ acts on the space $\mathcal{K}^n$ through a fixed $n \times n$ matrix $A$: the action sends a vector $x$ to $f(A)x$, where $f(A)$ is obtained by evaluating $f$ at $A$. This makes $\mathcal{K}^n$ a module over the polynomial ring $\mathcal{K}[T]$.

3. What is the significance of B&K Ex. 1.2.2 (iv)?

B&K Ex. 1.2.2 (iv) is a specific example problem used in Peter's introduction to demonstrate the concept of polynomials acting on spaces. It helps to illustrate how polynomials can be used to map elements of a space to other elements.

4. What is the purpose of studying polynomials acting on spaces?

Studying polynomials acting on spaces is important in many areas of science and mathematics, such as algebra, calculus, and computer science. It helps to understand the properties and behavior of polynomials, and how they can be applied to solve problems in various fields.

5. Are there any real-world applications of polynomials acting on spaces?

Yes, there are many real-world applications of polynomials acting on spaces. Some examples include using polynomials in physics to model motion and forces, in economics to analyze market trends, and in engineering to design and optimize structures and systems.
