Isomorphism Between Hom_F (V,W) and M_mxn(F) - theory of vector spaces

  • #1
Math Amateur
I am spending time revising vector spaces. I am using Dummit and Foote: Abstract Algebra (Chapter 11) and also the book Linear Algebra by Stephen Friedberg, Arnold Insel and Lawrence Spence.

I am working on Theorem 10, which is a fundamental theorem establishing an isomorphism between the space of all linear transformations from a vector space \(\displaystyle V\) to a vector space \(\displaystyle W\), \(\displaystyle Hom_F(V, W)\), and the space of \(\displaystyle m \times n \) matrices with coefficients in \(\displaystyle F\), \(\displaystyle M_{m \times n} (F)\).

I need help to fully understand the proof of Theorem 10.

Theorem 10 and its proof (D&F page 416) read as follows: View attachment 3029
Now to define the terminology for a formal and rigorous proof of Theorem 10 we have:

V, W are vector spaces over a field F.

\(\displaystyle \mathcal{B} = \{ v_1, v_2, \dots, v_n \} \text{ is an ordered basis of } V \)

\(\displaystyle \mathcal{E} = \{ w_1, w_2, \dots, w_m \} \text{ is an ordered basis of } W \)

Let \(\displaystyle \phi, \psi \in Hom_F(V,W)\) be linear transformations from \(\displaystyle V\) to \(\displaystyle W\).

For each \(\displaystyle j \in \{ 1, 2, \dots, n \}\) write the image of \(\displaystyle v_j\) under \(\displaystyle \phi\) and \(\displaystyle \psi\) in terms of the basis \(\displaystyle \mathcal{E}\) as follows:

\(\displaystyle \phi (v_j) = \sum_{i = 1}^m \alpha_{ij} w_i = \alpha_{1j}w_1 + \alpha_{2j}w_2 + \dots + \alpha_{mj}w_m \)

and

\(\displaystyle \psi (v_j) = \sum_{i = 1}^m \beta_{ij} w_i = \beta_{1j}w_1 + \beta_{2j}w_2 + \dots + \beta_{mj}w_m \)

We define the coordinates of \(\displaystyle \phi (v_j)\) relative to the basis \(\displaystyle \mathcal{E}\) as follows:

\(\displaystyle [ \phi (v_j) ]_{\mathcal{E}} = \begin{bmatrix} \alpha_{1j} \\ \alpha_{2j} \\ \vdots \\ \alpha_{mj} \end{bmatrix} \)

and

\(\displaystyle [ \psi (v_j) ]_{\mathcal{E}} = \begin{bmatrix} \beta_{1j} \\ \beta_{2j} \\ \vdots \\ \beta_{mj} \end{bmatrix} \)

Now Theorem 10 concerns the following map:

\(\displaystyle \Phi \ : \ Hom_F(V, W) \to M_{m \times n} (F)\)

where

\(\displaystyle \Phi ( \phi ) = M_\mathcal{B}^\mathcal{E} ( \phi )\) for all \(\displaystyle \phi \in Hom_F (V, W)\)

where \(\displaystyle M_\mathcal{B}^\mathcal{E} ( \phi) \) is the matrix of the linear transformation \(\displaystyle \phi\) with respect to the bases \(\displaystyle \mathcal{B}\) and \(\displaystyle \mathcal{E}\).

Further, Theorem 10 asserts that \(\displaystyle \Phi\) is a vector space isomorphism.

So, the first thing to demonstrate is that for \(\displaystyle \phi, \psi \in Hom_F (V, W)\) we have:

\(\displaystyle \Phi ( \phi + \psi ) = \Phi ( \phi ) + \Phi ( \psi ) \) ... ... ... ... ... (1)

and

\(\displaystyle \Phi ( c \phi) = c \Phi ( \phi)\) ... ... ... ... ... (2)

In respect of proving (1) and (2) above - that is, proving that \(\displaystyle \Phi\) is a linear transformation - D&F (page 416) say the following:

" ... ... The columns of the matrix \(\displaystyle M_\mathcal{B}^\mathcal{E}\) are determined by the action of \(\displaystyle \phi\) on the basis \(\displaystyle \mathcal{B}\) as in Equation (3). This shows in particular that the map \(\displaystyle \phi \to M_\mathcal{B}^\mathcal{E} ( \phi )\) is an \(\displaystyle F\)-linear map since \(\displaystyle \phi\) is \(\displaystyle F\)-linear ... ... ... "

[Equation (3) is the following:

\(\displaystyle \phi (v_j) = \sum_{i = 1}^m \alpha_{ij} w_i\) ]

I do not follow this argument ... can anyone help me frame an explicit, formal and rigorous demonstration/proof that \(\displaystyle \Phi\) is a linear transformation?

I note that in an explicit and formal proof we would need, firstly to show that:

\(\displaystyle \Phi ( \phi + \psi ) = \Phi ( \phi ) + \Phi ( \psi ) \)

... ... reflecting ... ... to do this we need to express \(\displaystyle \Phi ( \phi ) , \Phi ( \psi )\) and \(\displaystyle \Phi ( \phi + \psi ) \) in terms of the notation above, that is, in terms of the notation of D&F Section 11.2 (see below) ... and we need a basis for \(\displaystyle Hom_F (V, W)\) and a basis for \(\displaystyle M_{m \times n} (F)\) ... but what is the nature/form of such bases?

Can someone help ...?

I would appreciate the help, especially as Theorem 10 seems so fundamental!

Peter
***NOTE***

The relevant text in D&F introducing the definitions and notation for the matrix of a linear transformation is as follows:
View attachment 3030
 
  • #2
OK, we are going to take $\mathcal{E}$ and $\mathcal{B}$ as given.

Suppose that we have:

$\displaystyle \phi(v_j) = \sum_{i = 1}^m \alpha_{ij}w_i$
$\displaystyle \psi(v_j) = \sum_{i = 1}^m \beta_{ij}w_i$

then:

$\displaystyle (\phi + \psi)(v_j) = \phi(v_j) + \psi(v_j) = \sum_{i = 1}^m \alpha_{ij}w_i + \sum_{i = 1}^m \beta_{ij}w_i$

$\displaystyle = \sum_{i = 1}^m (\alpha_{ij} + \beta_{ij})w_i$.

If we set, for every pair $(i,j)$, $\gamma_{ij} = \alpha_{ij} + \beta_{ij}$, we have:

$\Phi(\phi + \psi) = (\gamma_{ij}) = (\alpha_{ij} + \beta_{ij}) = (\alpha_{ij}) + (\beta_{ij}) = \Phi(\phi) + \Phi(\psi)$

(we add matrices entry-by-entry).

Similarly, we have:

$\displaystyle (c\phi)(v_j) = c(\phi(v_j)) = c\left(\sum_{i = 1}^m \alpha_{ij}w_i\right) = \sum_{i=1}^m c\alpha_{ij}w_i$.

If we set $\eta_{ij} = c\alpha_{ij}$ for each pair $(i,j)$, we have:

$\Phi(c\phi) = (\eta_{ij}) = (c\alpha_{ij}) = c(\alpha_{ij}) = c(\Phi(\phi))$

This IS explicit, and formal.
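
If you like, you can watch this entry-by-entry bookkeeping happen symbolically. Here is a small sympy sketch (my own illustration, not anything from D&F) in which $A$ and $B$ stand in for $\Phi(\phi) = (\alpha_{ij})$ and $\Phi(\psi) = (\beta_{ij})$:

```python
from sympy import MatrixSymbol, Matrix

m, n = 2, 3
A = Matrix(MatrixSymbol('alpha', m, n))   # stands in for Phi(phi) = (alpha_ij)
B = Matrix(MatrixSymbol('beta', m, n))    # stands in for Phi(psi) = (beta_ij)

# Matrix addition and scaling are entry-by-entry:
print((A + B)[0, 1])    # alpha[0, 1] + beta[0, 1]
print((5*A)[1, 2])      # 5*alpha[1, 2]
```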

But let's choose a field $F$, a space $V$ with dimension $n$, and a space $W$ with dimension $m$.

To make things interesting, I'll use a non-standard space for $V$, with a non-standard basis, and a similar space for $W$ with a more familiar basis, and we'll look at two linear maps.

For $F$, let's use the field of rational numbers. This will suffice for our purposes. For $V$, we'll use the subspace of $\Bbb Q[t]$:

$V = \{f(t) \in \Bbb Q[t]: \text{deg}(f) \leq 2\}$.

For our basis $\mathcal{B}$ we will use: $\mathcal{B} = \{1-t,1+t,2t^2\}$

For $W$, we will use the subspace of $\Bbb Q[t]$:

$W = \{g(t) \in \Bbb Q[t]: \text{deg}(g) \leq 1\}$.

For this vector space, we will use the basis $\mathcal{E} = \{1,t\}$.

We will define:

$\phi(f(t)) = f'(t)$
$\psi(f(t)) = f''(t)$

First, we want to create matrices, for each of these linear transformations, with respect to the given bases.

So let $f(t) = a + bt + ct^2$. The matrices we are after turn the $n$-tuple of $\mathcal{B}$-coordinates corresponding to $f(t)$ into the $m$-tuples of $\mathcal{E}$-coordinates corresponding to $\phi(f(t))$ and $\psi(f(t))$, respectively.

Let's first find out what $[f(t)]_{\mathcal{B}}$ is, in terms of "actual rational numbers". So we want to find:

$a + bt + ct^2 = a'(1 - t) + b'(1 + t) + c'(2t^2)$

Expanding the expression on the right:

$a'(1 - t) + b'(1 + t) + c'(2t^2) = (a' + b') + (b' - a')t + 2c't^2$, so that:

$a = a' + b'$
$b = b' - a'$
$c = 2c'$, since these must be the same polynomial.

The third equation quickly gives $c' = \dfrac{c}{2}$, and the first two tell us:

$b' = \dfrac{a+b}{2}$

$a' = a - b' = a - \dfrac{a+b}{2} = \dfrac{a - b}{2}$.

Hence:

$[f(t)]_{\mathcal{B}} = \begin{bmatrix}\dfrac{a-b}{2}\\ \dfrac{a+b}{2}\\ \dfrac{c}{2} \end{bmatrix}$.

For example, the polynomial $1 + 2t + t^2$ becomes: $\begin{bmatrix}-\frac{1}{2}\\ \frac{3}{2}\\ \frac{1}{2}\end{bmatrix}$.
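
If you ever want to check a coordinate computation like this by machine, here is a small sympy sketch (my own verification aid, not part of the theorem) that solves for the $\mathcal{B}$-coordinates:

```python
from sympy import symbols, Poly, linsolve

t = symbols('t')
basis_B = [1 - t, 1 + t, 2*t**2]    # our basis B of V

def coords_in_B(f):
    """Solve f = ap*(1 - t) + bp*(1 + t) + cp*(2t^2) for (ap, bp, cp)."""
    ap, bp, cp = symbols('ap bp cp')
    combo = ap*basis_B[0] + bp*basis_B[1] + cp*basis_B[2]
    # Equate the coefficients of 1, t, t^2 on both sides.
    eqs = [Poly(f - combo, t).coeff_monomial(t**k) for k in range(3)]
    (sol,) = linsolve(eqs, [ap, bp, cp])
    return sol

print(coords_in_B(1 + 2*t + t**2))  # (-1/2, 3/2, 1/2), as above
```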

To get the matrix we want, we have to find $\phi(v_j)$ and $\psi(v_j)$ for:

$v_1 = 1 - t$
$v_2 = 1 + t$
$v_3 = 2t^2$

and express these in $\mathcal{E}$-coordinates. Now:

$\phi(v_1) = -1$
$\phi(v_2) = 1$
$\phi(v_3) = 4t$

In $\mathcal{E}$-coordinates, these are:

$\begin{bmatrix}-1\\0 \end{bmatrix},\begin{bmatrix}1\\0 \end{bmatrix},\begin{bmatrix}0\\4 \end{bmatrix}$

So $\Phi(\phi) = \begin{bmatrix}-1&1&0\\0&0&4 \end{bmatrix}$

Note that:

$\Phi(\phi)[f(t)]_{\mathcal{B}} = \begin{bmatrix}-1&1&0\\0&0&4 \end{bmatrix}\begin{bmatrix}\dfrac{a-b}{2}\\ \dfrac{a+b}{2}\\ \dfrac{c}{2} \end{bmatrix} = \begin{bmatrix}b\\2c \end{bmatrix}$

which corresponds to the polynomial $b(1) + (2c)(t) = b + 2ct = f'(t)$.
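
As a quick sanity check (again a sketch of mine, assuming you have sympy handy), the matrix really does carry $[f(t)]_{\mathcal{B}}$ to $[f'(t)]_{\mathcal{E}}$ for symbolic $a,b,c$:

```python
from sympy import Matrix, symbols

a, b, c = symbols('a b c')

Phi_phi = Matrix([[-1, 1, 0],
                  [ 0, 0, 4]])              # the matrix of phi found above

f_B = Matrix([(a - b)/2, (a + b)/2, c/2])   # [f(t)]_B

print(Phi_phi * f_B)  # Matrix([[b], [2*c]]), i.e. b + 2c*t = f'(t)
```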

Similarly, we find that:

$[\psi(v_1)]_{\mathcal{E}} = \begin{bmatrix}0\\0 \end{bmatrix}, [\psi(v_2)]_{\mathcal{E}} = \begin{bmatrix}0\\0 \end{bmatrix}, [\psi(v_3)]_{\mathcal{E}} = \begin{bmatrix}4\\0 \end{bmatrix}$

so that:

$\Phi(\psi) = \begin{bmatrix} 0&0&4\\0&0&0 \end{bmatrix}$

The whole POINT of linearity, and why we care about it, is that it does not matter if we "evaluate first" and then sum, or sum first, and then evaluate. I leave it to you as an exercise to verify that:

if $L(f(t)) = f'(t) + f''(t)$, then the matrix for $L$ in these bases is:

$\Phi(\phi) + \Phi(\psi)$.

How will you do this? Compute the matrix for $L$ using the above procedure, and then see whether, when you add the two matrices I have given you, you get the same matrix.
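
And if you want to check your hand computation afterwards, here is a sympy sketch (mine, not D&F's) that builds all three matrices by exactly the procedure above and compares them:

```python
from sympy import Matrix, Poly, diff, symbols

t = symbols('t')
basis_B = [1 - t, 1 + t, 2*t**2]   # basis of V; E = {1, t} is the basis of W

def E_coords(g):
    """E-coordinates of a degree <= 1 polynomial: coefficients of 1 and t."""
    p = Poly(g, t)
    return [p.coeff_monomial(1), p.coeff_monomial(t)]

def matrix_of(T):
    """Columns are the E-coordinates of T applied to each basis vector in B."""
    return Matrix([E_coords(T(v)) for v in basis_B]).T

phi = lambda f: diff(f, t)                    # f -> f'
psi = lambda f: diff(f, t, 2)                 # f -> f''
L   = lambda f: diff(f, t) + diff(f, t, 2)    # f -> f' + f''

print(matrix_of(phi))   # Matrix([[-1, 1, 0], [0, 0, 4]])
print(matrix_of(psi))   # Matrix([[0, 0, 4], [0, 0, 0]])
print(matrix_of(L) == matrix_of(phi) + matrix_of(psi))  # True
```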
 
  • #3
Thanks so much for the help and guidance on this matter, Deveno ...

Just working through your post in detail now ... ...

Thanks again,

Peter
 
  • #4
Hi Deveno,

Thanks again for the help ... but ... just a clarification ...

In your post you write:

" ... ... then:

$\displaystyle (\phi + \psi)(v_j) = \phi(v_j) + \psi(v_j) = \sum_{i = 1}^m \alpha_{ij}w_i + \sum_{i = 1}^m \beta_{ij}w_i$

$\displaystyle = \sum_{i = 1}^m (\alpha_{ij} + \beta_{ij})w_i$. ... ... ... "


But, ... how do you justify the following critical step ...

\(\displaystyle (\phi + \psi)(v_j) = \phi(v_j) + \psi(v_j) \)

Could you please explain why this follows?

Peter

***EDIT***

Been thinking/reflecting ... ...

Presumably

\(\displaystyle (\phi + \psi)(v_j) = \phi(v_j) + \psi(v_j) \)

follows simply by the definition of the addition of functions?

Is that correct?

Peter
 
  • #5
Yes, for any $F$-linear transformations $S,T$ (indeed, for any group homomorphisms of abelian groups) we define:

$S+T$ to be the function that takes $v \mapsto S(v) + T(v)$; this gives a well-defined function $S+T$.

It is trivial to verify that this function is $F$-linear if $S,T$ are:

$(S+T)(\alpha u + \beta v) = S(\alpha u + \beta v) + T(\alpha u + \beta v)$

$= \alpha S(u) + \beta S(v) + \alpha T(u) + \beta T(v)$ (because $S,T$ are linear)

$= \alpha S(u) + \alpha T(u) + \beta S(v) + \beta T(v)$ (note how essential commutativity of + is here)

$= \alpha(S(u) + T(u)) + \beta(S(v) + T(v))$ (since these are vectors)

$= \alpha((S+T)(u)) + \beta((S+T)(v))$, QED.

This is often called "point-wise" addition, because we evaluate $S(v),T(v)$ first (at a point), and then add the resulting "values".

This agrees with the "usual" definition of polynomial addition when we talk about polynomial FUNCTIONS:

If $f(x) = a_0 + a_1x + \cdots + a_nx^n$ and $g(x) = b_0 + b_1x +\cdots + b_nx^n$ (for example),

then:

$(f+g)(x) = a_0 + a_1x + \cdots + a_nx^n + b_0 + b_1x +\cdots + b_nx^n$

$= (a_0 + b_0) + (a_1 + b_1)x +\cdots + (a_n + b_n)x^n$

in other words, the "point-wise" sum of the functions $f$ and $g$ has the same value at $x$ as the polynomial sum $f+g$.

(note: I am not implying polynomial functions are linear, they typically are NOT).

This is a "common construction" of many areas of mathematics: if we don't have any idea of how to sum two objects, we find some expression related to the object we CAN sum, and sum that, and then "pull it back".
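
If it helps to see the "point-wise" idea in the concrete, here is a tiny Python sketch (an illustration of mine, using plain tuples rather than any particular library):

```python
def add_maps(S, T):
    """(S + T)(v) is defined point-wise: evaluate S and T at v, THEN add."""
    return lambda v: tuple(s + t for s, t in zip(S(v), T(v)))

S = lambda v: (2*v[0], 3*v[1])   # one linear map R^2 -> R^2
T = lambda v: (v[1], v[0])       # another one

print(add_maps(S, T)((1, 2)))    # (2*1 + 2, 3*2 + 1) = (4, 7)
```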
 
  • #6
Thanks again for your help, Deveno ... especially the excellent example ... so helpful in getting a sense of the theorem! ...

Now in your post above the mapping:

\(\displaystyle \Phi \ : \ Hom_F(V, W) \to M_{m \times n} (F) \)

where

\(\displaystyle \Phi ( \phi ) = M_\mathcal{B}^\mathcal{E} ( \phi )\) for all \(\displaystyle \phi \in Hom_F (V, W)\)

has been shown to be an F-linear map. Now we have to show that \(\displaystyle \Phi\) is surjective and injective.

On this matter D&F write:

View attachment 3044

I do not follow the logic of D&F's statement ... can you help? ... I am trying to formulate an explicit and formal argument ...

For surjectivity, we have to show that for \(\displaystyle M \in M_{m \times n} (F)\) there exists an F-linear transformation \(\displaystyle \phi\) such that \(\displaystyle \Phi(\phi) = M\)

For injectivity, we have to show that if \(\displaystyle \Phi(\phi) = \Phi(\psi)\) then \(\displaystyle \phi = \psi\).

Can you help?

Peter
 
  • #7
Let's do something easier: let's show that we can provide a preimage for each element of some basis for $\text{Mat}_{m \times n}(F)$.

If this basis is $\mathcal{A} = \{A_1,A_2,\dots,A_{mn}\}$, then since any matrix is:

$\displaystyle M = \sum_{k=1}^{mn} c_kA_k$,

if we have $\phi_k \in \text{Hom}_{\ F}(V,W)$ with:

$\Phi(\phi_k) = A_k$, it follows that:

$\displaystyle \Phi\left(\sum_{k=1}^{mn} c_k\phi_k\right) = \sum_{k=1}^{mn} c_k(\Phi(\phi_k)) = \sum_{k=1}^{mn} c_kA_k = M$, so that $\Phi$ is surjective. Let's use the same bases $\mathcal{B}$ and $\mathcal{E}$ as before.

For each FIXED pair $(i,j)$ define $\phi_{ij} \in \text{Hom}_{\ F}(V,W)$ by:

$\phi_{ij}(v_j) = w_i$
$\phi_{ij}(v_k) = 0_W$, for $k \neq j$, extended linearly to all of $V$ (verify these ARE elements of $\text{Hom}_{\ F}(V,W)$!).

We then have $\Phi(\phi_{ij}) = E_{ij}$, the elementary matrix with a $1$ in entry $(i,j)$ and $0$s elsewhere. Since these form a basis for $\text{Mat}_{m \times n}(F)$, it follows $\Phi$ is surjective.

To show that $\Phi$ is injective, it suffices to show that the kernel of $\Phi$ contains only the 0-map.

Now, if $\Phi(\phi) = (0)$ (the 0-matrix), we have:

$\phi(v_j) = 0_W$, for every $j = 1,2,\dots,n$.

Hence for any $\displaystyle v = \sum_{j=1}^n a_jv_j \in V$:

$\displaystyle \phi(v) = \phi\left(\sum_{j=1}^n a_jv_j\right) = \sum_{j=1}^n a_j\phi(v_j) = \sum_{j=1}^n a_j0_W = 0_W$, so that $\phi$ is the 0-map.
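
Here is a small numpy sketch of the surjectivity argument (my own illustration, with a randomly chosen target $M$): a map defined by sending $v_j$ to the vector whose $\mathcal{E}$-coordinates are the $j$-th column of $M$ has matrix exactly $M$:

```python
import numpy as np

m, n = 2, 3                           # dim W = m, dim V = n
rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(m, n))  # an arbitrary target matrix

# Define phi on B-coordinates so that [phi(v_j)]_E is the j-th column of M,
# extended linearly; in coordinates this is just multiplication by M.
phi = lambda coords_B: M @ coords_B

# Recover the matrix of phi, column by column, from its action on the basis:
recovered = np.column_stack([phi(np.eye(n)[:, j]) for j in range(n)])
print(np.array_equal(recovered, M))   # True: M is in the image of Phi
```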

*****************

Let's bring this down to Earth for a second. Suppose we have a linear function:

$L: \Bbb R^3 \to \Bbb R^2$.

So we have $L(x,y,z) = (u,v)$. Let's write $u = L_1(x,y,z)$ and $v = L_2(x,y,z)$ (these are often called the coordinate functions of $L$). I claim:

$L_1,L_2:\Bbb R^3 \to \Bbb R$ are linear functions themselves.

To see this, note that each $L_j = p_j \circ L$, where $p_j$ is the standard linear projection function onto the $j$-th coordinate, so each $L_j$ is a composition of linear functions.

Since each $L_j$ is linear (prove this!), we must have:

$L_j((x,y,z)) = a_jx + b_jy + c_jz$, so we can write:

$L((x,y,z)) = (ax+by+cz,a'x+b'y+c'z)$ <---this is the "general form" of an element of $\text{Hom}_{\ \Bbb R}(\Bbb R^3,\Bbb R^2)$.

Now a basis (not the ONLY one, but a "convenient one") of $\Bbb R^3$ is $\mathcal{B} = \{(1,0,0),(0,1,0),(0,0,1)\}$.

Note we have:

$L((1,0,0)) = (a,a')$
$L((0,1,0)) = (b,b')$
$L((0,0,1)) = (c,c')$.

If we use the basis $\mathcal{C} = \{(1,0),(0,1)\}$ for $\Bbb R^2$ (again, we could use some OTHER basis; we'd get "different numbers"), then the matrix:

$\Phi(L)$ is:

$\begin{bmatrix}a&b&c\\a'&b'&c' \end{bmatrix}$

and we have:

$\Phi(L)[(x,y,z)]_{\mathcal{B}} = [L(x,y,z)]_{\mathcal{C}}$ (the expression on the left is matrix multiplication),

that is:

$\begin{bmatrix}a&b&c\\a'&b'&c' \end{bmatrix} \begin{bmatrix}x\\y\\z \end{bmatrix} = \begin{bmatrix}ax+by+cz\\a'x+b'y+c'z \end{bmatrix}$.

Again, ONCE WE FIX THE BASES, this is the ONLY matrix that represents $L$.
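
A quick numerical check of this last identity (a sketch of mine, with arbitrary sample entries for $a,b,c,a',b',c'$):

```python
import numpy as np

a, b, c, a2, b2, c2 = 1.0, 2.0, 3.0, -1.0, 0.5, 4.0  # a2 plays the role of a', etc.

L = lambda x, y, z: (a*x + b*y + c*z, a2*x + b2*y + c2*z)

Phi_L = np.array([[a,  b,  c],
                  [a2, b2, c2]])

v = np.array([2.0, -1.0, 3.0])
print(Phi_L @ v)  # the matrix route
print(L(*v))      # direct evaluation: the same pair, (9.0, 9.5)
```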
 
  • #8
Thanks so much for the help Deveno ... just working carefully through your post now

Peter
 
  • #9
A very helpful post! Indeed your posts on vector spaces and spaces of matrices have been more informative, helpful and clear than any of the 'excellent' texts I have referenced! Further the posts have included some simple but excellent examples which build a real feeling and understanding for the topic in question. There is a veritable treasure of knowledge in your posts. Thank you!

Peter
 

FAQ: Isomorphism Between Hom_F (V,W) and M_mxn(F) - theory of vector spaces

What is an isomorphism between Hom_F (V,W) and M_mxn(F)?

An isomorphism between Hom_F (V,W) and M_mxn(F) is a bijective linear transformation between these two vector spaces. This means that once ordered bases for V and W are fixed, every linear transformation from V to W corresponds to a unique m x n matrix over F that represents it. In other words, the two vector spaces are essentially identical in terms of their linear structures.

What is the significance of an isomorphism between vector spaces?

An isomorphism between vector spaces is significant because it allows us to translate problems and concepts in one vector space to another. This can be extremely useful in simplifying and solving complex problems in linear algebra.

How do you prove that two vector spaces are isomorphic?

To prove that two vector spaces are isomorphic, you must show that there exists a bijective linear transformation between them. This can be done by demonstrating that the transformation is both one-to-one and onto, and that it preserves the algebraic structure of the vector spaces.

Can two vector spaces be isomorphic if they have different dimensions?

No, two vector spaces cannot be isomorphic if they have different dimensions. The dimension of a vector space is a fundamental property that is preserved under isomorphism. If two vector spaces have different dimensions, it means that they have different numbers of basis vectors, and therefore cannot be bijectively mapped onto each other.

How does isomorphism relate to the theory of vector spaces?

Isomorphism is a foundational concept in the theory of vector spaces. It allows us to define and understand vector spaces in terms of their linear structures, rather than specific sets of vectors. This allows for more general and powerful mathematical techniques to be applied to problems in linear algebra.
