Basic Exercise in Vector Spaces - Cooperstein Exercise 2, page 14

In summary, this thread concerns Section 1.3 of Cooperstein's book, on vector spaces over an arbitrary field. Starting from the additive axioms A1-A4, it works through a proof that $-(-v) = v$, and then discusses how the scalar-multiplication axioms M1-M4 can be read as a ring homomorphism into the endomorphisms of $V$.
  • #1
Math Amateur
I am reading Bruce Cooperstein's book: Advanced Linear Algebra ... ...

I am focused on Section 1.3 Vector Spaces over an Arbitrary Field ...

I need help with Exercise 2 of Section 1.3 ...

Exercise 2 reads as follows:

View attachment 5109

Hope someone can help with this exercise ...

Peter

*** EDIT ***

To give MHB readers an idea of Cooperstein's notation and approach, I am providing Cooperstein's definition of a vector space ... as follows:

View attachment 5110
 
  • #2
The first thing you have to ask yourself is: given $v$, how do I show *any* element (say $x$) of $V$ is $-v$?

The answer lies in A4: if $x = -v$, then $x + v= 0$ (and also $v + x= 0$ by A1).

So to show something is $-(-v)$, what do you suppose you have to add it to, and show the sum is $0$?
 
  • #3
Deveno said:
The first thing you have to ask yourself is: given $v$, how do I show *any* element (say $x$) of $V$ is $-v$?

The answer lies in A4: if $x = -v$, then $x + v= 0$ (and also $v + x= 0$ by A1).

So to show something is $-(-v)$, what do you suppose you have to add it to, and show the sum is $0$?
Thanks for the help, Deveno ...

I guess that based on what you have said, we can proceed as follows:

We know that for \(\displaystyle x \in V\) we have \(\displaystyle x + (-x) = 0 \) ... ... (1) ... ... by (A4)

Now put \(\displaystyle x = -v\) in (1) ... then we have

\(\displaystyle (-v) + [-(-v)] = 0\)

so ... adding \(\displaystyle v\) to both sides we have ...

\(\displaystyle v + \{(-v) + [-(-v)]\} = v\)

\(\displaystyle \Longrightarrow \{v + (-v)\} + [-(-v)] = v\) ... ... by associativity of addition

\(\displaystyle \Longrightarrow 0 + [-(-v)] = v\) ... by A4

\(\displaystyle \Longrightarrow [-(-v)] = v\) ... by A3
Is that correct? Can you confirm that the above proof is valid?

Peter
 
  • #4
Peter said:
Thanks for the help, Deveno ...

I guess that based on what you have said, we can proceed as follows:

We know that for \(\displaystyle x \in V\) we have \(\displaystyle x + (-x) = 0 \) ... ... (1) ... ... by (A4)

Now put \(\displaystyle x = -v\) in (1) ... then we have

\(\displaystyle (-v) + [-(-v)] = 0\)

so ... adding \(\displaystyle v\) to both sides we have ...

\(\displaystyle v + \{(-v) + [-(-v)]\} = v\)

\(\displaystyle \Longrightarrow \{v + (-v)\} + [-(-v)] = v\) ... ... by associativity of addition

\(\displaystyle \Longrightarrow 0 + [-(-v)] = v\) ... by A4

\(\displaystyle \Longrightarrow [-(-v)] = v\) ... by A3
Is that correct? Can you confirm that the above proof is valid?

Peter

Yep, that's the ticket. Note you never used anything but A2-A4, so this proof is valid in any group, and is usually written like so:

In a group $(G,\ast)$, we have:

$(a^{-1})^{-1} = a$.
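Written out, the argument is the same three-step computation as in the vector space case (a sketch in multiplicative notation; $e$ denotes the identity of $G$):

```latex
\begin{align*}
a^{-1} \ast (a^{-1})^{-1} &= e              && \text{inverse axiom, applied to } a^{-1}\\
a \ast \bigl(a^{-1} \ast (a^{-1})^{-1}\bigr) &= a \ast e = a && \text{multiply on the left by } a\\
\bigl(a \ast a^{-1}\bigr) \ast (a^{-1})^{-1} &= a            && \text{associativity}\\
e \ast (a^{-1})^{-1} &= a                    && \text{inverse axiom}\\
(a^{-1})^{-1} &= a                           && \text{identity axiom}
\end{align*}
```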

All A1-A4 say is that in a vector space $V$, we have an abelian group under vector addition. Thus many of the basic theorems in linear algebra are simply consequences of this.

The "other" main ingredient in a vector space is the scalar-multiplication, or $F$-action. Usually this is written as a left-action. We can view this in two main ways:

1. A "mixed map" $\mu: F \times V \to F$. This is a more "intuitive" view.

(EDIT: As Peter points out below, this should read: "$\mu: F\times V \to V$").

2. An "induced map" $a \mapsto \phi_a$ from $F$ to $\text{End}_{\Bbb Z}(V)$: for every $a \in F$, we get a map $V \to V$ defined by:

$\phi_a(v) = av$.

This is the more "advanced" view.

In the second view what M1 says is:

$\phi_a$ is an abelian group homomorphism, for every $a$. This is an endomorphism, since the domain and co-domain are the same ($V$).

What M2 says is: the map $a \mapsto \phi_a$ (let's call this map $\Phi$) is an abelian group homomorphism from the additive group of the field $F$, to the additive group of the ring of abelian group endomorphisms (of the abelian group $V$).

Recall that an endomorphism is a map $V \to V$, and that we add such maps by:

$(\phi_a + \phi_b)(v) = \phi_a(v) + \phi_b(v)$ (the addition on the LHS is the "addition of maps", and the addition on the RHS is the "addition of vectors").

So $(a + b)v = av + bv$ simply states that $\Phi(a+b) = \Phi(a) + \Phi(b)$.

M3 is a bit more subtle: it says that $\Phi$ is a semi-group homomorphism from $F$ to $\text{End}_{\Bbb Z}(V)$ with the operation being the field multiplication in $F$, and *composition* in the ring of endomorphisms:

$\Phi(ab) = \Phi(a) \circ \Phi(b)$, that is:

$(ab)v = a(b(v))$.

Together, M2 and M3 say we have a ring-homomorphism from $F \to \text{End}_{\Bbb Z}(V)$.

M4 then says this ring-homomorphism is a *unity-preserving* ring homomorphism, that is, $1_F$ induces the identity endomorphism of $V$. This ensures that, for a fixed $v \in V$ the map:

$a \mapsto av$ is an *embedding* of the field $F$ into the one-dimensional subspace ("line") $\{av: a \in F\}$, the subspace generated by $v$. This is where the "linear" comes from in "linear algebra".
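As a quick sanity check, M1-M4 can be verified numerically for the familiar case $V = \Bbb R^2$; the following Python sketch (the names `Phi`, `phi_a`, and `vadd` are just illustrative, echoing the notation above) treats each scalar as the endomorphism it induces:

```python
# Sketch (not from Cooperstein): scalar multiplication on V = R^2
# viewed as the induced map a -> phi_a into the endomorphisms of V.

def vadd(u, v):                       # vector addition in V = R^2
    return (u[0] + v[0], u[1] + v[1])

def Phi(a):
    """The induced map: each scalar a gives an endomorphism phi_a of V."""
    def phi_a(v):
        return (a * v[0], a * v[1])   # phi_a(v) = av
    return phi_a

u, v = (1.0, 2.0), (3.0, -1.0)
a, b = 2.0, 5.0

# M1: phi_a is additive: phi_a(u + v) = phi_a(u) + phi_a(v)
assert Phi(a)(vadd(u, v)) == vadd(Phi(a)(u), Phi(a)(v))

# M2: Phi(a + b) = Phi(a) + Phi(b), i.e. (a + b)v = av + bv
assert Phi(a + b)(v) == vadd(Phi(a)(v), Phi(b)(v))

# M3: Phi(ab) = Phi(a) ∘ Phi(b), i.e. (ab)v = a(bv)
assert Phi(a * b)(v) == Phi(a)(Phi(b)(v))

# M4: Phi(1) is the identity endomorphism
assert Phi(1.0)(v) == v
```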

Most of the "meat" of linear algebra (at least in the finite-dimensional case) can be understood by a thorough grasp of Euclidean 2-space and 3-space. For example, in Euclidean 3-space, we have 3 copies of $\Bbb R$ (one for each spatial dimension). These are commonly referred to in physical situations as "axes". Although it is most convenient for these axes to be "orthogonal", this need not be the case.

One axis determines a line, two (provided the second isn't on the "same line" as the first) determine a PLANE, three (provided the third isn't on the plane determined by the first two) determine a "space". Calculations in a 3-space can thus be reduced to calculations with 3 field elements (called the "coordinates in the respective axes"), thus giving us an ARITHMETIC to go along with the ALGEBRA (just as using rational approximations to real numbers gives us "numbers" we can use to solve "equations" in ordinary "high-school algebra").

There is one "catch". The arithmetic (numerical calculations) aren't uniquely determined by the vector space itself. We have to impose "units" upon it. For example, in the plane, distance might be measured in miles east-west, and miles north-south, or perhaps in feet east-west, and kilometers northwest-by-southeast. So the exact same point on a map, might have different "numbers" (coordinates) attached to it, even when using a common origin as a reference.
 
  • #5
Deveno said:
Yep, that's the ticket. Note you never used anything but A2-A4, so this proof is valid in any group, and is usually written like so:

In a group $(G,\ast)$, we have:

$(a^{-1})^{-1} = a$.

All A1-A4 say is that in a vector space $V$, we have an abelian group under vector addition. Thus many of the basic theorems in linear algebra are simply consequences of this.

The "other" main ingredient in a vector space is the scalar-multiplication, or $F$-action. Usually this is written as a left-action. We can view this in two main ways:

1. A "mixed map" $\mu: F \times V \to F$. This is a more "intuitive" view.

2. An "induced map" $a \mapsto \phi_a$ from $F \to \text{End}_{\Bbb Z}(V)$ that for every $a \in F$, we get a map $V \to V$ defined by:

$\phi_a(v) = av$.

This is the more "advanced" view.

In the second view what M1 says is:

$\phi_a$ is an abelian group homomorphism, for every $a$. This is an endomorphism, since the domain and co-domain are the same ($V$).

What M2 says is: the map $a \mapsto \phi_a$ (let's call this map $\Phi$) is an abelian group homomorphism from the additive group of the field $F$, to the additive group of the ring of abelian group endomorphisms (of the abelian group $V$).

Recall that an endomorphism is a map $V \to V$, and that we add such maps by:

$(\phi_a + \phi_b)(v) = \phi_a(v) + \phi_b(v)$ (the addition on the LHS is the "addition of maps", and the addition on the RHS is the "addition of vectors").

So $(a + b)v = av + bv$ simply states that $\Phi(a+b) = \Phi(a) + \Phi(b)$.

M3 is a bit more subtle: it says that $\Phi$ is a semi-group homomorphism from $F$ to $\text{End}_{\Bbb Z}(V)$ with the operation being the field multiplication in $F$, and *composition* in the ring of endomorphisms:

$\Phi(ab) = \Phi(a) \circ \Phi(b)$, that is:

$(ab)v = a(b(v))$.

Together, M2 and M3 say we have a ring-homomorphism from $F \to \text{End}_{\Bbb Z}(V)$.

M4 then says this ring-homomorphism is a *unity-preserving* ring homomorphism, that is, $1_F$ induces the identity endomorphism of $V$. This ensures that, for a fixed $v \in V$ the map:

$a \mapsto av$ is an *embedding* of the field $F$ into the one-dimensional subspace ("line") $\{av: a \in F\}$, the subspace generated by $v$. This is where the "linear" comes from in "linear algebra".

Most of the "meat" of linear algebra (at least in the finite-dimensional case) can be understood by a thorough grasp of Euclidean 2-space and 3-space. For example, in Euclidean 3-space, we have 3 copies of $\Bbb R$ (one for each spatial dimension). These are commonly referred to in physical situations as "axes". Although it is most convenient for these axes to be "orthogonal", this need not be the case.

One axis determines a line, two (provided the second isn't on the "same line" as the first) determine a PLANE, three (provided the third isn't on the plane determined by the first two) determine a "space". Calculations in a 3-space can thus be reduced to calculations with 3 field elements (called the "coordinates in the respective axes"), thus giving us an ARITHMETIC to go along with the ALGEBRA (just as using rational approximations to real numbers gives us "numbers" we can use to solve "equations" in ordinary "high-school algebra").

There is one "catch". The arithmetic (numerical calculations) aren't uniquely determined by the vector space itself. We have to impose "units" upon it. For example, in the plane, distance might be measured in miles east-west, and miles north-south, or perhaps in feet east-west, and kilometers northwest-by-southeast. So the exact same point on a map, might have different "numbers" (coordinates) attached to it, even when using a common origin as a reference.
Hi Deveno,

Thanks for the significant help!

Will work through your post in detail shortly ...

... but ... just a quick clarifying question ...

... writing about the scalar-multiplication or F-action in V, you write:

"... ... ... 2. An "induced map" $a \mapsto \phi_a$ from $F \to \text{End}_{\Bbb Z}(V)$ that for every $a \in F$, we get a map $V \to V$ defined by:

$\phi_a(v) = av$. ... ... ... My question is as follows:

What is the significance of the subscript \(\displaystyle \mathbb{Z}\) in \(\displaystyle \text{End}_{\Bbb Z}(V)\)? Can you please explain?

Hope you can help ...

Peter

*** EDIT ***

Just noticed something else I need to ask you about ... in the above post, you write:

"... ... The "other" main ingredient in a vector space is the scalar-multiplication, or $F$-action. Usually this is written as a left-action. We can view this in two main ways:

1. A "mixed map" $\mu: F \times V \to F$. This is a more "intuitive" view. ... ..."

Shouldn't the last sentence of the above quote actually read:

1. A "mixed map" \(\displaystyle \mu: F \times V \to V\). This is a more "intuitive" view.

Peter
 
  • #6
Peter said:
Hi Deveno,

Thanks for the significant help!

Will work through your post in detail shortly ...

... but ... just a quick clarifying question ...

... writing about the scalar-multiplication or F-action in V, you write:

"... ... ... 2. An "induced map" $a \mapsto \phi_a$ from $F \to \text{End}_{\Bbb Z}(V)$ that for every $a \in F$, we get a map $V \to V$ defined by:

$\phi_a(v) = av$. ... ... ... My question is as follows:

What is the significance of the subscript \(\displaystyle \mathbb{Z}\) in \(\displaystyle \text{End}_{\Bbb Z}(V)\)? Can you please explain?

Hope you can help ...

Peter

*** EDIT ***

Just noticed something else I need to ask you about ... in the above post, you write:

"... ... The "other" main ingredient in a vector space is the scalar-multiplication, or $F$-action. Usually this is written as a left-action. We can view this in two main ways:

1. A "mixed map" $\mu: F \times V \to F$. This is a more "intuitive" view. ... ..."

Shouldn't the last sentence of the above quote actually read:

1. A "mixed map" \(\displaystyle \mu: F \times V \to V\). This is a more "intuitive" view.

Peter

Yes, good catch on that typo.

The subscript $\Bbb Z$ in $\text{End}_{\Bbb Z}(V)$ indicates that these are merely abelian group homomorphisms $V \to V$ (every abelian group is a $\Bbb Z$-module). By contrast, $\text{End}_F(V)$ (also written $\text{Hom}_F(V,V)$) is the set of all $F$-linear maps, which is a "smaller" set.
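A concrete instance of the distinction, sketched in Python: taking $V = \Bbb C$ as a vector space over $F = \Bbb C$, complex conjugation lies in $\text{End}_{\Bbb Z}(V)$ but not in $\text{End}_F(V)$:

```python
# Sketch: a map in End_Z(V) that is not in End_F(V).
# V = C as a vector space over F = C; conjugation respects addition
# but does not commute with multiplication by the scalar i.
def conj(z):
    return z.conjugate()

u, v = 2 + 3j, 1 - 1j
assert conj(u + v) == conj(u) + conj(v)   # additive: in End_Z(V)
assert conj(1j * u) != 1j * conj(u)       # not C-linear: not in End_F(V)
```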

As you may or may not recall, a (unital, associative) *algebra* over a field $F$ is something that is both a ring *and* a vector space over $F$, such that the scalar multiplication is "compatible" with the ring multiplication. Another way to say this is that we have a ring-homomorphism:

$\eta: F \to Z(A)$.

(this makes $A$ into an extension ring of a field, which is *automatically* a vector space, with the scalar multiplication given by the ring-multiplication in $A$).

In this case, it turns out that if we take $A = \text{Hom}_F(V,V)$, the maps $\phi_a$ form the entire center $Z(A)$. This is the "algebra" part of linear algebra. A basic theorem of linear algebra is that, given a basis for $V$ (where $V$ has dimension $n$), we have an algebra isomorphism between:

$\text{Hom}_F(V,V)$ and $\text{Mat}_n(F)$

that is, "coordinatizing" vectors turns our linear algebra into the arithmetic of a *particular* algebra, the algebra of $n \times n$ matrices with entries in $F$.
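To illustrate the isomorphism in the simplest case, here is a Python sketch (the particular map `T` is an arbitrary illustrative choice) that coordinatizes a linear map on $\Bbb R^2$ with respect to the standard basis:

```python
# Sketch: "coordinatizing" a linear map T : R^2 -> R^2 as a 2x2 matrix,
# illustrating Hom_F(V,V) ≅ Mat_n(F) once a basis is fixed.

def T(v):
    x, y = v
    return (2*x + y, x - 3*y)         # some F-linear map on V = R^2

# The columns of the matrix of T are the images of the basis vectors.
e1, e2 = (1, 0), (0, 1)
col1, col2 = T(e1), T(e2)
M = [[col1[0], col2[0]],
     [col1[1], col2[1]]]              # M = [[2, 1], [1, -3]]

def matvec(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1])

v = (4, 5)
assert matvec(M, v) == T(v)           # matrix arithmetic reproduces T
```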

Thus, almost as soon as we learn about vectors, our attention shifts from the vectors themselves to linear transformations, which we "turn into numbers" by studying *matrices*. This is often students' first exposure to an algebraic object that behaves "differently" than the fields they are used to, the most obvious "different" properties being:

Matrix multiplication is not commutative,
Not all matrices have an inverse.

These "defects" lead to some of the more interesting properties of linear algebra, such as quantifying just how far any given matrix is from being a "good" (invertible) matrix, information which we can "lift" (via our basis) to the abstract structure.
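Both "defects" are easy to exhibit concretely; a minimal Python sketch with $2 \times 2$ matrices:

```python
# Sketch: matrices fail two field properties we take for granted.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]

# Matrix multiplication is not commutative:
assert matmul(A, B) != matmul(B, A)

# Not every nonzero matrix is invertible: A*A = 0, so A has no inverse.
assert matmul(A, A) == [[0, 0], [0, 0]]
```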
 

FAQ: Basic Exercise in Vector Spaces - Cooperstein Exercise 2, page 14

What is a vector space?

A vector space is a mathematical structure that consists of a set of objects, called vectors, together with two operations, vector addition and scalar multiplication, that satisfy certain properties. These properties include closure under both operations, associativity and commutativity of addition, the existence of an additive identity and additive inverses, and the distributive and unital laws for scalar multiplication.

What is a basis of a vector space?

A basis of a vector space is a set of vectors that are linearly independent and span the entire vector space. This means that any vector in the space can be written as a linear combination of the basis vectors. The number of vectors in a basis is called the dimension of the vector space.
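For instance, $\{(1,0), (1,1)\}$ is a (non-standard) basis of $\Bbb R^2$; a small Python sketch of writing a vector as a linear combination of it (the helper `coords` is illustrative):

```python
# Sketch: expressing a vector in the basis B = {(1, 0), (1, 1)} of R^2.
# Solve v = c1*(1, 0) + c2*(1, 1) by hand: c2 = y, c1 = x - y.
def coords(v):
    x, y = v
    c2 = y
    c1 = x - y
    return (c1, c2)

v = (5, 2)
c1, c2 = coords(v)
assert (c1 * 1 + c2 * 1, c1 * 0 + c2 * 1) == v   # c1*(1,0) + c2*(1,1) = v
```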

How do you find the dimension of a vector space?

The dimension of a vector space can be found by counting the number of vectors in a basis of that space. For example, if a basis of a vector space consists of 3 vectors, then the dimension of that space is 3.

What is the difference between a vector and a scalar?

A vector is a mathematical object that has both magnitude and direction, and is often represented by an arrow. A scalar, on the other hand, is a single numerical value drawn from the underlying field, such as a real number. In a vector space, vectors are added to each other and multiplied by scalars; the scalars themselves are added and multiplied within the field, but the space provides no operation for adding a scalar to a vector.

Can a vector space have more than one basis?

Yes, a vector space can have more than one basis. In fact, any two bases of a vector space will have the same number of vectors, which is equal to the dimension of the space. However, the specific vectors in each basis may be different.
