Coefficients of a vector regarded as a function of a vector

In summary: Math Amateur (Peter) is reading Sergei Winitzki's book Linear Algebra via Exterior Products and is struggling to understand how the coefficients of a vector ##v## can be regarded as linear functions (covectors) of ##v##. Winitzki's text reads: " ... So the coefficients ##v_k, \ 1 \leq k \leq n##, are linear functions of the vector ##v##; therefore they are covectors ... ". The replies explain that, once a basis is fixed, each coefficient is the value of the coordinate function ##v \mapsto v_k##, which is linear in ##v## and depends on the choice of basis.
  • #1
Math Amateur
I am reading Sergei Winitzki's book: Linear Algebra via Exterior Products ...

I am currently focused on Section 1.6: Dual (conjugate) vector space ... ...

I need help in order to get a clear understanding of the notion or concept of coefficients of a vector [itex]v[/itex] as linear functions (covectors) of the vector [itex]v[/itex] ...

The relevant part of Winitzki's text reads as follows:
[Scanned image of Winitzki's text: coefficient of a vector as a function of the vector]

In the above text we read:

" ... ... So the coefficients [itex]v_k, \ 1 \leq k \leq n[/itex], are linear functions of the vector [itex]v[/itex]; therefore they are covectors ... ... "

Now, how and in what way exactly are the coefficients [itex]v_k[/itex] a function of the vector [itex]v[/itex] ... ... ?

To indicate my confusion ... if the coefficient [itex]v_k[/itex] is a linear function of the vector [itex]v[/itex], then [itex]v_k(v)[/itex] must be equal to something ... but what? ... indeed, what does [itex]v_k(v)[/itex] mean? ... further, what, if anything, would [itex]v_k(w)[/itex] mean, where [itex]w[/itex] is any other vector? ... and further yet, how do we formally and rigorously prove that [itex]v_k[/itex] is linear? ... what would the formal proof look like? ... ...

Hope someone can help ...

Peter

============================================================================

*** NOTE ***

To indicate Winitzki's approach to the dual space and his notation I am providing the text of his introduction to Section 1.6 on the dual or conjugate vector space ... ... as follows ... ...
[Scanned image: Winitzki, Section 1.6 (Dual (conjugate) vector space), Part 1]

[Scanned image: Winitzki, Section 1.6 (Dual (conjugate) vector space), Part 2]
 

  • #2
Hi Math:

One way to look at the components ##v_i## of ##v## with respect to the basis vectors ##e_i## is:
##v_i = v \cdot e_i##,
where ##\cdot## is the dot product operator.
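
A quick numerical check (this is my own sketch, assuming numpy and the standard basis of R^3; note that the dot-product recipe gives the coefficients only when the basis is orthonormal, as the standard basis is; for a general basis one needs the dual basis instead):

Code:
# Sketch: for an orthonormal basis, the i-th coefficient of v equals v . e_i.
import numpy as np

e = np.eye(3)                    # standard (orthonormal) basis e_1, e_2, e_3 as rows
v = np.array([2.0, -1.0, 5.0])   # v = 2 e_1 - 1 e_2 + 5 e_3

coeffs = [float(v @ e[i]) for i in range(3)]
print(coeffs)                    # [2.0, -1.0, 5.0], the coefficients of v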

Hope that helps.

Regards,
Buzz
 
  • #3
Thanks for the help Buzz Bloom ...

... BUT ... while your interpretation [itex] v_i (v) = v \cdot e_i [/itex] works in a way ...

... it then defines [itex]v_i[/itex] as a function with only one domain value, namely [itex]v[/itex] ... and only one image namely [itex]v \cdot e_i = v_i[/itex] ...

Is that right?

Peter
 
  • #4
Hi Math:

I am not sure I understand what is puzzling you. What other domain value than v do you think might be plausible? Your use of the term "image" also seems odd.
The following are quotes from Wikipedia.
In mathematics, an image is the subset of a function's codomain which is the output of the function on a subset of its domain.
In mathematics, the codomain or target set of a function is the set Y into which all of the output of the function is constrained to fall.
As I interpret these definitions, for a single-valued function the image is always unique. Do you think it might be possible for ##v_i(v)## to be a multi-valued function?

Regards,
Buzz
 
  • #5
A function on a vector space is linear if ##L(aV + bW) = aL(V) + bL(W)## for arbitrary scalars ##a## and ##b## and arbitrary vectors ##W## and ##V##.

If one has a basis for the vector space then a linear function is determined completely by its values on the basis vectors. For instance, consider the function that assigns zero to all but the i'th basis vector and 1 to the i'th: it just picks out the i'th coefficient of a vector with respect to this basis.
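
To make this concrete, here is a small sketch (my own illustration, assuming numpy and the standard basis of R^3) of such a coefficient-picking function, together with a numerical check of the linearity condition above:

Code:
# Sketch: the covector that picks out the i-th coefficient, and a linearity check.
import numpy as np

def coordinate_functional(i):
    """Return the function v |-> v_i (i-th coefficient w.r.t. the standard basis)."""
    def pi_i(v):
        return float(v[i])
    return pi_i

pi_2 = coordinate_functional(2)       # picks out the third coefficient
v = np.array([1.0, 4.0, -2.0])
w = np.array([0.5, 3.0, 7.0])
a, b = 2.0, -3.0

print(pi_2(a * v + b * w))            # -25.0
print(a * pi_2(v) + b * pi_2(w))      # -25.0, equal, as L(aV + bW) = aL(V) + bL(W) requires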
 
  • #6
Given a basis, each coordinate ##v_k## is uniquely determined by ##\mathbf v##. This means that it is a function of ##\mathbf v##. The purpose of the ##\mathbf u+\lambda \mathbf v## line is to prove that each of these functions is linear. They actually only prove this for the first coordinate, but the same argument works for each ##k##.

The author either defines a linear transformation ##T:U\to V## by the condition ##T(\mathbf u + \lambda \mathbf v)=T(\mathbf u)+\lambda T(\mathbf v)##, for all ##\mathbf u,\mathbf v\in U## and ##\lambda \in \Bbb C## (or ##\Bbb R##), or assumes it is known that this condition is equivalent to ##T## being a linear transformation.
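
Spelled out (my paraphrase of that step, not a quote from Winitzki): writing ##\mathbf u=\sum_k u_k\mathbf e_k## and ##\mathbf v=\sum_k v_k\mathbf e_k##, we have
$$\mathbf u+\lambda\mathbf v=\sum_k (u_k+\lambda v_k)\,\mathbf e_k ,$$
and since the coordinates of a vector with respect to a basis are unique, the first coordinate of ##\mathbf u+\lambda\mathbf v## is ##u_1+\lambda v_1##. So the map ##\mathbf v\mapsto v_1## satisfies exactly the condition above and is therefore linear, and the same argument applies to each coordinate ##k##.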
 
  • #7
Thanks Buzz, Lavinia and Erland ... you have helped me gain an understanding of the issue that was bothering me ...

I also had a helpful post from Deveno on MHB ... so I thought I'd share part of it with you ...

The start of Deveno's post, which contains the essence of his reply, reads as follows:

" ... ... The way I am used to seeing this "co-vector" defined is like so:

Suppose [itex]v = \sum\limits_j v_je_j [/itex], where [itex]\{e_j\}[/itex] is a basis (perhaps the standard basis, perhaps not). We define:

[itex]\pi_i(v) = v_i[/itex]

(Note we have as many [itex]\pi[/itex]-functions, as we have coordinates).

Thus [itex]\pi_i: V \to F[/itex], since [itex]v[/itex] is a vector, and [itex]v_i[/itex] is a scalar ... ... "

Another important point is made later in his post ... where he writes:

" ... ... Note that Winitzki is just naming the function by its image, something that is often done with functions (we often talk about "the function [itex]x^2[/itex]" when what we really MEAN is "the squaring function"). What he really means is the function:

[itex]v \mapsto v_i[/itex] (function that returns the [itex]i[/itex]-th coordinate of [itex]v[/itex] in some basis).

It is also important to note here that the function(s) we have defined here *depend on a choice of basis*, because the CO-ORDINATES of a vector depend on the basis used. ... ... "

There is more to Deveno's post, but I have mentioned the main two points ...
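
To see Deveno's point about basis dependence concretely, here is a small numerical sketch (my own illustration, assuming numpy; it is not taken from Winitzki or from Deveno's post):

Code:
# Sketch: the same vector has different coordinates, and hence the coordinate
# functionals take different values, in two different bases of R^2.
import numpy as np

v = np.array([3.0, 1.0])

B1 = np.eye(2)                                   # standard basis e_1 = (1,0), e_2 = (0,1)
B2 = np.column_stack([[1.0, 1.0], [0.0, 1.0]])   # basis f_1 = (1,1), f_2 = (0,1) as columns

print(np.linalg.solve(B1, v))   # [3. 1.]   coordinates of v in the standard basis
print(np.linalg.solve(B2, v))   # [ 3. -2.]  coordinates in the other basis: v = 3 f_1 - 2 f_2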

Peter
 
  • #8
Erland said:
The author either defines a linear transformation ##T:U\to V## by the condition ##T(\mathbf u + \lambda \mathbf v)=T(\mathbf u)+\lambda T(\mathbf v)##, for all ##\mathbf u,\mathbf v\in U## and ##\lambda \in \Bbb C## (or ##\Bbb R##), or assumes it is known that this condition is equivalent to ##T## being a linear transformation.
On closer thought, I realize that this condition is in fact not equivalent to the standard definition of a linear transformation (for example the one given by Lavinia in Post #5). The condition does not imply that ##T(\lambda \mathbf u)=\lambda T(\mathbf u)##, which is included in the ordinary definition. So either the author quoted in the OP made a mistake, or he is using some more advanced reasoning.
 
  • #9
Erland said:
On closer thought, I realize that this condition is in fact not equivalent to the standard definition of a linear transformation (for example the one given by Lavinia in Post #5). The condition does not imply that ##T(\lambda \mathbf u)=\lambda T(\mathbf u)##, which is included in the ordinary definition. So either the author quoted in the OP made a mistake, or he is using some more advanced reasoning.
Are you sure that they are not equivalent?

Taking ##u=0, v=0:\ T(0)=T(0+\lambda 0)=T(0)+\lambda T(0)=(1+\lambda )T(0)## for any ##\lambda##.
So ##T(0)=0##.
Then ##T(\lambda v)=T(0 +\lambda v)=T(0)+\lambda T(v)=\lambda T(v)## for all ##v \in U## and all scalars ##\lambda##.
 
  • #10
Yes, you are right... I guess I was tired :oops:
 
  • #11
Math Amateur said:
The start of Deveno's post, which contains the essence of his reply, reads as follows:

" ... ... The way I am used to seeing this "co-vector" defined is like so:

Suppose [itex]v = \sum\limits_j v_je_j [/itex], where [itex]\{e_j\}[/itex] is a basis (perhaps the standard basis, perhaps not). We define:

[itex]\pi_i(v) = v_i[/itex]

(Note we have as many [itex]\pi[/itex]-functions, as we have coordinates).

Thus [itex]\pi_i: V \to F[/itex], since [itex]v[/itex] is a vector, and [itex]v_i[/itex] is a scalar... ... "

It is also important to note here that the function(s) we have defined here *depend on a choice of basis*, because the CO-ORDINATES of a vector depend on the basis used. ... ... "

This description is the same as already explained. IMO the best way to think of a co-vector is as a linear map from a vector space into the field of scalars. This idea is independent of any basis.

However, if one has a basis then any covector is determined by its values on the basis vectors. This follows directly from the condition that the covector is a linear map.

If one writes the vector ##v## in terms of a basis as ##v = Σ_{i}v_{i}e_{i}## and if ##L## is a covector, then ##L(v) = Σv_{i}L(e_{i})## and this shows that if one knows the values of ##L## on the ##e_{i}##'s one knows ##L## on any vector, ##v##.

It is important to notice that covectors form a vector space of their own - often called the dual space. If ##L## and ##H## are covectors then any linear combination of them ##aL + bH## is also a covector.

If one has a basis ##e_{i}## for the vector space, then a basis for the vector space of covectors - called the dual basis - is given by the linear maps ##π_{i}## defined by ##π_{i}(e_{j}) = δ_{ij}##. This is the covector that assigns 1 to the i'th basis vector and zero to all of the others - as mentioned already above. For each choice of basis ##e_{i}## one has a corresponding choice of basis ##π_{i}## for the vector space of covectors.

The covectors ##v_{i}## mentioned above are the same as the covectors ##π_{i}##. So the function that picks out the i'th coordinate of a vector with respect to a basis is a covector.

The dual space to the space of covectors is also a vector space. One might call it the space of covectors of covectors. If one writes a covector as ##Σ_{i}l_{i}π_{i}##, then the coordinate maps ##L \mapsto l_{i}## are covectors of covectors, and they form a basis for this space. A standard theorem says that this space is (in finite dimensions) naturally isomorphic to the original vector space. Otherwise said, the dual space of the dual space of a finite-dimensional vector space is naturally isomorphic to the vector space itself. One can see this by observing that the vector ##v## defines a linear map on covectors by ##v(L) = L(v)##.

One final but crucial point: A vector space and its dual space (space of covectors) are isomorphic but not naturally isomorphic. There is no canonical isomorphism between them the way that there is a natural isomorphism between the vector space and its double dual. One way to define an isomorphism is with an inner product. The covector corresponding to the vector ##v## is ##L_{v}(w) = \langle v,w\rangle##.
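
Here is a small sketch of the dual basis in coordinates (my own illustration, assuming numpy; if the basis vectors are the columns of a matrix, the dual basis covectors are the rows of that matrix's inverse):

Code:
# Sketch: dual basis for a non-orthonormal basis of R^2, with pi_i(e_j) = delta_ij.
import numpy as np

E = np.column_stack([[1.0, 1.0], [0.0, 1.0]])   # columns: e_1 = (1,1), e_2 = (0,1)
P = np.linalg.inv(E)                            # rows of P are the dual covectors pi_1, pi_2

print(P @ E)                 # identity matrix, i.e. pi_i(e_j) = delta_ij
v = np.array([3.0, 1.0])
print(P @ v)                 # [ 3. -2.]: pi_i(v) is the i-th coordinate of v in this basis

L = np.array([2.0, 5.0])     # covector L = 2 pi_1 + 5 pi_2, stored as a row of values L(e_i)
print(L @ (P @ v))           # -4.0: L(v) = sum_i v_i L(e_i) = 2*3 + 5*(-2)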
 
  • #12
Thanks Lavinia ... very clear and VERY helpful ...

Peter
 
  • #13
I'll just add that if you ever have occasion to deal with an infinite-dimensional vector space V (for instance, a countable-dimensional vector space having as basis

B = {e_j | j = 1, 2, 3, ...}​

), then the (ordinary algebraic) dual is not isomorphic to the original vector space. Instead, the dimension of the dual has a larger cardinality than the dimension of V:

dim(V*) > dim(V).​

Also, note that in many cases when an infinite-dimensional vector space V has a topology, the only dual vector space one is interested in is the vector space of continuous linear functions on V. In this case, the continuous dual Vc* may have the same dimension as the original vector space.

For details on both the algebraic dual and the continuous dual, this is a good reference: https://en.wikipedia.org/wiki/Dual_space.
 
  • #14
Thanks for the post zinq ... definitely helpful and interesting as I do want to try to cover the case of infinite dimensional vector spaces ... ...

Thanks for the useful reference ...

Peter
 

FAQ: Coefficients of a vector regarded as a function of a vector

1. What are coefficients of a vector?

The coefficients of a vector are the scalars that appear when the vector is written in terms of a chosen basis. If e_1, ..., e_n is a basis, every vector v can be written uniquely as v = v_1 e_1 + ... + v_n e_n, and the scalars v_1, ..., v_n are the coefficients (or coordinates) of v with respect to that basis.

2. How are coefficients of a vector calculated?

Once a basis is chosen, the coefficients are found by expressing the vector in that basis. For an orthonormal basis, the k-th coefficient is simply the dot product of the vector with the k-th basis vector; for a general basis, one solves the linear system whose coefficient matrix has the basis vectors as columns.
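
A minimal sketch of both recipes (my own addition, assuming numpy; the basis vectors b_1, b_2 below are just illustrative):

Code:
# Sketch: coefficients via dot products (orthonormal basis) vs. solving a system (general basis).
import numpy as np

v = np.array([5.0, 3.0])

# Orthonormal (standard) basis: coefficients are the dot products with the basis vectors.
print([float(v @ row) for row in np.eye(2)])    # [5.0, 3.0]

# General basis b_1 = (2, 0), b_2 = (1, 1): solve B c = v for the coefficients c.
B = np.column_stack([[2.0, 0.0], [1.0, 1.0]])
print(np.linalg.solve(B, v))                    # [1. 3.]  so v = 1*b_1 + 3*b_2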

3. Can coefficients of a vector change?

Yes. The coefficients change if the vector itself changes (for example, if it is scaled or rotated), and they also change if a different basis is chosen, since the coordinates of a vector depend on the basis used.

4. What does it mean for coefficients of a vector to be a function of a vector?

Once a basis is fixed, each coefficient v_k is uniquely determined by the vector v, so the map that sends v to v_k is a function of v. This function is linear, which is why each coefficient can be regarded as a covector, that is, an element of the dual space.

5. How are coefficients of a vector used in scientific research?

Coefficients of a vector are commonly used in mathematical and scientific research to represent physical quantities such as force, velocity, and acceleration. They are also used in areas such as physics, engineering, and economics to model and analyze various systems and phenomena.
