How Are Coefficients of a Vector Linear Functions of the Vector?

In summary, the coefficients v_k of a vector v are linear functions (covectors) of v. These functions depend on a choice of basis: v_k is the function that returns the k-th coordinate of v in that basis. The thread also introduces the dual basis, denoted \epsilon_i, and shows that each coordinate function \pi_i coincides with \epsilon_i.
  • #1
Math Amateur
I am reading Sergei Winitzki's book: Linear Algebra via Exterior Products ...

I am currently focused on Section 1.6: Dual (conjugate) vector space ... ...

I need help in order to get a clear understanding of the notion or concept of coefficients of a vector \(\displaystyle v\) as linear functions (covectors) of the vector \(\displaystyle v\) ...

The relevant part of Winitzki's text reads as follows:
View attachment 5344

In the above text we read:

" ... So the coefficients \(\displaystyle v_k, \ 1 \leq k \leq n\), are linear functions of the vector \(\displaystyle v\); therefore they are covectors ... "

Now, how and in what way exactly are the coefficients \(\displaystyle v_k\) functions of the vector \(\displaystyle v\)? To indicate my confusion: if the coefficient \(\displaystyle v_k\) is a linear function of the vector \(\displaystyle v\), then \(\displaystyle v_k(v)\) must be equal to something ... but what? Indeed, what does \(\displaystyle v_k(v)\) mean? Further, what, if anything, would \(\displaystyle v_k(w)\) mean, where \(\displaystyle w\) is any other vector? And further yet, how do we formally and rigorously prove that \(\displaystyle v_k\) is linear? What would the formal proof look like?

Hope someone can help ...

Peter

===========================================================

*** NOTE ***

To indicate Winitzki's approach to the dual space and his notation, I am providing the text of his introduction to Section 1.6 on the dual or conjugate vector space, as follows:

View attachment 5345
View attachment 5346
 
  • #2
Re: Coefficients of a vector regarded as a function of a vector ... clarification needed ...

The way I am used to seeing this "co-vector" defined is like so:

Suppose $v = \sum\limits_j v_je_j$, where $\{e_j\}$ is a basis (perhaps the standard basis, perhaps not). We define:

$\pi_i(v) = v_i$

(Note we have as many $\pi$-functions as we have coordinates.)

Thus $\pi_i: V \to F$, since $v$ is a vector, and $v_i$ is a scalar.

For EACH $i$, we have $\pi_i$ is linear: suppose $u = \sum\limits_j u_je_j$ and $v = \sum\limits_j v_je_j$.

Then $u+v = \sum\limits_j (u_j+v_j)e_j$ because for any vector $v$, and any two scalars $a,b$ we have:

$(a+b)v = av + bv$ (this is one of the axioms that define a vector space).

In particular, this is true for each of the basis vectors $e_j$.

So $\pi_i(u+v) = u_i+v_i = \pi_i(u) + \pi_i(v)$.

We also have, for any $a \in F$, that $a\left(\sum\limits_j v_je_j\right) = \sum\limits_j (av_j)e_j$: the vector space axiom

$a(u+v) = au + av$

lets us distribute $a$ over the linear combination of basis vectors, and the axiom $a(bv) = (ab)v$ then turns each term $a(v_je_j)$ into $(av_j)e_j$.

So $\pi_i(av) = av_i = a(\pi_i(v))$, and we have a linear function.
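For a quick concrete check (my own example, using the standard basis of $\Bbb R^2$, not anything from Winitzki's text): take $u = (1,2) = 1e_1 + 2e_2$ and $v = (3,4) = 3e_1 + 4e_2$. Then $u + v = (4,6)$, and indeed $\pi_2(u+v) = 6 = 2 + 4 = \pi_2(u) + \pi_2(v)$; similarly $\pi_2(5v) = 20 = 5\,\pi_2(v)$.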

Note that Winitzki is just naming the function by its image, something that is often done with functions (we often talk about "the function $x^2$" when what we really MEAN is "the squaring function"). What he really means is the function:

$v \mapsto v_i$ (function that returns the $i$-th coordinate of $v$ in some basis).

It is also important to note here that the function(s) we have defined here *depend on a choice of basis*, because the CO-ORDINATES of a vector depend on the basis used.
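For instance (again my own illustration, in $\Bbb R^2$): the vector $v = (3,5)$ has coordinates $3$ and $5$ in the standard basis, but in the basis $e_1' = (1,1),\ e_2' = (1,-1)$ we have $v = 4e_1' - e_2'$, so the "first coordinate function" attached to that basis returns $4$, not $3$.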

Put another way, the choice of a basis $\{e_j\}$ creates an isomorphism $\phi:V \to F^{\dim(V)}$, the $\dim(V)$-fold direct sum of $F$. Explicitly, this isomorphism is:

$\phi(v_1e_1 + \cdots + v_ne_n) = (v_1,\dots,v_n)$.

GIVEN the basis $\{e_j\}$ we can specify a *dual basis* $\{\epsilon_i\}$ by:

$\epsilon_i(e_j) = \delta_{ij}$ (this is 1 if $i = j$, and 0 otherwise). It is not hard to see that $\epsilon_i = \pi_i$, for each $i$, by examining $\pi_i(e_j)$ and extending by linearity.
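If it helps to see this computationally, here is a minimal sketch (my own, using NumPy and a concrete non-standard basis of $\Bbb R^2$; none of it is taken from Winitzki's text). The point it illustrates is that the dual basis functionals $\epsilon_i$ are the rows of the inverse of the matrix whose columns are the basis vectors $e_j$:

[CODE]
import numpy as np

# A concrete (non-standard) basis of R^2, stacked as the COLUMNS of B.
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])        # e_1 = (1, 1), e_2 = (1, -1)

# The dual basis functionals eps_i are the ROWS of B^{-1}:
# eps_i(v) = (B^{-1} v)_i, the i-th coordinate of v in the basis {e_1, e_2}.
B_inv = np.linalg.inv(B)

print(B_inv @ B)                   # identity matrix: eps_i(e_j) = delta_ij

v = np.array([3.0, 5.0])
print(B_inv @ v)                   # [ 4. -1.]  ->  v = 4*e_1 - 1*e_2

# Linearity check: eps_i(u + 3v) = eps_i(u) + 3*eps_i(v) for every i.
u = np.array([2.0, 0.0])
print(B_inv @ (u + 3*v), B_inv @ u + 3*(B_inv @ v))
[/CODE]

Each row of $B^{-1}$ is the coordinate representation of one of the $\epsilon_i$, which is why the product $B^{-1}B$ (i.e. $\epsilon_i(e_j)$ arranged as a matrix) comes out as the identity.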

I have tried to illustrate what is happening here by using "different letters" for the $i$-th coordinate function, and the $i$-th coordinate.
 
  • #3
Re: Coefficients of a vector regarded as a function of a vector ... clarification needed ...

Deveno said:
The way I am used to seeing this "co-vector" defined is like so:

Suppose $v = \sum\limits_j v_je_j$, where $\{e_j\}$ is a basis (perhaps the standard basis, perhaps not). We define:

$\pi_i(v) = v_i$

... ...

I have tried to illustrate what is happening here by using "different letters" for the $i$-th coordinate function, and the $i$-th coordinate.

Well! ... thanks so much Deveno ... that was extremely helpful ...

Indeed while I can see straight away that \(\displaystyle \pi_i\) is a function ... I was having real trouble with Winitzki asserting that \(\displaystyle v_i\) was a linear function of \(\displaystyle v\) (or at least could be understood as a linear function of \(\displaystyle v\)) ...

But then ... the key for me was your explanation ...

" ... ... Note that Winitzki is just naming the function by its image, something that is often done with functions (we often talk about "the function $x^2$" when what we really MEAN is "the squaring function"). What he really means is the function:

$v \mapsto v_i$ (function that returns the $i$-th coordinate of $v$ in some basis). ...

... ... ... "

I still feel a bit uncomfortable with this understanding ... but at least I understand what Winitzki means ... I note that you say this is often done ... I think another example is when we identify the Jacobian matrix with the total derivative (or differential) in the analysis of vector-valued functions ...

Thanks again,

Peter
 

FAQ: How Are Coefficients of a Vector Linear Functions of the Vector?

What are coefficients of a vector?

The coefficients (or coordinates) of a vector are the scalars v_k that appear when the vector is expanded in a chosen basis {e_1, ..., e_n}, that is, v = v_1 e_1 + ... + v_n e_n. They record how much of each basis vector is needed to build v, and they depend on the basis chosen. For example, in R^2 with the standard basis, v = (3, 5) = 3e_1 + 5e_2 has coefficients 3 and 5.

How are coefficients of a vector calculated?

The coefficients are found by expressing the vector in the chosen basis. Concretely, if the basis vectors are stacked as the columns of a matrix B, the coefficients form the solution c of the linear system Bc = v. Equivalently, v_k = ε_k(v), where {ε_k} is the dual basis; in an inner product space with an orthonormal basis this reduces to v_k = ⟨v, e_k⟩.
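As a small sketch of that computation (assuming NumPy and an arbitrarily chosen basis of R^3, purely for illustration):

[CODE]
import numpy as np

# Basis vectors b_1, b_2, b_3 stacked as the columns of B.
B = np.column_stack(([1, 0, 0], [1, 1, 0], [1, 1, 1]))
v = np.array([2, 3, 4])

# Solve B c = v: c[k] is the coefficient of the (k+1)-th basis vector.
c = np.linalg.solve(B, v)
print(c)   # [-1. -1.  4.]  ->  v = -1*b_1 - 1*b_2 + 4*b_3
[/CODE]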

What is the significance of regarding coefficients of a vector as a function of a vector?

Regarding each coefficient v_k as a function of the vector (the coordinate function, or covector, that returns the k-th coordinate) makes precise the fact that the coordinates depend linearly on the vector. This viewpoint is the starting point for the dual space, and it lets us analyse and manipulate a vector and its components in a systematic, basis-aware way.

Can coefficients of a vector be negative?

Yes, coefficients of a vector can be negative. A negative coefficient means that the corresponding basis vector enters the expansion with a minus sign, i.e. the vector points "against" that basis direction. This is important to keep in mind when performing vector operations and calculations.

How can understanding coefficients of a vector as a function of a vector be helpful in real-world applications?

Understanding coefficients of a vector as a function of a vector is essential in various fields such as physics, engineering, and computer graphics. It allows us to model and analyze the physical forces and movements of objects, design and optimize structures, and create realistic 3D graphics and animations.
