Confusion between vector components, basis vectors, and scalars

In summary: a vector is the sum of its components times the corresponding basis vectors, ##\vec{V} = V^i \vec{e}_i## in the Einstein summation convention. Each component is a scalar obtained from the scalar product of the vector with the corresponding basis vector; it is the vector itself, not its components, that is invariant under a change of basis.
  • #1
e2m2a
TL;DR Summary
Confused about vector components, basis vectors and scalars.
There is an ambiguity for me about vector components and basis vectors. I think the following is how to interpret it and clear it all up, but I could be wrong. I understand that a vector component is not a vector itself but a scalar. Yet we break a vector into its "components" and then add them vectorially to get the vector back. But if vector components are scalars, we cannot add scalars to get a vector. So is the resolution of this confusion as follows? The vector component is indeed only a scalar, but when we multiply the component (the scalar) by the basis vector, we get a vector. (A scalar times a vector is a vector, and I assume basis vectors are vectors.) Hence, to get the vector we are not adding the vector components themselves; we multiply each component by its basis vector and then add those products vectorially. Is this how it's done?
 
  • #2
Maybe it is clearer if you think of vectors as invariant objects. The components of the vector then form a contravariant tensor, and the basis vectors a covariant one:
$$\vec{v} = V^{i}\,\vec{e}_{i}.$$

You don't add the components to get the vector; you multiply the components by the basis vectors and then add the results. I think it is a little dangerous to think of the components of a vector separately. It is like supposing a matrix is a scalar because ##a_{11}## is a complex number, say ##3i##. Now, while it is true that ##3i## is ##3i## in any reference frame, it is not true that ##a_{11}## is the same in every reference frame.

So, in the case here, say ##\vec{V} = (3,4,3)## in the usual Cartesian coordinates. While it is true that ##3=3## in another frame, it is not true that the first component stays ##3## in another frame, i.e., in general ##V^{1} \neq V^{1'}##. So while it is true that we use numbers to represent the components of a vector, this does not mean a component is a scalar (an invariant).
 
  • #3
Your thinking is correct. Basis vectors are usually normalized, so you can think of them as unit vectors. In the simple case of 3d Cartesian space when you write ##\vec A=A_1~\hat x_1+A_2~\hat x_2+A_3~\hat x_3,## the ##\hat x_i## form an orthonormal basis set, ##\hat x_i \cdot \hat x_j=\delta_{ij}##. The components are scalars and are obtained from the scalar product, ##A_i=\vec A \cdot \hat x_i##.

And yes, to get vector ##\vec A## in this case, you are adding components times basis vectors, which you can think of as the addition of three vectors of the form ##(A_i~\hat x_i)##.
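To make this concrete, here is a minimal numpy sketch (the vector ##\vec A## and the basis are made-up numbers, not from this thread) showing the components coming out of dot products and the vector being rebuilt from components times basis vectors:

```python
import numpy as np

# An arbitrary vector in 3D Cartesian space (made-up numbers).
A = np.array([2.0, -1.0, 3.0])

# An orthonormal basis: the standard Cartesian unit vectors x_1, x_2, x_3.
basis = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]

# The components are scalars obtained from the scalar product: A_i = A . x_i.
components = [float(np.dot(A, e)) for e in basis]

# The vector is recovered by adding the components times the basis vectors.
A_reconstructed = sum(c * e for c, e in zip(components, basis))

assert np.allclose(A, A_reconstructed)
print(components)  # [2.0, -1.0, 3.0]
```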
 
  • Likes: vanhees71
  • #4
e2m2a said:
I understand a vector component is not a vector itself but a scalar.
It is under a certain interpretation of the term "scalar", but you have to be careful.

Strictly speaking, a "scalar" is a number that is invariant, i.e., it doesn't depend on your choice of coordinates (or, equivalently, on your choice of basis vectors). But a vector component, as normally understood, is dependent on your choice of basis vectors, which would mean it would not be a scalar.

However, if you think of a vector component in terms of the sum you describe, where a particular vector ##V## is expressed as the sum of its components times their corresponding basis vectors, then we can think of the components as scalars because we have fixed the choice of basis vectors when we construct the sum. Or, to put it another way, we can think of a vector component, as @kuruman says, as the scalar product (inner product, or dot product) of the vector ##V## and the specific basis vector corresponding to the component. The dot product of two specific vectors is an invariant--it will be the same regardless of your choice of basis. (But of course the "basis" vector in the dot product will only be a basis vector for one choice of basis.) So in this sense, yes, vector components are scalars.
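A quick numerical illustration of this point (a hedged sketch; the vectors and the rotation are made up for illustration): under a rotation of the basis the components change, while the dot product of two fixed vectors does not.

```python
import numpy as np

# Two fixed vectors (made-up numbers).
V = np.array([1.0, 2.0, 0.0])
W = np.array([3.0, -1.0, 0.0])

# Rotate the basis about the z-axis by 30 degrees; the rows of R.T are the
# new orthonormal basis vectors.
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
new_basis = R.T

# The components of V differ between the two bases...
print(V)              # [1. 2. 0.]
print(new_basis @ V)  # different numbers

# ...but the scalar product of the two fixed vectors is basis-independent.
print(np.dot(V, W), np.dot(new_basis @ V, new_basis @ W))  # both ~1.0
```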
 
  • Likes: PhDeezNutz and vanhees71
  • #5
This works as follows: Let ##\vec{b}_k## (##k \in \{1,\ldots,d \}##, where ##d## is the dimension of the (real) vector space) be a basis. Then you can define a matrix
$$g_{jk} = \vec{b}_j \cdot \vec{b}_k.$$
By assumption this is a symmetric, non-degenerate matrix with only positive eigenvalues, because the scalar product is by definition a positive-definite bilinear form.
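As a concrete sketch (with a made-up, non-orthonormal basis), the metric matrix is just the table of pairwise scalar products:

```python
import numpy as np

# A made-up, non-orthonormal basis of R^2; the rows are b_1 and b_2.
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# Metric components g_jk = b_j . b_k: a symmetric matrix with positive eigenvalues.
g = B @ B.T
print(g)                      # [[1. 1.] [1. 2.]]
print(np.linalg.eigvalsh(g))  # all positive
```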

In this context the basis vectors with lower indices are called covariant basis vectors. If you have another basis ##\vec{b}_k'##, then there is an invertible matrix ##{T^j}_k## such that
$$\vec{b}_k' = {T^j}_k \vec{b}_j,$$
where the Einstein summation convention is used. According to this convention, you sum over any index that occurs twice in a formula, and of these two indices one must always be a lower and the other an upper index (two equal indices at the same vertical position are strictly forbidden; if such a thing occurs in the formulas of this so-called Ricci calculus, you have made a mistake!).
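The summation convention is exactly the kind of sum numpy.einsum evaluates; a small sketch of the basis change (the matrix ##T## is made up for illustration):

```python
import numpy as np

B = np.array([[1.0, 0.0],   # rows are the old basis vectors b_j
              [1.0, 1.0]])
T = np.array([[2.0, 1.0],   # T[j, k] plays the role of T^j_k (made up, invertible)
              [0.0, 1.0]])

# b'_k = T^j_k b_j: einsum sums over the repeated index j.
B_new = np.einsum('jk,jm->km', T, B)  # rows are the new basis vectors b'_k
print(B_new)  # [[2. 0.] [2. 1.]]
```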

Now vectors are invariant objects, i.e., you can decompose any vector uniquely with respect to any basis, with the corresponding vector components,
$$\vec{V}=V^j \vec{b}_j = V^{\prime k} \vec{b}_k' = {T^{j}}_{k} V^{\prime k} \vec{b}_j.$$
Comparing coefficients of ##\vec{b}_j## gives
$$V^j={T^j}_k V^{\prime k},$$
or, defining the inverse matrix
$${\tilde{T}^k}_j={(T^{-1})^k}_j \; \Rightarrow \; {\tilde{T}^k}_j {T^j}_l=\delta_l^k = \begin{cases} 1 & \text{if} \quad k=l, \\ 0 & \text{if} \quad k \neq l, \end{cases}$$
you have
$${\tilde{T}^l}_j V^j ={\tilde{T}^l}_j {T^j}_k V^{\prime k}=\delta_k^l V^{\prime k}=V^{\prime l},$$
i.e., the components transform contravariantly, with the inverse of the matrix that transforms the basis.
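Continuing the same made-up example, one can check numerically that the components transform with the inverse matrix while the vector itself stays the same:

```python
import numpy as np

B = np.array([[1.0, 0.0], [1.0, 1.0]])  # old basis b_j (rows)
T = np.array([[2.0, 1.0], [0.0, 1.0]])  # made-up basis-change matrix T^j_k
B_new = T.T @ B                         # new basis b'_k = T^j_k b_j
T_inv = np.linalg.inv(T)                # tilde-T, the inverse matrix

V = np.array([3.0, 4.0])                # made-up components V^j in the old basis
V_new = T_inv @ V                       # V'^l = tilde-T^l_j V^j

# The vector itself is the same object in both decompositions.
print(V @ B, V_new @ B_new)  # both [7. 4.]
```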
For the scalar products of vectors you find
$$\vec{V} \cdot \vec{W} =(V^j \vec{b}_j) \cdot (W^k \vec{b}_k) = V^j W^k \, \vec{b}_j \cdot \vec{b}_k = g_{jk} V^j W^k.$$
The same holds for the components with respect to the other basis,
$$\vec{V} \cdot \vec{W}=g_{jk}' V^{\prime j} W^{\prime k},$$
where the transformation law for the metric components reads
$$g_{jk}' = \vec{b}_j' \cdot \vec{b}_{k}' = ({T^l}_j \vec{b}_l) \cdot ({T^m}_k \vec{b}_m) = {T^l}_j {T^m}_k \, \vec{b}_l \cdot \vec{b}_m = {T^l}_j {T^m}_k g_{lm},$$
i.e., you have to apply the rule for covariant transformations to each lower index of ##g_{lm}##. The inverse of this formula is of course
$$g_{lm} = {\tilde{T}^j}_l {\tilde{T}^k}_m g_{jk}'.$$
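Again in the made-up example, the metric picks up one transformation matrix per lower index, which agrees with computing the scalar products directly in the new basis:

```python
import numpy as np

B = np.array([[1.0, 0.0], [1.0, 1.0]])  # old basis b_j (rows)
T = np.array([[2.0, 1.0], [0.0, 1.0]])  # made-up basis-change matrix T^j_k
B_new = T.T @ B                         # new basis b'_k

g = B @ B.T                             # g_jk in the old basis
g_new = np.einsum('lj,mk,lm->jk', T, T, g)  # g'_jk = T^l_j T^m_k g_lm

# Same result as taking the scalar products directly in the new basis.
assert np.allclose(g_new, B_new @ B_new.T)
```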
Now you can also introduce a contravariant basis ##\vec{b}^j## for each given covariant basis by demanding that
$$\vec{b}^j \cdot \vec{b}_k=\delta_{k}^j.$$
To find them we need the inverse of the matrix ##g_{jk}##, which we denote as ##g^{lm}##, i.e., we have
$$g_{jk} g^{kl}=\delta_j^l.$$
The matrix ##g_{jk}## is invertible because the scalar product is non-degenerate: it is positive definite, i.e., the matrix has only positive eigenvalues and thus its determinant is non-zero. Indeed, defining
$$\vec{b}^{j}=g^{jk} \vec{b}_k$$
does the job, because
$$(g^{jk} \vec{b}_k) \cdot \vec{b}_l = g^{jk} \, \vec{b}_k \cdot \vec{b}_l = g^{jk} g_{kl}=\delta_l^j.$$
Then you have
$$\vec{V} \cdot \vec{b}^j=(V^k \vec{b}_k) \cdot \vec{b}^j = V^k \vec{b}_k \cdot \vec{b}^j = V^k \delta_k^j = V^j.$$
So you get the contravariant vector components by multiplying with the co-basis vectors. On the other hand you have
$$\vec{V} = V^j \vec{b}_j = V^j g_{jk} \vec{b}^k=V_k \vec{b}^k.$$
So you get covariant components of ##\vec{V}## as
$$V_k = g_{jk} V^j.$$
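A short numerical sketch of the co-basis and of lowering an index, with the same made-up basis as above:

```python
import numpy as np

B = np.array([[1.0, 0.0], [1.0, 1.0]])  # covariant basis b_j (rows)
g = B @ B.T                             # metric g_jk
g_inv = np.linalg.inv(g)                # inverse metric g^jk

B_dual = g_inv @ B                      # co-basis b^j = g^jk b_k (rows)
assert np.allclose(B_dual @ B.T, np.eye(2))  # b^j . b_k = delta^j_k

V = np.array([3.0, 4.0])                # made-up contravariant components V^j
V_cov = g @ V                           # covariant components V_k = g_jk V^j

# Both component sets rebuild the same vector.
assert np.allclose(V @ B, V_cov @ B_dual)
```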

The ##\vec{b}^j## are contravariant, i.e., they transform analogously to the contravariant vector components:
$$\vec{b}^{\prime l} \cdot \vec{b}_m'=\delta_m^l.$$
From this you get
$$\vec{b}^{\prime l} \cdot ({T^j}_m \vec{b}_j)={T^j}_m \, \vec{b}^{\prime l} \cdot \vec{b}_j =\delta_m^l.$$
From this we have
$$\vec{b}^{\prime l} \cdot \vec{b}_j={\tilde{T}^l}_j,$$
because the inverse matrix of ##{T^{j}}_k## is uniquely given by the matrix ##{\tilde{T}^l}_m##. So we have
$$\vec{b}^{\prime l} = (\vec{b}^{\prime l} \cdot \vec{b}_j) \vec{b}^j={\tilde{T}^l}_j \vec{b}^j,$$
i.e., they transform contravariantly, as claimed above.
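And a final numerical check (same made-up matrices) that the co-basis indeed transforms with the inverse matrix:

```python
import numpy as np

B = np.array([[1.0, 0.0], [1.0, 1.0]])  # old basis b_j (rows)
T = np.array([[2.0, 1.0], [0.0, 1.0]])  # made-up basis-change matrix T^j_k
T_inv = np.linalg.inv(T)                # tilde-T

B_dual = np.linalg.inv(B @ B.T) @ B     # old co-basis b^j
B_new = T.T @ B                         # new basis b'_k
B_new_dual = np.linalg.inv(B_new @ B_new.T) @ B_new  # new co-basis b'^l

# b'^l = tilde-T^l_j b^j: the co-basis transforms contravariantly.
assert np.allclose(B_new_dual, T_inv @ B_dual)
```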
 
  • Likes: PhDeezNutz

FAQ: Confusion between vector components, basis vectors, and scalars

What are vector components?

Vector components are the individual numbers that specify a vector with respect to a chosen basis. Together with the basis vectors, they determine the magnitude and direction of the vector.

What are basis vectors?

Basis vectors are a set of linearly independent vectors that can be used to represent any vector in a given vector space. They are often chosen to be orthogonal (perpendicular) to each other and of unit magnitude, but in general they only need to be linearly independent.

What is the difference between vector components and basis vectors?

Vector components are the numerical values that describe the magnitude and direction of a vector, while basis vectors are the actual vectors that make up the coordinate system used to represent the vector. In other words, vector components are the coordinates of a vector, while basis vectors are the axes on which those coordinates are plotted.

What is a scalar?

A scalar is a quantity that has magnitude but no direction. In the context of this thread, a scalar is more precisely a number that is invariant under a change of basis; the numerical values of vector components are scalars only once a basis has been fixed.

How can I avoid confusion between vector components, basis vectors, and scalars?

To avoid confusion, it is important to understand the definitions and roles of each concept: vector components are the numerical coefficients, basis vectors are the reference vectors those coefficients multiply, and the vector is the sum of components times basis vectors. It can also be helpful to practice using these concepts in calculations and graphical representations.
